Sample records for proposed method dramatically

  1. Conjugate gradient method for phase retrieval based on the Wirtinger derivative.

    PubMed

    Wei, Zhun; Chen, Wen; Qiu, Cheng-Wei; Chen, Xudong

    2017-05-01

    A conjugate gradient Wirtinger flow (CG-WF) algorithm for phase retrieval is proposed in this paper. It is shown that, compared with recently reported Wirtinger flow and its modified methods, the proposed CG-WF algorithm is able to dramatically accelerate the convergence rate while keeping the dominant computational cost of each iteration unchanged. We numerically illustrate the effectiveness of our method in recovering 1D Gaussian signals and 2D natural color images under both Gaussian and coded diffraction pattern models.
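
    A minimal sketch of a conjugate-gradient Wirtinger flow iteration under the Gaussian measurement model is given below; the spectral initialization, fixed step size, and Polak-Ribiere update are common textbook choices and assumptions for illustration, not necessarily the authors' exact formulation.

      # Hedged sketch of CG Wirtinger flow for phase retrieval from y = |Ax|^2.
      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 64, 8 * 64
      x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
      x /= np.linalg.norm(x)                                 # unit-norm ground truth
      A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
      y = np.abs(A @ x) ** 2                                 # intensity-only data

      def grad(z):
          """Wirtinger gradient of f(z) = (1/2m) * sum((|<a_i, z>|^2 - y_i)^2)."""
          Az = A @ z
          return A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m

      # Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^H.
      Y = (A.conj().T * y) @ A / m
      _, V = np.linalg.eigh(Y)
      z = V[:, -1] * np.sqrt(np.mean(y))

      g = grad(z)
      d = -g
      mu = 0.1                                               # fixed step size (assumption)
      for _ in range(500):
          z = z + mu * d
          g_new = grad(z)
          beta = max(0.0, np.vdot(g_new - g, g_new).real / np.vdot(g, g).real)
          d, g = -g_new + beta * d, g_new                    # Polak-Ribiere direction

      # Phase retrieval is invariant to a global phase; align before comparing.
      z *= np.exp(-1j * np.angle(np.vdot(z, x)))
      print("relative error:", np.linalg.norm(x - z) / np.linalg.norm(x))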

  2. Multibin long-range correlations

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Zalewski, K.

    2011-06-01

    A new method to study the long-range correlations in multiparticle production is developed. It is proposed to measure the joint factorial moments or cumulants of multiplicity distribution in several (more than two) bins. It is shown that this step dramatically increases the discriminative power of data.
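
    For reference, the joint factorial moment of the multiplicity distribution across k bins, which the abstract proposes to measure, has the standard generic form (a textbook definition, not a formula quoted from the paper):

      F_{q_1 \cdots q_k} \;=\; \Big\langle \prod_{i=1}^{k} n_i (n_i - 1) \cdots (n_i - q_i + 1) \Big\rangle ,
      \qquad
      C_{11} \;=\; \langle n_1 n_2 \rangle - \langle n_1 \rangle \langle n_2 \rangle ,

    where n_i is the multiplicity in bin i and C_{11} is the second-order cumulant capturing the two-bin correlation; using more than two bins yields the higher-order joint moments the paper exploits.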

  3. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
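
    Bonett's closed-form interval is not reproduced here; as a hedged illustrative alternative for the same problem, a nonparametric percentile-bootstrap interval for a ratio of standard deviations can be sketched as follows (a plainly different technique, with all names and sample sizes hypothetical):

      import numpy as np

      def bootstrap_ci_sd_ratio(x1, x2, n_boot=10000, alpha=0.05, seed=0):
          """Percentile bootstrap CI for sd(x1)/sd(x2), independent samples."""
          rng = np.random.default_rng(seed)
          ratios = np.empty(n_boot)
          for b in range(n_boot):
              r1 = rng.choice(x1, size=len(x1), replace=True)
              r2 = rng.choice(x2, size=len(x2), replace=True)
              ratios[b] = np.std(r1, ddof=1) / np.std(r2, ddof=1)
          return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])

      rng = np.random.default_rng(1)
      a = rng.exponential(2.0, size=80)     # deliberately nonnormal samples
      b = rng.exponential(1.0, size=60)
      print(bootstrap_ci_sd_ratio(a, b))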

  4. The Dramatic Methods of Hans van Dam.

    ERIC Educational Resources Information Center

    van de Water, Manon

    1994-01-01

    Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)

  5. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of its layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
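
    A minimal numpy sketch of the layer-wise step described above, grouping points into depth layers and propagating each layer to the hologram plane with one FFT-based angular spectrum calculation per layer, is given below; the grid size, wavelength, pixel pitch, and toy point cloud are illustrative assumptions.

      import numpy as np

      wl, pitch, N = 532e-9, 8e-6, 256            # wavelength, pixel pitch, grid (assumptions)
      fx = np.fft.fftfreq(N, d=pitch)
      FX, FY = np.meshgrid(fx, fx)
      kz = 2 * np.pi * np.sqrt(np.maximum(0.0, (1 / wl) ** 2 - FX ** 2 - FY ** 2))

      def propagate(field, dist):
          """Angular-spectrum propagation of a complex field by distance dist."""
          return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

      # Toy point cloud: (ix, iy, depth) samples, gridded by depth layer.
      rng = np.random.default_rng(0)
      pts = [(rng.integers(0, N), rng.integers(0, N), d)
             for d in (0.05, 0.06, 0.07) for _ in range(50)]

      hologram = np.zeros((N, N), dtype=complex)
      for depth in sorted({p[2] for p in pts}):   # one FFT pass per depth layer
          layer = np.zeros((N, N), dtype=complex)
          for ix, iy, d in pts:
              if d == depth:
                  layer[iy, ix] = np.exp(1j * rng.uniform(0, 2 * np.pi))
          hologram += propagate(layer, depth)

      fringe = np.real(hologram)                  # real-valued CGH fringe pattern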

  6. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare the performance of the proposed procedures with the existing one. Our empirical studies show that the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.
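
    The profile least squares and SCAD steps are beyond a short sketch, but the core local linear smoother that the procedure builds on, here fit naively under working independence, can be written compactly (the Gaussian kernel, bandwidth, and AR(1) error model are illustrative choices):

      import numpy as np

      def local_linear(x, y, x0, h):
          """Local linear fit at x0 with a Gaussian kernel of bandwidth h."""
          w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
          X = np.column_stack([np.ones_like(x), x - x0])  # intercept + local slope
          WX = X * w[:, None]
          beta = np.linalg.solve(X.T @ WX, WX.T @ y)      # weighted least squares
          return beta[0]                                  # fitted value at x0

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 1, 200))
      e = np.zeros(200)                                   # AR(1) errors, rho = 0.6
      for t in range(1, 200):
          e[t] = 0.6 * e[t - 1] + 0.2 * rng.standard_normal()
      y = np.sin(2 * np.pi * x) + e
      grid = np.linspace(0, 1, 50)
      fit = [local_linear(x, y, x0, h=0.08) for x0 in grid]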

  7. Economic Recovery vs. Defense Spending.

    ERIC Educational Resources Information Center

    De Grasse, Robert; Murphy, Paul

    1981-01-01

    Evaluates President Reagan's proposed military buildup in light of the cuts such expenditures would necessitate in approximately 300 domestic programs. Suggests that the dramatic proposed increase in military spending risks higher inflation and slower economic growth. Concludes with a plea for rethinking of Reagan's dramatic shift in national…

  8. Patient-dependent count-rate adaptive normalization for PET detector efficiency with delayed-window coincidence events

    NASA Astrophysics Data System (ADS)

    Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli

    2015-07-01

    Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method that uses the delayed-window coincidence events from the scanned phantom or patient. The proposed method can dramatically reduce the 'ring' artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which can generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method leads to better uniformity in reconstructed images by removing ring artifacts, and to more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit of simplifying the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.

  9. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding (HEVC) over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode-pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  10. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before the edge is separated from the background; usually, two static values are chosen as the thresholds based on the developers' experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
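
    The paper's cross-zero-point interpolation is not reproduced here; as a hedged illustration of automatic Canny threshold selection, the widely used median-based heuristic with OpenCV looks like this (the input file name is hypothetical):

      import cv2
      import numpy as np

      def auto_canny(gray, sigma=0.33):
          """Set Canny thresholds from the image median (a common heuristic,
          not the cross-zero-point interpolation proposed in the paper)."""
          v = float(np.median(gray))
          lo = int(max(0, (1.0 - sigma) * v))
          hi = int(min(255, (1.0 + sigma) * v))
          return cv2.Canny(gray, lo, hi)

      img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
      edges = auto_canny(img)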

  11. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT. Optical experiments are carried out to validate the effectiveness of the proposed method.

  12. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. At the same time, however, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether the prediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of its motion vector. Experimental results show that with the proposed method the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.

  13. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or they may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to make the recognition decision. The FERET database is used for evaluating the proposed method, and the results are encouraging.

  14. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to the output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of the dominantly important patents.

  15. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to the output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of the dominantly important patents. PMID:28135304

  16. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to the blade-tip clearance, and a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while maintaining computational precision. The presented research offers an effective way to carry out robust optimization design of turbine blade-tip radial running clearance.

  17. Self Assembled Dipole Monolayers on CNTs: Effect on Transport and Charge Collection

    NASA Astrophysics Data System (ADS)

    Cook, Alexander; Lee, Bumsu; Kuznetsov, Alexander; Podzorov, Vitaly; Zakhidov, Anvar

    2010-03-01

    We propose a method of quickly and dramatically increasing the conductivity of carbon nanotubes via growth of a self assembled monolayer (SAM) of fluoroalkyl trichlorosilane dipoles following the method demonstrated with organic semiconductors in [1,2]. Growth of a SAM on carbon nanotubes results in a strong p-type doping which improves the conductivity by a factor of two or more. Additionally, this doping is nonvolatile and persists in high vacuum and inert atmospheres. Improvements to conductivity are most dramatic in the case of predominantly semi-conducting, single walled carbon nanotubes (SWCNT) due to the remarkable introduction of about 1.2e14 holes/sq. cm, but this method is also an effective means to improve metallic, multi-walled carbon nanotubes (MWCNT). We will demonstrate improvement of transport and charge collection properties of both SWCNTs and MWCNTs by these SAM coatings in FETs and also in organic photovoltaic solar cells and in OLEDs. [1] M. F. Calhoun et al. Nature Materials 7, 84 - 89 (2008). [2] C. Y. Kao et al. Adv. Func. Mater. 19, 1 (2009).

  18. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in such a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the computational applicability, accuracy, and efficiency of the proposed method.
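
    A minimal sketch of the Chebyshev surrogate idea for one interval input, fitting natural frequency samples at Chebyshev nodes and then bounding the response by sweeping the cheap surrogate, under a toy 2-DOF system, is given below (all parameters, the stand-in model, and the surrogate degree are illustrative assumptions):

      import numpy as np

      def natural_freq(eps):
          """Fundamental frequency [Hz] of a toy 2-DOF system whose stiffness
          carries an interval perturbation eps (illustrative stand-in for FE)."""
          k = 1e4 * (1.0 + eps)
          K = np.array([[2 * k, -k], [-k, k]])
          M = np.diag([1.0, 1.5])
          lam = np.linalg.eigvals(np.linalg.solve(M, K))
          return np.sqrt(np.min(lam.real)) / (2 * np.pi)

      lo, hi, deg = -0.10, 0.10, 6                    # interval bounds, surrogate degree
      nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
      eps_nodes = lo + (hi - lo) * (nodes + 1) / 2    # Chebyshev nodes mapped to [lo, hi]
      surrogate = np.polynomial.Chebyshev.fit(
          eps_nodes, [natural_freq(e) for e in eps_nodes], deg, domain=[lo, hi])

      sweep = surrogate(np.linspace(lo, hi, 2001))    # cheap dense sweep of surrogate
      print("frequency bounds [Hz]:", sweep.min(), sweep.max())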

  19. Effects of random tooth profile errors on the dynamic behaviors of planetary gears

    NASA Astrophysics Data System (ADS)

    Xun, Chao; Long, Xinhua; Hua, Hongxing

    2018-02-01

    In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile errors are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. With the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs highly efficiently and provides a link between the manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and on the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help decrease the nominal value and the deviation of the vibration amplitudes.

  20. Relativistic Acceleration of Electrons Injected by a Plasma Mirror into a Radially Polarized Laser Beam.

    PubMed

    Zaïm, N; Thévenet, M; Lifschitz, A; Faure, J

    2017-09-01

    We propose a method to generate femtosecond, relativistic, and high-charge electron bunches using few-cycle and tightly focused radially polarized laser pulses. In this scheme, the incident laser pulse reflects off an overdense plasma that injects electrons into the reflected pulse. Particle-in-cell simulations show that the plasma injects electrons ideally, resulting in a dramatic increase of charge and energy of the accelerated electron bunch in comparison to previous methods. This method can be used to generate femtosecond pC bunches with energies in the 1-10 MeV range using realistic laser parameters corresponding to current kHz laser systems.

  1. Efficient generation of holographic news ticker in holographic 3DTV

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-08-01

    A news ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. For breaking news, fast creation is needed because the information must be sent quickly, and news tickers will remain if holographic 3-D broadcasting systems are introduced in the future. Several approaches for the generation of CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either need much computation time or a huge memory size for the look-up table. Recently, a novel LUT (N-LUT) method for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and no loss of computational speed was proposed. We therefore propose a method to efficiently generate a holographic news ticker in holographic 3DTV or 3-D movies using the N-LUT method. The proposed method largely consists of five steps: construction of the LUT for each character, extraction of the characters in the news ticker, generation and shifting of the CGH pattern for the news ticker using the per-character LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the news ticker, and reconstruction of the holographic 3-D video with the news ticker. To confirm the proposed method, a car moving in front of a castle is used as the 3-D video, and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirmed the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.

  2. A signal-based fault detection and classification method for heavy haul wagons

    NASA Astrophysics Data System (ADS)

    Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan

    2017-12-01

    This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons, considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers, mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on cross-correlation analyses of the sensor-collected acceleration signals. Bolster spring fault conditions are the focus of this paper, including two different levels (small faults and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40 t axle load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under the proposed fault conditions and to demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and to identify the fault location.
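
    The seven FIs are not specified in the abstract; a generic cross-correlation-based indicator between the two carbody acceleration signals can nevertheless be sketched as follows (the signal model, window, and fault scaling are assumptions, not one of the paper's FIs verbatim):

      import numpy as np

      def xcorr_indicator(front_left, rear_right):
          """Peak normalized cross-correlation between two acceleration signals;
          a drop in this value flags asymmetric suspension behaviour."""
          a = (front_left - front_left.mean()) / front_left.std()
          b = (rear_right - rear_right.mean()) / rear_right.std()
          c = np.correlate(a, b, mode="full") / len(a)
          return c.max()

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 2000)
      healthy = np.sin(2 * np.pi * 1.5 * t)                 # toy bounce response
      fl = healthy + 0.1 * rng.standard_normal(t.size)
      rr = healthy + 0.1 * rng.standard_normal(t.size)
      rr_fault = 0.6 * healthy + 0.3 * rng.standard_normal(t.size)  # degraded spring (toy)
      print(xcorr_indicator(fl, rr), xcorr_indicator(fl, rr_fault))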

  3. Enhancement of IVR images by combining an ICA shrinkage filter with a multi-scale filter

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Wei; Matsuo, Kiyotaka; Han, Xianhua; Shimizu, Atsumoto; Shibata, Koichi; Mishina, Yukio; Mukuta, Yoshihiro

    2007-11-01

    Interventional radiology (IVR) is an important technique to visualize and diagnose vascular disease. In real medical applications, a weak x-ray source is used for imaging in order to reduce the radiation dose, resulting in low-contrast, noisy images. It is therefore important to develop a method that smooths out the noise while enhancing the vascular structure. In this paper, we propose combining an ICA shrinkage filter with a multiscale filter for the enhancement of IVR images: the ICA shrinkage filter is used for noise reduction, and the multiscale filter is used for enhancement of the vascular structure. Experimental results show that the proposed method can dramatically improve image quality without any edge blurring; simultaneous noise reduction and vessel enhancement are achieved.

  4. Efficient generation of 3D hologram for American Sign Language using look-up table

    NASA Astrophysics Data System (ADS)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages giving the greatest help to communication for hearing-impaired people. Current 2-D broadcasting and 2-D movies use ASL to convey information, help viewers understand the situation of a scene, and translate foreign languages, and because of this usefulness ASL will not disappear in future three-dimensional (3-D) broadcasting or 3-D movies. Several approaches for the generation of CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method; however, these methods either need much computation time or a huge memory size for the look-up table. Recently, a novel LUT (N-LUT) method for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and no loss of computational speed was proposed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for each ASL image, extraction of the characters from the script or situation, retrieval of the fringe patterns for the characters from the per-ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.

  5. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing using GPU. We adopted CUDA, an application programming interface for GPU by NVIDIA, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  6. Fast exposure time decision in multi-exposure HDR imaging

    NASA Astrophysics Data System (ADS)

    Piao, Yongjie; Jin, Guang

    2012-10-01

    Currently available imaging and display systems suffer from insufficient dynamic range and cannot capture all the information in a high dynamic range (HDR) scene. The number of low dynamic range (LDR) image samples and the speed of the exposure time decision dramatically affect the real-time performance of the system. In order to realize a real-time HDR video acquisition system, this paper proposes a fast and robust method for exposure time selection in under- and over-exposed areas based on the system response function. The method exploits the monotonicity of the imaging system: the exposure time is first adjusted to an initial value so that the median value of the image equals the middle value of the system output range, and the exposure time is then adjusted so that the pixel values on the two sides of the histogram reach the middle value of the system output range. Three low dynamic range images are thus acquired. Experiments show that the proposed method for adjusting the initial exposure time converges in two iterations, which is faster and more stable than the average gray control method. For exposure time adjustment in under- and over-exposed areas, the proposed method uses the dynamic range of the system more efficiently than a fixed exposure time method.
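
    A sketch of the initial exposure-time step, exploiting monotonicity to drive the image median to the middle of the output range, using a simulated camera response, is shown below (the linear response model, target value, and scene statistics are assumptions):

      import numpy as np

      def capture(scene, t):
          """Toy monotonic camera model: radiance * exposure, clipped to 8 bits."""
          return np.clip(scene * t, 0, 255)

      def initial_exposure(scene, target=128, t=1.0, iters=5):
          """Scale exposure so the image median hits the mid output value;
          with a monotonic (here linear) response this converges very quickly."""
          for _ in range(iters):
              med = np.median(capture(scene, t))
              if med == 0:                  # fully underexposed: just increase
                  t *= 2.0
              else:
                  t *= target / med         # monotonicity justifies this update
          return t

      rng = np.random.default_rng(0)
      scene = rng.lognormal(mean=3.0, sigma=1.5, size=(480, 640))  # HDR-ish radiance
      print("initial exposure:", initial_exposure(scene))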

  7. Robust EM Continual Reassessment Method in Oncology Dose Finding

    PubMed Central

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
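
    The EM machinery for unobserved late-onset toxicities is beyond a short sketch, but the basic one-parameter CRM update that the design extends can be written directly (the skeleton, normal prior, target rate, and toy trial data are illustrative assumptions):

      import numpy as np

      skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.50])  # prespecified tox probs
      target = 0.25
      doses = np.array([0, 0, 1, 1, 2, 2])                 # administered dose levels
      dlt = np.array([0, 0, 0, 1, 0, 1])                   # observed toxicities

      a = np.linspace(-4, 4, 4001)                         # grid over model parameter
      prior = np.exp(-a ** 2 / (2 * 2.0))                  # N(0, sigma^2 = 2), unnormalized
      p = skeleton[doses][None, :] ** np.exp(a)[:, None]   # power model: p_d ** exp(a)
      lik = np.prod(p ** dlt * (1 - p) ** (1 - dlt), axis=1)
      post = prior * lik
      post /= np.trapz(post, a)                            # normalized posterior of a

      p_tox = np.array([np.trapz(post * skeleton[d] ** np.exp(a), a)
                        for d in range(len(skeleton))])    # posterior mean tox probs
      next_dose = int(np.argmin(np.abs(p_tox - target)))
      print("posterior tox estimates:", np.round(p_tox, 3), "next dose:", next_dose)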

  8. Advances in mixed-integer programming methods for chemical production scheduling.

    PubMed

    Velez, Sara; Maravelias, Christos T

    2014-01-01

    The goal of this paper is to critically review advances in the area of chemical production scheduling over the past three decades and then present two recently proposed solution methods that have led to dramatic computational enhancements. First, we present a general framework and problem classification and discuss modeling and solution methods with an emphasis on mixed-integer programming (MIP) techniques. Second, we present two solution methods: (a) a constraint propagation algorithm that allows us to compute parameters that are then used to tighten MIP scheduling models and (b) a reformulation that introduces new variables, thus leading to effective branching. We also present computational results and an example illustrating how these methods are implemented, as well as the resulting enhancements. We close with a discussion of open research challenges and future research directions.

  9. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems.

    PubMed

    Lyu, Weiwei; Cheng, Xianghong

    2017-11-28

    Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established that contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.

  10. Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks

    NASA Astrophysics Data System (ADS)

    Din, Der-Rong; Huang, Jen-Shen

    2014-03-01

    As networks grow in size and complexity, the chance and the impact of failures increase dramatically, and pre-allocated backup resources cannot provide a 100% protection guarantee when continuous failures occur in a network. In this paper, the multicast backup reprovisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks under link failures is studied. We focus on how to recover the protecting capabilities of the Hamiltonian cycle against subsequent link failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by a previous link failure. Since this problem is hard, an algorithm consisting of several heuristics and a genetic algorithm (GA) is proposed to solve it. Simulation results indicate that the proposed algorithm can solve this problem efficiently.

  11. Stochastic Spectral Descent for Discrete Graphical Models

    DOE PAGES

    Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...

    2015-12-14

    Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.

  12. On Dramatic Instruction: Towards a Taxonomy of Methods.

    ERIC Educational Resources Information Center

    Courtney, Richard

    1987-01-01

    Examines the many possible methods used by instructors who work with dramatic action: in educational drama, drama therapy, social drama, and theater. Discusses an emergent taxonomy whereby instructors choose either spontaneous/formal, overt/covert, or intrinsic/extrinsic methods. (JC)

  13. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Its standard assumption is ‘stationarity’, and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces searching space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates more stably high prediction accuracy and significantly improved computation efficiency, even with no prior knowledge and parameter settings.

  14. Enriched Imperialist Competitive Algorithm for system identification of magneto-rheological dampers

    NASA Astrophysics Data System (ADS)

    Talatahari, Siamak; Rahbari, Nima Mohajer

    2015-10-01

    In the current research, the imperialist competitive algorithm is dramatically enhanced, and a new optimization method dubbed the Enriched Imperialist Competitive Algorithm (EICA) is introduced to deal with highly non-linear optimization problems. To closely examine its functionality and efficacy, the proposed metaheuristic optimization approach is employed to solve the parameter identification of two different types of hysteretic Bouc-Wen models simulating the non-linear behavior of MR dampers. Two types of experimental data are used in the optimization problems to examine the robustness of the proposed EICA. The obtained results clearly demonstrate the high adaptability of EICA to such non-linear and hysteretic problems.
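
    For context, the Bouc-Wen hysteresis model whose parameters the EICA identifies has a standard evolution equation; a simple forward-Euler simulation under an imposed sinusoidal displacement is sketched below (the parameter values are illustrative, not identified values from the paper):

      import numpy as np

      # Standard Bouc-Wen hysteresis: F = alpha*z + c0*xdot + k0*x, with
      # zdot = A*xdot - beta*|xdot|*|z|**(n-1)*z - gamma*xdot*|z|**n.
      A_bw, beta, gamma, n_bw = 1.0, 0.5, 0.5, 2.0   # illustrative parameters
      alpha, c0, k0 = 800.0, 50.0, 500.0

      dt = 1e-4
      t = np.arange(0, 2.0, dt)
      x = 0.01 * np.sin(2 * np.pi * 2.0 * t)         # imposed displacement [m]
      xdot = np.gradient(x, dt)

      z = np.zeros_like(t)
      for i in range(len(t) - 1):                    # forward Euler on z
          zd = (A_bw * xdot[i]
                - beta * abs(xdot[i]) * abs(z[i]) ** (n_bw - 1) * z[i]
                - gamma * xdot[i] * abs(z[i]) ** n_bw)
          z[i + 1] = z[i] + dt * zd

      force = alpha * z + c0 * xdot + k0 * x         # hysteretic damper force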

  15. Fast depth decision for HEVC inter prediction based on spatial and temporal correlation

    NASA Astrophysics Data System (ADS)

    Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi

    2016-07-01

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation: the spatial correlation utilizes the coding tree unit (CTU) splitting information, and the temporal correlation utilizes the CTU referenced by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.

  16. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.

    PubMed

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-06-01

    Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not developed at a commensurate pace. One of the difficulties hindering this development is the disparity in the sets of covariates among the models in the literature: if the sets of covariates differ across models, the interpretation of the coefficients will differ, making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often run into problems because the covariance matrix of the coefficients (i.e. the within-study correlations) or individual patient data are not necessarily available. This study therefore proposes a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which is required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.
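
    For reference, the generalized least squares estimator underlying the synthesis is the generic form below; the paper's bias-correction terms and covariance-recovery step are not shown:

      \hat{\beta}_{\mathrm{GLS}} \;=\; \left( X^{\top} \Sigma^{-1} X \right)^{-1} X^{\top} \Sigma^{-1} y ,

    where Σ collects the within- and between-study covariances of the reported coefficients, which is exactly the quantity the proposed correlation-recovery approach makes available without individual patient data.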

  17. Determination method for nitromethane in workplace air.

    PubMed

    Takeuchi, Akito; Nishimura, Yasuki; Kaifuku, Yuichiro; Imanaka, Tsutoshi; Natsumeda, Shuichiro; Ota, Hirokazu; Yamada, Shu; Kurotani, Ichiro; Sumino, Kimiaki; Kanno, Seiichiro

    2010-01-01

    The purpose of this research was to develop a determination method for nitromethane (NM) in workplace air for risk assessment. A suitable sampler and appropriate desorption condition were selected by a recovery test in which a spiked sampler was used. The characteristics of the proposed method, such as recovery, detection limit, and reproducibility, and the storage stability of the sample were examined. A sampling tube containing bead-shaped activated carbon was chosen as the sampler. NM in the sampler was desorbed with acetone and analyzed by a gas chromatograph equipped with a flame ionization detector. The recoveries of NM from the spiked sampler were 81-97% and 80-98% for personal exposure monitoring and working environment measurement, respectively. On the first day of storage in a refrigerator, the recovery from the spiked samplers exceeded 90%; however, it decreased dramatically with increasing storage time. In particular, the decrease was more remarkable for the smaller spiked amounts. The overall LOQ was 2 microg/sample. The relative standard deviation, which represents the overall reproducibility, was 1.1-4.0%. The proposed method enables 4-hour personal exposure monitoring of NM at concentrations equaling 0.001-2 times the threshold limit value-time-weighted average (TLV-TWA: 20 ppm) proposed by the American Conference of Governmental Industrial Hygienists, as well as 10-minute working environment measurement at concentrations equaling 0.02-2 times TLV-TWA. Thus, the proposed method will be useful for estimating worker exposure to NM.

  18. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are usually among the state of the art for segmenting medical images; however, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images: cortical thickness is computed using sphere fitting, and cortical porosity using mathematical morphology operations. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimation of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
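
    The paper estimates thickness by sphere fitting; a cruder but commonly used distance-transform proxy for the local thickness of a binary cortical mask can be sketched with SciPy (the mask below is a synthetic shell, and the voxel size is an assumed HR-pQCT-like value):

      import numpy as np
      from scipy import ndimage

      def local_thickness_proxy(mask, voxel_mm=0.082):
          """Approximate local thickness as twice the Euclidean distance from
          each in-mask voxel to the background (a proxy, not sphere fitting)."""
          dist = ndimage.distance_transform_edt(mask) * voxel_mm
          return 2.0 * dist[mask].mean(), 2.0 * dist[mask].max()

      # Toy 'cortical shell': a hollow sphere in a 64^3 volume.
      g = np.mgrid[0:64, 0:64, 0:64] - 32
      r = np.sqrt((g ** 2).sum(axis=0))
      mask = (r > 20) & (r < 25)
      print(local_thickness_proxy(mask))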

  19. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method are reduced to 86.95% and 86.53%, and 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  20. High-Fidelity Roadway Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

    Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max; this approach requires highly specialized and sophisticated skills as well as massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity, and this limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications that require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment; the method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads and the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.

  1. Achieving highly efficient and broad-angle polarization beam filtering using epsilon-near-zero metamaterials mimicked by metal-dielectric multilayers

    NASA Astrophysics Data System (ADS)

    Wu, Feng

    2018-03-01

    We report a highly efficient and broad-angle polarization beam filter at visible wavelengths using an anisotropic epsilon-near-zero metamaterial mimicked by a multilayer composed of alternating subwavelength magnesium fluoride and silver layers. The underlying physics can be explained by the dramatic difference between the iso-frequency curves of the two orthogonal polarizations in anisotropic epsilon-near-zero metamaterials. The transmittance for the two orthogonal polarizations and the polarization extinction ratio are calculated via the transfer matrix method to assess the comprehensive performance of the proposed polarization beam filter. From the simulation results, the proposed polarization beam filter is highly efficient (the polarization extinction ratio is far larger than two orders of magnitude) and has a broad operating angle range (from 30° to 75°). Finally, we show that proper tailoring of the number of periods enables us to obtain high comprehensive performance of the proposed polarization beam filter.
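
    A compact transfer-matrix sketch for the TE transmittance of a MgF2/Ag multilayer at oblique incidence is given below; the layer thicknesses, period count, wavelength, and silver index are illustrative assumptions, not the paper's design values.

      import numpy as np

      def tmm_te_transmittance(n_layers, d_layers, wl, theta0_deg, n_in=1.0, n_out=1.0):
          """TE-polarization transmittance of a multilayer via the standard
          characteristic-matrix method (complex indices allowed)."""
          th0 = np.deg2rad(theta0_deg)
          kx = n_in * np.sin(th0)                   # conserved tangential component
          eta0 = n_in * np.cos(th0)
          etas = np.sqrt(n_out ** 2 - kx ** 2 + 0j)  # n_out * cos(theta_out)
          M = np.eye(2, dtype=complex)
          for n, d in zip(n_layers, d_layers):
              eta = np.sqrt(n ** 2 - kx ** 2 + 0j)   # n * cos(theta) in the layer
              delta = 2 * np.pi * eta * d / wl
              M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                                [1j * eta * np.sin(delta), np.cos(delta)]])
          B, C = M @ np.array([1.0, etas])
          t = 2 * eta0 / (eta0 * B + C)
          return (etas.real / eta0) * abs(t) ** 2

      wl = 550e-9                                   # visible wavelength (assumption)
      n_MgF2, n_Ag = 1.38, 0.12 + 3.45j             # illustrative indices at 550 nm
      stack_n = [n_MgF2, n_Ag] * 5                  # 5 periods (assumption)
      stack_d = [60e-9, 15e-9] * 5
      for angle in (30, 45, 60, 75):
          print(angle, tmm_te_transmittance(stack_n, stack_d, wl, angle))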

  2. Engaging in Dramatic Activities in English as a Foreign Language Classes at the University Level

    ERIC Educational Resources Information Center

    Algarra Carrasco, Victoria

    2012-01-01

    In this article, we discuss how, through dramatic activities, fiction and reality can work together to help the English as a Foreign language learner communicate in a more personal and meaningful way. The kind of activities proposed are designed to help engender a space where students can personally engage with each other in an atmosphere that is…

  3. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems

    PubMed Central

    Lyu, Weiwei

    2017-01-01

    Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established that contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation can adjust the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method. PMID:29182592

  4. MCTDH on-the-fly: Efficient grid-based quantum dynamics without pre-computed potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Richings, Gareth W.; Habershon, Scott

    2018-04-01

    We present significant algorithmic improvements to a recently proposed direct quantum dynamics method, based upon combining well established grid-based quantum dynamics approaches and expansions of the potential energy operator in terms of a weighted sum of Gaussian functions. Specifically, using a sum of low-dimensional Gaussian functions to represent the potential energy surface (PES), combined with a secondary fitting of the PES using singular value decomposition, we show how standard grid-based quantum dynamics methods can be dramatically accelerated without loss of accuracy. This is demonstrated by on-the-fly simulations (using both standard grid-based methods and multi-configuration time-dependent Hartree) of both proton transfer on the electronic ground state of salicylaldimine and the non-adiabatic dynamics of pyrazine.

  5. Marshall Barber and the century of microinjection: from cloning of bacteria to cloning of everything.

    PubMed

    Korzh, Vladimir; Strähle, Uwe

    2002-08-01

    A hundred years ago, Dr. Marshall A. Barber proposed a new technique - the microinjection technique. He developed this method initially to clone bacteria and to confirm the germ theory of Koch and Pasteur. Later on, he refined his approach and was able to manipulate nuclei in protozoa and to implant bacteria into plant cells. Continuous improvement and adaptation of this method to new applications dramatically changed experimental embryology and cytology and led to the formation of several new scientific disciplines including animal cloning as one of its latest applications. Interestingly, microinjection originated as a method at the crossroad of bacteriology and plant biology, demonstrating once again the unforeseen impact that basic research in an unrelated field can have on the development of entirely different disciplines.

  6. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are one of the suitable base classifiers for ensemble systems because of their fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reducing the complexity of the ensemble system. PMID:27835638

  8. Evaluating the Performance of the IEEE Standard 1366 Method for Identifying Major Event Days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Sohn, Michael D.

    IEEE Standard 1366 offers a method for segmenting reliability performance data to isolate the effects of major events from the underlying year-to-year trends in reliability. Recent analysis by the IEEE Distribution Reliability Working Group (DRWG) has found that the reliability performance of some utilities differs from the expectations that helped guide the development of the Standard 1366 method. This paper proposes quantitative metrics to evaluate the performance of the Standard 1366 method in identifying major events and in reducing year-to-year variability in utility reliability. The metrics are applied to a large sample of utility-reported reliability data to assess performance of the method with alternative specifications that have been considered by the DRWG. We find that none of the alternatives performs uniformly 'better' than the current Standard 1366 method. That is, none of the modifications uniformly lowers the year-to-year variability in the System Average Interruption Duration Index without major events. Instead, for any given alternative, while it may lower the value of this metric for some utilities, it also increases it for other utilities (sometimes dramatically). Thus, we illustrate some of the trade-offs that must be considered in using the Standard 1366 method and highlight the usefulness of the metrics we have proposed in conducting these evaluations.
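    For context, a sketch of the Standard 1366 segmentation rule as it is commonly described (the "2.5 beta" methodology): a day is flagged as a major event day when its daily SAIDI exceeds exp(alpha + 2.5*beta), where alpha and beta are the mean and standard deviation of the natural log of the nonzero daily SAIDI values. The data below are invented for illustration.

```python
import numpy as np

# "2.5 beta" classification sketch: compute the log-normal threshold T_MED
# from the (nonzero) daily SAIDI history, then flag days exceeding it.
daily_saidi = np.array([1.2, 0.8, 2.1, 0.0, 1.5, 40.0, 0.9, 1.1])  # made up

logs = np.log(daily_saidi[daily_saidi > 0])
alpha, beta = logs.mean(), logs.std(ddof=1)
t_med = np.exp(alpha + 2.5 * beta)

major_event_days = daily_saidi > t_med
print("T_MED =", t_med, "-> MED flags:", major_event_days)
```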

  9. Real-time management of a multipurpose water reservoir with a heteroscedastic inflow model

    NASA Astrophysics Data System (ADS)

    Pianosi, F.; Soncini-Sessa, R.

    2009-10-01

    Stochastic dynamic programming has been extensively used as a method for designing optimal regulation policies for water reservoirs. However, the potential of this method is dramatically reduced by its computational burden, which often forces one to introduce strong approximations in the model of the system, especially in the description of the reservoir inflow. In this paper, an approach that partially remedies this problem is proposed and applied to a real-world case study. It involves solving the management problem on-line, using a reduced model of the system and the inflow forecast provided by a dynamic model. By doing so, all the hydrometeorological information that is available in real time is fully exploited. The model proposed here for inflow forecasting is a nonlinear, heteroscedastic model that provides both the expected value and the standard deviation of the inflow through dynamic relations. The effectiveness of such a model for the purpose of reservoir regulation is evaluated through simulation and comparison with the results provided by conventional homoscedastic inflow models.
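    An illustrative sketch (not the authors' calibrated model) of a one-step heteroscedastic inflow forecaster in which both the conditional mean and the conditional standard deviation are dynamic functions of the previous inflow; all coefficients are hypothetical.

```python
import numpy as np

# Heteroscedastic one-step forecast: unlike a homoscedastic model, the
# predictive standard deviation also depends on the current inflow state.
def forecast_inflow(a_prev):
    mean = 0.7 * a_prev + 5.0 * np.tanh(0.1 * a_prev)   # expected inflow
    std = 0.5 + 0.2 * abs(a_prev)                        # grows with inflow
    return mean, std

mean, std = forecast_inflow(a_prev=30.0)
print(f"one-step forecast: {mean:.1f} +/- {std:.1f} m^3/s")
```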

  10. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.

    PubMed

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-09-09

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused estimate from each radar is fed into an extended Kalman filter (EKF) to perform the first filtering. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.

  11. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and solutions lacking geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
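    A conceptual sketch of one iteration under stated assumptions: shot_gradient and structure_oriented_smooth are hypothetical stand-ins for the per-shot adjoint-state gradient and the geological smoothing filter, which the abstract does not spell out.

```python
import numpy as np

# One mini-batch FWI step: accumulate the gradient over a random subset of
# shots, apply the (structure-oriented) geological smoothing constraint,
# then take a gradient-descent update on the velocity model.
def minibatch_fwi_step(model, all_shots, batch_size, step, rng,
                       shot_gradient, structure_oriented_smooth):
    batch = rng.choice(len(all_shots), size=batch_size, replace=False)
    grad = np.zeros_like(model)
    for s in batch:                                  # subset of shots only
        grad += shot_gradient(model, all_shots[s])
    grad = structure_oriented_smooth(grad, model)    # geological constraint
    return model - step * grad

# usage sketch: model = minibatch_fwi_step(model, shots, 8, 1e-3, rng, g, sm)
```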

  12. Design of fluidic self-assembly bonds for precise component positioning

    NASA Astrophysics Data System (ADS)

    Ramadoss, Vivek; Crane, Nathan B.

    2008-02-01

    Self-assembly is a promising alternative to conventional pick-and-place robotic assembly of micro components. Its benefits include parallel integration of parts with low equipment costs. Various approaches to self-assembly have been demonstrated, yet demanding applications such as the assembly of micro-optical devices require increased positioning accuracy. This paper proposes a new method for the design of self-assembly bonds that addresses this need. Current methods have zero force at the desired assembly position and low stiffness, which allows small disturbance forces to create significant positioning errors. The proposed method uses a substrate assembly feature to provide a high-accuracy alignment guide for the part. The capillary bond region of the part and substrate is then modified to create a non-zero positioning force that maintains the part in the desired assembly position. Capillary force models show that this force aligns the part to the substrate assembly feature and reduces the sensitivity of part position to process variation. Thus, the new configuration can substantially improve the positioning accuracy of capillary self-assembly, resulting in a dramatic decrease in positioning errors in micro parts. Various binding site designs are analyzed, and guidelines are proposed for the design of an effective assembly bond using this new approach.

  13. Significant Enhancement of MgZnO Metal-Semiconductor-Metal Photodetectors via Coupling with Pt Nanoparticle Surface Plasmons

    NASA Astrophysics Data System (ADS)

    Guo, Zexuan; Jiang, Dayong; Hu, Nan; Yang, Xiaojiang; Zhang, Wei; Duan, Yuhan; Gao, Shang; Liang, Qingcheng; Zheng, Tao; Lv, Jingwen

    2018-06-01

    We proposed and demonstrated MgZnO metal-semiconductor-metal (MSM) ultraviolet (UV) photodetectors assisted by surface plasmons (SPs), prepared by the radio-frequency magnetron sputtering deposition method. After decoration of their surfaces with Pt nanoparticles (NPs), the responsivity of the photodetectors at all electrode spacings (3, 5, and 8 μm) was enhanced dramatically; to our surprise, the degree of enhancement varied with the electrode spacing, which we attribute to the number of SPs gathered between the electrodes. A physical mechanism focused on SPs and the depletion width is given to explain the above results.

  14. [Means and methods of acoustic protection in aviation: current status and outlook for development].

    PubMed

    Soldatov, S K; Bogomolov, A V; Zinkin, V N; Aver'ianov, A A; Rossel's, A V; Patskin, G A; Sokolov, B A

    2011-01-01

    Analysis of the current status of acoustic protection in aviation shows that, despite material progress in the field, the risk of occupational pathologies in flying and technical personnel remains high. The situation is aggravated by the lack of effective personal and crew acoustic protectors. The authors discuss the applicability of innovative materials and technologies, ingenious earphone designs, and modular prefabricated demountable structures. Tests of the proposed personal protectors demonstrated their competitiveness with foreign analogs. Prospective lines of development, e.g., the incorporation of active sound absorption systems into existing passive protectors, are discussed.

  15. A logic-based method to build signaling networks and propose experimental plans.

    PubMed

    Rougny, Adrien; Gloaguen, Pauline; Langonné, Nathalie; Reiter, Eric; Crépieux, Pascale; Poupon, Anne; Froidevaux, Christine

    2018-05-18

    With the dramatic increase in the diversity and sheer quantity of biological data generated, the construction of comprehensive signaling networks that include precise mechanisms can no longer be carried out manually. In this context, we propose a logic-based method for building large signaling networks automatically. Our method is based on a set of expert rules that make explicit the reasoning biologists perform when interpreting experimental results from a wide variety of experiment types. These rules allow formulating all the conclusions that can be inferred from a set of experimental results, and thus building all the possible networks that explain these results. Moreover, given a hypothesis, our system proposes experimental plans to carry out in order to validate or invalidate it. To evaluate the performance of our method, we applied our framework to the reconstruction of the FSHR-induced and EGFR-induced signaling networks. The FSHR is known to induce the transactivation of the EGFR, but very little is known about the resulting FSH- and EGF-dependent network. We built a single network using data underlying both networks. This led to a new hypothesis on the activation of MEK by p38MAPK, which we validated experimentally. These preliminary results represent a first step in the demonstration of a cross-talk between these two major MAP kinase pathways.
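    A toy forward-chaining sketch of the expert-rule idea (not the authors' actual rule base or logic formalism): each rule maps a set of experimental observations to a conclusion about the network, and rules are applied until no new conclusions appear.

```python
# Each rule: (set of required observations, inferred conclusion).
# The single rule below is an invented example, not from the paper.
rules = [
    ({"phospho(MEK) up after FSH", "p38MAPK inhibitor blocks it"},
     "p38MAPK activates MEK"),
]

def infer(observations):
    facts = set(observations)
    changed = True
    while changed:                      # apply rules until a fixpoint
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"phospho(MEK) up after FSH", "p38MAPK inhibitor blocks it"}))
```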

  16. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    NASA Astrophysics Data System (ADS)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, some problems still need specific attention. For example, the lack of enough labeled samples and the high dimensionality of the data are two of the most important issues, and they degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method that uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs are designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs are merged to form a weighted joint graph. The experiments were carried out on two benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL for the AVIRIS Indian Pines and Pavia University data sets, respectively.
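    A minimal sketch of the joint-graph step, assuming dense precomputed affinity matrices and a simple graph-regularized least-squares labeling; the actual construction of the spectral and spatial graphs is omitted.

```python
import numpy as np

# Merge spectral and spatial graph Laplacians with weight `mu`, then solve
# a graph-regularized labeling problem. W_spec and W_spat are assumed
# precomputed affinity matrices; `y` holds +1/-1 on labeled pixels, 0 elsewhere.
def ssl_joint_graph(W_spec, W_spat, y, mu=0.5, lam=1.0):
    def laplacian(W):
        return np.diag(W.sum(axis=1)) - W
    L_joint = mu * laplacian(W_spec) + (1.0 - mu) * laplacian(W_spat)
    labeled = np.diag((y != 0).astype(float))   # anchors the labeled pixels
    f = np.linalg.solve(labeled + lam * L_joint, y)
    return np.sign(f)                           # predicted labels
```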

  17. Monte Carlo Sampling in Fractal Landscapes

    NASA Astrophysics Data System (ADS)

    Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.

    2013-05-01

    We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
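    For reference, the textbook Wang-Landau loop that the paper generalizes (the step-length adaptation itself is not shown): the running density-of-states estimate biases the walk toward rarely visited bins until the visit histogram is roughly flat.

```python
import numpy as np

# Textbook Wang-Landau flat-histogram sampler. `bin_of` maps a state to its
# energy-bin index; `propose` generates a trial state. The log density of
# states `log_g` biases acceptance toward rarely visited bins, and the
# modification factor shrinks whenever the histogram is roughly flat.
def wang_landau(bin_of, propose, x0, n_bins, log_f_min=1e-6):
    rng = np.random.default_rng(0)
    log_g = np.zeros(n_bins)
    hist = np.zeros(n_bins)
    log_f = 1.0
    x, b = x0, bin_of(x0)
    while log_f > log_f_min:
        x_new = propose(x, rng)
        b_new = bin_of(x_new)
        # accept with probability min(1, g[b] / g[b_new])
        if np.log(rng.random()) < log_g[b] - log_g[b_new]:
            x, b = x_new, b_new
        log_g[b] += log_f
        hist[b] += 1
        if hist.min() > 0.8 * hist.mean():      # flatness criterion
            hist[:] = 0.0
            log_f /= 2.0                        # f -> sqrt(f)
    return log_g
```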

  18. An automatic and efficient pipeline for disease gene identification through utilizing family-based sequencing data.

    PubMed

    Song, Dandan; Li, Ning; Liao, Lejian

    2015-01-01

    Because whole-exome sequencing technologies generate enormous amounts of data at lower cost and in shorter time, they provide dramatic opportunities for identifying the disease genes implicated in Mendelian disorders. Since upwards of thousands of genomic variants are sequenced in each exome, it is challenging to filter for pathogenic variants in protein-coding regions while keeping the number of missed true variants low. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders is designed by exploiting a combination of variant filtering steps to analyze family-based exome sequencing data. Recent studies of Freeman-Sheldon disease are revisited, and the results show that the proposed method outperforms other existing candidate gene identification methods.

  19. Development of an online SPE-UHPLC-MS/MS method for the multiresidue analysis of the 17 compounds from the EU "Watch list".

    PubMed

    Gusmaroli, Lucia; Insa, Sara; Petrovic, Mira

    2018-04-24

    During the last decades, the quality of aquatic ecosystems has been threatened by increasing levels of pollution caused by the discharge of man-made chemicals, both via the accidental release of pollutants and as a consequence of the constant outflow of inadequately treated wastewater effluents. For this reason, the European Union is updating its legislation with the aim of limiting the release of emerging contaminants. The Commission Implementing Decision (EU) 2015/495, published in March 2015, drafts a "Watch list" of compounds to be monitored Europe-wide. In this study, a methodology based on online solid-phase extraction (SPE) ultra-high-performance liquid chromatography coupled to a triple-quadrupole mass spectrometer (UHPLC-MS/MS) was developed for the simultaneous determination of the 17 compounds listed therein. The proposed method offers advantages over already available methods, such as versatility (all 17 compounds can be analyzed simultaneously), shorter analysis time, robustness, and sensitivity. The online sample preparation minimized sample manipulation and dramatically reduced the sample volume and time required, making the analysis fast and reliable. The method was successfully validated in surface water and in influent and effluent wastewater. Limits of detection ranged from sub- to low-nanogram-per-liter levels, in compliance with the EU limits, with the only exception of EE2. Graphical abstract: Schematic of the workflow for the analysis of the Watch list compounds.

  20. Modification of Kolmogorov-Smirnov test for DNA content data analysis through distribution alignment.

    PubMed

    Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan

    2007-10-01

    The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotypes of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, often detecting statistically significant differences that are not biologically meaningful. One major reason is that systematic drift commonly exists among the distributions in HTS studies, due to factors such as instrument variation, plate edge effects, and accidental differences in sample handling. In particular, in high-content cellular imaging experiments the location shift can be dramatic, since some compounds are themselves fluorescent. This oversensitivity of the K-S test is particularly pronounced in cellular assays, where the sample sizes are very large (usually several thousand). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions be "normalized" by density curve alignment before the K-S test is conducted. In applications to simulated and real experimental data, the results show that the proposed method has improved specificity.
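    A sketch of the alignment idea with a deliberately simple stand-in: matching medians before the two-sample K-S test removes a pure location shift, whereas the paper aligns full density curves.

```python
import numpy as np
from scipy import stats

# Align the second sample to the first by median shift (a crude stand-in
# for density-curve alignment), then run the standard two-sample K-S test.
def aligned_ks_test(x, y):
    y_aligned = y - (np.median(y) - np.median(x))
    return stats.ks_2samp(x, y_aligned)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 5000)
y = rng.normal(0.3, 1.0, 5000)   # pure location shift relative to x
print(aligned_ks_test(x, y))     # shift removed -> p-value typically large
```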

  1. Robust Operation of Soft Open Points in Active Distribution Networks with High Penetration of Photovoltaic Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Ji, Haoran; Wang, Chengshan

    Distributed generators (DGs), including photovoltaic panels (PVs), are being integrated into active distribution networks (ADNs) at a dramatic pace. Due to its strong volatility and uncertainty, the high penetration of PV generation greatly exacerbates voltage violations in ADNs. However, the emerging flexible interconnection technology based on soft open points (SOPs) provides increased controllability and flexibility for system operation. To fully exploit the regulation ability of SOPs to address the problems caused by PV, this paper proposes a robust optimization method to achieve robust optimal operation of SOPs in ADNs. A two-stage adjustable robust optimization model is built to tackle the uncertainties of PV outputs, in which robust operation strategies of SOPs are generated to eliminate the voltage violations and reduce the power losses of ADNs. A column-and-constraint generation (C&CG) algorithm is developed to solve the proposed robust optimization model, which is formulated as a second-order cone program (SOCP) to facilitate accuracy and computational efficiency. Case studies on the modified IEEE 33-node system and comparisons with a deterministic optimization approach verify the effectiveness and robustness of the proposed method.

  2. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In the intelligent factory of the future, a robotic manipulator must work efficiently and safely in a human-robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue that must be resolved in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, good expansion properties, and faster exploration than other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle-avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method extends toward a directional target node, which dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, continuously curved executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via MATLAB static simulations, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle-avoidance experiment with a robotic manipulator in a dynamic unstructured environment. The proposed method not only has great practical engineering significance for robotic manipulator obstacle avoidance in an intelligent factory, but also provides a theoretical reference for the path planning of other types of robots.
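    A minimal sketch of the directional (goal-biased) node extension that S-RRT builds on; collision checking and the curvature-constrained path smoothing are omitted, and all parameters are illustrative.

```python
import numpy as np

# One goal-biased RRT extension step: with probability `bias` the sampler
# draws the goal itself, steering tree growth toward it; otherwise it
# samples uniformly within the configuration-space bounds.
def extend(tree, goal, bounds, step=0.1, bias=0.2,
           rng=np.random.default_rng()):
    low, high = bounds
    target = goal if rng.random() < bias else rng.uniform(low, high)
    nearest = min(tree, key=lambda q: np.linalg.norm(q - target))
    direction = target - nearest
    new = nearest + step * direction / np.linalg.norm(direction)
    tree.append(new)                 # collision check omitted in this sketch
    return new
```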

  3. Toward allocative efficiency in the prescription drug industry.

    PubMed

    Guell, R C; Fischbaum, M

    1995-01-01

    Traditionally, monopoly power in the pharmaceutical industry has been measured by profits. An alternative method estimates the deadweight loss of consumer surplus associated with the exercise of monopoly power. Although the upper- and lower-bound estimates for this inefficiency are far apart, they at least suggest a dramatically greater welfare loss than measures of industry profitability would imply. A proposed system would have the U.S. government employ its power of eminent domain to "take" and distribute pharmaceutical patents, providing as "just compensation" the present value of each patent's expected future monopoly profits. Even allowing for the allocative inefficiency of raising taxes to pay for the program, the impact of the proposal on allocative efficiency would be at least as good as the status quo at our lower-bound estimate of monopoly costs, while substantially improving efficiency at or near our upper-bound estimate.

  4. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
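    As a generic illustration of the costly kernel (a product of a matrix function and a vector), SciPy's expm_multiply evaluates exp(tA)v without forming the matrix exponential; this is a standard Krylov/Taylor-type evaluation, not the KSS componentwise approximation proposed in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Sparse 1D Laplacian (Dirichlet), scaled by 1/h^2 with h = 1/n, as a
# stand-in for the stiff operator arising from a discretized PDE.
n = 1000
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * n**2
v = np.ones(n)

w = expm_multiply(1e-5 * A, v)   # w ~ exp(1e-5 * A) v, no dense exponential
print(w[:3])
```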

  5. Improving Search Properties in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.; DeWeese, Scott

    1997-01-01

    With advancing computer processing capabilities, practical computer applications are mostly limited by the amount of human programming required to accomplish a specific task. This necessary human participation creates many problems, such as dramatically increased cost. To alleviate the problem, computers must become more autonomous. In other words, computers must be capable of programming and reprogramming themselves to adapt to changing environments, tasks, demands, and domains. Evolutionary computation offers potential means, but it must be advanced beyond its current practical limitations. Evolutionary algorithms model nature. They maintain a population of structures representing potential solutions to the problem at hand. These structures undergo a simulated evolution by means of mutation, crossover, and Darwinian selective pressure. Genetic programming (GP) is the most promising example of an evolutionary algorithm. In GP, the structures that evolve are trees, a dramatic departure from previously used representations such as the strings of genetic algorithms. The space of potential trees is defined by means of their elements: functions, which label internal nodes, and terminals, which label leaves. By attaching semantic interpretations to those elements, trees can be interpreted as computer programs (given an interpreter), evolved architectures, etc. JSC has begun exploring GP as a potential tool for its long-term project on evolving dextrous robotic capabilities. Last year we identified representation redundancies as the primary source of inefficiency in GP. Subsequently, we proposed a method that uses problem constraints to reduce those redundancies, effectively reducing GP complexity. This method was later implemented at the University of Missouri. This summer, we evaluated the payoff from using problem constraints to reduce search complexity on two classes of problems: learning boolean functions and solving the forward kinematics problem. We also developed and implemented methods that use additional problem heuristics to fine-tune the searchable space, and typing information to further reduce the search space. Additional improvements have been proposed, but they are yet to be explored and implemented.

  6. Membrane adaptive optics

    NASA Astrophysics Data System (ADS)

    Marker, Dan K.; Wilkes, James M.; Ruggiero, Eric J.; Inman, Daniel J.

    2005-08-01

    An innovative adaptive optic is discussed that provides a range of capabilities unavailable with either existing or newly reported research devices. It is believed that this device will be inexpensive and uncomplicated to construct and operate, with a large correction range that should dramatically relax the static and dynamic structural tolerances of a telescope. As the areal density of a telescope primary is reduced, the optimal optical figure and the structural stiffness are inherently compromised, and this phenomenon will require a responsive, range-enhanced wavefront corrector. In addition to correcting for the aberrations of such innovative primary mirrors, sufficient throw remains to provide non-mechanical steering that dramatically improves the field of regard. Time-dependent changes such as thermal disturbances can also be accommodated. The proposed adaptive optic will overcome some of the issues facing conventional deformable mirrors, as well as current and proposed MEMS-based deformable mirrors and liquid-crystal-based adaptive optics. Such a device is scalable to meter-diameter apertures, eliminates high actuation voltages with minimal power consumption, provides long-throw optical path correction, provides polychromatic dispersion-free operation, dramatically reduces the effects of adjacent actuator influence, and provides a nearly 100% useful aperture. This article reveals top-level details of the proposed construction and includes portions of a static, dynamic, and residual aberration analysis. This device will enable certain designs previously conceived by visionaries in the optical community.

  7. The Sulphur Poisoning Behaviour of Gadolinia Doped Ceria Model Systems in Reducing Atmospheres

    PubMed Central

    Gerstl, Matthias; Nenning, Andreas; Iskandar, Riza; Rojek-Wöckner, Veronika; Bram, Martin; Hutter, Herbert; Opitz, Alexander Karl

    2016-01-01

    An array of analytical methods, including surface area determination by gas adsorption using the Brunauer-Emmett-Teller (BET) method, combustion analysis, XRD, ToF-SIMS, TEM, and impedance spectroscopy, has been used to investigate the interaction of gadolinia doped ceria (GDC) with hydrogen sulphide containing reducing atmospheres. It is shown that sulphur is incorporated into the GDC bulk and might lead to phase changes. Additionally, high concentrations of silicon are found on the surface of model composite microelectrodes. Based on these data, a model is proposed to explain the multi-faceted electrochemical degradation behaviour encountered during long-term electrochemical measurements. While the electrochemical bulk properties of GDC stay largely unaffected, the surface polarisation resistance changes dramatically due to silicon segregation and reaction with adsorbed sulphur. PMID:28773771

  8. Economic analysis for transmission operation and planning

    NASA Astrophysics Data System (ADS)

    Zhou, Qun

    2011-12-01

    Restructuring of the electric power industry has caused dramatic changes in the use of the transmission system. Increasing congestion as well as the necessity of integrating renewable energy introduce new challenges and uncertainties into transmission operation and planning. Accurate short-term congestion forecasting facilitates market traders' bidding and trading activities, while the cost sharing and recovery issue is a major impediment to long-term transmission investment for integrating renewable energy. In this research, a new short-term forecasting algorithm is proposed for predicting congestion, LMPs, and other power system variables based on the concept of system patterns. The advantage of this algorithm relative to standard statistical forecasting methods is that structural aspects underlying power market operations are exploited to reduce the forecasting error. The advantage relative to previously proposed structural forecasting methods is that data requirements are substantially reduced. Forecasting results based on a NYISO case study demonstrate the feasibility and accuracy of the proposed algorithm. Moreover, a negotiation methodology is developed to guide transmission investment for integrating renewable energy. Built on Nash bargaining theory, the negotiation of investment plans and payment rates can proceed between renewable generation and transmission companies for cost sharing and recovery. The proposed approach is applied to Garver's six-bus system. The numerical results demonstrate the fairness and efficiency of the approach, which can hence serve as a guideline for renewable energy investors. The results also shed light on the policy-making of renewable energy subsidies.

  9. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) is also discussed for the three reconstruction algorithms. This noise-suppression imaging technique will have great applications in remote sensing and security areas.

  10. 3D Topology Preserving Flows for Viewpoint-Based Cortical Unfolding

    PubMed Central

    Rocha, Kelvin R.; Sundaramoorthi, Ganesh; Yezzi, Anthony J.; Prince, Jerry L.

    2009-01-01

    We present a variational method for unfolding of the cortex based on a user-chosen point of view, as an alternative to more traditional global flattening methods, which incur more distortion around the region of interest. Our approach involves three novel contributions. The first is an energy function and its corresponding gradient flow to measure the average visibility of a region of interest of a surface with respect to a given viewpoint. The second is an additional energy function and flow designed to preserve the 3D topology of the evolving surface. The third is a method that dramatically improves the computational speed of the 3D topology preservation approach by creating a tree structure of the 3D surface and using a recursion technique. Experimental results show that the proposed approach can successfully unfold highly convoluted surfaces such as the cortex while preserving their topology. PMID:19960105

  11. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    PubMed

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in the 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size is smaller than a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of the discretization size when the discretization size is small enough. The three proposed methods can correctly (to a certain precision) simulate Hill function dynamics in the microscopic RDME system.
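    For concreteness, the switch-like Hill dynamics under discussion, h(x) = x^n / (K^n + x^n); the paper's point is that on overly fine RDME meshes this switch effectively degenerates toward a linear function of the discretization size.

```python
import numpy as np

# Hill function: sigmoidal, switch-like response with threshold K and
# steepness controlled by the Hill coefficient n.
def hill(x, K=1.0, n=4):
    return x**n / (K**n + x**n)

x = np.linspace(0.0, 3.0, 7)
print(hill(x))   # near 0 below K, near 1 above K
```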

  12. Enhancing the transmission efficiency by edge deletion in scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Qing; Wang, Di; Li, Guo-Jie

    2007-07-01

    How to improve the transmission efficiency of Internet-like packet-switching networks is one of the most important problems in complex networks as well as for the Internet research community. In this paper we propose a convenient method to enhance the transmission efficiency of scale-free networks dramatically by removing the edges linking to nodes with large betweenness, which we call the “black sheep.” The advantages of our method are its simplicity and practical importance. Since the black-sheep edges are very costly due to their large bandwidth, our method can decrease cost as well as achieve higher network throughput. Moreover, we analyze the curve of the largest betweenness as more and more black-sheep edges are deleted and find that there is a sharp transition at the critical point where the average degree of the nodes ⟨k⟩→2.
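    A small sketch of the pruning rule on a synthetic scale-free graph: repeatedly delete the edge with the largest edge betweenness (the current "black sheep"). The graph size and number of deletions are arbitrary.

```python
import networkx as nx

# Build a small Barabasi-Albert scale-free graph, then iteratively remove
# the edge with the highest betweenness centrality.
G = nx.barabasi_albert_graph(200, 2, seed=0)
for _ in range(10):
    eb = nx.edge_betweenness_centrality(G)
    u, v = max(eb, key=eb.get)        # the current black-sheep edge
    G.remove_edge(u, v)

print("mean degree after pruning:",
      2 * G.number_of_edges() / G.number_of_nodes())
```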

  13. Dynamic Staffing and Rescheduling in Software Project Management: A Hybrid Approach.

    PubMed

    Ge, Yujia; Xu, Bin

    2016-01-01

    Resource allocation can be influenced by various dynamic elements, such as the skills of engineers and the growth of those skills, so managers need an effective and efficient tool to support their staffing decision-making processes. Rescheduling happens commonly and frequently during project execution, and control decisions have to be made when new resources are added or tasks are changed. In this paper we propose a software project staffing model that considers dynamic elements of staff productivity, with an optimizer based on a Genetic Algorithm (GA) and Hill Climbing (HC). Since a newly generated reschedule dramatically different from the initial schedule can cause an obvious increase in shifting cost, our rescheduling strategies consider both efficiency and stability. The results of real-world case studies and extensive simulation experiments show that our proposed method is effective and achieves performance comparable to other heuristic algorithms in most cases.

  14. Quantum key distribution using basis encoding of Gaussian-modulated coherent states

    NASA Astrophysics Data System (ADS)

    Huang, Peng; Huang, Jingzheng; Zhang, Zheshen; Zeng, Guihua

    2018-04-01

    Continuous-variable quantum key distribution (CVQKD) has been demonstrated to be viable for practical secure quantum cryptography. However, its performance is strongly restricted by the channel excess noise and the reconciliation efficiency. In this paper, we present a quantum key distribution (QKD) protocol that encodes the secret keys on the random choices of two measurement bases: the conjugate quadratures X and P. The employed encoding method can dramatically weaken the effects of channel excess noise and reconciliation efficiency on the performance of the QKD protocol. Consequently, the proposed scheme can tolerate much higher excess noise and enables a much longer secure transmission distance even at lower reconciliation efficiency. The proposal can work as an alternative that significantly strengthens the performance of the known Gaussian-modulated CVQKD protocol and serves as a multiplier for practical secure quantum cryptography with continuous variables.

  16. Imaging model for the scintillator and its application to digital radiography image enhancement.

    PubMed

    Wang, Qian; Zhu, Yining; Li, Hongwei

    2015-12-28

    Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution. By analyzing the radiative transfer of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. The associated blurring effect is also considered and described as a point spread function (PSF). Based on these physical processes, the scintillator imaging model is then established. To solve the inverse problem, pre-correction of the x-ray intensity, a dark channel prior based haze-removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically as well as eliminate the blurring effectively. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.

  18. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly-interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.

  19. Ultrafast image-based dynamic light scattering for nanoparticle sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Wu; Zhang, Jie; Liu, Lili

    An ultrafast sizing method for nanoparticles, called UIDLS (Ultrafast Image-based Dynamic Light Scattering), is proposed. This method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, similar to the conventional DLS method. The difference in the experimental system is that the light scattered by the nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured to validate the proposed method. Measurement accuracy higher than 90% was found, with standard deviations of less than 3%. A sample of nanosilver particles with a nominal size of 20 ± 2 nm and a sample of polymethyl methacrylate emulsion of unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time for acquisition of two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
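    The core computation as described, in sketch form: a two-dimensional image correlation coefficient between two frames taken a short interval apart; real speckle frames are replaced by random stand-ins here.

```python
import numpy as np

# Correlation coefficient between two speckle images: decays faster for
# smaller (faster-diffusing) particles at a fixed inter-frame interval.
def frame_correlation(img1, img2):
    a, b = img1.ravel().astype(float), img2.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(2)
f1 = rng.random((64, 64))
f2 = 0.9 * f1 + 0.1 * rng.random((64, 64))   # partially decorrelated frame
print("correlation coefficient:", frame_correlation(f1, f2))
```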

  20. Real-time sound speed correction using golden section search to enhance ultrasound imaging quality

    NASA Astrophysics Data System (ADS)

    Yoon, Chong Ook; Yoon, Changhan; Yoo, Yangmo; Song, Tai-Kyong; Chang, Jin Ho

    2013-03-01

    In medical ultrasound imaging, high-performance beamforming is important to enhance spatial and contrast resolution. A modern receive dynamic beamformer uses a constant sound speed, typically assumed to be 1540 m/s, in generating receive focusing delays [1], [2]. However, this assumption leads to degradation of spatial and contrast resolution, particularly when imaging obese patients or the breast, since the true sound speed is significantly lower than the assumed one [3]; the sound speed in fatty tissue is around 1450 m/s. In our previous studies, it was demonstrated that the modified nonlinear anisotropic diffusion is capable of determining an optimal sound speed and that the proposed method is a useful tool to improve ultrasound image quality [4], [5]. However, those studies required at least 21 iterations to find an optimal sound speed, which may not be viable for real-time applications. In this paper, we demonstrate that the number of iterations can be dramatically reduced using the golden section search (GSS) method with minimal error. To evaluate the performance of the proposed method, in vitro experiments were conducted with a tissue-mimicking phantom. To emulate a heterogeneous medium, the phantom was immersed in water. In the experiments, the number of iterations was reduced from 21 to 7 with the GSS method, and the maximum error in lateral resolution between the direct search and GSS was less than 1%. These results indicate that the proposed method can be implemented in real time to improve image quality in medical ultrasound imaging.
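    A standard golden section search over candidate sound speeds, as a sketch; the quality function that scores the beamformed image is a hypothetical stand-in, here a toy metric peaking at 1450 m/s.

```python
import numpy as np

# Golden section search for the sound speed maximizing an image-quality
# metric; each `quality` call would, in practice, rebeamform and score
# the image at that candidate speed.
def gss_max(quality, lo=1400.0, hi=1600.0, tol=5.0):
    g = (np.sqrt(5.0) - 1.0) / 2.0            # golden ratio factor ~0.618
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if quality(c) > quality(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)                      # estimated optimal sound speed

print(gss_max(lambda v: -(v - 1450.0) ** 2))  # toy metric peaking at 1450 m/s
```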

  1. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    NASA Astrophysics Data System (ADS)

    XU, R.; Jia, G.

    2012-12-01

    Cities, places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as the chief causes of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in its urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential for eastern cities, whose urban patches are smaller, more fragile, and contain a lower fraction of natural cover than those in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic rule based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold, enabling fully automatic mapping, are highly worthwhile. Existing automatic methods are reviewed, and a proposed approach is introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately on large-, medium-, and small-scale cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the more complex eastern cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia. [Figure: the proposed approach applied to extraction of urban built-up areas in Guangzhou, China]

  2. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian mixture models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions of iris images. We also explored possible features and found that a Gabor filter bank (GFB) provides the most discriminative information for our goal. Finally, we applied a simulated annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.
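    A sketch of the modeling step using scikit-learn's GaussianMixture as a stand-in for FJ-GMMs (which select the number of components automatically): fit one mixture to features of valid pixels and one to occluded pixels, then label new pixels by comparing likelihoods. The random features stand in for Gabor filter bank responses.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit separate mixtures to valid and occluded pixel features, then build
# the mask by likelihood comparison. Features here are synthetic.
rng = np.random.default_rng(3)
valid_feats = rng.normal(0.0, 1.0, (500, 8))
occluded_feats = rng.normal(1.5, 1.2, (500, 8))

gmm_valid = GaussianMixture(n_components=3, random_state=0).fit(valid_feats)
gmm_occl = GaussianMixture(n_components=3, random_state=0).fit(occluded_feats)

test = rng.normal(0.5, 1.0, (10, 8))
mask = gmm_valid.score_samples(test) > gmm_occl.score_samples(test)
print("valid-pixel mask:", mask)
```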

  3. An adaptive cryptographic accelerator for network storage security on dynamically reconfigurable platform

    NASA Astrophysics Data System (ADS)

    Tang, Li; Liu, Jing-Ning; Feng, Dan; Tong, Wei

    2008-12-01

    Existing security solutions in the network storage environment perform poorly because cryptographic operations (encryption and decryption) implemented in software can dramatically reduce system performance. In this paper we propose a cryptographic hardware accelerator on a dynamically reconfigurable platform for the security of high-performance network storage systems. We employ a dynamically reconfigurable platform based on an FPGA to implement a PowerPC-based embedded system, which executes cryptographic algorithms. To reduce the reconfiguration latency, we apply prefetch scheduling. Moreover, the processing elements can be dynamically configured to support different cryptographic algorithms according to the requests received by the accelerator. In the experiment, we implemented the AES (Rijndael) and 3DES cryptographic algorithms in the reconfigurable accelerator. Our proposed reconfigurable cryptographic accelerator can dramatically increase performance compared with traditional software-based network storage systems.

  4. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method that enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication and the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for the Talbot interferometer and fan beam geometry for the Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its applicability in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.

  5. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce radiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal by mainly controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with a post-log data filter to reduce streak artifacts.

  6. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum norm least squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS, is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
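
    A bare-bones FOCUSS-style recursion with a shrinking solution space is sketched below, assuming a lead-field matrix L and a measurement vector y; the reweighting exponent and pruning threshold are illustrative choices, not the full Shrinking LORETA-FOCUSS algorithm.

        import numpy as np

        def shrinking_focuss(L, y, iters=10, prune=1e-6):
            """Recursive reweighting: each weight matrix is rebuilt from
            the previous solution, progressively focusing the estimate."""
            x = np.linalg.pinv(L) @ y              # minimum-norm start
            for _ in range(iters):
                W = np.diag(np.sqrt(np.abs(x)))    # reweight by last solution
                x = W @ np.linalg.pinv(L @ W) @ y
                # "Shrinking": drop negligible sources from the solution space.
                x[np.abs(x) < prune * np.abs(x).max()] = 0.0
            return x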

  7. New reversing design method for LED uniform illumination.

    PubMed

    Wang, Kai; Wu, Dan; Qin, Zong; Chen, Fei; Luo, Xiaobing; Liu, Sheng

    2011-07-04

    In light-emitting diode (LED) applications, how to optimize the light intensity distribution curve (LIDC) and design the corresponding optical component to achieve uniform illumination at a given distance-height ratio (DHR) is becoming a major issue. A new reversing design method is proposed to solve this problem, including the design and optimization of the LIDC to achieve highly uniform illumination and a new freeform-lens algorithm to generate the required LIDC from the LED light source. Using this method, two new LED modules integrated with freeform lenses are successfully designed for slim direct-lit LED backlighting with a thickness of 10 mm, and the uniformity of illuminance increases from 0.446 to 0.915 and from 0.155 to 0.887 for DHRs of 2 and 3, respectively. Moreover, the number of new LED modules dramatically decreases to 1/9 that of traditional LED modules while achieving similarly uniform illumination in backlighting. Therefore, this new method provides a practical and simple way to design LED optics for uniform illumination when the DHR is much larger than 1.

  8. Skin necrosis due to antiblastics (procedures of prevention and therapy).

    PubMed

    Villani, C; Pace, S; Tomao, S; Pietrangeli, D; Pucci, G

    1986-01-01

    Among the toxic effects of antitumor drugs is tissue injury caused by extravasation. The resulting damage can take on dramatic features, with serious consequences for the psychophysical condition of patients who have already experienced other forms of chemotherapy toxicity. Because extravasation is related to the route and method of drug administration, rigorous procedural rules must be observed during infusion and, if extravasation occurs, effective drugs and physical methods must be employed. A recent case of extravasation occurred at our Gynecological Oncology Service in a patient undergoing chemotherapy on an outpatient basis. This case is presented and discussed here; in our experience, the rarity of this accident is due to the several precautions we observe during drug infusion.

  9. In situ precision electrospinning as an effective delivery technique for cyanoacrylate medical glue with high efficiency and low toxicity.

    PubMed

    Dong, R H; Qin, C C; Qiu, X; Yan, X; Yu, M; Cui, L; Zhou, Y; Zhang, H D; Jiang, X Y; Long, Y Z

    2015-12-14

    The side effects and toxicity of cyanoacrylate used in vivo have been debated since its first application in wound closure. We propose an airflow-assisted in situ precision electrospinning apparatus as an applicator and make a detailed comparison with traditional spraying via in vitro and in vivo experiments. This novel method not only improves operational performance and safety by precisely depositing cyanoacrylate fibers onto a wound, but also significantly reduces the dosage of cyanoacrylate, by almost 80%. A white blood cell count, liver function tests and histological analysis show that the in situ precision electrospinning applicator produces a better postoperative outcome, e.g., minor hepatocyte injury, moderate inflammation and a significant capacity for liver regeneration. This in situ precision electrospinning method may thus dramatically broaden both civilian and military applications of cyanoacrylates.

  10. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be solved incrementally by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including batch LRR, and significantly outperforms state-of-the-art online methods.
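
    The dynamic updating stage can be sketched with a rank-truncated incremental SVD, assuming an existing factorization (U, S) of the data seen so far and a batch of newly arrived columns; the variable names and the simple truncation rule are our assumptions.

        import numpy as np

        def incremental_svd(U, S, new_cols, rank):
            """Fold new columns into an existing SVD without recomputing
            the decomposition of the full data set."""
            proj = U.T @ new_cols                  # components inside the old basis
            resid = new_cols - U @ proj            # components outside it
            Q, R = np.linalg.qr(resid)
            K = np.block([[np.diag(S), proj],
                          [np.zeros((R.shape[0], S.size)), R]])
            Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
            U_new = np.hstack([U, Q]) @ Uk[:, :rank]   # rotate and truncate
            return U_new, Sk[:rank]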

  11. An analysis of secular trends in method-specific suicides in Japan, 1950-1975.

    PubMed

    Yoshioka, Eiji; Saijo, Yasuaki; Kawachi, Ichiro

    2017-04-05

    In Japan, a dramatic rise in suicide rates was observed in the 1950s, especially among the younger population, and the rate then decreased rapidly again in the 1960s. The aim of this study was to assess secular trends in method-specific suicides by gender and age in Japan between 1950 and 1975. We paid special attention to suicides by poisoning (solid and liquid substances) and their contribution to the dramatic swings in the overall suicide rate in Japan during the 1950s and 1960s. Mortality and population data were obtained from the Vital Statistics of Japan and the Statistics Bureau, Ministry of Internal Affairs and Communications in Japan, respectively. We calculated method-specific age-standardized suicide rates by gender and age group (15-29, 30-49, or 50+ years). The change in the suicide rate during the research period was larger in males than in females in all age groups, and was more marked among people aged 15-29 years compared to those aged 30-49 years and 50 years or over. Poisoning by solid and liquid substances overwhelmingly contributed to the dramatic change in the overall suicide rates in males and females aged 15-49 years in the 1950s and 1960s. For the peak years of the rise in poisoning suicides, bromide was the most frequently used substance. Our results for the 1950s and 1960s in Japan illustrate how assessing secular trends in method-specific suicides by gender and age can provide a deeper understanding of the dramatic swings in the overall suicide rate. Although rapid increases or decreases in suicide rates have also been observed in some countries or regions recently, trends in method-specific suicides have not been analyzed because of a lack of method-specific suicide data in many countries. Our study illustrates how the collection and analysis of method-specific data can contribute to an understanding of dramatic shifts in national suicide rates.

  12. Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety

    PubMed Central

    Jiang, Wen; Huang, Yulin; Yang, Jianyu

    2016-01-01

    Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense its surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as a multiple-target situation or a clutter edge environment, can dramatically degrade the radar signal detection performance. In order to improve the radar signal detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment and uses a hypothesis test on the first-order difference (FOD) of the ordered data to reject the unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thereby achieving better radar signal detection performance. The simulation results show that the FOD-CFAR detector provides low-loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, the measured results of a low-flying helicopter validate the basic performance of the proposed method. PMID:27399714
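
    A toy version of the censoring idea, assuming a fixed jump threshold as a stand-in for the paper's first-order-difference hypothesis test; the threshold, scale factor and function name are ours.

        import numpy as np

        def fod_cfar(ref_cells, cut, scale, jump_thresh):
            """Ordered-data-difference censoring: sort the reference cells,
            censor everything above the first large jump, and estimate the
            background power from the surviving cells."""
            ranked = np.sort(ref_cells)
            jumps = np.where(np.diff(ranked) > jump_thresh)[0]
            keep = ranked[: jumps[0] + 1] if jumps.size else ranked
            background = keep.mean()
            return cut > scale * background        # detection decision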

  13. The evaluation model of the design of toll

    NASA Astrophysics Data System (ADS)

    Feng, Shuting

    2018-04-01

    In recent years, the dramatic increase in traffic burden has highlighted the necessity of rationally configuring toll plazas. At the same time, the many factors that must be considered have raised the design requirements. Against this background, we carry out research on this subject. We propose reasonable assumptions and abstract the toll plaza into a model that depends only on B and L. Using queuing theory and traffic flow theory, we express throughput, cost and accident prevention in terms of B and L to obtain the base model. By applying the linear weighting method from economics to this model, the optimal B and L strategies are obtained.
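
    A minimal sketch of the linear-weighting evaluation over candidate (B, L) designs; the placeholder objective functions and weights below are illustrative assumptions, not the paper's queuing and traffic-flow expressions.

        # Hypothetical stand-ins for the throughput, cost and
        # accident-prevention terms of the base model.
        def evaluate(B, L, w=(0.5, 0.3, 0.2)):
            throughput = B / (1.0 + L)
            cost = B * L
            safety = L / (1.0 + B)
            # Linear weighting: cost enters with a negative weight.
            return w[0] * throughput - w[1] * cost + w[2] * safety

        candidates = [(B, L) for B in range(2, 10) for L in range(1, 6)]
        best_B, best_L = max(candidates, key=lambda bl: evaluate(*bl))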

  14. Topological phenomena in classical optical networks

    PubMed Central

    Shi, T.; Kimble, H. J.; Cirac, J. I.

    2017-01-01

    We propose a scheme to realize a topological insulator with optical-passive elements and analyze the effects of Kerr nonlinearities on its topological behavior. In the linear regime, our design gives rise to an optical spectrum with topological features in which the bandwidths and bandgaps are dramatically broadened. The resulting edge modes cover a very wide frequency range. We relate this behavior to the fact that the effective Hamiltonian describing the system’s amplitudes is long range. We also develop a method to analyze the scheme in the presence of a Kerr medium. We assess the robustness and stability of the topological features and predict the presence of chiral squeezed fluctuations at the edges in some parameter regimes. PMID:29073093

  15. An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Research on human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit; the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. However, an organized data base and a validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without them, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and the prediction of performance.

  16. Accessing Transgenerational Themes Through Dreamwork.

    ERIC Educational Resources Information Center

    Andrews, Jennifer; And Others

    1988-01-01

    Proposes use of dreamwork to evoke historical patterns or transgenerational themes. Describes new variant of dreamwork which combines aspects of both gestalt and family systems therapies. Implications of therapeutic dramatization for couple therapy are suggested. Examples are included. (Author/NB)

  17. Catalyzed formation of α,β-unsaturated ketones or aldehydes from propargylic acetates by a recoverable and recyclable nanocluster catalyst

    NASA Astrophysics Data System (ADS)

    Li, Man-Bo; Tian, Shi-Kai; Wu, Zhikun

    2014-05-01

    An active, recoverable, and recyclable nanocluster catalyst, Au25(SR)18-, has been developed to catalyze the formation of α,β-unsaturated ketones or aldehydes from propargylic acetates. The catalytic process has been proposed to be initialized by an SN2' addition of OH-. Moreover, a dramatic solvent effect was observed, for which a rational explanation was provided. Electronic supplementary information (ESI) available: Experimental procedures, UV-Vis spectra and fluorescence spectra of catalysts, characterization data, and copies of MS spectra. See DOI: 10.1039/c4nr00658e

  18. Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning.

    PubMed

    Song, Ying; Zhu, Zhen; Lu, Yang; Liu, Qiegen; Zhao, Jun

    2014-03-01

    To improve the magnetic resonance imaging (MRI) data acquisition speed while maintaining the reconstruction quality, a novel method is proposed for multislice MRI reconstruction from undersampled k-space data based on compressed-sensing theory using dictionary learning. There are two aspects to improve the reconstruction quality. One is that spatial correlation among slices is used by extending the atoms in dictionary learning from patches to blocks. The other is that the dictionary-learning scheme is used at two resolution levels; i.e., a low-resolution dictionary is used for sparse coding and a high-resolution dictionary is used for image updating. Numerical experiments are carried out on in vivo 3D MR images of brains and abdomens with a variety of undersampling schemes and ratios. The proposed method (dual-DLMRI) achieves better reconstruction quality than conventional reconstruction methods, with the peak signal-to-noise ratio being 7 dB higher. The advantages of the dual dictionaries are obvious compared with the single dictionary. Parameter variations ranging from 50% to 200% only bias the image quality within 15% in terms of the peak signal-to-noise ratio. Dual-DLMRI effectively uses the a priori information in the dual-dictionary scheme and provides dramatically improved reconstruction quality. Copyright © 2013 Wiley Periodicals, Inc.

  19. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    PubMed

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
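
    The multi-source integration loop might be sketched as follows, assuming voxel-wise feature vectors and scikit-learn's random forest; the number of rounds and the way probability maps are appended are our simplification of the framework.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def iterative_segmentation(intensity_features, labels, rounds=3):
            """Train a forest on intensity features (e.g. T1/T2/FA), then
            feed its tissue-probability maps back in as extra features."""
            X = intensity_features
            forests = []
            for _ in range(rounds):
                rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
                prob_maps = rf.predict_proba(X)        # GM/WM/CSF probabilities
                X = np.hstack([intensity_features, prob_maps])
                forests.append(rf)
            return forests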

  20. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1, 14 and 40 dimensions in random space.

  1. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.

  2. LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

    PubMed Central

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188

  3. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.

  4. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high-quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  5. Frequency-Wavenumber (FK)-Based Data Selection in High-Frequency Passive Surface Wave Survey

    NASA Astrophysics Data System (ADS)

    Cheng, Feng; Xia, Jianghai; Xu, Zongbo; Hu, Yue; Mi, Binbin

    2018-04-01

    Passive surface wave methods have gained much attention from the geophysical and civil engineering communities because of the limited applicability of traditional seismic surveys in highly populated urban areas. Since they can provide high-frequency phase velocity information up to several tens of Hz, the active surface wave survey can be omitted and the amount of field work dramatically reduced. However, the measured dispersion energy image in a passive surface wave survey is usually polluted by a type of "crossed" artifact at high frequencies. This is common in the bidirectional noise distribution case, with a linear receiver array deployed along roads or railways. We review several frequently used passive surface wave methods and derive the underlying physics for the existence of the "crossed" artifacts. We prove that the "crossed" artifacts cross the true surface wave energy at fixed points in the f-v domain and propose an FK-based data selection technique to attenuate the artifacts in order to retrieve the high-frequency information. Numerical tests further demonstrate the existence of the "crossed" artifacts and indicate that the well-known wave field separation method, the FK filter, does not work for the selection of directional noise data. Real-world applications demonstrate the feasibility of the proposed FK-based technique to improve passive surface wave methods by a priori data selection. Finally, we discuss the applicability of our approach.
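
    A rough sketch of f-k domain directional selection, assuming a 2-D (time x receiver) record and keeping a single sign of wavenumber; this masking criterion is a simplified stand-in for the selection technique described above.

        import numpy as np

        def fk_directional_select(record, dx):
            """Keep one propagation direction in the f-k domain before
            further passive surface-wave processing."""
            FK = np.fft.fftshift(np.fft.fft2(record))      # (t, x) -> (f, k)
            k = np.fft.fftshift(np.fft.fftfreq(record.shape[1], d=dx))
            FK *= (k >= 0)[np.newaxis, :]                  # directional mask
            return np.real(np.fft.ifft2(np.fft.ifftshift(FK)))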

  6. Frequency-Wavenumber (FK)-Based Data Selection in High-Frequency Passive Surface Wave Survey

    NASA Astrophysics Data System (ADS)

    Cheng, Feng; Xia, Jianghai; Xu, Zongbo; Hu, Yue; Mi, Binbin

    2018-07-01

    Passive surface wave methods have gained much attention from the geophysical and civil engineering communities because of the limited applicability of traditional seismic surveys in highly populated urban areas. Since they can provide high-frequency phase velocity information up to several tens of Hz, the active surface wave survey can be omitted and the amount of field work dramatically reduced. However, the measured dispersion energy image in a passive surface wave survey is usually polluted by a type of "crossed" artifact at high frequencies. This is common in the bidirectional noise distribution case, with a linear receiver array deployed along roads or railways. We review several frequently used passive surface wave methods and derive the underlying physics for the existence of the "crossed" artifacts. We prove that the "crossed" artifacts cross the true surface wave energy at fixed points in the f-v domain and propose an FK-based data selection technique to attenuate the artifacts in order to retrieve the high-frequency information. Numerical tests further demonstrate the existence of the "crossed" artifacts and indicate that the well-known wave field separation method, the FK filter, does not work for the selection of directional noise data. Real-world applications demonstrate the feasibility of the proposed FK-based technique to improve passive surface wave methods by a priori data selection. Finally, we discuss the applicability of our approach.

  7. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations and the materials being imaged to obtain high-quality reconstructions. Previously, we have proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, the method made some simplifying assumptions on the propagation model and did not discuss ways to handle data that is obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  8. A self-adaptive toll rate algorithm for high occupancy toll (HOT) lane operations.

    DOT National Transportation Integrated Search

    2009-12-01

    Dramatically increasing travel demands and insufficient traffic facility supplies have resulted in severe traffic congestion. High Occupancy Toll (HOT) lane operations have been proposed as one of the most applicable and cost-effective countermea...

  9. Time-reversal transcranial ultrasound beam focusing using a k-space method

    PubMed Central

    Jing, Yun; Meral, F. Can; Clement, Greg. T.

    2012-01-01

    This paper proposes the use of a k-space method to obtain the correction for transcranial ultrasound beam focusing. Mirroring past approaches, a synthetic point source at the focal point is numerically excited and propagated through the skull, using acoustic properties acquired from registered computed tomography of the skull being studied. The data received outside the skull contain the correction information and can be phase conjugated (time reversed) and then physically generated to achieve tight focusing inside the skull, under the assumption of quasi-plane transmission where shear waves are absent or their contribution can be neglected. Compared with the conventional finite-difference time-domain method for wave propagation simulation, it is shown that the k-space method is significantly more accurate even at a relatively coarse spatial resolution, leading to a dramatically reduced computation time. Both numerical simulations and experiments conducted on an ex vivo human skull demonstrate that precise focusing can be realized using the k-space method with a spatial resolution as low as 2.56 grid points per wavelength, thus allowing treatment planning computation on the order of minutes. PMID:22290477
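
    The phase-conjugation step at the heart of time reversal can be sketched as below, assuming per-element time traces recorded outside the skull; the skull propagation itself (the k-space simulation in the paper) is not reproduced here.

        import numpy as np

        def time_reversal_drive(recorded):
            """Conjugate the recorded spectra to obtain the drive signals:
            time reversal is phase conjugation in the frequency domain."""
            spectrum = np.fft.rfft(recorded, axis=0)   # one spectrum per element
            return np.fft.irfft(np.conj(spectrum), n=recorded.shape[0], axis=0)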

  10. Research on assessment methods for urban public transport development in China.

    PubMed

    Zou, Linghong; Dai, Hongna; Yao, Enjian; Jiang, Tian; Guo, Hongwei

    2014-01-01

    In recent years, with the rapid increase in urban population, urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems has become an inevitable choice to meet growing urban travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of the existing literature shows that the methods used to evaluate urban public transport structure are predominantly qualitative. To overcome this shortcoming, a fuzzy mathematics method is used to describe qualitative issues quantitatively, and the AHP (analytic hierarchy process) is used to quantify experts' subjective judgments. The assessment model is established on the basis of the fuzzy AHP. The weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method to obtain the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method.
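
    A compact sketch of the fuzzy-AHP pipeline: AHP weights from the principal eigenvector of a pairwise comparison matrix, followed by fuzzy synthetic assessment. The matrices below are made-up examples, not values from the study.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],          # expert pairwise comparisons
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        eigvals, eigvecs = np.linalg.eig(A)
        w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
        w /= w.sum()                             # AHP index weights

        R = np.array([[0.2, 0.5, 0.3],           # membership degrees of each
                      [0.1, 0.6, 0.3],           # index over rating levels
                      [0.3, 0.4, 0.3]])          # (e.g. poor/fair/good)
        assessment = w @ R                       # fuzzy synthetic assessment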

  11. Efficient Multi-Atlas Registration using an Intermediate Template Image

    PubMed Central

    Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.

    2017-01-01

    Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3–4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects. PMID:28943702

  12. Efficient multi-atlas registration using an intermediate template image

    NASA Astrophysics Data System (ADS)

    Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.

    2017-03-01

    Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3-4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects.

  13. MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.

    PubMed

    Huang, Zhengxing; Chan, Tak-Ming; Dong, Wei

    2017-02-01

    Major adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly, resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining-based approach specialized for MACE prediction from a large volume of EMR data. The proposed approach presents a new classification algorithm that applies over-sampling and under-sampling to minority-class and majority-class samples, respectively, and integrates the resampling strategy into a boosting framework so that it can effectively handle the imbalance of MACE among ACS patients analogous to domain practice. In each iteration, the method learns a new and stronger MACE prediction model from a more difficult subset of EMR data containing the MACEs of ACS patients wrongly predicted by the previous weak model. We verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalanced ratio does not seem extreme (25.7%), MACE prediction targets pose a great challenge to traditional methods. Whereas these methods degenerate dramatically with increasing imbalanced ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6% and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system, respectively. We consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR. Copyright © 2017 Elsevier Inc. All rights reserved.
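
    A toy boosting-with-resampling loop in the spirit of the approach, where each round trains a weak model on a rebalanced sample that emphasizes previously misclassified patients; the weight-doubling rule is a simplified stand-in for the paper's resampling strategy.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def boosted_resampling(X, y, rounds=10, rng=np.random.default_rng(0)):
            n = len(y)
            weights = np.full(n, 1.0 / n)
            models = []
            for _ in range(rounds):
                idx = rng.choice(n, size=n, p=weights)   # weighted resample
                m = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
                wrong = m.predict(X) != y
                weights[wrong] *= 2.0                    # emphasize hard cases
                weights /= weights.sum()
                models.append(m)
            return models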

  14. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improve the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases the computational speed of rotational matching dramatically compared to rotation search in Cartesian space, without sacrificing accuracy, in contrast to other spherical-harmonic-based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20Å and 16Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.

    PubMed

    Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing

    2015-11-01

    Multiview data arise in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lassos at the view level, we focus on sample-level performance (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights are utilized to measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is successfully captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso have better multiview classification performance than the baselines. Moreover, pattern-specific weights are demonstrated to be significant for learning about multiview data, compared with view-specific weights.

  16. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  17. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  18. Skill networks and measures of complex human capital

    PubMed Central

    2017-01-01

    We propose a network-based method for measuring worker skills. We illustrate the method using data from an online freelance website. Using the tools of network analysis, we divide skills into endogenous categories based on their relationship with other skills in the market. Workers who specialize in these different areas earn dramatically different wages. We then show that, in this market, network-based measures of human capital provide additional insight into wages beyond traditional measures. In particular, we show that workers with diverse skills earn higher wages than those with more specialized skills. Moreover, we can distinguish between two different types of workers benefiting from skill diversity: jacks-of-all-trades, whose skills can be applied independently on a wide range of jobs, and synergistic workers, whose skills are useful in combination and fill a hole in the labor market. On average, workers whose skills are synergistic earn more than jacks-of-all-trades. PMID:29133397
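
    The network step might look like the sketch below: build a weighted skill co-occurrence graph from worker profiles and split it into endogenous categories with a community-detection routine. The toy profiles and the use of greedy modularity are our assumptions, not the paper's exact procedure.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        profiles = [{"python", "sql", "statistics"},      # hypothetical workers
                    {"photoshop", "illustration"},
                    {"sql", "statistics", "excel"}]
        G = nx.Graph()
        for skills in profiles:
            for a in skills:
                for b in skills:
                    if a < b:                             # count each pair once
                        if G.has_edge(a, b):
                            G[a][b]["weight"] += 1
                        else:
                            G.add_edge(a, b, weight=1)
        categories = greedy_modularity_communities(G, weight="weight")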

  19. Protecting Privacy and Securing the Gathering of Location Proofs - The Secure Location Verification Proof Gathering Protocol

    NASA Astrophysics Data System (ADS)

    Graham, Michelle; Gray, David

    As wireless networks become increasingly ubiquitous, the demand for a method of locating a device has increased dramatically. Location Based Services are now commonplace but there are few methods of verifying or guaranteeing a location provided by a user without some specialised hardware, especially in larger scale networks. We propose a system for the verification of location claims, using proof gathered from neighbouring devices. In this paper we introduce a protocol to protect this proof gathering process, protecting the privacy of all involved parties and securing it from intruders and malicious claiming devices. We present the protocol in stages, extending the security of this protocol to allow for flexibility within its application. The Secure Location Verification Proof Gathering Protocol (SLVPGP) has been designed to function within the area of Vehicular Networks, although its application could be extended to any device with wireless & cryptographic capabilities.

  20. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.

    PubMed

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-07-24

    With the rapid development of big data and the Internet of Things (IoT), the number of networked devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges also arise in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.

  1. A nuclear method to authenticate Buddha images

    NASA Astrophysics Data System (ADS)

    Khaweerat, S.; Ratanatongchai, W.; Channuie, J.; Wonglee, S.; Picha, R.; Promping, J.; Silva, K.; Liamsuwan, T.

    2015-05-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people rely on individual skill to make this judgment, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal, respectively, the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 10^5 n cm^-2 s^-1 was applied. NAAR needed a higher neutron flux of 10^12 n cm^-2 s^-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples. Similarity of the profiles played a key role in the classification of the samples. The results provided visual evidence to enhance the reliability of authenticity approval. The method can be further developed into a routine practice that would impact thousands of customers in Thailand.

  2. Removal of the Gibbs phenomenon and its application to fast-Fourier-transform-based mode solvers.

    PubMed

    Wangüemert-Pérez, J G; Godoy-Rubio, R; Ortega-Moñux, A; Molina-Fernández, I

    2007-12-01

    A simple strategy for accurately recovering discontinuous functions from their Fourier series coefficients is presented. The aim of the proposed approach, named spectrum splitting (SS), is to remove the Gibbs phenomenon by making use of signal-filtering-based concepts and some properties of the Fourier series. While the technique can be used in a vast range of situations, it is particularly suitable for being incorporated into fast-Fourier-transform-based electromagnetic mode solvers (FFT-MSs), which are known to suffer from very poor convergence rates when applied to situations where the field distributions are highly discontinuous (e.g., silicon-on-insulator photonic wires). The resultant method, SS-FFT-MS, is exhaustively tested under the assumption of a simplified one-dimensional model, clearly showing a dramatic improvement of the convergence rates with respect to the original FFT-based methods.
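
    For context, the classical baseline that spectrum splitting improves upon can be illustrated by filtering the Fourier coefficients with Lanczos sigma factors; this generic Gibbs-mitigation sketch (with coefficients ordered n = -N..N, our own convention) is not the SS technique itself.

        import numpy as np

        def partial_sum(coeffs, x, sigma=False):
            """Evaluate a truncated Fourier series at points x, optionally
            damping the Gibbs oscillations with Lanczos sigma factors."""
            N = (len(coeffs) - 1) // 2
            n = np.arange(-N, N + 1)
            factors = np.sinc(n / (N + 1)) if sigma else np.ones_like(n, float)
            basis = np.exp(1j * np.outer(x, n))
            return np.real(basis @ (factors * coeffs))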

  3. Improved retention of phosphorus donors in germanium using a non-amorphizing fluorine co-implantation technique

    NASA Astrophysics Data System (ADS)

    Monmeyran, Corentin; Crowe, Iain F.; Gwilliam, Russell M.; Heidelberger, Christopher; Napolitani, Enrico; Pastor, David; Gandhi, Hemi H.; Mazur, Eric; Michel, Jürgen; Agarwal, Anuradha M.; Kimerling, Lionel C.

    2018-04-01

    Co-doping with fluorine is a potentially promising method for defect passivation to increase the donor electrical activation in highly doped n-type germanium. However, regular high dose donor-fluorine co-implants, followed by conventional thermal treatment of the germanium, typically result in a dramatic loss of the fluorine, as a result of the extremely large diffusivity at elevated temperatures, partly mediated by the solid phase epitaxial regrowth. To circumvent this problem, we propose and experimentally demonstrate two non-amorphizing co-implantation methods; one involving consecutive, low dose fluorine implants, intertwined with rapid thermal annealing and the second, involving heating of the target wafer during implantation. Our study confirms that the fluorine solubility in germanium is defect-mediated and we reveal the extent to which both of these strategies can be effective in retaining large fractions of both the implanted fluorine and, critically, phosphorus donors.

  4. A first-digit anomaly in the 2009 Iranian presidential election

    NASA Astrophysics Data System (ADS)

    Roukema, Boudewijn F.

    2014-01-01

    A local bootstrap method is proposed for the analysis of electoral vote-count first-digit frequencies, complementing the Benford's Law limit. The method is calibrated on five presidential-election first rounds (2002-2006) and applied to the 2009 Iranian presidential-election first round. Candidate K has a highly significant (p < 0.15%) excess of vote counts starting with the digit 7. This leads to other anomalies, two of which are individually significant at p ≈ 0.1%, and one at p ≈ 1%. Independently, Iranian pre-election opinion polls significantly reject the official results unless the five polls favouring candidate A are considered alone. If the latter represent normalised data and a linear, least-squares, equal-weighted fit is used, then either candidates R and K suffered a sudden, dramatic (70% ± 15%) loss of electoral support just prior to the election, or the official results are rejected (p ≈ 0.01%).
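
    A simplified stand-in for the local bootstrap, assuming a calibration set of vote counts from reference elections; the resampling scheme and function names are ours.

        import numpy as np

        def first_digit_freqs(counts):
            """Frequencies of leading digits 1-9 in a list of vote counts."""
            digits = np.array([int(str(c)[0]) for c in counts if c > 0])
            return np.bincount(digits, minlength=10)[1:] / digits.size

        def bootstrap_pvalue(observed, reference, digit, n_boot=10000,
                             rng=np.random.default_rng(0)):
            """How often does a resampled calibration set show the target
            leading digit at least as frequently as the observed data?"""
            obs = first_digit_freqs(observed)[digit - 1]
            hits = 0
            for _ in range(n_boot):
                sample = rng.choice(reference, size=len(observed))
                hits += first_digit_freqs(sample)[digit - 1] >= obs
            return hits / n_boot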

  5. Van der Waals Interactions of Organic Molecules on Semiconductor and Metal Surfaces: a Comparative Study

    NASA Astrophysics Data System (ADS)

    Li, Guo; Cooper, Valentino; Cho, Jun-Hyung; Tamblyn, Isaac; Du, Shixuan; Neaton, Jeffrey; Gao, Hong-Jun; Zhang, Zhenyu

    2012-02-01

    We present a comparative investigation of vdW interactions of organic molecules on semiconductor and metal surfaces using the DFT method implemented with vdW-DF. For styrene/H-Si(100), the vdW interactions reverse the effective intermolecular interaction from repulsive to attractive, ensuring the preferred growth of long wires as observed experimentally. We further propose that an external E field and the selective creation of Si dangling bonds can drastically improve the ordered arrangement of the molecular nanowires [1]. For BDA/Au(111), the vdW interactions not only dramatically enhance the adsorption energies, but also significantly change the molecular configurations. In the azobenzene/Ag(111) system, vdW-DF produces better predictions for the adsorption energy than those obtained with other vdW-corrected DFT approaches, providing evidence for the applicability of the vdW-DF method [2].

  6. Streptomyces species: Ideal chassis for natural product discovery and overproduction.

    PubMed

    Liu, Ran; Deng, Zixin; Liu, Tiangang

    2018-05-28

    There is considerable interest in mining organisms for new natural products (NPs) and in improving methods to overproduce valuable NPs. Because of the rapid development of tools and strategies for metabolic engineering and the markedly increased knowledge of the biosynthetic pathways and genetics of NP-producing organisms, genome mining and overproduction of NPs can be dramatically accelerated. In particular, Streptomyces species have been proposed as suitable chassis organisms for NP discovery and overproduction because of their many unique characteristics not shared with yeast, Escherichia coli, or other microorganisms. In this review, we summarize the methods for genome sequencing, gene cluster prediction, and gene editing in Streptomyces, as well as metabolic engineering strategies for NP overproduction and approaches for generating new products. Finally, two strategies for utilizing Streptomyces as the chassis for NP discovery and overproduction are emphasized. Copyright © 2018 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  7. Ultimate Longitudinal Strength of Composite Ship Hulls

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Huang, Lingkai; Zhu, Libao; Tang, Yuhang; Wang, Anwen

    2017-01-01

    A simple analytical model to estimate the longitudinal strength of ship hulls in composite materials under buckling, material failure and ultimate collapse is presented in this paper. Ship hulls are regarded as assemblies of stiffened panels, which are idealized as groups of plate-stiffener combinations. The ultimate strain of the plate-stiffener combination is predicted under buckling or material failure with composite beam-column theory. The effects of the initial imperfection of the ship hull and the eccentricity of the load are included. The corresponding longitudinal strengths of the ship hull are derived in a straightforward manner. A longitudinally framed ship hull made of symmetrically stacked unidirectional plies under sagging is analyzed. The results indicate that the present analytical results agree well with those of the FEM method. The initial deflection of the ship hull and the eccentricity of the load can dramatically reduce its bending capacity. The proposed formulations provide a simple but useful tool for longitudinal strength estimation in practical design.

  8. Meshfree and efficient modeling of swimming cells

    NASA Astrophysics Data System (ADS)

    Gallagher, Meurig T.; Smith, David J.

    2018-05-01

    Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.
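    For readers wanting a feel for the underlying kernel, the following sketch evaluates the flow induced by regularized point forces using the standard regularized Stokeslet blob. It is a minimal illustration only: the nearest-neighbor quadrature discretization that the abstract credits for improved convergence, and the boundary integral assembly, are not shown, and the regularization length `eps` is an assumed parameter.

    ```python
    import numpy as np

    def regularized_stokeslet_velocity(x_eval, x_force, forces, eps, mu=1.0):
        """Velocity at x_eval (N, 3) from point forces at x_force (M, 3):
        u_i = sum_f S^eps_ij(x, y_f) f_j / (8 pi mu), with the common blob
        S^eps_ij = [d_ij (r^2 + 2 eps^2) + r_i r_j] / (r^2 + eps^2)^(3/2)."""
        u = np.zeros_like(x_eval, dtype=float)
        for y, f in zip(x_force, forces):
            r = x_eval - y
            r2 = np.sum(r * r, axis=1)
            denom = (r2 + eps**2) ** 1.5
            u += ((r2 + 2 * eps**2)[:, None] * f + r * (r @ f)[:, None]) / denom[:, None]
        return u / (8 * np.pi * mu)

    # toy usage: flow at two points due to one unit force along x at the origin
    pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    print(regularized_stokeslet_velocity(pts, np.zeros((1, 3)),
                                         np.array([[1.0, 0.0, 0.0]]), eps=0.05))
    ```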

  9. Retinal identification based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform.

    PubMed

    Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming

    2013-07-18

    Retinal identification based on the vascular patterns of the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, shortcomings such as difficult feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smooth method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  10. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    PubMed Central

    Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming

    2013-01-01

    Retinal identification based on the vascular patterns of the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, shortcomings such as difficult feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smooth method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA database and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes. PMID:23873409

  11. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
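    The pre-image computation mentioned above can be sketched compactly. For a Gaussian kernel, k(z, z) = 1, so finding the point z whose kernel image best matches an expansion Psi = sum_i alpha_i phi(x_i) reduces to maximizing sum_i alpha_i k(x_i, z), which the sketch below does with SciPy's differential evolution. The abstract's re-optimization of SVM weights and bias, and the sharing of vectors across binary SVMs, are not shown; all data here are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def rbf(X, z, gamma):
        """Gaussian kernel values k(x_i, z) for all rows x_i of X."""
        return np.exp(-gamma * np.sum((X - z) ** 2, axis=1))

    def preimage_de(X, alphas, gamma, bounds):
        """Pre-image of Psi = sum_i alpha_i phi(x_i): since k(z, z) = 1 for a
        Gaussian kernel, minimizing ||phi(z) - Psi||^2 over z reduces to
        maximizing sum_i alpha_i k(x_i, z)."""
        objective = lambda z: -alphas @ rbf(X, z, gamma)
        return differential_evolution(objective, bounds, seed=0).x

    # toy usage: collapse three weighted points into one reduced-set vector
    X = np.array([[0.0, 0.0], [1.0, 0.2], [0.9, -0.1]])
    a = np.array([0.2, 0.5, 0.3])
    print(preimage_de(X, a, gamma=1.0, bounds=[(-2, 2)] * 2))
    ```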

  12. Lagrangian methods of cosmic web classification

    NASA Astrophysics Data System (ADS)

    Fisher, J. D.; Faltenbacher, A.; Johnson, M. S. T.

    2016-05-01

    The cosmic web defines the large-scale distribution of matter we see in the Universe today. Classifying the cosmic web into voids, sheets, filaments and nodes allows one to explore structure formation and the role environmental factors have on halo and galaxy properties. While existing studies of cosmic web classification concentrate on grid-based methods, this work explores a Lagrangian approach where the V-web algorithm proposed by Hoffman et al. is implemented with techniques borrowed from smoothed particle hydrodynamics. The Lagrangian approach allows one to classify individual objects (e.g. particles or haloes) based on properties of their nearest neighbours in an adaptive manner. It can be applied directly to a halo sample which dramatically reduces computational cost and potentially allows an application of this classification scheme to observed galaxy samples. Finally, the Lagrangian nature admits a straightforward inclusion of the Hubble flow negating the necessity of a visually defined threshold value which is commonly employed by grid-based classification methods.
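    The classification rule itself is compact: an object is labelled by how many eigenvalues of its local symmetric velocity shear tensor exceed a threshold. A minimal sketch, assuming the shear tensors have already been estimated at each halo or particle (in the paper, via SPH-style kernel averages over nearest neighbours); the threshold value used below is illustrative.

    ```python
    import numpy as np

    LABELS = np.array(["void", "sheet", "filament", "node"])

    def vweb_classify(shear_tensors, lam_th=0.44):
        """Count eigenvalues of each (3, 3) shear tensor above lam_th:
        0 -> void, 1 -> sheet, 2 -> filament, 3 -> node."""
        eigvals = np.linalg.eigvalsh(shear_tensors)      # shape (N, 3)
        return LABELS[np.sum(eigvals > lam_th, axis=1)]

    # toy usage with two random symmetric tensors
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 3, 3))
    A = 0.5 * (A + np.transpose(A, (0, 2, 1)))           # symmetrize
    print(vweb_classify(A))
    ```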

  13. Making the Constitution Meaningful.

    ERIC Educational Resources Information Center

    Pelow, Randall A.

    1989-01-01

    Describes learning activities based on the U.S. Constitution that enhance higher level thinking skills in elementary students. One activity proposes a hypothetical constitutional amendment banning Saturday cartoons; a second taxes children's earnings; and other activities focus on dramatizing events surrounding the Constitutional Convention. (LS)

  14. Accurate segmenting of cervical tumors in PET imaging based on similarity between adjacent slices.

    PubMed

    Chen, Liyuan; Shen, Chenyang; Zhou, Zhiguo; Maquilan, Genevieve; Thomas, Kimberly; Folkert, Michael R; Albuquerque, Kevin; Wang, Jing

    2018-06-01

    Because cervical tumors in PET imaging lie close to the bladder, which has a high capacity for the excreted 18F-FDG tracer, conventional intensity-based segmentation methods often misclassify the bladder as a tumor. Based on the observation that tumor position and area do not change dramatically from slice to slice, we propose a two-stage scheme that facilitates segmentation. In the first stage, we used a graph-cut based algorithm to obtain initial contouring of the tumor based on local similarity information between voxels; this was achieved through manual contouring of the cervical tumor on one slice. In the second stage, the initial tumor contours were fine-tuned to a more accurate segmentation by incorporating similarity information on tumor shape and position among adjacent slices, according to an intensity-spatial-distance map. Experimental results illustrate that the proposed two-stage algorithm provides a more effective approach to segmenting cervical tumors in 3D 18F-FDG PET images than the benchmarks used for comparison. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Ontology-supported research on vaccine efficacy, safety and integrative biological networks.

    PubMed

    He, Yongqun

    2014-07-01

    While vaccine efficacy and safety research has dramatically progressed with the methods of in silico prediction and data mining, many challenges still exist. A formal ontology is a human- and computer-interpretable set of terms and relations that represent entities in a specific domain and how these terms relate to each other. Several community-based ontologies (including Vaccine Ontology, Ontology of Adverse Events and Ontology of Vaccine Adverse Events) have been developed to support vaccine and adverse event representation, classification, data integration, literature mining of host-vaccine interaction networks, and analysis of vaccine adverse events. The author further proposes minimal vaccine information standards and their ontology representations, ontology-based linked open vaccine data and meta-analysis, an integrative One Network ('OneNet') Theory of Life, and ontology-based approaches to study and apply the OneNet theory. In the Big Data era, these proposed strategies provide a novel framework for advanced data integration and analysis of fundamental biological networks including vaccine immune mechanisms.

  16. Ontology-supported Research on Vaccine Efficacy, Safety, and Integrative Biological Networks

    PubMed Central

    He, Yongqun

    2016-01-01

    Summary While vaccine efficacy and safety research has dramatically progressed with the methods of in silico prediction and data mining, many challenges still exist. A formal ontology is a human- and computer-interpretable set of terms and relations that represent entities in a specific domain and how these terms relate to each other. Several community-based ontologies (including the Vaccine Ontology, Ontology of Adverse Events, and Ontology of Vaccine Adverse Events) have been developed to support vaccine and adverse event representation, classification, data integration, literature mining of host-vaccine interaction networks, and analysis of vaccine adverse events. The author further proposes minimal vaccine information standards and their ontology representations, ontology-based linked open vaccine data and meta-analysis, an integrative One Network (“OneNet”) Theory of Life, and ontology-based approaches to study and apply the OneNet theory. In the Big Data era, these proposed strategies provide a novel framework for advanced data integration and analysis of fundamental biological networks including vaccine immune mechanisms. PMID:24909153

  17. Joint deconvolution and classification with applications to passive acoustic underwater multipath.

    PubMed

    Anderson, Hyrum S; Gupta, Maya R

    2008-11-01

    This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering and avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example, a classifier for subband-power features is derived. Results on simulated data and real bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance compared with traditional methods over a range of signal-to-noise ratios.

  18. Energy Efficient Communication Using Relationships between Biological Signals for Ubiquitous Health Monitoring

    NASA Astrophysics Data System (ADS)

    Lee, Songjun; Na, Doosu; Koo, Bonmin

    Wireless sensor networks with a star topology are commonly applied in health monitoring systems. To determine the condition of a patient, sensor nodes are attached to the body to transmit data to a coordinator. However, this process is inefficient because the coordinator is always communicating with every sensor node, so its data-processing workload becomes much greater than that of the sensor nodes. In this paper, a method is proposed to reduce the number of data transmissions from the sensor nodes to the coordinator by establishing a threshold for the biological-signal data so that only relevant information is transmitted. This results in a dramatic reduction in power consumption throughout the entire network.
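    The core of the proposed saving is a send-on-threshold rule: a node transmits only when the new sample differs from the last transmitted value by more than a threshold. A minimal sketch under that assumption (the paper's exact thresholding of biological signals may differ):

    ```python
    def should_transmit(sample, last_sent, threshold):
        """Send only when the new sample deviates from the last transmitted
        value by more than the chosen threshold."""
        return abs(sample - last_sent) > threshold

    # toy stream: most samples are suppressed, cutting radio-on time
    last = None
    for sample in [72, 72, 73, 72, 90, 91, 74]:   # e.g. heart rate in bpm
        if last is None or should_transmit(sample, last, threshold=5):
            last = sample                          # transmit and remember it
            print("TX", sample)
    ```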

  19. Space-based detection of space debris by photometric and polarimetric characteristics

    NASA Astrophysics Data System (ADS)

    Pang, Shuxia; Wang, Hu; Lu, Xiaoyun; Shen, Yang; Pan, Yue

    2017-10-01

    The number of space debris objects has been increasing dramatically in the last few years and is expected to keep increasing in the future. As the orbital debris population grows, the risk of collision between debris and other orbital objects also grows. Space debris detection is therefore a particularly important task for space environment security, and it also supports space debris modeling, protection and mitigation. This paper aims to review space debris detection systematically and completely. Firstly, the state of research on space debris detection at home and abroad is presented. Then, three kinds of optical observation methods for space debris are summarized. Finally, we propose a space-based detection scheme for space debris based on photometric and polarimetric characteristics.

  20. Improving performance with knowledge management

    NASA Astrophysics Data System (ADS)

    Kim, Sangchul

    2018-06-01

    People and organizations are unable to easily locate their experience and knowledge, so meaningful data are usually fragmented, unstructured, out of date and largely incomplete. Poor knowledge management (KM) leaves a company vulnerable to its knowledge base - its intellectual capital - walking out of the door each year, at a minimum estimated rate of 10%. KM can be defined as an emerging set of organizational design and operational principles, processes, organizational structures, applications and technologies that helps knowledge workers dramatically leverage their creativity and ability to deliver business value and, ultimately, to reap a competitive advantage. This paper proposes various methods and software, starting from an understanding of the enterprise perspective, to inspire those who want to apply KM.

  1. Fast computation of hologram patterns of a 3D object using run-length encoding and novel look-up table methods.

    PubMed

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-02-20

    In this paper we propose a new approach for the fast generation of computer-generated holograms (CGHs) of a 3D object by using the run-length encoding (RLE) and novel look-up table (N-LUT) methods. With the RLE method, spatially redundant data of a 3D object are extracted and regrouped into an N-point redundancy map according to the number of adjacent object points having the same 3D value. Based on this redundancy map, N-point principal fringe patterns (PFPs) are newly calculated using the 1-point PFP of the N-LUT, and the CGH pattern for the 3D object is generated with these N-point PFPs. In this approach, the number of object points involved in the calculation of the CGH pattern can be dramatically reduced and, as a result, an increase in computational speed is obtained. Experiments with a test 3D object are carried out and the results are compared to those of the conventional methods.
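    The RLE step can be pictured as grouping adjacent object points that share the same 3D value into runs and binning the runs by length N. A minimal sketch of that grouping only (the synthesis of N-point PFPs from the 1-point PFP of the N-LUT is not shown):

    ```python
    from itertools import groupby

    def redundancy_map(object_points):
        """Bin runs of adjacent, equal-valued object points by run length N."""
        runs = {}
        for value, run in groupby(object_points):
            n = sum(1 for _ in run)
            runs.setdefault(n, []).append(value)
        return runs

    # a scanline whose depth values contain spatial redundancy
    print(redundancy_map([5, 5, 5, 2, 2, 9]))   # {3: [5], 2: [2], 1: [9]}
    ```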

  2. Robust Polypropylene Fabrics Super-Repelling Various Liquids: A Simple, Rapid and Scalable Fabrication Method by Solvent Swelling.

    PubMed

    Zhu, Tang; Cai, Chao; Duan, Chunting; Zhai, Shuai; Liang, Songmiao; Jin, Yan; Zhao, Ning; Xu, Jian

    2015-07-01

    A simple, rapid (10 s) and scalable method to fabricate superhydrophobic polypropylene (PP) fabrics is developed by swelling the fabrics in a cyclohexane/heptane mixture at 80 °C. The recrystallization of the swollen macromolecules on the fiber surface contributes to the formation of submicron protuberances, which increase the surface roughness dramatically and result in superhydrophobic behavior. The superhydrophobic PP fabrics possess excellent repellency to blood, urine, milk, coffee, and other common liquids, and show good durability and robustness, such as remarkable resistance to water penetration, abrasion, acidic/alkaline solutions, and boiling water. The excellent comprehensive performance of the superhydrophobic PP fabrics indicates their potential applications as oil/water separation materials, protective garments, diaper pads, or other medical and health supplies. This simple, fast and low-cost method, operating at a relatively low temperature, is superior to other reported techniques for fabricating superhydrophobic PP materials as far as large-scale manufacturing is concerned. Moreover, the proposed method is applicable to preparing superhydrophobic PP films and sheets as well.

  3. What Is Everyday Ethics? A Review and a Proposal for an Integrative Concept.

    PubMed

    Zizzo, Natalie; Bell, Emily; Racine, Eric

    2016-01-01

    "Everyday ethics" is a term that has been used in the clinical and ethics literature for decades to designate normatively important and pervasive issues in healthcare. In spite of its importance, the term has not been reviewed and analyzed carefully. We undertook a literature review to understand how the term has been employed and defined, finding that it is often contrasted to "dramatic ethics." We identified the core attributes most commonly associated with everyday ethics. We then propose an integrative model of everyday ethics that builds on the contribution of different ethical theories. This model proposes that the function of everyday ethics is to serve as an integrative concept that (1) helps to detect current blind spots in bioethics (that is, shifts the focus from dramatic ethics) and (2) mobilizes moral agents to address these shortcomings of ethical insight. This novel integrative model has theoretical, methodological, practical, and pedagogical implications, which we explore. Because of the pivotal role that moral experience plays in this integrative model, the model could help to bridge empirical ethics research with more conceptual and normative work. Copyright 2016 The Journal of Clinical Ethics. All rights reserved.

  4. Sample entropy analysis for estimating the depth of anaesthesia through human EEG signals at different levels of unconsciousness during surgeries.

    PubMed

    Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2018-01-01

    Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia at different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to acquire the chaotic features of the signals. After calculating the SampEn of the EEG signals, Random Forest was utilised to develop learning regression models with the Bispectral index (BIS) as the target. The correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression, from an initial value of 0.51 ± 0.17. Similarly, the final mean absolute error declined dramatically to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA indicates that each of the four groups of anaesthetic levels differed significantly from its nearest levels. Furthermore, the Random Forest output was extensively linear in relation to BIS, giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
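    For reference, a compact implementation of the SampEn feature, in a common variant using Chebyshev distance and tolerance r = 0.2 times the signal's standard deviation (the paper's exact m and r are not stated in the abstract):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn(m, r) = -ln(A / B): B counts template pairs of length m
        within tolerance r (Chebyshev distance), A the same for length m + 1."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)

        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            count = 0
            for i in range(len(t) - 1):
                dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                count += int(np.sum(dist <= r))
            return count

        return -np.log(matches(m + 1) / matches(m))

    # toy usage on a noisy sine (stand-in for an EEG segment)
    t = np.linspace(0, 10, 500)
    x = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(500)
    print(sample_entropy(x))
    ```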

  5. Ready for Prime Time

    ERIC Educational Resources Information Center

    Fulcher, Roxanne; Honore, Peggy; Kirkwood, Brenda; Riegelman, Richard

    2010-01-01

    Public health education is not just for graduate students anymore. The movement toward integrating public health into the education of undergraduates is rapidly evolving. Healthy People, a public-private consortium of more than 400 health-related organizations, has proposed an objective for 2020 that could dramatically increase public health…

  6. Outlook. Number 353

    ERIC Educational Resources Information Center

    Council for American Private Education, 2010

    2010-01-01

    Council for American Private Education (CAPE) is a coalition of national associations serving private schools K-12. "Outlook" is published monthly by CAPE. This issue contains the following articles: (1) Obama Budget Proposes Dramatic Changes for ESEA (Elementary and Secondary Education Act); (2) Push Continues for DC Voucher Program;…

  7. Drench effects of media portrayal of fatal virus disease on health locus of control beliefs.

    PubMed

    Bahk, C M

    2001-01-01

    Drawing on the notion of the drench hypothesis proposed by Greenberg (1988), the author proposes a preliminary theoretical framework to explain the "drenching" effects of dramatic media. Three drench variables (perceived realism, role identification, and media involvement) were identified and tested regarding their role in mediating the impact of virus disease portrayals on health locus-of-control belief orientations. Participants in the experimental condition watched the movie Outbreak (a portrayal of an outbreak of a deadly virus disease). Perceived realism, role identification, and media involvement were measured concerning the movie's depiction of the virus disease. The findings indicate that the dramatized portrayal significantly weakened the viewers' beliefs in self-controllability over health and strengthened their beliefs in chance outcomes of health. Beliefs in provider control over health were affected by the viewers' perception of realism regarding the movie portrayals. Effects of role identification differed between male and female viewers. The results are discussed in relation to drench analysis as a theoretical approach to media effects.

  8. Interaction Potency of Single-Walled Carbon Nanotubes with DNAs: A Novel Assay for Assessment of Hazard Risk

    PubMed Central

    Yao, Chunhe; Carlisi, Cristina; Li, Yuning; Chen, Da; Ding, Jianfu; Feng, Yong-Lai

    2016-01-01

    Increasing use of single-walled carbon nanotubes (SWCNTs) necessitates a novel method for hazard risk assessment. In this work, we investigated the interaction of several types of commercial SWCNTs with single-stranded (ss) and double-stranded (ds) DNA oligonucleotides (20-mer and 20 bp). Based on the results, we proposed a novel assay that employs the DNA interaction potency to assess the hazard risk of SWCNTs. It was found that SWCNTs of different sizes, or different batches of the same product number, showed dramatically different potencies of interaction with DNAs. In addition, the same SWCNTs also exerted strikingly different interaction potencies with ss- versus ds-DNAs. The interaction rates of SWCNTs with DNAs were investigated and could be utilized as an indicator of potential hazard for acute exposure. Compared with solid SWCNTs, SWCNTs dispersed in a liquid medium (2% sodium cholate solution) exhibited dramatically different interaction potencies with DNAs. This indicates that the exposure medium may greatly influence the subsequent toxicity and hazard risk produced by SWCNTs. Based on the findings of dose- and time-dependence of the interactions between SWCNTs and DNAs, a new chemistry-based assay for the hazard risk assessment of nanomaterials including SWCNTs is presented. PMID:27936089

  9. Interaction Potency of Single-Walled Carbon Nanotubes with DNAs: A Novel Assay for Assessment of Hazard Risk.

    PubMed

    Yao, Chunhe; Carlisi, Cristina; Li, Yuning; Chen, Da; Ding, Jianfu; Feng, Yong-Lai

    2016-01-01

    Increasing use of single-walled carbon nanotubes (SWCNTs) necessitates a novel method for hazard risk assessment. In this work, we investigated the interaction of several types of commercial SWCNTs with single-stranded (ss) and double-stranded (ds) DNA oligonucleotides (20-mer and 20 bp). Based on the results, we proposed a novel assay that employs the DNA interaction potency to assess the hazard risk of SWCNTs. It was found that SWCNTs of different sizes, or different batches of the same product number, showed dramatically different potencies of interaction with DNAs. In addition, the same SWCNTs also exerted strikingly different interaction potencies with ss- versus ds-DNAs. The interaction rates of SWCNTs with DNAs were investigated and could be utilized as an indicator of potential hazard for acute exposure. Compared with solid SWCNTs, SWCNTs dispersed in a liquid medium (2% sodium cholate solution) exhibited dramatically different interaction potencies with DNAs. This indicates that the exposure medium may greatly influence the subsequent toxicity and hazard risk produced by SWCNTs. Based on the findings of dose- and time-dependence of the interactions between SWCNTs and DNAs, a new chemistry-based assay for the hazard risk assessment of nanomaterials including SWCNTs is presented.

  10. Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation

    PubMed Central

    Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-01-01

    Summary Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182

  11. Deductive derivation and turing-computerization of semiparametric efficient estimation.

    PubMed

    Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-12-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. © 2015, The International Biometric Society.

  12. A new comprehensive index for drought monitoring with TM data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan

    2017-10-01

    Drought is one of the most important and frequent natural hazards to agricultural production in the North China Plain. To improve agricultural water management, accurate drought monitoring information is needed. This study proposed a method for comprehensive drought monitoring by combining a meteorological index and three satellite drought indices from TM data. The SPI (Standardized Precipitation Index), a meteorological drought index, is used to measure precipitation deficiency. Three satellite drought indices (Temperature Vegetation Drought Index, Land Surface Water Index, Modified Perpendicular Drought Index) are used to evaluate agricultural drought risk by exploiting data from various channels (VIS, NIR, SWIR, TIR). Considering disparities in the data ranges of the different drought indices, normalization is applied before combination. First, SPI is normalized to 0-100, given that its normal range is -4 to +4. Then, the three satellite drought indices are normalized to 0-100 according to the maximum and minimum values in the image and aggregated using a weighted average (the result is denoted ADI, the aggregated drought index). Finally, the weighted geometric mean of SPI and ADI is calculated (the result is denoted DIcombined). A case study in the North China Plain using three TM images acquired during April-May 2007 shows that the proposed method is effective. In the spatial domain, DIcombined reveals dramatically more detail than SPI; in the temporal domain, DIcombined shows a more reasonable drought development trajectory than satellite indices derived from independent TM images.
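    The normalization and combination chain described above is easy to state in code. A minimal sketch; the aggregation weights and the SPI/ADI weighting below are assumptions, as the abstract does not give their values:

    ```python
    import numpy as np

    def normalize_spi(spi):
        """Map SPI from its nominal range [-4, +4] to [0, 100]."""
        return (np.clip(spi, -4, 4) + 4) / 8 * 100

    def minmax_100(img):
        """Min-max normalize a satellite index image to [0, 100]."""
        lo, hi = np.nanmin(img), np.nanmax(img)
        return (img - lo) / (hi - lo) * 100

    def di_combined(spi, tvdi, lswi, mpdi, w=(1/3, 1/3, 1/3), w_spi=0.5):
        """Weighted-average ADI from the satellite indices, then the weighted
        geometric mean of normalized SPI and ADI."""
        adi = (w[0] * minmax_100(tvdi) + w[1] * minmax_100(lswi)
               + w[2] * minmax_100(mpdi))
        return normalize_spi(spi) ** w_spi * adi ** (1 - w_spi)
    ```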

  13. Multisyringe flow injection analysis hyphenated with liquid core waveguides for the development of cleaner spectroscopic analytical methods: improved determination of chloride in waters.

    PubMed

    Maya, Fernando; Estela, José Manuel; Cerdà, Víctor

    2009-07-01

    In this work, the hyphenation of the multisyringe flow injection analysis technique with a 100-cm-pathlength liquid core waveguide has been accomplished. The Cl⁻/Hg(SCN)₂/Fe³⁺ reaction system for the spectrophotometric determination of chloride (Cl⁻) in waters was used as the chemical model. As a result, this classic analytical methodology has been improved, dramatically minimizing the consumption of reagents, in particular that of the highly biotoxic chemical Hg(SCN)₂. The proposed method features a linear dynamic range composed of two steps, (1) 0.2-2 and (2) 2-8 mg Cl⁻ L⁻¹, and thus extended applicability due to on-line sample dilution (up to 400 mg Cl⁻ L⁻¹). It also presents improved limits of detection and quantification of 0.06 and 0.20 mg Cl⁻ L⁻¹, respectively. The coefficient of variation and the injection throughput were 1.3% (n = 10, 2 mg Cl⁻ L⁻¹) and 21 h⁻¹. Furthermore, a very low consumption of reagents per Cl⁻ determination, 0.2 µg Hg(II) and 28 µg Fe³⁺, has been achieved. The method was successfully applied to the determination of Cl⁻ in different types of water samples. Finally, the proposed system is critically compared, from a green analytical chemistry point of view, against other flow systems for the same purpose.

  14. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    PubMed

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
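    The speed claim rests on the ELM's one-shot solve: input weights are random and fixed, and only the hidden-to-output weights are computed, in closed form, by a pseudoinverse. A minimal single-hidden-layer sketch (the paper's stacked ELM inside a CNN and its modified backpropagation are not shown):

    ```python
    import numpy as np

    def train_elm(X, T, n_hidden=256, seed=0):
        """Random input weights, hidden activations H, then output weights
        beta = pinv(H) @ T in a single least-squares step."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)             # hidden-layer activations
        beta = np.linalg.pinv(H) @ T       # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```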

  15. The Development of Evaluation Model for Internal Quality Assurance System of Dramatic Arts College of Bunditpattanasilpa Institute

    ERIC Educational Resources Information Center

    Sinthukhot, Kittisak; Srihamongkol, Yannapat; Luanganggoon, Nuchwana; Suwannoi, Paisan

    2013-01-01

    The research purpose was to develop an evaluation model for the internal quality assurance system of the dramatic arts College of Bunditpattanasilpa Institute. The Research and Development method was used as research methodology which was divided into three phases; "developing the model and its guideline", "trying out the actual…

  16. Very High Dose-Rate Radiobiology and Radiation Therapy for Lung Cancer

    DTIC Science & Technology

    2015-02-01

    most dramatic example is stereotactic ablative radiotherapy (SABR)/stereotactic body radiation therapy (SBRT), highly focused and accurate... significant motion, thus increasing the precision and accuracy of lung SABR/SBRT. Objective: We propose to develop a new type of RT system for early-stage

  17. Research on Assessment Methods for Urban Public Transport Development in China

    PubMed Central

    Zou, Linghong; Guo, Hongwei

    2014-01-01

    In recent years, with the rapid increase in urban population, urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing urban travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of the existing literature shows that the methods used in evaluating urban public transport structure are predominantly qualitative. To overcome this shortcoming, the fuzzy mathematics method is used to describe qualitative issues quantitatively, and AHP (analytic hierarchy process) is used to quantify experts' subjective judgment. The assessment model is established based on the fuzzy AHP. The weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method to obtain the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method. PMID:25530756
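    The AHP weighting step reduces to extracting the principal eigenvector of a pairwise comparison matrix. A minimal sketch with a hypothetical 3-criterion matrix on Saaty's 1-9 scale (the fuzzy membership and synthesis steps are not shown):

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Criterion weights = principal eigenvector of the pairwise
        comparison matrix, normalized to sum to 1."""
        vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        w = np.abs(principal)
        return w / w.sum()

    # hypothetical comparison of three criteria
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    print(ahp_weights(A))   # roughly [0.65, 0.23, 0.12]
    ```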

  18. Room temperature synthesis and enhanced photocatalytic property of CeO2/ZnO heterostructures

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Fan, Huiqing; Ren, Xiaohu; Fang, Jiawen

    2018-02-01

    To achieve better photocatalytic performance, we propose a facile solid-state reaction method to produce CeO2/ZnO heterostructures. Ceria and zinc oxide were synthesized simultaneously by thoroughly grinding a mixture of zinc acetate dihydrate, cerium nitrate hexahydrate and sodium hydroxide. The morphology of the as-prepared heterostructures varies dramatically as different amounts of ceria are introduced into the composition. The photocatalytic performance of the CeO2/ZnO heterojunctions was 4.6 times higher than that of pure ZnO. The enhanced photocatalytic activity can be ascribed to more electrons and holes being transported to the catalyst surface to react with pollutants, owing to the extended light-responsive range, accelerated carrier migration, increased specific surface area and suppressed recombination of photogenerated carriers.

  19. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Directed dynamical influence is more detectable with noise

    PubMed Central

    Jiang, Jun-Jie; Huang, Zi-Gang; Huang, Liang; Liu, Huan; Lai, Ying-Cheng

    2016-01-01

    Successful identification of directed dynamical influence in complex systems is relevant to significant problems of current interest. Traditional methods based on Granger causality and transfer entropy have issues such as difficulty with nonlinearity and large data requirement. Recently a framework based on nonlinear dynamical analysis was proposed to overcome these difficulties. We find, surprisingly, that noise can counterintuitively enhance the detectability of directed dynamical influence. In fact, intentionally injecting a proper amount of asymmetric noise into the available time series has the unexpected benefit of dramatically increasing confidence in ascertaining the directed dynamical influence in the underlying system. This result is established based on both real data and model time series from nonlinear ecosystems. We develop a physical understanding of the beneficial role of noise in enhancing detection of directed dynamical influence. PMID:27066763

  1. Directed dynamical influence is more detectable with noise.

    PubMed

    Jiang, Jun-Jie; Huang, Zi-Gang; Huang, Liang; Liu, Huan; Lai, Ying-Cheng

    2016-04-12

    Successful identification of directed dynamical influence in complex systems is relevant to significant problems of current interest. Traditional methods based on Granger causality and transfer entropy have issues such as difficulty with nonlinearity and large data requirement. Recently a framework based on nonlinear dynamical analysis was proposed to overcome these difficulties. We find, surprisingly, that noise can counterintuitively enhance the detectability of directed dynamical influence. In fact, intentionally injecting a proper amount of asymmetric noise into the available time series has the unexpected benefit of dramatically increasing confidence in ascertaining the directed dynamical influence in the underlying system. This result is established based on both real data and model time series from nonlinear ecosystems. We develop a physical understanding of the beneficial role of noise in enhancing detection of directed dynamical influence.

  2. A modular approach for automated sample preparation and chemical analysis

    NASA Technical Reports Server (NTRS)

    Clark, Michael L.; Turner, Terry D.; Klingler, Kerry M.; Pacetti, Randolph

    1994-01-01

    Changes in international relations, especially within the past several years, have dramatically affected the programmatic thrusts of the U.S. Department of Energy (DOE). The DOE now is addressing the environmental cleanup required as a result of 50 years of nuclear arms research and production. One major obstacle in the remediation of these areas is the chemical determination of potentially contaminated material using currently acceptable practices. Process bottlenecks and exposure to hazardous conditions pose problems for the DOE. One proposed solution is the application of modular automated chemistry using Standard Laboratory Modules (SLM) to perform Standard Analysis Methods (SAM). The Contaminant Analysis Automation (CAA) Program has developed standards and prototype equipment that will accelerate the development of modular chemistry technology and is transferring this technology to private industry.

  3. Minimization of Ohmic Losses for Domain Wall Motion in a Ferromagnetic Nanowire

    NASA Astrophysics Data System (ADS)

    Tretiakov, O. A.; Liu, Y.; Abanov, Ar.

    2010-11-01

    We study current-induced domain-wall motion in a narrow ferromagnetic wire. We propose a way to move domain walls with a resonant time-dependent current, which dramatically decreases the Ohmic losses in the wire and allows driving the domain wall at higher speed without burning the wire. For any domain-wall velocity we find the time dependence of the current needed to minimize the Ohmic losses. Below a critical domain-wall velocity specified by the parameters of the wire, the minimal Ohmic losses are achieved by a dc current. Furthermore, we identify the wire parameters for which the reduction of the losses from their dc value is the most dramatic.

  4. Parametric study of sensor placement for vision-based relative navigation system of multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung

    2017-12-01

    In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Various vision-based relative navigation systems are therefore used for proximity operations between two spacecraft. For the implementation of these systems, a sensor placement problem arises on the exterior of the spacecraft due to its limited space. To deal with sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF). The two algorithms are applied, respectively, to set the locations of the sensors and to estimate relative positions and attitudes for each combination of PSDs and beacons. After that, scores for the sensor placement are calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is then selected as the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
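    One common reading of farthest point optimization is greedy farthest-point sampling over candidate mounting locations: each new sensor goes where it is farthest from all sensors placed so far. A minimal sketch under that assumption (the coupling with CEKF-based accuracy scoring is omitted):

    ```python
    import numpy as np

    def farthest_point_selection(candidates, k, start=0):
        """Greedily pick k locations, each maximizing its distance to the
        nearest already-selected location."""
        candidates = np.asarray(candidates)
        selected = [start]
        d = np.linalg.norm(candidates - candidates[start], axis=1)
        for _ in range(k - 1):
            nxt = int(np.argmax(d))           # farthest from current set
            selected.append(nxt)
            d = np.minimum(d, np.linalg.norm(candidates - candidates[nxt], axis=1))
        return candidates[selected]

    # toy usage: pick 3 of 100 random candidate mounting points
    pts = np.random.default_rng(0).uniform(size=(100, 3))
    print(farthest_point_selection(pts, k=3))
    ```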

  5. Multiplying Is More than Math--It's Also Good Management

    ERIC Educational Resources Information Center

    Foster, Elise; Wiseman, Liz

    2015-01-01

    Studying more than 400 educational leaders, the authors propose a new model for leadership and management rooted in the belief that there is latent intelligence inside schools and educational organizations. Their findings suggest two dramatically different types of leaders, Multipliers and Diminishers. The five disciplines that distinguish…

  6. Name Changes in Medically Important Fungi and Their Implications for Clinical Practice

    PubMed Central

    de Hoog, G. Sybren; Chaturvedi, Vishnu; Denning, David W.; Dyer, Paul S.; Frisvad, Jens Christian; Geiser, David; Gräser, Yvonne; Guarro, Josep; Haase, Gerhard; Kwon-Chung, Kyung-Joo; Meyer, Wieland; Pitt, John I.; Samson, Robert A.; Tintelnot, Kathrin; Vitale, Roxana G.; Walsh, Thomas J.

    2014-01-01

    Recent changes in the Fungal Code of Nomenclature and developments in molecular phylogeny are about to lead to dramatic changes in the naming of medically important molds and yeasts. In this article, we present a widely supported and simple proposal to prevent unnecessary nomenclatural instability. PMID:25297326

  7. Counterfactual quantum erasure: spooky action without entanglement

    NASA Astrophysics Data System (ADS)

    Salih, Hatim

    2018-02-01

    We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: a distant Bob can decide to erase which-path information from Alice's photon, dramatically restoring interference, without previously shared entanglement and without Alice's photon ever leaving her laboratory.

  8. Comparing Post-Soviet Changes in Higher Education Governance in Kazakhstan, Russia, and Uzbekistan

    ERIC Educational Resources Information Center

    Azimbayeva, Gulzhan

    2017-01-01

    This paper argues that during the "perestroika" period the institutionalised context of Soviet higher education governance was transformed dramatically. It attempts to explain the outcomes for higher education of the "perestroika" period and proposes a theory of "institutional dis/continuities". The…

  9. Counterfactual quantum erasure: spooky action without entanglement.

    PubMed

    Salih, Hatim

    2018-02-01

    We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: a distant Bob can decide to erase which-path information from Alice's photon, dramatically restoring interference, without previously shared entanglement and without Alice's photon ever leaving her laboratory.

  10. Recipients of RTT Aid Struggling

    ERIC Educational Resources Information Center

    McNeil, Michele

    2012-01-01

    The 12 winners of the federal Race to the Top competition have experienced near-universal challenges in turning their sweeping, multifaceted proposals into reality, among them a limited state capacity to execute fast, dramatic change and deeply rooted teacher-evaluation systems that have proved hard to transform. Reports unveiled by the U.S.…

  11. [Comprehensive weighted recognition method for hydrological abrupt change: With the runoff series of Jiajiu hydrological station in Lancang River as an example].

    PubMed

    Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change, and its accurate recognition is of great significance for understanding changes in hydrological processes and for practical hydrology and water resources work. Traditional methods are unreliable near both ends of a sample series, and the results of different methods are often inconsistent. To solve this problem, we proposed a comprehensive weighted recognition method for hydrological abrupt change, based on weights derived from a comparison of 12 commonly used change-point tests. The reliability of the method was verified by Monte Carlo statistical tests. The results showed that the efficiency of the 12 methods was influenced by factors including the coefficient of variation (Cv), the skewness coefficient (Cs) before the change point, the mean-value difference coefficient, the Cv difference coefficient and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method, a weight was assigned to each test following the statistical test results. The sliding rank-sum test and the sliding run test had the highest weights, whereas the RS test had the lowest. By this means, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. This method was used to analyze the daily maximum sequences of Jiajiu station in the lower reaches of the Lancang River (1-day, 3-day, 5-day, 7-day and 1-month). The results showed that each sequence had an obvious jump variation in 2004, which is in agreement with the physical causes of hydrological process change and with water conservancy construction. The rationality and reliability of the proposed method were thus verified.
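    The final selection rule is a weighted vote: when the 12 tests disagree, the candidate change point carrying the largest total method weight wins. A minimal sketch with hypothetical detections and weights:

    ```python
    def weighted_change_point(detections, weights):
        """Sum the weights of all tests voting for each candidate year
        and keep the heaviest candidate."""
        score = {}
        for year, w in zip(detections, weights):
            score[year] = score.get(year, 0.0) + w
        return max(score, key=score.get)

    # hypothetical: years detected by 12 tests, and the method weights
    years = [2004, 2004, 1998, 2004, 2010, 2004, 2004, 1998, 2004, 2004, 2010, 2004]
    w     = [0.12, 0.12, 0.05, 0.10, 0.06, 0.09, 0.11, 0.04, 0.09, 0.08, 0.05, 0.09]
    print(weighted_change_point(years, w))   # -> 2004
    ```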

  12. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
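    The multigrid idea can be sketched generically: solve the fixed-point iteration on a coarse grid first, then prolong each result as the initial guess on the next finer grid, so most iterations happen where they are cheap. The sketch below assumes a user-supplied `solve_fixed_point(shape, init)` callback standing in for the paper's fluence/absorption fixed-point solver:

    ```python
    import numpy as np

    def coarse_to_fine(solve_fixed_point, n_levels, finest_shape):
        """Run the reconstruction from the coarsest grid up, prolonging each
        solution (here by simple pixel replication) to initialize the next level."""
        estimate = None
        for level in reversed(range(n_levels)):
            shape = tuple(s // 2 ** level for s in finest_shape)
            if estimate is not None:                  # prolong coarse result
                estimate = np.kron(estimate, np.ones((2, 2)))[:shape[0], :shape[1]]
            estimate = solve_fixed_point(shape, init=estimate)
        return estimate
    ```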

  13. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  14. From Rehearsed Monologue to Spontaneous Acting

    ERIC Educational Resources Information Center

    Niedzielski, Henri

    1972-01-01

    Suggests that the effective prerequisites for teaching methods courses are cheerleading, modern dance, and dramatics. Follows acting methods and mental attitudes of Polish director, Jerzy Grotowski. (DS)

  15. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
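    For concreteness, the two cost functions can be written side by side (these are the standard forms from the IB literature, in our notation):

    ```latex
    \mathcal{L}_{\mathrm{IB}}\!\left[p(t\mid x)\right] \;=\; I(X;T)\;-\;\beta\, I(T;Y),
    \qquad
    \mathcal{L}_{\mathrm{DIB}}\!\left[p(t\mid x)\right] \;=\; H(T)\;-\;\beta\, I(T;Y)
    ```

    Replacing the compression term I(X;T) with the entropy H(T) penalizes the bits used to represent the past rather than the bits retained about it, which is what drives the optimal encoder from stochastic to deterministic.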

  16. Sensor-oriented feature usability evaluation in fingerprint segmentation

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yin, Yilong; Yang, Gongping

    2013-06-01

    Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation results in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means assessing the usability of candidate features in order to find the personalized feature or feature set for each sensor that improves segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired with four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed by feature extraction is dramatically reduced with the selected feature(s).
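    A minimal sketch of the per-sensor feature-ranking step is shown below; note that scikit-learn's CART tree stands in for the paper's C4.5 algorithm, and the block-wise feature matrix is assumed to be precomputed.

    ```python
    # Rank candidate segmentation features for one sensor with a decision tree.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def pick_features_for_sensor(X, y, feature_names, top_k=3):
        """X: (n_blocks, n_features) block-wise features; y: 1 = foreground block."""
        tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X, y)
        ranked = np.argsort(tree.feature_importances_)[::-1]   # most useful first
        return [feature_names[i] for i in ranked[:top_k]]
    ```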

  17. Using dynamic programming to improve fiducial marker localization

    NASA Astrophysics Data System (ADS)

    Wan, Hanlin; Ge, Jiajia; Parikh, Parag

    2014-04-01

    Fiducial markers are used in a wide range of medical imaging applications. In radiation therapy, they are often implanted near tumors and used as motion surrogates that are tracked with fluoroscopy. We propose a novel and robust method based on dynamic programming (DP) for retrospectively localizing radiopaque fiducial markers in fluoroscopic images. Our method was compared to template matching (TM) algorithms on 407 data sets from 24 patients. We found that the performance of TM varied dramatically depending on the template used (ranging from 47% to 92% of data sets with a mean error <1 mm). DP by itself requires no template and performed as well as the best TM method, localizing the markers in 91% of the data sets with a mean error <1 mm. Finally, by combining DP and TM, we were able to localize the markers in 99% of the data sets with a mean error <1 mm, regardless of the template used. Our results show that DP can be a powerful tool for analyzing tumor motion, capable of accurately locating fiducial markers in fluoroscopic images regardless of marker type, shape, and size.
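    The DP idea can be sketched as a Viterbi-style recursion over frames: pick one candidate per frame so that detection scores are maximized while large jumps between consecutive frames are penalized. The candidate scores, positions and jump weight below are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def dp_track(scores, positions, jump_weight=1.0):
        """scores[t][k]: detection score of candidate k in frame t;
        positions[t][k]: its (x, y). Returns one position per frame."""
        T = len(scores)
        cost = [np.asarray(scores[0], float)]
        back = []
        for t in range(1, T):
            prev, cur = np.asarray(positions[t - 1]), np.asarray(positions[t])
            jump = np.linalg.norm(cur[:, None, :] - prev[None, :, :], axis=2)
            total = cost[-1][None, :] - jump_weight * jump  # reward score, penalize motion
            back.append(np.argmax(total, axis=1))
            cost.append(np.asarray(scores[t], float) + np.max(total, axis=1))
        k = int(np.argmax(cost[-1]))                        # backtrack the best path
        path = [k]
        for bp in reversed(back):
            k = int(bp[k])
            path.append(k)
        path.reverse()
        return [positions[t][path[t]] for t in range(T)]
    ```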

  18. Tracking control of time-varying knee exoskeleton disturbed by interaction torque.

    PubMed

    Li, Zhan; Ma, Wenhao; Yin, Ziguang; Guo, Hongliang

    2017-11-01

    Knee exoskeletons have been increasingly applied as assistive devices to help lower-extremity-impaired people move their knee joints by providing external movement compensation. Tracking control of knee exoskeletons guided by human intentions often encounters time-varying (time-dependent) issues and disturbance from the interaction torque, which may dramatically influence their dynamic behavior. Inertial and viscous parameters of knee exoskeletons can be time-varying due to unexpected mechanical vibrations and contact interactions. Moreover, the interaction torque produced by the wearer's knee joint has an evident disturbance effect on the regular motion of the exoskeleton. All of these factors increase the difficulty of accurately controlling knee exoskeletons to follow desired joint angle trajectories. This paper proposes a novel control strategy for a knee exoskeleton with time-varying inertial and viscous coefficients disturbed by interaction torque. The designed controller makes the tracking error of the joint angle exponentially converge to zero. Meanwhile, the proposed approach is robust, guaranteeing a bounded tracking error when the interaction torque exists. Illustrative simulation and experiment results are presented to show the efficiency of the proposed controller. Additionally, comparisons with the gradient dynamic (GD) approach and other methods demonstrate the efficiency and superiority of the proposed control strategy for tracking the joint angle of a knee exoskeleton. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Parsimonious extreme learning machine using recursive orthogonal least squares.

    PubMed

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, a parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with a dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
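    For readers unfamiliar with the ELM baseline these variants build on, here is a minimal sketch: hidden-layer parameters are random and only the output weights are solved by least squares. The recursive orthogonal least squares pruning that defines CP-/DP-ELM is not shown.

    ```python
    import numpy as np

    def elm_fit(X, Y, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                            # hidden-layer outputs
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # output weights by least squares
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```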

  20. Space-time encoding for high frame rate ultrasound imaging.

    PubMed

    Misaridis, Thanassis X; Jensen, Jørgen A

    2002-05-01

    Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts that degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up- and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
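    A simplified sketch of the temporal-encoding ingredients, with illustrative (not the paper's) center frequencies, bandwidths, and tapering:

    ```python
    # Up- and down-chirp excitation in separate bands, plus pulse compression
    # by cross-correlation with the transmitted code.
    import numpy as np
    from scipy.signal import chirp, fftconvolve, windows

    fs, T = 40e6, 20e-6                              # sampling rate, pulse duration
    t = np.arange(0, T, 1 / fs)
    taper = windows.tukey(t.size, alpha=0.2)         # amplitude tapering
    up = chirp(t, f0=3e6, t1=T, f1=7e6) * taper      # up-chirp band (3-7 MHz)
    down = chirp(t, f0=13e6, t1=T, f1=9e6) * taper   # down-chirp band (9-13 MHz)

    def compress(rx, code):
        # Matched filtering: cross-correlate the received signal with one code.
        return fftconvolve(rx, code[::-1], mode="same")
    ```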

  1. Emerging medical informatics research trends detection based on MeSH terms.

    PubMed

    Lyu, Peng-Hui; Yao, Qiang; Mao, Jin; Zhang, Shi-Jing

    2015-01-01

    The aim of this study is to analyze the research trends of medical informatics over the last 12 years. A new method based on MeSH terms was proposed to identify emerging topics and trends in medical informatics research. Informetric methods and visualization technologies were applied to investigate research trends in medical informatics. A perspective factor (PF) metric embedding MeSH terms was employed to assess the perspective quality of journals. The emerging MeSH terms have changed dramatically over the last 12 years, identifying two stages of medical informatics: the "medical imaging stage" and the "medical informatics stage". The focus of medical informatics has shifted from acquisition and storage of healthcare data by integrating computational, informational, cognitive and organizational sciences to semantic analysis for problem solving and clinical decision-making. About 30 core journals were determined by Bradford's Law in the last 3 years in this area. These journals, with high PF values, have relatively high perspective quality and lead the trend of medical informatics.

  2. Refinement procedure for the image alignment in high-resolution electron tomography.

    PubMed

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with marker-based reference procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
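    As a toy illustration of alignment by contrast maximization (here scoring integer shifts of a single image pair, whereas the paper scores reconstructed tomogram sub-volumes):

    ```python
    import numpy as np

    def best_shift(reference, image, max_shift=10):
        """Brute-force search for the shift that maximizes sum-image contrast."""
        best, best_score = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
                score = np.var(reference + shifted)   # contrast of the combined image
                if score > best_score:
                    best, best_score = (dy, dx), score
        return best
    ```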

  3. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing

    PubMed Central

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-01-01

    With the rapid development of big data and the Internet of Things (IoT), the number of networked devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. PMID:28737733

  4. Transferability and inter-laboratory variability assessment of the in vitro bovine oocyte fertilization test.

    PubMed

    Tessaro, Irene; Modina, Silvia C; Crotti, Gabriella; Franciosi, Federica; Colleoni, Silvia; Lodde, Valentina; Galli, Cesare; Lazzari, Giovanna; Luciano, Alberto M

    2015-01-01

    The dramatic increase in the number of animals required for reproductive toxicity testing calls for the validation of alternative methods to reduce the use of laboratory animals. As we previously demonstrated for the in vitro maturation test of bovine oocytes, the present study describes the transferability assessment and the inter-laboratory variability of an in vitro test able to identify chemical effects during the process of bovine oocyte fertilization. Eight chemicals with well-known toxic properties (benzo[a]pyrene, busulfan, cadmium chloride, cycloheximide, diethylstilbestrol, ketoconazole, methylacetoacetate, mifepristone/RU-486) were tested in two well-trained laboratories. The statistical analysis demonstrated no differences in the EC50 values for each chemical in the within-laboratory (inter-run) and between-laboratory variability of the proposed test. We therefore conclude that the bovine in vitro fertilization test could advance toward the validation process as an alternative in vitro method and become part of an integrated testing strategy to predict chemical hazards to mammalian fertility. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Skill networks and measures of complex human capital.

    PubMed

    Anderson, Katharine A

    2017-11-28

    We propose a network-based method for measuring worker skills. We illustrate the method using data from an online freelance website. Using the tools of network analysis, we divide skills into endogenous categories based on their relationship with other skills in the market. Workers who specialize in these different areas earn dramatically different wages. We then show that, in this market, network-based measures of human capital provide additional insight into wages beyond traditional measures. In particular, we show that workers with diverse skills earn higher wages than those with more specialized skills. Moreover, we can distinguish between two different types of workers benefiting from skill diversity: jacks-of-all-trades, whose skills can be applied independently on a wide range of jobs, and synergistic workers, whose skills are useful in combination and fill a hole in the labor market. On average, workers whose skills are synergistic earn more than jacks-of-all-trades. Copyright © 2017 the Author(s). Published by PNAS.
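    A minimal sketch of the network construction step; the greedy modularity community detection used here is a stand-in, since the record does not name the paper's partitioning method.

    ```python
    # Build a skill co-occurrence network from worker skill lists and split it
    # into endogenous skill categories via community detection.
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def skill_communities(workers):
        """workers: list of skill lists, e.g. [["python", "sql"], ...]."""
        G = nx.Graph()
        for skills in workers:
            for a, b in combinations(sorted(set(skills)), 2):
                w = G.get_edge_data(a, b, {"weight": 0})["weight"]
                G.add_edge(a, b, weight=w + 1)   # co-occurrence count as edge weight
        return list(greedy_modularity_communities(G, weight="weight"))
    ```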

  6. Intelligent transient transitions detection of LRE test bed

    NASA Astrophysics Data System (ADS)

    Zhu, Fengyu; Shen, Zhengguang; Wang, Qi

    2013-01-01

    Health monitoring systems implement monitoring strategies for complex systems, thereby avoiding catastrophic failure, extending life and leading to improved asset management. A health monitoring system generally encompasses intelligence at many levels and sub-systems, including sensors, actuators, devices, etc. In this paper, a smart sensor is studied, which is used to detect transient transitions of a liquid-propellant rocket engine test bed. In consideration of the dramatic changes under variable operating conditions, wavelet decomposition is used to work in real time. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions as well as to obtain the frequency content using a much smaller data set. Historically, transient transitions were only detected by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically as well as many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
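    A minimal sketch of the wavelet-based detection idea, assuming the PyWavelets package; the thresholding rule is an illustrative choice, not the paper's algorithm.

    ```python
    # Flag transients when the energy of fine-scale wavelet detail coefficients
    # jumps above a robust baseline threshold.
    import numpy as np
    import pywt

    def detect_transients(x, wavelet="db4", level=3, k=5.0):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        d1 = coeffs[-1]                        # finest-scale detail coefficients
        energy = d1**2
        thresh = np.median(energy) * k         # robust baseline times a factor
        return np.where(energy > thresh)[0]    # indices (in d1 scale) of transients
    ```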

  7. Nonequilibrium ab initio molecular dynamics determination of Ti monovacancy migration rates in B 1 TiN

    NASA Astrophysics Data System (ADS)

    Gambino, D.; Sangiovanni, D. G.; Alling, B.; Abrikosov, I. A.

    2017-09-01

    We use the color diffusion (CD) algorithm in nonequilibrium (accelerated) ab initio molecular dynamics simulations to determine Ti monovacancy jump frequencies in NaCl-structure titanium nitride (TiN), at temperatures ranging from 2200 to 3000 K. Our results show that the CD method extended beyond the linear-fitting rate-versus-force regime [Sangiovanni et al., Phys. Rev. B 93, 094305 (2016), 10.1103/PhysRevB.93.094305] can efficiently determine metal vacancy migration rates in TiN, despite the low mobilities of lattice defects in this type of ceramic compound. We propose a computational method based on gamma-distribution statistics, which provides an unambiguous definition of nonequilibrium and equilibrium (extrapolated) vacancy jump rates with corresponding statistical uncertainties. The acceleration factor achieved in our implementation of nonequilibrium molecular dynamics increases dramatically with decreasing temperature, from 500 for T close to the melting point Tm up to 33 000 for T ≈ 0.7 Tm.

  8. High-frequency stock linkage and multi-dimensional stationary processes

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Bao, Si; Chen, Jingchao

    2017-02-01

    In recent years, China's stock market has experienced dramatic fluctuations; in particular, in the second half of 2014 and 2015, the market rose sharply and fell quickly. Many classical financial phenomena, such as stock plate linkage, appeared repeatedly during this period. In general, these phenomena have usually been studied using daily-level or minute-level data. Our paper focuses on the linkage phenomenon in Chinese stock 5-second-level data during this extremely volatile period. The method used to select the linkage points and the arbitrage strategy are both based on multi-dimensional stationary processes. A new programmatic method for testing multi-dimensional stationarity is proposed in our paper, and the detailed program is presented in the appendix. Because of the existence of the stationary process, the strategy's logarithmic cumulative average return will converge under the condition of the strong ergodic theorem, and this ensures the effectiveness of the stocks' linkage points and a more stable statistical arbitrage strategy.
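    The record does not detail the proposed test, but a simple stand-in illustrates the kind of check involved: testing each coordinate of the multi-dimensional series for stationarity, here with an augmented Dickey-Fuller test from statsmodels.

    ```python
    from statsmodels.tsa.stattools import adfuller

    def all_dimensions_stationary(X, alpha=0.05):
        """X: (n_samples, n_dims). True if every dimension rejects a unit root."""
        return all(adfuller(X[:, j])[1] < alpha for j in range(X.shape[1]))
    ```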

  9. Atomic "bomb testing": the Elitzur-Vaidman experiment violates the Leggett-Garg inequality

    NASA Astrophysics Data System (ADS)

    Robens, Carsten; Alt, Wolfgang; Emary, Clive; Meschede, Dieter; Alberti, Andrea

    2017-01-01

    Elitzur and Vaidman have proposed a measurement scheme that, based on the quantum superposition principle, allows one to detect the presence of an object—in a dramatic scenario, a bomb—without interacting with it. It was pointed out by Ghirardi that this interaction-free measurement scheme can be put in direct relation with falsification tests of the macro-realistic worldview. Here we have implemented the "bomb test" with a single atom trapped in a spin-dependent optical lattice to show explicitly a violation of the Leggett-Garg inequality—a quantitative criterion fulfilled by macro-realistic physical theories. To perform interaction-free measurements, we have implemented a novel measurement method that correlates spin and position of the atom. This method, which quantum mechanically entangles spin and position, finds general application for spin measurements, thereby avoiding the shortcomings inherent in the widely used push-out technique. Allowing decoherence to dominate the evolution of our system causes a transition from quantum to classical behavior in fulfillment of the Leggett-Garg inequality.

  10. Micro gas analyzer measurement of nitric oxide in breath by direct wet scrubbing and fluorescence detection.

    PubMed

    Toda, Kei; Koga, Takahiro; Kosuge, Junichi; Kashiwagi, Mieko; Oguchi, Hiroshi; Arimoto, Takemi

    2009-08-15

    A novel method is proposed to measure NO in breath. Breath NO is a useful diagnostic measure for asthma patients. Due to the low water solubility of NO, existing wet chemical NO measurements are conducted on NO(2) after removal of pre-existing NO(2) and conversion of NO to NO(2). In contrast, this study utilizes direct measurement of NO by wet chemistry. Gaseous NO was collected into an aqueous phase by a honeycomb-patterned microchannel scrubber and reacted with diaminofluorescein-2 (DAF-2). Fluorescence of the product was measured using a miniature detector, comprising a blue light-emitting diode (LED) and a photodiode. The response intensity was found to dramatically increase following addition of NO(2) into the absorbing solution or air sample. By optimizing the conditions, the sensitivity obtained was sufficient to measure parts per billion by volume levels of NO continuously. The system was applied to real analysis of NO in breath, and the effect of coexisting compounds was investigated. The proposed system could successfully measure breath NO.

  11. Indoor-Outdoor Detection Using a Smart Phone Sensor.

    PubMed

    Wang, Weiping; Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei

    2016-09-22

    In the era of the mobile internet, Location Based Services (LBS) have developed dramatically. Seamless Indoor and Outdoor Navigation and Localization (SNAL) has attracted a lot of attention. No single positioning technology is capable of meeting the various positioning requirements in different environments. Selecting different positioning techniques for different environments is an alternative approach, and detecting the user's current environment is crucial for it. In this paper, we propose to detect the indoor/outdoor environment automatically and without high energy consumption. The basic idea is simple: we apply a machine learning algorithm to classify the signal strengths of neighboring Global System for Mobile (GSM) communication cellular base stations in different environments, and identify the user's current context by signal pattern recognition. We tested the algorithm in four different environments. The results showed that the proposed algorithm was capable of identifying open outdoor, semi-outdoor, light indoor and deep indoor environments with 100% accuracy using the signal strength of four nearby GSM stations. The required hardware and signals are widely available in our daily lives, implying high compatibility and availability.
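    A minimal sketch of the classification step; the random forest here is an assumed stand-in, since the record only says "a machine learning algorithm".

    ```python
    # Classify environment from the signal strengths of four nearby GSM cells.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def evaluate(X, y):
        """X: (n_samples, 4) RSSI of four GSM stations; y: environment label in
        {open outdoor, semi-outdoor, light indoor, deep indoor}."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X, y, cv=5).mean()
    ```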

  12. Using Holland's Theory To Analyze Labor Market Data.

    ERIC Educational Resources Information Center

    Reed, Corey

    Some career theorists and other professionals have warned about dramatic shifts in the economy and implications for changes needed in career services and theory. This paper proposes that Holland's person-environment model (1997) is a valuable frame of reference for career counselors and instructors to use with clients and students in interpreting…

  13. A Blueprint for Reform. The Reauthorization of the Elementary and Secondary Education Act

    ERIC Educational Resources Information Center

    US Department of Education, 2010

    2010-01-01

    On Saturday, March 13, the Obama administration released its blueprint for revising the Elementary and Secondary Education Act (ESEA), which would ask states to adopt college- and career-ready standards and reward schools for producing dramatic gains in student achievement. The proposal challenges the nation to embrace educational standards that…

  14. Obama Pressing Boost for Pre-K

    ERIC Educational Resources Information Center

    Klein, Alyson

    2013-01-01

    President Barack Obama used his first State of the Union address since winning re-election to put education at the center of his broader strategy to bolster the nation's economic prospects. He is proposing to dramatically expand preschool access for low- and middle-income children and to create a new competitive program aimed at helping high…

  15. Counterfactual quantum erasure: spooky action without entanglement

    PubMed Central

    2018-01-01

    We combine the eyebrow-raising quantum phenomena of erasure and counterfactuality for the first time, proposing a simple yet unusual quantum eraser: A distant Bob can decide to erase which-path information from Alice’s photon, dramatically restoring interference—without previously shared entanglement, and without Alice’s photon ever leaving her laboratory. PMID:29515845

  16. Student Organizations in Canada and Quebec's "Maple Spring"

    ERIC Educational Resources Information Center

    Bégin-Caouette, Olivier; Jones, Glen A.

    2014-01-01

    This article has two major objectives: to describe the structure of the student movement in Canada and the formal role of students in higher education governance, and to describe and analyze the "Maple Spring," the dramatic mobilization of students in opposition to proposed tuition fee increases in Quebec that eventually led to a…

  17. Evaluating Online Bilingual Dictionaries: The Case of Popular Free English-Polish Dictionaries

    ERIC Educational Resources Information Center

    Lew, Robert; Szarowska, Agnieszka

    2017-01-01

    Language learners today exhibit a strong preference for free online resources. One problem with such resources is that their quality can vary dramatically. Building on related work on monolingual resources for English, we propose an evaluation framework for online bilingual dictionaries, designed to assess lexicographic quality in four major…

  18. Adaptive E-Learning Environments: Research Dimensions and Technological Approaches

    ERIC Educational Resources Information Center

    Di Bitonto, Pierpaolo; Roselli, Teresa; Rossano, Veronica; Sinatra, Maria

    2013-01-01

    One of the most closely investigated topics in e-learning research has always been the effectiveness of adaptive learning environments. The technological evolutions that have dramatically changed the educational world in the last six decades have allowed ever more advanced and smarter solutions to be proposed. The focus of this paper is to depict…

  19. Cultivating Revolutionary Educational Leaders: Translating Emerging Theories into Action

    ERIC Educational Resources Information Center

    Kezar, Adrianna; Carducci, Rozana

    2007-01-01

    In this article, the authors describe the way that leadership research has dramatically changed over the last 30 years and how leadership development programs have not kept pace with some of the important changes. In particular, the authors propose that five revolutionary leadership concepts have been overlooked and should be included in future…

  20. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
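    A toy version of the correlated-process idea, outside any gas-flow context: drive an auxiliary process with a known mean using the same random numbers, then subtract it as a control variate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, steps, dt = 10_000, 100, 0.01
    xi = rng.standard_normal((n, steps))               # shared noise increments

    x = np.zeros(n)                                    # process of interest
    y = np.zeros(n)                                    # auxiliary process, E[y] = 0 known
    for k in range(steps):
        x += -x * dt + 0.50 * np.sqrt(dt) * xi[:, k]
        y += -y * dt + 0.45 * np.sqrt(dt) * xi[:, k]   # same noise, strongly correlated

    c = np.cov(x, y)[0, 1] / np.var(y)                 # optimal control-variate weight
    print("plain estimate:", x.mean(), "variance:", x.var())
    print("reduced estimate:", (x - c * y).mean(), "variance:", (x - c * y).var())
    ```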

  1. Latex Micro-balloon Pumping in Centrifugal Microfluidic Platforms

    PubMed Central

    Aeinehvand, Mohammad Mahdi; Ibrahim, Fatimah; Al-Faqheri, Wisam; Thio, Tzer Hwai Gilbert; Kazemzadeh, Amin; Wadi harun, Sulaiman; Madou, Marc

    2014-01-01

    Centrifugal microfluidic platforms have emerged as point-of-care diagnostic tools. However, the unidirectional nature of the centrifugal force limits the available space for multi-stepped processes on a single microfluidics disc. To overcome this limitation, a passive pneumatic pumping method actuated at high rotational speeds has been previously proposed to pump liquid against the centrifugal force. In this paper, a novel micro-balloon pumping method that relies on elastic energy stored in a latex membrane is introduced. It operates at low rotational speeds and pumps a larger volume of liquid towards the centre of the disc. Two different micro-balloon pumping designs have been developed to study the pump performance and capacity at a range of rotational frequencies from 0 to 1500 rpm. The behaviour of the micro-balloon pump on centrifugal microfluidic platforms has been theoretically analysed and compared with the experimental data. The experimental data show that the developed pumping method dramatically decreases the rotational speed required to pump liquid compared to the previously developed pneumatic pumping methods. They also show that, within a range of rotational speeds, a desirable volume of liquid can be stored and pumped by adjusting the size of the micro-balloon. PMID:24441792

  2. What Would Block Grants or Limits on Per Capita Spending Mean for Medicaid?

    PubMed

    Rosenbaum, Sara; Schmucker, Sara; Rothenberg, Sara; Gunsalus, Rachel

    2016-11-01

    Issue: President-elect Trump and some in Congress have called for establishing absolute limits on the federal government’s spending on Medicaid, not only for the population covered through the Affordable Care Act’s eligibility expansion but for the program overall. Such a change would effectively reverse a 50-year trend of expanding Medicaid in order to protect the most vulnerable Americans. Goal: To explore the two most common proposals for reengineering federal funding of Medicaid: block grants that set limits on total annual spending regardless of enrollment, and caps that limit average spending per enrollee. Methods: Review of existing policy proposals and other documents. Key findings and conclusions: Current proposals for dramatically reducing federal spending on Medicaid would achieve this goal by creating fixed-funding formulas divorced from the actual costs of providing care. As such, they would create funding gaps for states to either absorb or, more likely, offset through new limits placed on their programs. As a result, block-granting Medicaid or instituting "per capita caps" would most likely reduce the number of Americans eligible for Medicaid and narrow coverage for remaining enrollees. The latter approach would, however, allow for population growth, though its desirability to the new president and Congress is unclear. The full extent of funding and benefit reductions is as yet unknown.

  3. Methods and approaches in the topology-based analysis of biological pathways

    PubMed Central

    Mitrea, Cristina; Taghavi, Zeinab; Bokanizad, Behzad; Hanoudi, Samer; Tagett, Rebecca; Donato, Michele; Voichiţa, Călin; Drăghici, Sorin

    2013-01-01

    The goal of pathway analysis is to identify the pathways significantly impacted in a given phenotype. Many current methods are based on algorithms that consider pathways as simple gene lists, dramatically under-utilizing the knowledge that such pathways are meant to capture. During the past few years, a plethora of methods claiming to incorporate various aspects of the pathway topology have been proposed. These topology-based methods, sometimes referred to as “third generation,” have the potential to better model the phenomena described by pathways. Although there is now a large variety of approaches used for this purpose, no review is currently available to offer guidance for potential users and developers. This review covers 22 such topology-based pathway analysis methods published in the last decade. We compare these methods based on: type of pathways analyzed (e.g., signaling or metabolic), input (subset of genes, all genes, fold changes, gene p-values, etc.), mathematical models, pathway scoring approaches, output (one or more pathway scores, p-values, etc.) and implementation (web-based, standalone, etc.). We identify and discuss challenges, arising both in methodology and in pathway representation, including inconsistent terminology, different data formats, lack of meaningful benchmarks, and the lack of tissue and condition specificity. PMID:24133454

  4. Minimization of Ohmic losses for domain wall motion in ferromagnetic nanowires

    NASA Astrophysics Data System (ADS)

    Abanov, Artem; Tretiakov, Oleg; Liu, Yang

    2011-03-01

    We study current-induced domain-wall motion in a narrow ferromagnetic wire. We propose a way to move domain walls with a resonant time-dependent current which dramatically decreases the Ohmic losses in the wire and allows driving of the domain wall with higher speed without burning the wire. For any domain wall velocity we find the time-dependence of the current needed to minimize the Ohmic losses. Below a critical domain-wall velocity specified by the parameters of the wire the minimal Ohmic losses are achieved by dc current. Furthermore, we identify the wire parameters for which the losses reduction from its dc value is the most dramatic. This work was supported by the NSF Grant No. 0757992 and Welch Foundation (A-1678).

  5. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry usually have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry in this paper. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. Then the real shape can be recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and dramatically rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurements demonstrate the validity and efficiency of the proposed improved method. According to the experimental results, the computation time decreases by approximately 74.92% in contrast to the Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 points of an experimentally measured spherical mirror. In general, this method can be conducted with fast convergence speed and high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
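    A bare-bones sketch of the Zonal refinement step under simplifying assumptions (uniform grid, fixed boundary values): SOR iterations on the discrete Poisson equation relating heights to the measured gradients, started from a coarse (e.g. Modal) initial guess.

    ```python
    import numpy as np

    def zonal_sor(gx, gy, h0, omega=1.8, iters=500, dx=1.0):
        """gx, gy: measured gradients along axis 1 (x) and axis 0 (y);
        h0: coarse initial height map. Solves lap(h) = div(g) by SOR."""
        h = h0.copy()
        div = np.gradient(gx, dx, axis=1) + np.gradient(gy, dx, axis=0)
        for _ in range(iters):
            for i in range(1, h.shape[0] - 1):
                for j in range(1, h.shape[1] - 1):
                    new = 0.25 * (h[i + 1, j] + h[i - 1, j] + h[i, j + 1]
                                  + h[i, j - 1] - dx * dx * div[i, j])
                    h[i, j] += omega * (new - h[i, j])   # over-relaxed update
        return h
    ```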

  6. DEVELOPMENT OF AG-1 SECTION FI ON METAL MEDIA FILTERS - 9061

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adamson, D; Charles A. Waggoner, C

    Development of a metal media standard (FI) for ASME AG-1 (Code on Nuclear Air and Gas Treatment) has been under way for almost ten years. This paper will provide a brief history of the development process of this section and a detailed overview of its current content/status. There have been at least two points when dramatic changes have been made in the scope of the document due to feedback from the full Committee on Nuclear Air and Gas Treatment (CONAGT). Development of the proposed section has required resolving several difficult issues associated with scope; namely, filtering efficiency, operating conditions (media velocity, pressure drop, etc.), qualification testing, and quality control/acceptance testing. A proposed version of Section FI is currently undergoing final revisions prior to being submitted for balloting. The section covers metal media filters of filtering efficiencies ranging from medium (less than 99.97%) to high (99.97% and greater). Two different types of high efficiency filters are addressed; those units intended to be a direct replacement of Section FC fibrous glass HEPA filters and those that will be placed into newly designed systems capable of supporting greater static pressures and differential pressures across the filter elements. Direct replacements of FC HEPA filters in existing systems will be required to meet equivalent qualification and testing requirements to those contained in Section FC. A series of qualification and quality assurance test methods have been identified for the range of filtering efficiencies covered by this proposed standard. Performance characteristics of sintered metal powder vs. sintered metal fiber media are dramatically different with respect to parameters like differential pressures and rigidity of the media. Wide latitude will be allowed for owner specification of performance criteria for filtration units that will be placed into newly designed systems. Such allowances will permit use of the most appropriate metal media for a system as specified by the owner with respect to material of manufacture, media velocity, system maximum static pressure, maximum differential pressure across the filter, and similar parameters.

  7. Determination of the top quark mass from leptonic observables

    NASA Astrophysics Data System (ADS)

    Frixione, Stefano; Mitov, Alexander

    2014-09-01

    We present a procedure for the determination of the mass of the top quark at the LHC based on leptonic observables in dilepton events. Our approach utilises the shapes of kinematic distributions through their few lowest Mellin moments; it is notable for its minimal sensitivity to the modelling of long-distance effects, for not requiring the reconstruction of top quarks, and for having a competitive precision, with theory errors on the extracted top mass of the order of 0.8 GeV. A novel aspect of our work is the study of theoretical biases that might influence in a dramatic way the determination of the top mass, and which are potentially relevant to all template-based methods. We propose a comprehensive strategy that helps minimise the impact of such biases, and leads to a reliable top mass extraction at hadron colliders.
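    For reference, the Mellin moments that carry the shape information of a normalized leptonic distribution can be written as follows (a standard definition; the notation is ours, not the paper's):

    ```latex
    M_n[O] \;=\; \int \mathrm{d}O \; O^{\,n}\,
    \frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}O},
    \qquad n = 1, 2, \dots
    ```

    so a few of the lowest moments summarize the shape of dσ/dO without requiring the reconstruction of top quarks.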

  8. Electronically Transparent Au-N Bonds for Molecular Junctions.

    PubMed

    Zang, Yaping; Pinkard, Andrew; Liu, Zhen-Fei; Neaton, Jeffrey B; Steigerwald, Michael L; Roy, Xavier; Venkataraman, Latha

    2017-10-25

    We report a series of single-molecule transport measurements carried out in an ionic environment with oligophenylenediamine wires. These molecules exhibit three discrete conducting states accessed by electrochemically modifying the contacts. Transport in these junctions is defined by the oligophenylene backbone, but the conductance is increased by factors of ∼20 and ∼400 when compared to traditional dative junctions. We propose that the higher-conducting states arise from in situ electrochemical conversion of the dative Au←N bond into a new type of Au-N contact. Density functional theory-based transport calculations establish that the new contacts dramatically increase the electronic coupling of the oligophenylene backbone to the Au electrodes, consistent with experimental transport data. The resulting contact resistance is the lowest reported to date; more generally, our work demonstrates a facile method for creating electronically transparent metal-organic interfaces.

  9. Enhancement in photoluminescence performance of carbon-decorated T-ZnO

    NASA Astrophysics Data System (ADS)

    Jian, Xian; Chen, Guozhang; Wang, Chao; Yin, Liangjun; Li, Gang; Yang, Ping; Chen, Lei; Xu, Bao; Gao, Yang; Feng, Yanyu; Tang, Hui; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Cao, Yu; Wang, Siyuan; Gao, Xin

    2015-03-01

    The facile preparation of ZnO possessing high visible luminescence intensity remains challenging due to an unclear luminescence mechanism. Here, two basic approaches are proposed to enhance the luminescent intensity based on a theoretical analysis of surface defects. Based on this deduction, we introduce a methodology for obtaining hybrid tetrapod-like zinc oxide (T-ZnO), decorated with carbon nanomaterials on T-ZnO surfaces, through a catalytic chemical vapor deposition approach. The intensity of the T-ZnO green emission can be modulated by the topography and the proportion of carbon. Under proper experimental conditions, the carbon decoration leads to dramatically enhanced luminescence intensity of T-ZnO from 400 to 700 nm compared with undecorated T-ZnO, which makes this approach a simple and effective method for improving fluorescent materials in practical applications.

  10. Drowning-out crystallisation of sodium sulphate using aqueous two-phase systems.

    PubMed

    Taboada, M E; Graber, T A; Asenjo, J A; Andrews, B A

    2000-06-23

    A novel method to obtain crystals of pure, anhydrous salt, using aqueous two-phase systems was studied. A concentrated salt solution is mixed with polyethylene glycol (PEG), upon which three phases are formed: salt crystals, a PEG-rich liquid and a salt-rich liquid. After removal of the solid salt, a two-phase system is obtained. Both liquid phases are recycled, allowing the design of a continuous process, which could be exploited industrially. The phase diagram of the system water-Na2SO4-PEG 3350 at 28 degrees C was used. Several process alternatives are proposed and their economic potential is discussed. The process steps needed to produce sodium sulphate crystals include mixing, crystallisation, settling and, optionally, evaporation of water. The yield of sodium sulphate increases dramatically if an evaporation step is used.

  11. Adaptive estimation of hand movement trajectory in an EEG based brain-computer interface system

    NASA Astrophysics Data System (ADS)

    Robinson, Neethu; Guan, Cuntai; Vinod, A. P.

    2015-12-01

    Objective. The various parameters that define a hand movement, such as its trajectory, speed, etc., are encoded in distinct brain activities. Decoding this information from neurophysiological recordings is a less explored area of brain-computer interface (BCI) research. Applying non-invasive recordings such as electroencephalography (EEG) for decoding makes the problem more challenging, as the encoding is assumed to be deep within the brain and not easily accessible by scalp recordings. Approach. EEG-based BCI systems can be developed to identify the neural features underlying movement parameters, which can be further utilized to provide a detailed and well defined control command set to a BCI output device. Real-time continuous control is better suited to practical BCI systems, and can be achieved by continuous adaptive reconstruction of the movement trajectory rather than discrete classification of brain activity. In this work, we adaptively reconstruct/estimate the parameters of two-dimensional hand movement trajectory, namely movement speed and position, from multi-channel EEG recordings. The data for analysis were collected in an experiment that involved center-out right-hand movement tasks in four different directions at two different speeds in random order. We estimate the movement trajectory using a Kalman filter that models the relation between brain activity and the recorded parameters based on a set of defined predictors. We propose a method to define these predictor variables, including spatially, spectrally and temporally localized neural information, and to select the optimally informative variables. Main results. The proposed method yielded a correlation of (0.60 ± 0.07) between recorded and estimated data. Further, incorporating the proposed predictor subset selection, the correlation achieved is (0.57 ± 0.07, p < 0.004) with a significant gain in the stability of the system, as well as a dramatic reduction (76%) in the number of predictors, saving computational time. Significance. The proposed system provides a real-time movement control system using EEG-BCI with control over movement speed and position. These results are higher and statistically significant compared to existing techniques in EEG-based systems, and thus demonstrate the applicability of the proposed method for efficient estimation of movement parameters and for continuous motor control.
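    A compact sketch of the estimation core under a constant-velocity state model; the observation matrix mapping EEG-derived predictors to the state is assumed to have been fitted beforehand, and all dimensions are illustrative.

    ```python
    import numpy as np

    def kalman_track(Z, H, Q, R, dt=0.1):
        """Z: (T, m) EEG-derived observations; H: (m, 4) observation matrix.
        State s = [x, y, vx, vy]. Returns the state estimate at each step."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                         # constant-velocity dynamics
        s, P = np.zeros(4), np.eye(4)
        out = []
        for z in Z:
            s, P = F @ s, F @ P @ F.T + Q              # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            s = s + K @ (z - H @ s)                    # update with the innovation
            P = (np.eye(4) - K @ H) @ P
            out.append(s.copy())
        return np.array(out)
    ```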

  12. Modeling Drinking Behavior Progression in Youth: a Non-identified Probability Discrete Event System Using Cross-sectional Data

    PubMed Central

    Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka

    2016-01-01

    Background The probabilistic discrete event systems (PDES) method provides a promising approach to study dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to attempt a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinker [ND], light/moderate-drinker [LMD], heavy-drinker [HD], and ex-drinker [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12–21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transitional probabilities by age group for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Results from our analysis indicate an inverse "J"-shaped curve characterizing the pattern of experimental alcohol use from adolescence to young adulthood. We also observed a dramatic increase in the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially-observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
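    The essence of the M-P step is a one-liner; the 2×3 system below is purely illustrative of non-identifiability, not the paper's 13-path model.

    ```python
    import numpy as np

    # Two aggregate equations, three unknown transition probabilities:
    # the system is under-determined, like the paper's model.
    A = np.array([[1., 1., 0.],
                  [0., 1., 1.]])
    b = np.array([0.6, 0.5])

    p = np.linalg.pinv(A) @ b   # Moore-Penrose: unique minimum-norm least-squares solution
    print(p, A @ p)             # A @ p reproduces b
    ```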

  13. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
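    A minimal discrete sketch of the ensemble idea: estimate transfer entropy at a fixed time point by pooling observations across trials rather than across time. The binned plug-in estimator below is a simplification (the paper uses a more sophisticated estimator on a GPU) and needs many trials to be reliable.

    ```python
    import numpy as np

    def transfer_entropy_ensemble(X, Y, t, bins=8):
        """X, Y: (n_trials, n_time). TE from X to Y at time t, pooled over trials."""
        x  = np.digitize(X[:, t - 1], np.histogram_bin_edges(X[:, t - 1], bins))
        yp = np.digitize(Y[:, t - 1], np.histogram_bin_edges(Y[:, t - 1], bins))
        y  = np.digitize(Y[:, t],     np.histogram_bin_edges(Y[:, t],     bins))

        def H(*cols):  # joint entropy (bits) of discretized columns
            _, counts = np.unique(np.stack(cols, 1), axis=0, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        # TE = H(y, yp) - H(yp) - H(y, yp, x) + H(yp, x)
        return H(y, yp) - H(yp) - H(y, yp, x) + H(yp, x)
    ```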

  14. Robust interferometry against imperfections based on weak value amplification

    NASA Astrophysics Data System (ADS)

    Fang, Chen; Huang, Jing-Zheng; Zeng, Guihua

    2018-06-01

    Optical interferometry has been widely used in various high-precision applications. Usually, the minimum precision of an interferometry is limited by various technical noises in practice. To suppress such kinds of noises, we propose a scheme which combines the weak measurement with the standard interferometry. The proposed scheme dramatically outperforms the standard interferometry in the signal-to-noise ratio and the robustness against noises caused by the optical elements' reflections and the offset fluctuation between two paths. A proof-of-principle experiment is demonstrated to validate the amplification theory.

  15. Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries

    PubMed Central

    Fan, Shou-Zen; Abbod, Maysam F.

    2018-01-01

    Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. The correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial values of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from the nearest levels. Furthermore, the Random Forest output was largely linear with respect to BIS, giving better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients’ anaesthetic level during surgeries. PMID:29844970
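    A textbook implementation of SampEn for a single EEG segment; the parameter choices m = 2 and r = 0.2·std follow common practice and are not necessarily the paper's.

    ```python
    import numpy as np

    def sampen(x, m=2, r=None):
        """Sample entropy of a 1D signal (assumes enough matches so A, B > 0)."""
        x = np.asarray(x, float)
        if r is None:
            r = 0.2 * x.std()

        def count(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)  # Chebyshev
            return (d <= r).sum() - len(templ)   # matches, excluding self-matches

        B, A = count(m), count(m + 1)
        return -np.log(A / B)
    ```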

  16. Domain adaptation via transfer component analysis.

    PubMed

    Pan, Sinno Jialin; Tsang, Ivor W; Kwok, James T; Yang, Qiang

    2011-02-01

    Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.
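    The quantity TCA minimizes in the learned subspace is the maximum mean discrepancy; a biased RBF-kernel estimate can be computed as follows (gamma is an illustrative kernel parameter):

    ```python
    import numpy as np

    def mmd_rbf(Xs, Xt, gamma=1.0):
        """Biased squared MMD between source samples Xs and target samples Xt."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
            return np.exp(-gamma * d2)
        return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()
    ```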

  17. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
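    A sketch of the per-bin calibration step; the quadratic-in-log model is an assumed functional form, since the record only states that a nonlinear function was fitted between measured and simulated counts.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(measured, a, b, c):
        L = np.log(measured + 1.0)
        return np.exp(a * L**2 + b * L + c)   # near-identity when (a, b, c) = (0, 1, 0)

    def calibrate_bin(measured_counts, simulated_counts):
        """Fit the measured-to-simulated count map for one energy bin and
        return a correction function to apply to raw images."""
        popt, _ = curve_fit(model, measured_counts, simulated_counts, p0=(0, 1, 0))
        return lambda raw: model(raw, *popt)
    ```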

  18. Dramatics and the Teaching of Literature. NCTE/ERIC Studies in the Teaching of English.

    ERIC Educational Resources Information Center

    Hoetker, James

    This review of current uses of drama in the teaching of literature deals with drama that is "concerned with experience by the participants, irrespective of any function of communication to an audience." Chapters are devoted to (1) the British-influenced Dartmouth Seminar proposals emphasizing drama and oral language, (2) American…

  19. 76 FR 74088 - Self-Regulatory Organizations; Chicago Stock Exchange, Incorporated; Notice of Filing Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-30

    ... rights and warrants are affected by the price of the underlying stock as well as other factors, particularly the volatility of the stock. As a consequence, the prices of rights and warrants may move more dramatically than the prices of the underlying stocks even when the rights and warrants (and the underlying...

  20. Understanding Interactive CD-ROM Storybooks and Their Functions in Reading Comprehension: A Critical Review

    ERIC Educational Resources Information Center

    Ertem, Ihsan Seyit

    2011-01-01

    With dramatic changes and recent advances in multimedia, computer-based digital technologies offer new ways of introducing children to literacy. Literacy educators have stated that traditional printed books are not sufficient and that electronic books have the potential to change reading skills. As a valuable tool in educational settings new and…

  1. Organizing Schools to Improve Student Achievement: Start Times, Grade Configurations, and Teacher Assignments. Discussion Paper 2011-08

    ERIC Educational Resources Information Center

    Jacob, Brian A.; Rockoff, Jonah E.

    2011-01-01

    Education reform proposals are often based on high-profile or dramatic policy changes, many of which are expensive, politically controversial, or both. In this paper, we argue that the debates over these "flashy" policies have obscured a potentially important direction for raising student performance--namely, reforms to the management or…

  2. Leading the Newly Consolidated High School: Exciting Opportunity or Overwhelming Challenge?

    ERIC Educational Resources Information Center

    Thurman, Lance E.; Hackmann, Donald G.

    2015-01-01

    In the current economic times, school personnel are regularly challenged to reduce the costs of operating the nation's school systems. School district consolidations often are proposed as a mechanism to realize fiscal savings for local communities; indeed, the number of U.S. school districts has declined dramatically over the past 70 years,…

  3. Television News as Drama: An International Perspective.

    ERIC Educational Resources Information Center

    Breen, Myles P.

    An observer can identify a trend in television news presentation style toward the dramatic, not only in the sets and the personnel, but more importantly in the choice of what is deemed newsworthy. Many propose the thesis that television viewing is a regular ritual for many viewers, of which news is a minor part, and that television's first…

  4. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    NASA Astrophysics Data System (ADS)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first used to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, typical light-leaking types, such as point defects, wedge defects, and surface defects, can be identified. By using the image-quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.
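
    A minimal sketch of the flood-fill branch of such quality-driven segmentation is given below, assuming OpenCV; the file name, seed choice (brightest pixel), and the noise-dependent tolerance rule are all illustrative assumptions rather than the paper's algorithm.

        import cv2
        import numpy as np

        img = cv2.imread("leak.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
        noise = float(np.std(img[:20, :20]))  # crude noise estimate from a corner
        y, x = np.unravel_index(np.argmax(img), img.shape)
        mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
        # Grow the light-leaking region from the brightest pixel; the tolerance
        # widens with the estimated noise level (an assumed heuristic).
        tol = int(3 * noise) + 10
        cv2.floodFill(img, mask, (int(x), int(y)), 255, loDiff=tol, upDiff=tol,
                      flags=cv2.FLOODFILL_FIXED_RANGE)
        region = mask[1:-1, 1:-1]  # nonzero where the region was grown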

  5. Dark-field phase retrieval under the constraint of the Friedel symmetry in coherent X-ray diffraction imaging.

    PubMed

    Kobayashi, Amane; Sekiguchi, Yuki; Takayama, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2014-11-17

    Coherent X-ray diffraction imaging (CXDI) is a lensless imaging technique that is suitable for visualizing the structures of non-crystalline particles with micrometer to sub-micrometer dimensions from material science and biology. One of the difficulties inherent to CXDI structural analyses is the reconstruction of electron density maps of specimen particles from diffraction patterns because saturated detector pixels and a beam stopper result in missing data in small-angle regions. To overcome this difficulty, the dark-field phase-retrieval (DFPR) method has been proposed. The DFPR method reconstructs electron density maps from diffraction data, which are modified by multiplying Gaussian masks with an observed diffraction pattern in the high-angle regions. In this paper, we incorporated Friedel centrosymmetry for diffraction patterns into the DFPR method to provide a constraint for the phase-retrieval calculation. A set of model simulations demonstrated that this constraint dramatically improved the probability of reconstructing correct electron density maps from diffraction patterns that were missing data in the small-angle region. In addition, the DFPR method with the constraint was applied successfully to experimentally obtained diffraction patterns with significant quantities of missing data. We also discuss this method's limitations with respect to the level of Poisson noise in X-ray detection.
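
    Friedel's law makes the far-field intensity centrosymmetric, I(q) = I(-q), which is what the added constraint exploits. A minimal sketch of symmetrizing a pattern and filling missing pixels from their Friedel mates is shown below; it assumes the beam center sits at the array center of an odd-sized array and that missing data are flagged with NaN.

        import numpy as np

        def friedel_symmetrize(I):
            # I: 2D diffraction pattern, NaN marks missing pixels, beam center
            # assumed at the array center (odd dimensions).
            R = I[::-1, ::-1]                  # point reflection through the center
            out = np.where(np.isnan(I), R, I)  # fill gaps from the Friedel mate
            both = ~np.isnan(I) & ~np.isnan(R)
            out[both] = 0.5 * (I[both] + R[both])  # average where both sides exist
            return out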

  6. Erosion risk analysis by GIS in environmental impact assessments: a case study--Seyhan Köprü Dam construction.

    PubMed

    Sahin, S; Kurum, E

    2002-11-01

    Environmental Impact Assessment (EIA) is a systematically constructed procedure whereby the environmental impacts caused by proposed projects are examined. Geographical Information Systems (GIS) are highly efficient tools for impact assessment, and their use is likely to increase dramatically in the near future. GIS have been applied to a wide range of impact assessment projects; among them, dams are taken as the case work in this article. The EIA Regulation in force in Turkey requires the analysis of the key natural processes that can be adversely affected by a proposed project, particularly in the section on the analysis of areas with higher landscape value. At this point, the true potential value of GIS lies in its ability to analyze spatial data with accuracy. This study is an attempt to analyze by GIS the areas with higher landscape value in the impact assessment of dam constructions, in the case of the Seyhan-Köprü Hydroelectric Dam project proposal. A method needs to be defined before the GIS overlay step to analyze the areas with higher landscape value. In the case of the Seyhan-Köprü Hydroelectric Dam project proposal of the present work, considering the geological conditions and steep slopes of the area and the type of the project, the most important natural process is erosion. Therefore, the areas of higher erosion risk were considered as the Areas with Higher Landscape Value from the point of view of conservation demands.

  7. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods

    PubMed Central

    Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian

    2006-01-01

    Objective Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131
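
    For readers unfamiliar with the technique being reviewed, a minimal propensity score analysis looks like the sketch below: fit a model for exposure given confounders, then compare outcomes within propensity strata. The data, the logistic model, and the quintile stratification are generic textbook choices, not reconstructions of any reviewed study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 4))                    # hypothetical confounders
        z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # exposure depends on X
        y = 2.0 * z + X[:, 0] + rng.normal(size=1000)     # confounded outcome

        ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
        strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
        effects = [y[(strata == s) & (z == 1)].mean()
                   - y[(strata == s) & (z == 0)].mean() for s in range(5)]
        print(np.mean(effects))   # roughly recovers the true effect of 2.0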

  8. Development and validation of an ultra-performance convergence chromatography method for the quality control of Angelica gigas Nakai.

    PubMed

    Kim, Hyo Seon; Chun, Jin Mi; Kwon, Bo-In; Lee, A-Reum; Kim, Ho Kyoung; Lee, A Yeong

    2016-10-01

    Ultra-performance convergence chromatography, which integrates the advantages of supercritical fluid chromatography and ultra high performance liquid chromatography technologies, is an environmentally friendly analytical method that uses dramatically reduced amounts of organic solvents. An ultra-performance convergence chromatography method was developed and validated for the quantification of decursinol angelate and decursin in Angelica gigas using a CSH Fluoro-Phenyl column (2.1 mm × 150 mm, 1.7 μm) with a run time of 4 min. The method had an improved resolution and a shorter analysis time in comparison to the conventional high-performance liquid chromatography method. This method was validated in terms of linearity, precision, and accuracy. The limits of detection were 0.005 and 0.004 μg/mL for decursinol angelate and decursin, respectively, while the limits of quantitation were 0.014 and 0.012 μg/mL, respectively. The two components showed good regression (correlation coefficient r² > 0.999), excellent precision (RSD < 2.28%), and acceptable recoveries (99.75-102.62%). The proposed method can be used to efficiently separate, characterize, and quantify decursinol angelate and decursin in Angelica gigas and its related medicinal materials or preparations, with the advantages of a shorter analysis time, greater sensitivity, and better environmental compatibility. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Care initiation area yields dramatic results.

    PubMed

    2009-03-01

    The ED at Gaston Memorial Hospital in Gastonia, NC, has achieved dramatic results in key department metrics with a Care Initiation Area (CIA) and a physician in triage. Here's how the ED arrived at this winning solution: Leadership was trained in and implemented the Kaizen method, which eliminates redundant or inefficient process steps. Simulation software helped determine additional space needed by analyzing arrival patterns and other key data. After only two days of meetings, new ideas were implemented and tested.

  10. Optimization of the resources management in fighting wildfires.

    PubMed

    Martin-Fernández, Susana; Martínez-Falero, Eugenio; Pérez-González, J Manuel

    2002-09-01

    Wildfires lead to important economic, social, and environmental losses, especially in areas of Mediterranean climate where they occur with high intensity and frequency. Over the past 30 years there has been a dramatic surge in the development and use of fire spread models. However, given the chaotic nature of environmental systems, it is very difficult to develop real-time fire-extinguishing models. This article proposes a method of optimizing the performance of wildfire fighting resources such that losses are kept to a minimum. The optimization procedure includes discrete simulation algorithms and Bayesian optimization methods for discrete and continuous problems (simulated annealing and Bayesian global optimization). Fast calculation algorithms are applied to provide optimization outcomes in short periods of time such that the predictions of the model and the real behavior of the fire, combat resources, and meteorological conditions are similar. In addition, adaptive algorithms take into account the chaotic behavior of wildfire so that the system can be updated with data corresponding to the real situation to obtain a new optimum solution. The application of this method to the Northwest Forest of Madrid (Spain) is also described. This application allowed us to check that it is a helpful tool in the decision-making process.
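
    As a generic illustration of one ingredient of the procedure, the sketch below implements a bare-bones simulated annealing loop; the cost function here is a toy placeholder, whereas the paper evaluates expected wildfire losses for each allocation of combat resources.

        import math, random

        def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
            # Generic simulated annealing: accept worse moves with a
            # probability that shrinks as the temperature decays.
            x, fx, t = x0, cost(x0), t0
            for _ in range(steps):
                y = neighbor(x)
                fy = cost(y)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                t *= cooling
            return x, fx

        # Toy usage: place one resource on a line to minimize a quadratic loss.
        best, loss = anneal(cost=lambda x: (x - 3.2) ** 2,
                            neighbor=lambda x: x + random.uniform(-0.5, 0.5),
                            x0=0.0)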

  11. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stage of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot-detection support method using flexible and intelligent image pattern recognition based on Higher-Order Local Autocorrelation (HLAC). Our method learns the geometrical properties of given design data without any defects as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The HLAC method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design-pattern polygons. This approach can reduce turnaround time (TAT) dramatically on only one CPU, compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.

  12. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    NASA Astrophysics Data System (ADS)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated by broadband sources dramatically degrades the angular-random-walk performance of a fiber-optic gyroscope. Many methods have been proposed that manage to suppress the excess RIN. However, the properties of the excess RIN under the influence of different optical errors in the fiber-optic gyroscope have not been systematically investigated. It is therefore difficult for the existing RIN-suppression methods to achieve optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by raised-cosine-type ripples in the optical spectrum are investigated in detail. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influence of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.

  13. Optimization of the Resources Management in Fighting Wildfires

    NASA Astrophysics Data System (ADS)

    Martin-Fernández, Susana; Martínez-Falero, Eugenio; Pérez-González, J. Manuel

    2002-09-01

    Wildfires lead to important economic, social, and environmental losses, especially in areas of Mediterranean climate where they occur with high intensity and frequency. Over the past 30 years there has been a dramatic surge in the development and use of fire spread models. However, given the chaotic nature of environmental systems, it is very difficult to develop real-time fire-extinguishing models. This article proposes a method of optimizing the performance of wildfire fighting resources such that losses are kept to a minimum. The optimization procedure includes discrete simulation algorithms and Bayesian optimization methods for discrete and continuous problems (simulated annealing and Bayesian global optimization). Fast calculation algorithms are applied to provide optimization outcomes in short periods of time such that the predictions of the model and the real behavior of the fire, combat resources, and meteorological conditions are similar. In addition, adaptive algorithms take into account the chaotic behavior of wildfire so that the system can be updated with data corresponding to the real situation to obtain a new optimum solution. The application of this method to the Northwest Forest of Madrid (Spain) is also described. This application allowed us to check that it is a helpful tool in the decision-making process.

  14. A data acquisition protocol for a reactive wireless sensor network monitoring application.

    PubMed

    Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K

    2015-04-30

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction, and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable. On average, over 50% communication reduction is achieved compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
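
    The adaptive-sampling idea can be sketched as follows, using the 15 s and 5 min bounds quoted in the abstract; the error threshold and the halving/back-off rule are assumptions for illustration, not SWIFTNET's actual controller.

        MIN_T, MAX_T = 15, 300   # seconds, the bounds quoted in the abstract

        def next_interval(t, predicted, measured, tol=1.0):
            # Hypothetical rule: halve the interval on a large prediction error
            # (urgent regime), otherwise back off gradually to save energy.
            if abs(predicted - measured) > tol:
                return max(MIN_T, t // 2)
            return min(MAX_T, t + 30)

        t = 300
        for pred, meas in [(20.1, 20.2), (20.3, 27.9), (28.0, 28.1)]:
            t = next_interval(t, pred, meas)   # drops to 150 on the error spike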

  15. A Data Acquisition Protocol for a Reactive Wireless Sensor Network Monitoring Application

    PubMed Central

    Aderohunmu, Femi A.; Brunelli, Davide; Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction, and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes have proven favorable. On average, over 50% communication reduction is achieved compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain. PMID:25942642

  16. Federal Investment in Charter Schools: A Proposal for Reauthorizing the Elementary and Secondary Education Act

    ERIC Educational Resources Information Center

    Lazarin, Melissa

    2011-01-01

    The charter school landscape is dramatically different today compared to when the federal government first forayed into the field in 1994. That year it established the Charter School Program as part of the Elementary and Secondary Education Act, or ESEA. The Charter School Program, which is designed to support the startup of new public charter…

  17. 75 FR 51126 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... accordance with Rule 6.24. \\6\\ The term OTP refers to an Options Trading Permit issued by the Exchange for... Ex-by-Ex threshold, coupled with the dramatic increase in option trading volume from 2003 to 2009... and the increase in trading, the existing deadline for submitting CEAs to the Exchange is problematic...

  18. Charting the Course: The AFT's Education Agenda To Reach All Children

    ERIC Educational Resources Information Center

    American Federation of Teachers, 2007

    2007-01-01

    As the pressure grows for students to learn and know more, so grows the demand on schools to raise achievement. It is a huge challenge for the country, and for the schools in which teachers work. The public appetite for dramatic solutions is substantial. That appetite is being fed by a stream of unproven reform proposals that will do tremendous…

  19. Ecological lessons from an island that has experienced it all

    Treesearch

    Ariel E. Lugo

    2006-01-01

    Puerto Rico has gone through significant land cover change, going from an island that was forested to one that was agrarian and then finally becoming an urban island. Coastal wetlands experienced dramatic changes as alterations occurred in land cover and an eco-systematic analysis of these changes leads to the proposal of the following five ecological lessons learned...

  20. Gender Will Find a Way: Exploring How Male Elementary Teachers Make Sense of Their Experiences and Responsibilities

    ERIC Educational Resources Information Center

    Ashcraft, Catherine; Sevier, Brian

    2006-01-01

    In recent years, public discussions over the socialization of boys have increased dramatically. These concerns have fueled a number of proposed remedies, one of which has been a push to increase the presence of men in elementary schools. To date, however, this call for increased male participation in elementary education has focused primarily on…

  1. The Performance of Children with Mental Health Disorders on the ADOS-G: A Question of Diagnostic Utility

    ERIC Educational Resources Information Center

    Sikora, Darryn M.; Hartley, Sigan L.; McCoy, Robin; Gerrard-Morris, Aimee E.; Dill, Kameron

    2008-01-01

    Over the past few decades, the reported number of children identified as having one of the Autism Spectrum Disorders (ASD) has increased exponentially. One proposed reason for the dramatic increase in the prevalence of ASD is diagnostic substitution, whereby children with other disorders incorrectly receive a diagnosis of ASD. Little research has…

  2. Immersed boundary lattice Boltzmann model based on multiple relaxation times

    NASA Astrophysics Data System (ADS)

    Lu, Jianhua; Han, Haifeng; Shi, Baochang; Guo, Zhaoli

    2012-01-01

    As an alternative version of the lattice Boltzmann models, the multiple relaxation time (MRT) lattice Boltzmann model introduces much less numerical boundary slip than the single relaxation time (SRT) lattice Boltzmann model if a special relationship between the relaxation time parameters is chosen. On the other hand, most current versions of the immersed boundary lattice Boltzmann method, which was first introduced by Feng and improved by many other authors, suffer from numerical boundary slip, as has been investigated by Le and Zhang. To reduce such numerical boundary slip, an immersed boundary lattice Boltzmann model based on multiple relaxation times is proposed in this paper. A special formula is given between two relaxation time parameters in the model. A rigorous analysis and the numerical experiments carried out show that the numerical boundary slip is reduced dramatically by using the present model compared to the single-relaxation-time-based model.
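
    The abstract does not reproduce the paper's special formula between the two relaxation time parameters. For context only, the analogous classical condition in the closely related two-relaxation-time (TRT) lattice Boltzmann model fixes the so-called magic parameter so that a bounce-back wall sits exactly halfway between lattice nodes; this is the standard TRT result, not necessarily the relation derived in the paper:

        \Lambda = \left(\tau^{+} - \tfrac{1}{2}\right)\left(\tau^{-} - \tfrac{1}{2}\right) = \tfrac{3}{16}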

  3. A novel approach for assessments of erythrocyte sedimentation rate.

    PubMed

    Pribush, A; Hatskelzon, L; Meyerstein, N

    2011-06-01

    Previous studies have shown that the dispersed phase of sedimenting blood undergoes dramatic structural changes: Discrete red blood cell (RBC) aggregates formed shortly after a settling tube is filled with blood are combined into a continuous network followed by its collapse via the formation of plasma channels, and finally, the collapsed network is dispersed into individual fragments. Based on this scheme of structural transformation, a novel approach for assessments of erythrocyte sedimentation is suggested. Information about erythrocyte sedimentation is extracted from time records of the blood conductivity measured after a dispersion of RBC network into individual fragments. It was found that the sedimentation velocity of RBC network fragments correlates positively with the intensity of attractive intercellular interactions, whereas no effect of hematocrit (Hct) was observed. Thus, unlike Westergren erythrocyte sedimentation rate, sedimentation data obtained by the proposed method do not require correction for Hct. © 2010 Blackwell Publishing Ltd.

  4. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
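
    A hedged Monte Carlo sketch of such a design's classification error is shown below; the beta-binomial model for intracluster correlation, the decision rule of 7 cases in 201 observations, and all parameter values are illustrative assumptions, not the study's simulation settings.

        import numpy as np

        rng = np.random.default_rng(2)

        def error_rate(prev, threshold=0.10, rule=7, k=9.0,
                       clusters=67, m=3, trials=10000):
            # Each cluster's GAM risk is Beta-distributed around 'prev' with
            # concentration k (ICC about 1/(1+k)); a sample is flagged 'high'
            # when the 201 observations contain more than 'rule' cases.
            p = rng.beta(prev * k, (1 - prev) * k, size=(trials, clusters))
            cases = rng.binomial(m, p).sum(axis=1)
            high = cases > rule
            return high.mean() if prev <= threshold else 1 - high.mean()

        print(error_rate(0.05), error_rate(0.15))   # the two error types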

  5. Acoustic Sensing Based on Density Shift of Microspheres by Surface Binding of Gold Nanoparticles.

    PubMed

    Miyagawa, Akihisa; Inoue, Yoshinori; Harada, Makoto; Okada, Tetsuo

    2017-01-01

    Herein, we propose a concept for sensing based on density changes of microparticles (MPs) caused by a biochemical reaction. The MPs are levitated by a combined acoustic-gravitational force at a position determined by their density and compressibility. Importantly, the levitation is independent of the MPs' sizes. When gold nanoparticles (AuNPs) are bound to the surface of polymer MPs through a reaction, the density of the MPs dramatically increases, and their levitation position in the acoustic-gravitational field is lowered. Because the shift of the levitation position is proportional to the number of AuNPs bound on one MP, we can determine the number of molecules involved in the reaction. The avidin-biotin reaction is used to demonstrate the effectiveness of this concept. The number of molecules involved in the reaction is very small because the reaction space of an MP is small; thus, the method has potential for highly sensitive detection.

  6. Genomic prediction unifies animal and plant breeding programs to form platforms for biological discovery.

    PubMed

    Hickey, John M; Chiurugwi, Tinashe; Mackay, Ian; Powell, Wayne

    2017-08-30

    The rate of annual yield increases for major staple crops must more than double relative to current levels in order to feed a predicted global population of 9 billion by 2050. Controlled hybridization and selective breeding have been used for centuries to adapt plant and animal species for human use. However, achieving higher, sustainable rates of improvement in yields in various species will require renewed genetic interventions and dramatic improvement of agricultural practices. Genomic prediction of breeding values has the potential to improve selection, reduce costs and provide a platform that unifies breeding approaches, biological discovery, and tools and methods. Here we compare and contrast some animal and plant breeding approaches to make a case for bringing the two together through the application of genomic selection. We propose a strategy for the use of genomic selection as a unifying approach to deliver innovative 'step changes' in the rate of genetic gain at scale.

  7. Low-frequency noise effect on terahertz tomography using thermal detectors.

    PubMed

    Guillet, J P; Recur, B; Balacey, H; Bou Sleiman, J; Darracq, F; Lewis, D; Mounaix, P

    2015-08-01

    In this paper, the impact of low-frequency noise on terahertz computed tomography (THz-CT) is analyzed for several measurement configurations and pyroelectric detectors. First, we acquire real noise data from a continuous millimeter-wave tomographic scanner in order to determine its impact on reconstructed images. Second, noise characteristics are quantified according to two distinct acquisition methods by (i) extrapolating from experimental acquisitions a sinogram for different noise backgrounds and (ii) reconstructing the corresponding spatial distributions in a slice using a CT reconstruction algorithm. We then describe the low-frequency noise fingerprint and its influence on reconstructed images. Based on these observations, we demonstrate that some experimental choices can dramatically affect the 3D rendering of reconstructions. Thus, we propose experimental methodologies that optimize the resulting quality and accuracy of the 3D reconstructions, with respect to the low-frequency noise characteristics observed during acquisitions.

  8. Segmentation and classification of cell cycle phases in fluorescence imaging.

    PubMed

    Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan

    2009-01-01

    Current chemical biology methods for studying the spatiotemporal correlation between biochemical networks and cell cycle phase progression in live cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing the fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes, and an abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) approach for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high-content microscopy imaging. We utilize surface shape properties of the GFP-PCNA intensity field to obtain descriptors of foci patterns, perform automated cell cycle phase classification, and report quantitative performance by comparing our results to manually labeled data.

  9. Design and analysis of morphing wing based on SMP composite

    NASA Astrophysics Data System (ADS)

    Yu, Kai; Yin, Weilong; Sun, Shouhua; Liu, Yanju; Leng, Jinsong

    2009-03-01

    A new concept of a morphing wing based on shape memory polymer (SMP) and its reinforced composites is proposed in this paper. The SMP used in this study is a thermoset styrene-based resin, in contrast to normal thermoplastic SMPs. During heating, the wing curled on the aircraft can be deployed, providing the main lift for a morphing aircraft to realize stable flight. Aerodynamic characteristics of the deployed morphing wing are calculated using CFD software. The static deformation of the wing under air loads is also analyzed using the finite element method. The results show that the SMP material used can provide enough strength and stiffness for the application. Finally, preliminary testing is conducted to investigate the recovery performance of the SMP and its reinforced composites. During the test, the deployment and wind-resistant ability of the morphing wing were dramatically improved by adding a reinforcing phase to the SMP.

  10. Accounting for uncertainty in DNA sequencing data.

    PubMed

    O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J

    2015-02-01

    Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    PubMed

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the parallel programming and computing platform Nvidia CUDA, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time.
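
    The study programmed the networks directly on Nvidia CUDA; as a loose sketch of the same idea, the PyTorch fragment below keeps the whole training loop on the GPU. The layer sizes, the five-class output, and the random stand-in data are placeholders, not the paper's EEG features or architecture.

        import torch
        from torch import nn

        device = "cuda" if torch.cuda.is_available() else "cpu"
        X = torch.randn(4096, 64, device=device)          # stand-in EEG features
        y = torch.randint(0, 5, (4096,), device=device)   # five depth levels

        model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                              nn.Linear(128, 5)).to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(200):          # all the heavy math stays on the GPU
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()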

  12. Treatment of ARDS With Prone Positioning.

    PubMed

    Scholten, Eric L; Beitler, Jeremy R; Prisk, G Kim; Malhotra, Atul

    2017-01-01

    Prone positioning was first proposed in the 1970s as a method to improve gas exchange in ARDS. Subsequent observations of dramatic improvement in oxygenation with simple patient rotation motivated the next several decades of research. This work elucidated the physiological mechanisms underlying changes in gas exchange and respiratory mechanics with prone ventilation. However, translating physiological improvements into a clinical benefit has proved challenging; several contemporary trials showed no major clinical benefits with prone positioning. By optimizing patient selection and treatment protocols, the recent Proning Severe ARDS Patients (PROSEVA) trial demonstrated a significant mortality benefit with prone ventilation. This trial, and subsequent meta-analyses, support the role of prone positioning as an effective therapy to reduce mortality in severe ARDS, particularly when applied early with other lung-protective strategies. This review discusses the physiological principles, clinical evidence, and practical application of prone ventilation in ARDS. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  13. Electromagnetically-induced-absorption resonance with high contrast and narrow width in the Hanle configuration

    NASA Astrophysics Data System (ADS)

    Brazhnikov, D. V.; Taichenachev, A. V.; Tumaikin, A. M.; Yudin, V. I.

    2014-12-01

    A method for observing high-contrast, narrow-width resonances of electromagnetically induced absorption (EIA) in the Hanle configuration under counter-propagating pump and probe light waves is proposed. Here, as an example, we study a ‘dark’ type of atomic dipole transition, Fg = 1 → Fe = 1, in the D1 line of 87Rb, where usually electromagnetically induced transparency can be observed. To obtain the EIA signal one should properly choose the polarizations and intensities of the light waves. In contrast to regular schemes for observing EIA signals (under a single traveling light wave in the Hanle configuration, or under a bichromatic light field consisting of two traveling waves), the proposed scheme allows one to use buffer gas to significantly improve the properties of the resonance. The dramatic influence of the openness of the atomic transition on the contrast of the resonance is also revealed, which is advantageous in comparison with cyclic atomic transitions. Nonlinear resonances in the probe-wave transmission signal with contrast close to 100% and sub-kHz widths can be obtained. The results are of interest for high-resolution spectroscopy, nonlinear optics, and magneto-optics.

  14. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
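
    The simplest time-domain relative of this fusion is a complementary filter, sketched below: the integrated gyro rate dominates at high frequencies and the acceleration-based tilt corrects the low-frequency drift. This is a simplified stand-in under an assumed blend constant, not the paper's Tikhonov-regularized, frequency-domain solution.

        def complementary_tilt(gyro_rates, accel_tilts, dt=0.01, alpha=0.98):
            # alpha close to 1 trusts the integrated gyro at high frequencies;
            # the accelerometer-based tilt slowly pulls out the drift.
            theta = accel_tilts[0]
            out = []
            for w, theta_acc in zip(gyro_rates, accel_tilts):
                theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
                out.append(theta)
            return out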

  15. Rank Dynamics of Word Usage at Multiple Scales

    NASA Astrophysics Data System (ADS)

    Morales, José A.; Colman, Ewan; Sánchez, Sergio; Sánchez-Puig, Fernanda; Pineda, Carlos; Iñiguez, Gerardo; Cocho, Germinal; Flores, Jorge; Gershenson, Carlos

    2018-05-01

    The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
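
    One of the cited rank-dynamics measures, the diversity of words occupying a given rank, can be sketched as below: for each rank, count the distinct words seen there across time windows and normalize by the number of windows. The toy ranking lists are made up for illustration.

        def rank_diversity(rankings):
            # rankings: one rank-ordered word list per time period.
            T = len(rankings)
            depth = min(len(r) for r in rankings)
            return [len({rankings[t][r] for t in range(T)}) / T
                    for r in range(depth)]

        periods = [["the", "of", "and", "data"],
                   ["the", "of", "data", "and"],
                   ["the", "and", "of", "model"]]
        print(rank_diversity(periods))   # low at top ranks, higher further down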

  16. Optimizing Cubature for Efficient Integration of Subspace Deformations

    PubMed Central

    An, Steven S.; Kim, Theodore; James, Doug L.

    2009-01-01

    We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St.Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation PMID:19956777

  17. Design and implementation of quadrature bandpass sigma-delta modulator used in low-IF RF receiver

    NASA Astrophysics Data System (ADS)

    Ge, Binjie; Li, Yan; Yu, Hang; Feng, Xiaoxing

    2018-05-01

    This paper presents the design and implementation of a quadrature bandpass sigma-delta modulator. A pole movement method for transforming a real sigma-delta modulator into a quadrature one is proposed based on a detailed study of the relationship between the noise-shaping center frequency and the integrator pole position in a sigma-delta modulator. The proposed modulator uses a sampling-capacitor-sharing switched-capacitor integrator and achieves a very small feedback coefficient through a series capacitor network; these two techniques can dramatically reduce capacitor area. A quantizer-output-dependent dummy capacitor load for the reference voltage buffer can compensate for the signal-dependent noise caused by load variation. This paper designs a quadrature bandpass sigma-delta modulator for 2.4 GHz low-IF receivers that achieves 69 dB SNDR at 1 MHz BW and -1 MHz IF with a 48 MHz clock. The chip is fabricated with SMIC 0.18 μm CMOS technology; it draws a total current of 2.1 mA, and the chip area is 0.48 mm2. Project supported by the National Natural Science Foundation of China (Nos. 61471245, U1201256), the Guangdong Province Foundation (No. 2014B090901031), and the Shenzhen Foundation (Nos. JCYJ20160308095019383, JSGG20150529160945187).

  18. Revisiting flow maps: a classification and a 3D alternative to visual clutter

    NASA Astrophysics Data System (ADS)

    Gu, Yuhang; Kraak, Menno-Jan; Engelhardt, Yuri

    2018-05-01

    Flow maps have long helped people explore movement by representing origin-destination (OD) data. Due to recent developments in data collection techniques, the amount of movement data is increasing dramatically. With such huge amounts of data, visual clutter in flow maps is becoming a challenge. This paper revisits flow maps, provides an overview of the characteristics of OD data, and proposes a classification system for flow maps. To deal with problems of visual clutter, 3D flow maps are proposed as a potential alternative to 2D flow maps.

  19. Simultaneous estimation of transcutaneous bilirubin, hemoglobin, and melanin based on diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Abdul, Wares MD.; Ohtsu, Mizuki; Nakano, Kazuya; Haneishi, Hideaki

    2018-02-01

    We propose a method to estimate transcutaneous bilirubin, hemoglobin, and melanin based on diffuse reflectance spectroscopy. In the proposed method, Monte Carlo simulation-based multiple regression analysis of an absorbance spectrum in the visible wavelength region (460-590 nm) is used to specify the concentrations of bilirubin (Cbil), oxygenated hemoglobin (Coh), deoxygenated hemoglobin (Cdh), and melanin (Cm). Using the absorbance spectrum calculated from the measured diffuse reflectance spectrum as the response variable, and the extinction coefficients of bilirubin, oxygenated hemoglobin, deoxygenated hemoglobin, and melanin as predictor variables, multiple regression analysis provides regression coefficients. The concentrations of bilirubin, oxygenated hemoglobin, deoxygenated hemoglobin, and melanin are then determined from the regression coefficients using conversion vectors that are numerically deduced in advance by Monte Carlo simulations of light transport in skin. Total hemoglobin concentration (Cth) and tissue oxygen saturation (StO2) are simply calculated from the oxygenated and deoxygenated hemoglobin. In vivo animal experiments with bile duct ligation in rats demonstrated that the estimated Cbil increases after ligation of the bile duct and reaches around 20 mg/dl at 72 h after the onset of the ligation, which corresponds to the reference value of Cbil measured by a commercially available transcutaneous bilirubin meter. We also performed in vivo experiments with rats while varying the fraction of inspired oxygen (FiO2). Coh and Cdh decreased and increased, respectively, as FiO2 decreased. Consequently, StO2 decreased dramatically. The results of this study indicate the potential of the method for simultaneous evaluation of multiple chromophores in skin tissue.
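
    The core regression step can be sketched as an ordinary least-squares fit of the absorbance spectrum to the chromophores' extinction spectra; the Gaussian/power-law spectra below are placeholders for the real tabulated extinction coefficients, and the subsequent Monte Carlo-derived conversion to concentrations is omitted.

        import numpy as np

        wl = np.arange(460, 591, 10).astype(float)   # wavelength grid, nm
        # Placeholder extinction spectra for bilirubin, HbO2, Hb, and melanin.
        E = np.column_stack([np.exp(-((wl - 460) / 40) ** 2),
                             np.exp(-((wl - 540) / 20) ** 2),
                             np.exp(-((wl - 560) / 20) ** 2),
                             (wl / 590.0) ** -3])
        true = np.array([0.8, 1.5, 0.6, 0.3])
        absorbance = E @ true + 0.01 * np.random.default_rng(3).normal(size=wl.size)
        coeffs, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
        # 'coeffs' are the regression coefficients that the method then maps
        # to concentrations via Monte Carlo-derived conversion vectors.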

  20. An ROI multi-resolution compression method for 3D-HEVC

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and the compression of multi-resolution preprocessed video as alternative data according to the network conditions. At first, the semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined using the contour neighborhood along with the face region and foreground area of the scene. Secondly, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.

  1. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunities for the diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model that rapidly predicts the reconstructions. The extreme learning machine is cited here as an example for demonstrative purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
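
    Since the extreme learning machine is simple, a minimal regressor is easy to sketch: random, fixed input weights and a hidden tanh layer, with only the output weights solved by a pseudoinverse. The toy "projection to reconstruction" mapping below is a stand-in for the paper's tomographic data.

        import numpy as np

        class ELM:
            # Minimal extreme learning machine: only the output weights are
            # learned, via a pseudoinverse, which is what makes training fast.
            def __init__(self, n_in, n_hidden, seed=4):
                rng = np.random.default_rng(seed)
                self.W = rng.normal(size=(n_in, n_hidden))
                self.b = rng.normal(size=n_hidden)

            def fit(self, X, Y):
                H = np.tanh(X @ self.W + self.b)
                self.beta = np.linalg.pinv(H) @ Y
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # Toy usage: map 32 'projection' values to an 8x8 'reconstruction'.
        rng = np.random.default_rng(5)
        X, Y = rng.normal(size=(500, 32)), rng.normal(size=(500, 64))
        recon = ELM(32, 200).fit(X, Y).predict(X[:1]).reshape(8, 8)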

  2. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics.

    PubMed

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunities for the diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model that rapidly predicts the reconstructions. The extreme learning machine is cited here as an example for demonstrative purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.

  3. Controlled formation of cyclopentane hydrate suspensions via capillary-driven jet break-up

    NASA Astrophysics Data System (ADS)

    Geri, Michela; McKinley, Gareth

    2017-11-01

    Clathrate hydrates are crystalline compounds that form when a lattice of hydrogen-bonded water molecules is filled by guest molecules sequestered from an adjacent gas or liquid phase. Being able to rapidly produce and transport synthetic hydrates is of great interest given their significant potential as a clean energy source and safe option for hydrogen storage. We propose a new method to rapidly produce cyclopentane hydrate suspensions at ambient pressure with tunable particle size distribution by taking advantage of the Rayleigh-Plateau instability to form a mono-disperse stream of droplets during the controlled break-up of a water jet. The droplets are immediately frozen into ice particles through immersion in a subcooled reservoir and converted into hydrates with a dramatic reduction in the nucleation induction time. By measuring the evolution of the rheological properties with time, we monitor the process of hydrates formation via surface crystallization and agglomeration with different droplet size distributions. This new method enables us to gain new insights into hydrate formation and transport which was previously hindered by uncontrolled droplet formation and hydrate nucleation processes. MITei Chevron Fellowship.
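
    As a hedged aside, the droplet size produced by capillary break-up of an inviscid jet is set by the classical Rayleigh result, which is standard fluid mechanics rather than a statement from this abstract: the fastest-growing disturbance has a wavelength of roughly 4.51 jet diameters, and volume conservation then fixes the drop radius:

        \lambda_{\max} \approx 4.51\, d_{\mathrm{jet}}, \qquad \tfrac{4}{3}\pi r^{3} = \pi \left(d_{\mathrm{jet}}/2\right)^{2} \lambda_{\max} \;\Rightarrow\; r \approx 1.89\,\left(d_{\mathrm{jet}}/2\right)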

  4. Conductive hydrogel composed of 1,3,5-benzenetricarboxylic acid and Fe3+ used as enhanced electrochemical immunosensing substrate for tumor biomarker.

    PubMed

    Wang, Huiqiang; Han, Hongliang; Ma, Zhanfang

    2017-04-01

    In this work, a new conductive hydrogel was prepared by a simple cross-linking coordination method using 1,3,5-benzenetricarboxylic acid as the ligand and Fe3+ as the metal ion. The hydrogel film was formed on a glassy carbon electrode (GCE) by a drop-coating method, which can dramatically facilitate the transport of electrons. A sensitive label-free electrochemical immunosensor was fabricated following electrodeposition of gold nanoparticles (AuNPs) on the hydrogel film and immobilization of an antibody. Neuron-specific enolase (NSE), a lung cancer biomarker, was used as the model analyte to be detected. The proposed immunosensor exhibited a wide linear detection range of 1 pg/mL to 200 ng/mL and a limit of detection of 0.26 pg/mL (signal-to-noise ratio (S/N) = 3). Moreover, the detection of NSE in human serum samples showed satisfactory accuracy compared with the data determined by enzyme-linked immunosorbent assay (ELISA), indicating good analytical performance of the immunoassay. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Directional freezing for the cryopreservation of adherent mammalian cells on a substrate

    PubMed Central

    Braslavsky, Ido

    2018-01-01

    Successfully cryopreserving cells adhered to a substrate would facilitate the growth of a vital confluent cell culture after thawing while dramatically shortening the post-thaw culturing time. Herein we propose a controlled slow cooling method combining initial directional freezing followed by gradual cooling down to -80°C for robust preservation of cell monolayers adherent to a substrate. Using computer-controlled cryostages, we examined the effect of cooling rates and dimethylsulfoxide (DMSO) concentration on cell survival and established an optimal cryopreservation protocol. Experimental results show the highest post-thawing viability for directional ice growth at a speed of 30 μm/sec (equivalent to a freezing rate of 3.8°C/min), followed by gradual cooling of the sample at a decreased rate of 0.5°C/min. Efficient cryopreservation of three widely used epithelial cell lines (IEC-18, HeLa, and Caco-2) provides proof-of-concept support for this new freezing protocol applied to adherent cells. This method is highly reproducible, significantly increases post-thaw cell viability, and can be readily applied to the cryopreservation of cellular cultures in microfluidic devices. PMID:29447224

  6. Development of a low noise induction magnetic sensor using magnetic flux negative feedback in the time domain.

    PubMed

    Wang, X G; Shang, X L; Lin, J

    2016-05-01

    Time-domain electromagnetic systems can achieve detection at great depths. In such systems, the receiver uses an air-coil sensor matched by the resistance matching method. Resistance matching effectively controls the vibration of the coil in the time domain; however, it also increases the noise of the sensor, especially at the resonance frequency. In this paper, a novel design of a low-noise induction coil sensor is proposed, and the experimental data and noise characteristics are provided. The sensor is designed on the principle that the amplified voltage is converted to a current through the feedback resistance of the coil. The feedback loop around the induction coil exerts a magnetic field and sends the negative feedback signal to the sensor. The paper analyzes the influence of the closed magnetic feedback loop on both the bandwidth and the noise of the sensor. The signal-to-noise ratio is improved dramatically.

  7. Acetone gas sensor based on NiO/ZnO hollow spheres: Fast response and recovery, and low (ppb) detection limit.

    PubMed

    Liu, Chang; Zhao, Liupeng; Wang, Boqun; Sun, Peng; Wang, Qingji; Gao, Yuan; Liang, Xishuang; Zhang, Tong; Lu, Geyu

    2017-06-01

    NiO/ZnO composites were synthesized by decorating numerous NiO nanoparticles on the surfaces of well-dispersed ZnO hollow spheres using a facile solvothermal method. Various characterization methods were used to investigate the structures and morphologies of the hybrid materials. The results revealed that NiO nanoparticles with a size of ∼10 nm were successfully distributed on the surfaces of the ZnO hollow spheres in a discrete manner. As expected, the NiO/ZnO composites demonstrated dramatic improvements in sensing performance compared with pure ZnO hollow spheres. For example, the response of the NiO/ZnO composites to 100 ppm acetone was ∼29.8, nearly 4.6 times higher than that of pristine ZnO at 275°C, and the response/recovery times were 1 s and 20 s, respectively. Meanwhile, the detection limit could extend down to the ppb level. A likely mechanism for the improved gas-sensing properties is also proposed. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Directional freezing for the cryopreservation of adherent mammalian cells on a substrate.

    PubMed

    Bahari, Liat; Bein, Amir; Yashunsky, Victor; Braslavsky, Ido

    2018-01-01

    Successfully cryopreserving cells adhered to a substrate would facilitate the growth of a vital confluent cell culture after thawing while dramatically shortening the post-thaw culturing time. Herein we propose a controlled slow cooling method combining initial directional freezing followed by gradual cooling down to -80°C for robust preservation of cell monolayers adherent to a substrate. Using computer-controlled cryostages, we examined the effect of cooling rates and dimethylsulfoxide (DMSO) concentration on cell survival and established an optimal cryopreservation protocol. Experimental results show the highest post-thawing viability for directional ice growth at a speed of 30 μm/sec (equivalent to a freezing rate of 3.8°C/min), followed by gradual cooling of the sample at a decreased rate of 0.5°C/min. Efficient cryopreservation of three widely used epithelial cell lines (IEC-18, HeLa, and Caco-2) provides proof-of-concept support for this new freezing protocol applied to adherent cells. This method is highly reproducible, significantly increases post-thaw cell viability, and can be readily applied to the cryopreservation of cellular cultures in microfluidic devices.

  9. Poly(acrylic acid)-templated silver nanoclusters as a platform for dual fluorometric turn-on and colorimetric detection of mercury (II) ions.

    PubMed

    Tao, Yu; Lin, Youhui; Huang, Zhenzhen; Ren, Jinsong; Qu, Xiaogang

    2012-01-15

    An easily prepared fluorescence turn-on and colorimetric dual-channel probe was developed for the rapid assay of Hg(2+) ions with high sensitivity and selectivity, using poly(acrylic acid)-templated silver nanoclusters (PAA-AgNCs). The PAA-AgNCs exhibited weak fluorescence, while upon the addition of Hg(2+) ions the AgNCs gave a dramatic increase in fluorescence as a result of changes in the AgNC states. The detection limit was estimated to be 2 nM, which is much lower than the U.S. Environmental Protection Agency's Hg(2+) detection requirement for drinking water, and the turn-on sensing mode offers the additional advantage of efficiently reducing background noise. A colorimetric assay of Hg(2+) ions can also be realized through the observed absorbance changes of the AgNCs. More importantly, the method was successfully applied to the determination of Hg(2+) ions in real water samples, suggesting that the proposed method has great potential for application in environmental monitoring. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, interpolation with OLS may have undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, replacing the OLS error norm with the moving least squares (MLS) error norm leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as an estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with state-of-the-art interpolation algorithms, especially in preserving image edge structure. © 2011 IEEE.
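
    As a rough illustration of the estimator described above (not the authors' full RLLR algorithm, which additionally carries the manifold-based smoothness constraint), the sketch below solves a locally weighted, l2-regularized linear fit for a single missing pixel. The Gaussian weights stand in for the MLS error norm, and the window, bandwidth, and penalty values are arbitrary.

```python
import numpy as np

def rllr_estimate(coords, values, query, h=1.5, lam=0.1):
    """Estimate one missing pixel by regularized, locally weighted linear
    regression. coords: (n, 2) positions of known pixels in a local window;
    values: (n,) their intensities; query: (2,) position to interpolate.

    Gaussian weights stand in for the moving-least-squares error norm and
    lam is the l2 penalty that keeps the local fit stable.
    """
    d = coords - query
    w = np.exp(-np.sum(d**2, axis=1) / (2.0 * h**2))  # locality weights
    A = np.hstack([np.ones((len(coords), 1)), d])     # local linear basis
    AtW = A.T * w
    theta = np.linalg.solve(AtW @ A + lam * np.eye(3), AtW @ values)
    return theta[0]  # fitted model value at the query point

# Demo: interpolate the center pixel of a 2x upsampling from its four
# diagonal low-resolution neighbors.
nbrs = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
print(rllr_estimate(nbrs, np.array([10.0, 12.0, 11.0, 13.0]), np.zeros(2)))
```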

  11. A New Threshold of Precision, 30 micro-arcsecond Parallaxes and Beyond

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2015-10-01

    The ESA Gaia mission is poised to dramatically tighten the distance scale for all stellar types, with a billion Milky Way parallaxes reaching 10 microarcseconds for V<12 mag, 20 microarcseconds at V=15, and 200 microarcseconds at V=20. These data will have enormous impact on nearly any investigation that makes use of stellar astrophysics, including stellar evolution, galactic archeology, exoplanet characterization, and physical cosmology. Measurements this revolutionary demand a number of independent tests for the presence of systematic errors. We have developed a method that can measure parallaxes of the best-observed stars to 30 microarcseconds with WFC3 using spatial scanning (Riess et al. 2014). We propose to obtain 4 new epochs of spatial-scanning measurements for 9 previously observed fields in order to collect 150 stellar parallaxes and improve the sample mean precision to 30 microarcseconds, sufficient for a meaningful test of Gaia. The proposed doubling of the temporal coverage for these fields will deliver (1) a 40% improvement in the precision of the HST parallaxes, which otherwise limit the precision of the comparison, (2) the ability to weed out relevant astrometric binaries, which could otherwise pollute a number of parallax measurements and the comparison to Gaia, and (3) a significant overlap in time of the HST and Gaia measurements, ensuring that all parallaxes are subject to similar orbital motion in the event of undetected binarity, thus improving the accuracy of the comparison. We propose to follow the old Russian proverb - trust but verify.

  12. A New Threshold of Precision, 30 micro-arcsecond Parallaxes and Beyond

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2016-10-01

    The ESA Gaia mission is poised to dramatically tighten the distance scale for all stellar types, with a billion Milky Way parallaxes reaching 10 microarcseconds for V<12 mag, 20 microarcseconds at V=15, and 200 microarcseconds at V=20. These data will have enormous impact on nearly any investigation that makes use of stellar astrophysics, including stellar evolution, galactic archeology, exoplanet characterization, and physical cosmology. Measurements this revolutionary demand a number of independent tests for the presence of systematic errors. We have developed a method that can measure parallaxes of the best-observed stars to 30 microarcseconds with WFC3 using spatial scanning (Riess et al. 2014). We propose to obtain 4 new epochs of spatial-scanning measurements for 9 previously observed fields in order to collect 150 stellar parallaxes and improve the sample mean precision to 30 microarcseconds, sufficient for a meaningful test of Gaia. The proposed doubling of the temporal coverage for these fields will deliver (1) a 40% improvement in the precision of the HST parallaxes, which otherwise limit the precision of the comparison, (2) the ability to weed out relevant astrometric binaries, which could otherwise pollute a number of parallax measurements and the comparison to Gaia, and (3) a significant overlap in time of the HST and Gaia measurements, ensuring that all parallaxes are subject to similar orbital motion in the event of undetected binarity, thus improving the accuracy of the comparison. We propose to follow the old Russian proverb - trust but verify.

  13. A Gaussian Approximation Approach for Value of Information Analysis.

    PubMed

    Jalal, Hawre; Alarid-Escudero, Fernando

    2018-02-01

    Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to the many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach, based on the original work by Raiffa and Schlaifer, to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves two steps: (1) a linear metamodel step to compute the EVSI on the preposterior distributions, and (2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied to a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
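
    For orientation, the simplest VOI quantity, the per-person expected value of perfect information (EVPI), can be computed directly from a PSA data set; the sketch below shows that standard building-block computation, not the authors' GA/metamodel pipeline for EVSI. The toy strategies and numbers are invented.

```python
import numpy as np

def evpi(nb):
    """Per-person EVPI from PSA output.

    nb: (n_samples, n_strategies) net monetary benefit per PSA draw.
    EVPI = E[max_d NB(d, theta)] - max_d E[NB(d, theta)].
    """
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Toy example: two strategies whose ranking flips across PSA draws,
# so resolving the uncertainty has positive expected value.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 10_000)
nb = np.column_stack([1_000 * theta, 200 + 500 * theta])
print(f"EVPI per person: {evpi(nb):.1f}")
```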

  14. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. The maneuver-frequency data points are produced by single 1-month runs of traditional mission analysis software, one for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measurement Mission, scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
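
    A minimal sketch of the look-up-table idea follows, assuming a hypothetical maneuver-frequency table indexed by solar flux index and ballistic coefficient and a textbook rocket-equation engine model; every table entry and engine parameter below is illustrative, not a value from the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical maneuver-frequency table (maneuvers/month): rows are solar
# flux index values, columns are ballistic coefficients. Values invented.
flux_axis = np.array([70.0, 120.0, 180.0, 250.0])   # F10.7 solar flux index
bc_axis = np.array([50.0, 100.0, 150.0])            # ballistic coeff., kg/m^2
freq_table = np.array([[4.0, 2.5, 1.8],
                       [6.0, 4.0, 2.9],
                       [9.0, 6.2, 4.5],
                       [13.0, 9.0, 6.6]])
freq = RegularGridInterpolator((flux_axis, bc_axis), freq_table)

def monthly_fuel_kg(flux, bc, dv=0.05, isp=220.0, mass=3500.0, g0=9.80665):
    """Fuel burned in one month: bilinear table look-up for the maneuver
    count times a simple rocket-equation engine model (dv in m/s)."""
    n = freq([[flux, bc]]).item()
    return n * mass * (1.0 - np.exp(-dv / (isp * g0)))

print(f"{monthly_fuel_kg(flux=150.0, bc=100.0):.2f} kg/month")
```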

  15. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most applications aim at visualization. In this paper, we propose a novel method for building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training sets, which are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions and extract iso-surfaces from the resulting scalar fields, the continuous occupancy maps, using marching cubes. In this way, we are able to build two types of map representations within a single Gaussian process framework. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
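
    A rough sketch of the divide-and-conquer idea is given below, using scikit-learn as a stand-in for the authors' implementation: k-means partitions the training points, one GP classifier is fit per cluster, and each query is routed to its nearest cluster. It assumes binary occupancy labels and that every cluster contains both classes; the coarse-to-fine clustering and the marching-cubes surface extraction are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def fit_local_gps(X, y, n_clusters=8):
    """Partition training points with k-means and fit one GP classifier per
    cluster, cutting the global O(n^3) cost to O((n/k)^3) per local model.
    Assumes y holds binary occupancy labels {0, 1} and that each cluster
    contains samples of both classes."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    gps = [GaussianProcessClassifier(kernel=RBF(1.0))
           .fit(X[km.labels_ == c], y[km.labels_ == c])
           for c in range(n_clusters)]
    return km, gps

def predict_occupancy(km, gps, Xq):
    """Route each query point to its nearest cluster's local GP and return
    the continuous occupancy probability P(occupied)."""
    labels = km.predict(Xq)
    p = np.empty(len(Xq))
    for c, gp in enumerate(gps):
        m = labels == c
        if m.any():
            p[m] = gp.predict_proba(Xq[m])[:, 1]  # column 1 = class "1"
    return p
```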

  16. The current status of medical malpractice countersuits.

    PubMed

    Sokol, D J

    1985-01-01

    The dramatic growth of medical malpractice litigation in recent decades has contributed significantly to an overall increase in health care costs in this country. Although lawmakers, physicians, and other responsible citizens have proposed numerous solutions in an effort to curb the crisis, these proposals have generally been ineffective. In this Article the Author endorses countersuits as the most appropriate response to frivolous medical malpractice actions. The Author also suggests that contingent fee systems, coupled with the economic motivation of private insurers to settle claims quickly, provide incentive for plaintiffs to initiate frivolous claims. This Article analyzes the general legal approaches available for countersuits, emphasizing recent successful actions based on malicious prosecution and abuse of process, and proposes more widespread use of these approaches.

  17. Extracting distribution and expansion of rubber plantations from Landsat imagery using the C5.0 decision tree method

    NASA Astrophysics Data System (ADS)

    Sun, Zhongchang; Leinenkugel, Patrick; Guo, Huadong; Huang, Chong; Kuenzer, Claudia

    2017-04-01

    Natural tropical rainforests in China's Xishuangbanna region have undergone dramatic conversion to rubber plantations in recent decades, altering the region's environment and ecological systems. It is therefore of great importance for local environmental and ecological protection agencies to track the distribution and expansion of rubber plantations. The objective of this paper is to monitor dynamic changes of rubber plantations in China's Xishuangbanna region based on multitemporal Landsat images (acquired in 1989, 2000, and 2013) using a C5.0-based decision-tree method. A practical, semiautomatic data processing procedure for mapping rubber plantations is proposed. In particular, haze removal and deshadowing steps perform atmospheric and topographic correction and reduce the effects of haze, shadow, and terrain. Our results showed that the atmospheric and topographic correction improved the extraction accuracy of rubber plantations, especially in mountainous areas. The overall classification accuracies were 84.2%, 83.9%, and 86.5% for the Landsat images acquired in 1989, 2000, and 2013, respectively. This study also found that the Landsat-8 images provide a significant improvement in the ability to identify rubber plantations. The extracted maps showed that the selected study area underwent rapid conversion of natural and seminatural forest to rubber plantations from 1989 to 2013: the rubber plantation area increased from 2.8% in 1989 to 17.8% in 2013, while the forest/woodland area decreased from 75.6% to 44.8% over the same period. The proposed data processing procedure is a promising approach to mapping the spatial distribution and temporal dynamics of rubber plantations on a regional scale.

  18. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
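
    For reference, a minimal, plain FISTA for an l1-regularized linear inverse problem is sketched below; the accelerated variants in the paper replace the plain gradient step with an OS-SART subproblem and solve weighted proximal problems on GPUs, none of which is reproduced here.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """Plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - b) / L    # gradient step at the momentum point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov momentum
        x, t = x_new, t_new
    return x

# Demo: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 1.0
x_hat = fista_l1(A, A @ x_true + 0.01 * rng.standard_normal(80), lam=0.1)
```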

  19. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  20. Analysis of algebraic reconstruction technique for accurate imaging of gas temperature and concentration based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He

    2016-06-01

    An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality improves dramatically as the number of projection rays increases, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on accuracy. The temperature reconstructions are more accurate than the water vapor concentrations obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and thereby greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and this reconstruction formula is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
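
    A minimal sketch of the classic ART (Kaczmarz) update underlying this kind of reconstruction follows, assuming a precomputed path-length matrix for the projection geometry; the relaxation factor and non-negativity projection are common practical choices, not specifics from the paper.

```python
import numpy as np

def art_reconstruct(A, p, n_sweeps=50, relax=0.2):
    """Classic ART (Kaczmarz) for tomographic absorption data.

    A: (n_rays, n_cells) path-length matrix of the projection geometry;
    p: (n_rays,) measured line-of-sight absorbances. Each update projects
    the current image onto the hyperplane of one ray's measurement.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
        np.maximum(x, 0.0, out=x)  # non-negativity: concentrations >= 0
    return x

# Demo: recover a 4-cell "image" from 6 synthetic rays.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([0.0, 1.0, 2.0, 0.5])
print(art_reconstruct(A, A @ x_true))
```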

  1. Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains

    PubMed Central

    Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph

    2014-01-01

    The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite-dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an enzymatic futile cycle, and a stochastic switch model. The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches. PMID:24626049
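
    To give a concrete sense of the QTT format itself, the sketch below implements a generic TT-SVD decomposition of a flat vector (QTT is the special case in which every mode size is 2). It illustrates only the representation, not the paper's time stepping or DMRG-based solver; the demo exploits the fact that an exponential sampled on a uniform grid factorizes over binary digits and is therefore exactly QTT rank 1.

```python
import numpy as np

def tt_svd(vec, dims, eps=1e-10):
    """Decompose a flat vector into tensor-train cores by sequential
    truncated SVDs; the QTT format is the case dims = (2, 2, ..., 2)."""
    cores, r = [], 1
    rem = np.asarray(vec, dtype=float).reshape(-1)
    for n in dims[:-1]:
        rem = rem.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(rem, full_matrices=False)
        keep = max(1, int((s > eps * s[0]).sum()))  # truncated TT rank
        cores.append(U[:, :keep].reshape(r, n, keep))
        rem = s[:keep, None] * Vt[:keep]
        r = keep
    cores.append(rem.reshape(r, dims[-1], 1))
    return cores

# exp on a uniform grid has QTT ranks of exactly 1, so 2**10 values
# compress into ten 1x2x1 cores.
x = np.exp(np.linspace(0.0, 1.0, 2**10))
print([c.shape for c in tt_svd(x, (2,) * 10)])
```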

  2. Detection and Alignment of 3D Domain Swapping Proteins Using Angle-Distance Image-Based Secondary Structural Matching Techniques

    PubMed Central

    Wang, Hsin-Wei; Hsu, Yen-Chu; Hwang, Jenn-Kang; Lyu, Ping-Chiang; Pai, Tun-Wen; Tang, Chuan Yi

    2010-01-01

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had “opened” their “closed” structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid-1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8 Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. In addition, the quality of its hinge loop determination is comparable to that of manual inspection. This method has been implemented as a web-based tool, which requires two protein structures as the input; the type and/or existence of DS relationships between the input structures is then determined according to the A-D image-based structural alignments and the DS score. The proposed method is expected to trigger large-scale studies of this interesting structural phenomenon and facilitate related applications. PMID:20976204

  3. Two-phase computerized planning of cryosurgery using bubble-packing and force-field analogy.

    PubMed

    Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2006-02-01

    Cryosurgery is the destruction of undesired tissues by freezing, as in prostate cryosurgery, for example. Minimally invasive cryosurgery is currently performed by means of an array of cryoprobes, each in the shape of a long hypodermic needle. The optimal arrangement of the cryoprobes, which is known to have a dramatic effect on the quality of the cryoprocedure, remains an art held by the cryosurgeon, based on experience and "rules of thumb." An automated computerized technique for cryosurgery planning is the subject of the current paper, in an effort to improve the quality of cryosurgery. A two-phase optimization method is proposed for this purpose, based on two previous and independent developments by this research team. Phase I is based on a bubble-packing method, previously used as an efficient method for finite element meshing. Phase II is based on a force-field analogy method, which has proven robust at the expense of a typically long runtime. As a proof of concept, results are demonstrated on a two-dimensional case of a prostate cross section. The major contribution of this study is to affirm that, in many instances, cryosurgery planning can be performed without extremely expensive simulations of bioheat transfer, as achieved in Phase I. This new planning method has proven to reduce planning runtime from hours to minutes, making automated planning practical in a clinical time frame.

  4. A semi-classical approach to the calculation of highly excited rotational energies for asymmetric-top molecules

    PubMed Central

    Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergey N.; Yachmenev, Andrey

    2017-01-01

    We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states and dramatically reduces the computing time as well as the storage and memory requirements compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. PMID:28000807

  5. Preservation And Processing Methods For Molecular Genetic Detection And Quantification Of Nosema Ceranae

    USDA-ARS?s Scientific Manuscript database

    The prevalence of Nosema ceranae in managed honey bee colonies has increased dramatically in the past 10 – 20 years worldwide. A variety of genetic testing methods for species identification and prevalence are now available. However sample size and preservation method of samples prior to testing hav...

  6. Teaching Hearing-Impaired Children in Iraq Using a New Teaching Method.

    ERIC Educational Resources Information Center

    Harris, N. D. C.; Mustafa, N.

    1986-01-01

    Describes a field test and results of a new didactic teaching method involving resource-based learning to teach various aspects of mathematics and science (fractions, magnetism, planets) to elementary-aged hearing-impaired students in Iraq. The dramatic improvements in language for learners are described, and implications of the methods are…

  7. Integration of Sparse Multi-modality Representation and Anatomical Constraint for Isointense Infant Brain MR Image Segmentation

    PubMed Central

    Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effects, and the ongoing maturation and myelination process. During the first year of life, the image contrast between white and gray matter undergoes dramatic changes. In particular, the contrast inverts around 6–8 months of age, when white and gray matter tissues are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2, and FA images. The segmentation result is then iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age using leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved high accuracy in the Dice ratios that measure the volume overlap between automated and manual segmentations: 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615

  8. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging, and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximum frame rate of video capturing devices, and further resolution increases pose numerous challenges. As the pixel size is reduced, the amount of light collected per pixel also decreases, leading to increased noise levels. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase due to the higher frame rates. To reduce the complexity and price of the camera, one sensor captures all three colors by relying on a color filter array; to obtain a full-resolution color image, the missing color components have to be interpolated, i.e., demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method that jointly performs chromatic aberration correction, denoising, and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. To reduce possible flicker, we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  9. Channelized relevance vector machine as a numerical observer for cardiac perfusion defect detection task

    NASA Astrophysics Data System (ADS)

    Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.

    2011-03-01

    In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating human observer accuracy for a specific diagnostic task, an approach known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-consuming. To address this problem, numerical observers have been developed as surrogates for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with an internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers can be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy while dramatically reducing model complexity and computation time.

  10. Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region

    NASA Astrophysics Data System (ADS)

    Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

    2003-06-01

    The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which light propagation in the CSF layer is assumed to obey radiosity theory, has been employed to predict light propagation in head models. Although the CSF layer is treated as a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which light propagation obeys neither the diffusion approximation nor radiosity theory. Light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those of the Monte Carlo method, whereas results calculated by the diffusion approximation alone included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer; hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

  11. Hybrid Monte Carlo-diffusion method for light propagation in tissue with a low-scattering region.

    PubMed

    Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

    2003-06-01

    The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which light propagation in the CSF layer is assumed to obey radiosity theory, has been employed to predict light propagation in head models. Although the CSF layer is treated as a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which light propagation obeys neither the diffusion approximation nor radiosity theory. Light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those of the Monte Carlo method, whereas results calculated by the diffusion approximation alone included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer; hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

  12. Evaluation of a Cubature Kalman Filtering-Based Phase Unwrapping Method for Differential Interferograms with High Noise in Coal Mining Areas

    PubMed Central

    Liu, Wanli; Bian, Zhengfu; Liu, Zhenguo; Zhang, Qiuzhao

    2015-01-01

    Differential interferometric synthetic aperture radar has been shown to be effective for monitoring subsidence in coal mining areas, and phase unwrapping can have a dramatic influence on the monitoring result. In this paper, a filtering-based phase unwrapping algorithm in combination with path-following is introduced to unwrap differential interferograms with high noise in mining areas. It performs simultaneous noise filtering and phase unwrapping, so that the pre-filtering steps can be omitted, usually retaining more details and improving the detectable deformation. In the method, the nonlinear measurement model of phase unwrapping is processed using a simplified cubature Kalman filter, an effective and efficient tool for many nonlinear estimation problems. Three case studies are designed to evaluate the performance of the method. In Case 1, two tests evaluate the performance under different factors, including the number of multi-looks and the path-guiding index. The results demonstrate that the unwrapped results are sensitive to the number of multi-looks and that the Fisher Distance is the most suitable path-guiding index for our study. Two further case studies then evaluate the feasibility of the proposed phase unwrapping method based on cubature Kalman filtering. The results indicate that, compared with the popular Minimum Cost Flow method, the cubature Kalman filtering-based phase unwrapping can achieve promising results without pre-filtering and is an appropriate method for coal mining areas with high noise. PMID:26153776
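
    For orientation, a generic cubature Kalman filter measurement update is sketched below; the paper embeds a simplified CKF in a specific path-following phase-unwrapping measurement model that is not reproduced here. The measurement function h and the noise covariance R in the toy usage are placeholders.

```python
import numpy as np

def ckf_update(x, P, z, h, R):
    """One cubature Kalman filter measurement update.

    x, P: prior mean and covariance; z: measurement vector; h: nonlinear
    measurement function R^n -> R^m; R: measurement noise covariance.
    The 2n cubature points are x +/- sqrt(n) * (columns of chol(P)).
    """
    n = len(x)
    S = np.linalg.cholesky(P)
    pts = np.hstack([x[:, None] + np.sqrt(n) * S,
                     x[:, None] - np.sqrt(n) * S])   # (n, 2n) cubature points
    Z = np.apply_along_axis(h, 0, pts)               # propagate each point
    z_hat = Z.mean(axis=1)
    dZ, dX = Z - z_hat[:, None], pts - x[:, None]
    Pzz = dZ @ dZ.T / (2 * n) + R                    # innovation covariance
    Pxz = dX @ dZ.T / (2 * n)                        # cross-covariance
    K = Pxz @ np.linalg.inv(Pzz)                     # cubature Kalman gain
    return x + K @ (z - z_hat), P - K @ Pzz @ K.T

# Toy usage: range-bearing observation of a 2D position (placeholder model).
h = lambda s: np.array([np.hypot(s[0], s[1]), np.arctan2(s[1], s[0])])
x_post, P_post = ckf_update(np.array([1.0, 1.0]), np.eye(2),
                            np.array([1.5, 0.8]), h, 0.01 * np.eye(2))
```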

  13. The Development Of New Space Charge Compensation Methods For Multi-Components Ion Beam Extracted From ECR Ion Source at IMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, L.; Zhao, H.W.; Cao, Y.

    2005-03-15

    Two new space charge compensation methods developed at IMP are discussed in this paper: the negative high-voltage electrode method (NHVEM) and the electronegative gas method (EGM). Valuable experimental data have been obtained; in particular, with the electronegative gas method a dramatic and stable increase of the O6+ and O7+ ion currents was observed.

  14. The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritschl, Ludwig; Fleischmann, Christof; Kuntz, Jan, E-mail: j.kuntz@dkfz.de

    Purpose: In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically, a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest, so an angular range of at least 180° plus the fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus the fan angle, using a new trajectory design. This enables CT-like 3D imaging with a wide range of C-arm devices that are mainly designed for 2D imaging. Methods: The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts provide an additional angular range of half the fan angle. Combining one shift at the beginning of the scan, followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus the fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. Results: The proposed trajectory leads to reconstructions without limited-angle artifacts. Compared to limited-angle reconstructions from 180° minus the fan angle, image quality increased dramatically: details in the rotate-plus-shift reconstructions were clearly depicted, whereas they were dominated by artifacts in the limited-angle scan. Conclusions: The method proposed here enables 3D imaging with C-arms of less than 180° rotation range, adding full 3D functionality to a C-arm device while retaining the handling comfort and usability of 2D imaging. This method has clear potential for clinical use, especially to meet the increasing demand for intraoperative 3D imaging.

  15. Mapping "Trauma-Informed" Legislative Proposals in U.S. Congress.

    PubMed

    Purtle, Jonathan; Lewis, Michael

    2017-11-01

    Despite calls for translation of trauma-informed practice into public policy, no empirical research has investigated how the construct has been integrated into policy proposals. This policy mapping study identified and analyzed every bill introduced in the U.S. Congress that mentioned "trauma-informed" between 1973 and 2015. Forty-nine bills and 71 bill sections mentioned the construct. The number of trauma-informed bills introduced annually increased dramatically, from 0 in 2010 to 28 in 2015. Trauma-informed bill sections targeted a range of sectors but disproportionately focused on youth (73.2%). Only three bills defined "trauma-informed." Implications within the context of a changing political environment are discussed.

  16. Administrative compensation of medical injuries: a hardy perennial blooms again.

    PubMed

    Barringer, Paul J; Studdert, David M; Kachalia, Allen B; Mello, Michelle M

    2008-08-01

    Periods in which the costs of personal injury litigation and liability insurance have risen dramatically have often provoked calls for reform of the tort system, and medical malpractice is no exception. One proposal for fundamental reform made during several of these volatile periods has been to relocate personal injury disputes from the tort system to an alternative, administrative forum. In the medical injury realm, a leading incarnation of such proposals in recent years has been the idea of establishing specialized administrative "health courts." Despite considerable stakeholder and policy-maker interest, administrative compensation proposals have tended to struggle for broad political acceptance. In this article, we consider the historical experience of administrative medical injury compensation proposals, particularly in light of comparative examples in the context of workplace injuries, automobile injuries, and vaccine injuries. We conclude by examining conditions that may facilitate or impede progress toward establishing demonstration projects of health courts.

  17. Handheld microwave bomb-detecting imaging system

    NASA Astrophysics Data System (ADS)

    Gorwara, Ashok; Molchanov, Pavlo

    2017-05-01

    The proposed novel imaging technique will provide all-weather, high-resolution imaging and recognition capability for RF/microwave signals with good penetration through highly scattering media: fog, snow, dust, smoke, even foliage, camouflage, walls, and ground. Image resolution in the proposed imaging system is not limited by diffraction; it is determined by the processor and the sampling frequency. The proposed imaging system can simultaneously cover a wide field of view, detect multiple targets, and can be multi-frequency and multi-function. The directional antennas of the imaging system can be closely positioned and installed in a cell-phone-sized handheld device, on a small aircraft, or distributed around a protected border or object. The non-scanning monopulse design allows a dramatic decrease in transmitted power and at the same time provides increased imaging range by integrating two to three orders of magnitude more signal than regular scanning imaging systems.

  18. The Logic of Wikis: The Possibilities of the Web 2.0 Classroom

    ERIC Educational Resources Information Center

    Glassman, Michael; Kang, Min Ju

    2011-01-01

    The emergence of Web 2.0 and some of its ascendant tools such as blogs and wikis have the potential to dramatically change education, both in how we conceptualize and operationalize processes and strategies. We argue in this paper that it is a change that has been over a century in coming. The promise of the Web 2.0 is similar to ideas proposed by…

  19. First-Class State Change in Plaid

    DTIC Science & Technology

    2011-10-01

    conceptual state change. State change is pervasive in the natural world; as a dramatic example, consider the state transition from egg, to caterpillar, ... All this raises a natural question: why not support abstract states in programming languages? We previously proposed Typestate-Oriented Programming as ... well as trait-like state composition. Plaid has been implemented, and has proven effective for writing a diverse set of small and medium-sized (up to

  20. Real-Time Closed Loop Modulated Turbine Cooling

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Culley, Dennis E.; Eldridge, Jeffrey; Jones, Scott; Woike, Mark; Cuy, Michael

    2014-01-01

    It has been noted by industry that, in addition to dramatic variations of temperature over a given blade surface, blade-to-blade variations also exist despite identical designs. These variations result from manufacturing variations, uneven wear and deposition over the life of the part, as well as limitations in the uniformity of coolant distribution in the baseline cooling design. It is proposed to combine recent advances in optical sensing, actuation, and film cooling concepts to develop a workable active, closed-loop modulated turbine cooling system to improve the turbine thermal state by 10 to 20 over the flight mission, to improve engine life, and to dramatically reduce turbine cooling air usage and aircraft fuel burn. A reduction in oxides of nitrogen (NOx) can also be achieved by using the excess coolant to improve mixing in the combustor, especially for rotorcraft engines. Recent patents filed by industry and universities relate to modulating endwall cooling using valves; these schemes are complex, add weight, and are limited to the endwalls. The novelty of the proposed approach is twofold: (1) fluidic diverters, which have no moving parts, are used to modulate cooling and can operate under a wide range of conditions and environments; (2) real-time optical sensing to map the thermal state of the turbine has never been attempted in realistic engine conditions.

  1. Cast B2-phase iron-aluminum alloys with improved fluidity

    DOEpatents

    Maziasz, Philip J.; Paris, Alan M.; Vought, Joseph D.

    2002-01-01

    Systems and methods are described for iron aluminum alloys. A composition includes iron, aluminum and manganese. A method includes providing an alloy including iron, aluminum and manganese; and processing the alloy. The systems and methods provide advantages because additions of manganese to iron aluminum alloys dramatically increase the fluidity of the alloys prior to solidification during casting.

  2. Non-Traditional Methods of Improving the Communication Skills of Disadvantaged Students

    ERIC Educational Resources Information Center

    Wilson, Brenda M.; Power, Marian E.

    1978-01-01

    Educators are encouraged to use some of the non-traditional student-centered methods for improving the communication skills of disadvantaged students, including technological aids such as books, tapes, cable T.V., video tapes, computers, etc., and devices such as role playing and dramatizations. (AM)

  3. Deformable segmentation via sparse representation and dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or mis-leading, shape priors play a more important role to guide a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics is not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significant reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Novel approach to integrated DNA adductomics for the assessment of in vitro and in vivo environmental exposures.

    PubMed

    Chang, Yuan-Jhe; Cooke, Marcus S; Hu, Chiung-Wen; Chao, Mu-Rong

    2018-06-25

    Adductomics is expected to be useful in the characterization of the exposome, which is a new paradigm for studying the sum of environmental causes of diseases. DNA adductomics is emerging as a powerful method for detecting DNA adducts, but reliable assays for its widespread, routine use are currently lacking. We propose a novel integrated strategy for the establishment of a DNA adductomic approach, using liquid chromatography-triple quadrupole tandem mass spectrometry (LC-QqQ-MS/MS), operating in constant neutral loss scan mode, screening for both known and unknown DNA adducts in a single injection. The LC-QqQ-MS/MS was optimized using a representative sample of 23 modified 2'-deoxyribonucleosides reflecting a range of biologically relevant DNA lesions. Six internal standards (ISTDs) were evaluated for their ability to normalize, and hence correct, possible variation in peak intensities arising from matrix effects and the quantities of DNA injected. The results revealed that, with appropriate ISTD adjustment, bias can be dramatically reduced from 370% to 8.4%. Identification of the informative DNA adducts was achieved by triggering fragmentation spectra of target ions. The LC-QqQ-MS/MS method was successfully applied to in vitro and in vivo studies to screen for DNA adducts formed following representative environmental exposures: methyl methanesulfonate (MMS) and five N-nitrosamines. Interestingly, five new DNA adducts induced by MMS were discovered using our adductomic approach, an added strength. The proposed integrated strategy provides a path forward for DNA adductomics to become a standard method to discover differences in DNA adduct fingerprints between populations exposed to genotoxins, and to facilitate the field of exposomics.
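
    The ISTD correction itself is simple arithmetic; a minimal sketch follows, with all peak areas invented for illustration.

```python
# Hypothetical sketch of internal-standard (ISTD) normalization: dividing
# each adduct peak area by the co-injected ISTD area cancels matrix effects
# and injection-amount variability that scale both signals alike.
import numpy as np

adduct_area = np.array([1.2e5, 3.4e5, 8.9e4])   # raw adduct peak areas per sample (invented)
istd_area   = np.array([2.0e5, 6.5e5, 1.6e5])   # matching ISTD peak areas (invented)

normalized = adduct_area / istd_area
relative_to_first = normalized / normalized[0]  # response relative to sample 1
print(relative_to_first)
```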

  5. Marble Ageing Characterization by Acoustic Waves

    NASA Astrophysics Data System (ADS)

    Boudani, Mohamed El; Wilkie-Chancellier, Nicolas; Martinez, Loïc; Hébert, Ronan; Rolland, Olivier; Forst, Sébastien; Vergès-Belmin, Véronique; Serfaty, Stéphane

    In cultural heritage, statue marble characterization by acoustic waves is a well-known non-destructive method. Such investigations through statues by the time-of-flight (TOF) method show that sound speeds decrease with ageing. However, for statues stored outdoors, such as those in the gardens of the Chateau de Versailles, ageing affects mainly the surface of the Carrara marble. The present paper proposes an experimental study of the variations in marble acoustic properties during accelerated laboratory ageing. The surface degradation of the marble is reproduced in the laboratory on 29 mm thick marble samples by applying heating/cooling thermal cycles to one face of a marble plate. Acoustic waves are generated by 1 MHz central-frequency contact transducers, excited by a voltage pulse, placed on both sides of the plate. During ageing, and by using ad hoc transducers, the marble samples are characterized in transmission, through their volume by shear and compressional TOF measurements, and along their surface by Rayleigh wave measurements. For Rayleigh waves, both TOF by transducers and laser vibrometry are used to detect the Rayleigh wave. The transmission measurements reveal a deep decrease of the wave speeds in conjunction with a dramatic decrease of the maximum transmitted frequency. The marble acts as a low-pass filter whose characteristic cut-off frequency decreases with ageing. This pattern also occurs for the Rayleigh wave surface measurements. The speed change in conjunction with the bandwidth translation is shown to be correlated with the material de-structuration during ageing. With similar behavior but reversed in time, the same kind of phenomenon has been observed in sol-gel materials during their structuration from liquid to solid state (Martinez, L. et al. (2004). "Chirp-Z analysis for sol-gel transition monitoring". Ultrasonics, 42(1), 507-510). A model is proposed to interpret the acoustical measurements.
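
    For orientation, the TOF-to-speed conversion behind these measurements is direct; the sketch below uses the paper's 29 mm plate thickness but invented flight times.

```python
# Hypothetical sketch: wave speed from time of flight (TOF) in transmission
# through a plate of known thickness; a longer TOF after ageing means a
# slower wave, consistent with material de-structuration.
thickness_m = 29e-3          # 29 mm marble plate (from the paper)
tof_fresh_s = 4.8e-6         # TOF before ageing (invented)
tof_aged_s  = 6.5e-6         # TOF after thermal cycling (invented)

v_fresh = thickness_m / tof_fresh_s
v_aged = thickness_m / tof_aged_s
drop_pct = 100 * (v_fresh - v_aged) / v_fresh
print(f"speed fresh: {v_fresh:.0f} m/s, aged: {v_aged:.0f} m/s ({drop_pct:.1f}% drop)")
```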

  6. Chemical Insight Into The Origin of Red and Blue Photoluminescence Arising From Freestanding Silicon Nanocrystals

    PubMed Central

    Dasog, Mita; Yang, Zhenyu; Regli, Sarah; Atkins, Tonya M.; Faramus, Angelique; Singh, Mani P.; Muthuswamy, Elayaraja; Kauzlarich, Susan M.; Tilley, Richard D.; Veinot, Jonathan G. C.

    2013-01-01

    Silicon nanocrystals (Si NCs) are attractive functional materials. They are compatible with standard electronics and communications platforms as well as being biocompatible. Numerous methods have been developed to realize size-controlled Si NC synthesis. While these procedures produce Si NCs that appear identical, their optical responses can differ dramatically. Si NCs prepared using high-temperature methods routinely exhibit photoluminescence agreeing with the effective mass approximation (EMA), while those prepared via solution methods exhibit blue emission that is somewhat independent of particle size. Despite many proposals, a definitive explanation for this difference has been elusive for no less than a decade. This apparent dichotomy brings into question our understanding of Si NC properties and potentially limits the scope of their application. The present contribution takes a substantial step forward toward identifying the origin of the blue emission that is not expected based upon EMA predictions. It describes a detailed comparison of Si NCs obtained from three of the most widely cited procedures as well as the conversion of red-emitting Si NCs to blue emitters upon exposure to nitrogen-containing reagents. Analysis of the evidence is consistent with the hypothesis that the presence of trace nitrogen and oxygen even at the ppm level in Si NCs gives rise to the blue emission. PMID:23394574

  7. Chemical insight into the origin of red and blue photoluminescence arising from freestanding silicon nanocrystals.

    PubMed

    Dasog, Mita; Yang, Zhenyu; Regli, Sarah; Atkins, Tonya M; Faramus, Angelique; Singh, Mani P; Muthuswamy, Elayaraja; Kauzlarich, Susan M; Tilley, Richard D; Veinot, Jonathan G C

    2013-03-26

    Silicon nanocrystals (Si NCs) are attractive functional materials. They are compatible with standard electronics and communications platforms and are biocompatible. Numerous methods have been developed to realize size-controlled Si NC synthesis. While these procedures produce Si NCs that appear identical, their optical responses can differ dramatically. Si NCs prepared using high-temperature methods routinely exhibit photoluminescence agreeing with the effective mass approximation (EMA), while those prepared via solution methods exhibit blue emission that is somewhat independent of particle size. Despite many proposals, a definitive explanation for this difference has been elusive for no less than a decade. This apparent dichotomy brings into question our understanding of Si NC properties and potentially limits the scope of their application. The present contribution takes a substantial step forward toward identifying the origin of the blue emission that is not expected based upon EMA predictions. It describes a detailed comparison of Si NCs obtained from three of the most widely cited procedures as well as the conversion of red-emitting Si NCs to blue emitters upon exposure to nitrogen-containing reagents. Analysis of the evidence is consistent with the hypothesis that the presence of trace nitrogen and oxygen even at the parts per million level in Si NCs gives rise to the blue emission.

  8. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    NASA Astrophysics Data System (ADS)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g., LED backlighting), obtaining high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source, and a near-field target is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the above-mentioned harsh conditions. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, an extended source, and a near-field target at the same time. To prove these claims, a slim direct-lit LED backlight with DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of the LED backlight with the new lens increases significantly from 0.45 to 0.84, and CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh condition. Meanwhile, the luminance uniformity of the LED backlight with the new lens reaches as high as 0.92 under the conditions of an extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source, and near field for LED array illumination.

  9. Two-phase Computerized Planning of Cryosurgery Using Bubble-packing and Force-field Analogy

    PubMed Central

    Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2007-01-01

    Background: Cryosurgery is the destruction of undesired tissues by freezing, as in prostate cryosurgery, for example. Minimally-invasive cryosurgery is currently performed by means of an array of cryoprobes, each in the shape of a long hypodermic needle. The optimal arrangement of the cryoprobes, which is known to have a dramatic effect on the quality of the cryoprocedure, remains an art held by the cryosurgeon, based on the cryosurgeon's experience and “rules of thumb.” An automated computerized technique for cryosurgery planning is the subject matter of the current report, in an effort to improve the quality of cryosurgery. Method of Approach: A two-phase optimization method is proposed for this purpose, based on two previous and independent developments by this research team. Phase I is based on a bubble-packing method, previously used as an efficient method for finite element meshing. Phase II is based on a force-field analogy method, which has proven to be robust at the expense of a typically long runtime. Results: As a proof-of-concept, results are demonstrated on a 2D case of a prostate cross-section. The major contribution of this study is to affirm that in many instances cryosurgery planning can be performed without extremely expensive simulations of bioheat transfer, as achieved in Phase I. Conclusions: This new method of planning has proven to reduce planning runtime from hours to minutes, making automated planning practical in a clinical time frame. PMID:16532617

  10. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
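
    The flavor of such a sequential search can be conveyed by a toy forward-stepwise loop; the sketch below greedily adds the basis term most correlated with the current residual, a much simpler criterion than the probabilistic one the paper develops.

```python
# Hypothetical sketch of forward stepwise selection of polynomial basis
# terms: repeatedly add the candidate most correlated with the residual,
# then refit by least squares. Toy 1-D Legendre basis, not a real PC setup.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=200)
y = 1.0 + 2.0 * x + 0.7 * (1.5 * x**2 - 0.5) + 0.05 * rng.normal(size=200)

basis = np.stack([np.polynomial.legendre.Legendre.basis(d)(x) for d in range(6)], axis=1)

selected, residual = [], y.copy()
for _ in range(3):                           # keep the 3 most significant terms
    scores = np.abs(basis.T @ residual)      # correlation with current residual
    scores[selected] = -np.inf               # do not re-select a chosen term
    selected.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(basis[:, selected], y, rcond=None)
    residual = y - basis[:, selected] @ coef
print("selected polynomial degrees:", sorted(selected))
```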

  11. multiDE: a dimension reduced model based statistical method for differential expression analysis using RNA-sequencing data with multiple treatment conditions.

    PubMed

    Kang, Guangliang; Du, Li; Zhang, Hong

    2016-06-22

    The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodating statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic power improvement in testing for DE genes when the number of conditions is greater than two. In our simulations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.

  12. Balance training using an interactive game to enhance the use of the affected side after stroke.

    PubMed

    Ciou, Shih-Hsiang; Hwang, Yuh-Shyan; Chen, Chih-Chen; Chen, Shih-Ching; Chou, Shih-Wei; Chen, Yu-Luen

    2015-12-01

    [Purpose] Stroke and other cerebrovascular diseases are major causes of adult mobility problems. Because stroke immobilizes the affected body part, balance training often relies on the healthy body part to complete the target movement. The muscle utilization rate on the stroke-affected side is often reduced, which further hinders affected-side functional recovery in rehabilitation. [Subjects and Methods] This study tested a newly developed interactive device with two force plates measuring the right- and left-side centers of pressure, to establish its efficacy in improving the static standing ability of patients with hemiplegia. An interactive virtual reality game with different side reaction ratios was used to improve patient balance. The feasibility of the proposed approach was experimentally demonstrated. [Results] Although the non-affected side is usually used to support the body weight in the standing position, under certain circumstances the patients could switch to using the affected side. A dramatic improvement in static standing balance control was achieved in the eyes-open condition. [Conclusion] The proposed dual force plate technique used in this study separately measured the affected- and non-affected-side centers of pressure. Based on this approach, different side ratio integration was achieved using an interactive game that helped stroke patients improve balance on the affected side. Only the patient who had suffered a stroke relatively recently benefited significantly. The proposed technique is of little benefit for patients whose mobility has stagnated at a certain level.
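
    The dual-plate measurement reduces to load-weighted arithmetic; a minimal sketch with invented loads follows.

```python
# Hypothetical sketch of the dual force-plate computation: per-side loads
# give the affected-side weight-bearing ratio, and the net center of
# pressure (COP) is the load-weighted average of the per-plate COPs.
import numpy as np

f_affected, f_sound = 210.0, 480.0        # vertical loads in newtons (invented)
cop_affected = np.array([-0.12, 0.02])    # per-plate COP in metres (invented)
cop_sound = np.array([0.13, 0.01])

affected_ratio = f_affected / (f_affected + f_sound)
cop_net = (f_affected * cop_affected + f_sound * cop_sound) / (f_affected + f_sound)
print(f"affected-side load share: {affected_ratio:.0%}, net COP: {cop_net}")
```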

  13. Analysis and compensation of reference frequency mismatch in multiple-frequency feedforward active noise and vibration control system

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu

    2017-11-01

    In the field of active noise and vibration control (ANVC), a considerable part of unwelcome noise and vibration results from rotating machines, making the spectrum of the response signal multiple-frequency. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm to suppress such noise and vibration. It performs well because a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is exploited. However, if this prior knowledge is inaccurate, the control performance will be dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. The simulation results show that the proposed algorithm can dramatically suppress the multiple-frequency noise and vibration with an improved convergence rate whether or not there is RFM. Finally, case studies using experimental data are conducted under conditions of no, small, and large RFM. The shaft radial run-out signal of a rotor test platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
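
    To make the algorithmic setting concrete, here is a minimal single-harmonic narrowband FXLMS loop with sine/cosine references; the secondary path is collapsed to a pure gain and all signals are synthetic, so this sketches the family of algorithms the paper improves, not the proposed regulator itself.

```python
# Hypothetical sketch of single-frequency narrowband FXLMS: two adaptive
# weights on cos/sin references cancel a tonal disturbance at the error
# sensor. The reference frequency f0 is assumed known exactly here, which
# is precisely the assumption that RFM breaks.
import numpy as np

fs, f0, mu, n = 2000.0, 50.0, 0.01, 4000   # sample rate, tone, step size, samples
w = np.zeros(2)                            # weights for the cos/sin references
g = 0.8                                    # toy secondary-path gain

err = np.empty(n)
for k in range(n):
    x = np.array([np.cos(2 * np.pi * f0 * k / fs),
                  np.sin(2 * np.pi * f0 * k / fs)])
    y = w @ x                              # anti-noise output
    e = 1.5 * np.cos(2 * np.pi * f0 * k / fs + 0.3) + g * y   # residual
    w -= mu * e * (g * x)                  # filtered-x LMS weight update
    err[k] = e
print(f"residual power first/last 200 samples: "
      f"{np.mean(err[:200]**2):.3f} / {np.mean(err[-200:]**2):.5f}")
```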

  14. The false alarm hypothesis: Food allergy is associated with high dietary advanced glycation end-products and proglycating dietary sugars that mimic alarmins.

    PubMed

    Smith, Peter K; Masilamani, Madhan; Li, Xiu-Min; Sampson, Hugh A

    2017-02-01

    The incidence of food allergy has increased dramatically in the last few decades in westernized developed countries. We propose that the Western lifestyle and diet promote innate danger signals and immune responses through production of "alarmins." Alarmins are endogenous molecules secreted from cells undergoing nonprogrammed cell death that signal tissue and cell damage. High-mobility group box 1 (HMGB1) is a major alarmin that binds to the receptor for advanced glycation end-products (RAGE). Advanced glycation end-products (AGEs) are also present in foods. We propose the "false alarm" hypothesis, in which AGEs that are present in or formed from the food in our diet predispose to food allergy. The Western diet is high in AGEs, which are derived from cooked meat, oils, and cheese. AGEs are also formed in the presence of a high concentration of sugars. We propose that a diet high in AGEs and AGE-forming sugars results in misinterpretation of a threat from dietary allergens, promoting the development of food allergy. AGEs and other alarmins inadvertently prime innate signaling through multiple mechanisms, resulting in the development of allergic phenotypes. Current hypotheses and models of food allergy do not adequately explain the dramatic increase in food allergy in Western countries. Dietary AGEs and AGE-forming sugars might be the missing link, a hypothesis supported by a number of convincing epidemiologic and experimental observations, as discussed in this article. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Genetic Stock Identification Of Production Colonies Of Russian Honey Bees

    USDA-ARS?s Scientific Manuscript database

    The prevalence of Nosema ceranae in managed honey bee colonies has increased dramatically in the past 10 – 20 years worldwide. A variety of genetic testing methods for species identification and prevalence are now available. However sample size and preservation method of samples prior to testing hav...

  16. Position Description Analysis: A Method for Describing Academic Roles and Functions.

    ERIC Educational Resources Information Center

    Renner, K. Edward; Skibbens, Ronald J.

    1990-01-01

    The Position Description Analysis method for assessing the discrepancy between status quo and specializations needed by institutions to meet new demands and expectations is presented using Dalhousie University (Nova Scotia) as a case study. Dramatic realignment of fields of specialization and change strategies accommodating the aging professoriate…

  17. Human factors facilitating the spread of a parasitic honey bee in South Africa.

    PubMed

    Dietemann, Vincent; Lubbe, Annelize; Crewe, Robin M

    2006-02-01

    Workers of the honey bee subspecies Apis mellifera capensis (Eschscholtz) produce female offspring by thelytokous parthenogenesis and can parasitize colonies of other subspecies. In 1990, translocation of 400 colonies of A. m. capensis into the distribution area of A. m. scutellata by a commercial beekeeper triggered a dramatic parasitic phenomenon. Parasitized colonies died within a few months of infestation, and this resulted in the loss of tens of thousands of colonies by commercial beekeepers in the A. m. scutellata range in South Africa. To deal with the problem and to identify methods that would limit the impact of the social parasite, we investigated the link between beekeeping management and severity of parasitic infestations in terms of colony mortality and productivity. We demonstrate that colonies from apiaries subjected to migrations are very susceptible to infestation and consequently show dramatic mortality. Their productivity is also inferior to sedentary colonies and those in isolated apiaries in terms of honey yield and brood quantity. Furthermore, by concentrating hives in small areas and often in the vicinity of other beekeepers, cross-infestations can easily occur. This can undermine previously parasite-free beekeeping businesses. As a result of our surveys, we propose beekeeping practices based on locally trapped bees, reduced migration, and better control of parasite spread, thus promoting the conservation of these pollinators. If followed by all the South African beekeepers, these measures should limit the spread of the parasite until it is eliminated within a few years, after which full migratory beekeeping practices could resume.

  18. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    PubMed

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-08

    A widely applicable free energy contribution analysis (FECA) method, based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches, has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path, including polarization effects on the QM region, within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules, deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.
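
    For reference, the free energy perturbation step along a reaction path is conventionally the Zwanzig expression below (general form only; the paper's QM/MM response-kernel machinery enters through how the energies are evaluated).

```latex
% Zwanzig free-energy perturbation between adjacent points i and i+1
% on the reaction path; the average is over configurations sampled at i.
\Delta F_{i \to i+1}
  = -k_{\mathrm{B}} T \,
    \ln \left\langle
      \exp\!\left( -\frac{E_{i+1} - E_{i}}{k_{\mathrm{B}} T} \right)
    \right\rangle_{i}
```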

  19. Bayesian probabilistic population projections for all countries.

    PubMed

    Raftery, Adrian E; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K

    2012-08-28

    Projections of countries' future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries of different demographic stages, continents and sizes. The method is validated by an out-of-sample experiment in which data from 1950-1990 are used for estimation and applied to predict 1990-2010. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050 and that they underestimate uncertainty for high fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20-64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades.
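
    The final step, turning posterior trajectories into projection intervals, is simple to picture; the sketch below uses synthetic trajectories as stand-ins for MCMC output from the hierarchical model.

```python
# Hypothetical sketch: given many sampled future trajectories of a rate
# (here a toy total fertility rate), probabilistic projections are just
# quantiles across the draws at each horizon.
import numpy as np

rng = np.random.default_rng(2)
n_draws, horizon, tfr0 = 5000, 40, 2.4
drift = rng.normal(-0.015, 0.005, size=(n_draws, 1))        # per-draw trend (invented)
noise = rng.normal(0.0, 0.02, size=(n_draws, horizon)).cumsum(axis=1)
paths = np.clip(tfr0 + drift * np.arange(1, horizon + 1) + noise, 1.0, None)

median = np.median(paths[:, -1])
lo, hi = np.percentile(paths[:, -1], [2.5, 97.5])
print(f"TFR after {horizon} periods: median {median:.2f}, 95% PI [{lo:.2f}, {hi:.2f}]")
```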

  20. A novel environmental DNA approach to quantify the cryptic invasion of non-native genotypes.

    PubMed

    Uchii, Kimiko; Doi, Hideyuki; Minamoto, Toshifumi

    2016-03-01

    The invasion of non-native species that are closely related to native species can lead to competitive elimination of the native species and/or genomic extinction through hybridization. Such invasions often become serious before they are detected, posing unprecedented threats to biodiversity. A Japanese native strain of common carp (Cyprinus carpio) has become endangered owing to the invasion of non-native strains introduced from the Eurasian continent. Here, we propose a rapid environmental DNA-based approach to quantitatively monitor the invasion of non-native genotypes. Using this system, we developed a method to quantify the relative proportion of native and non-native DNA based on a single-nucleotide polymorphism, using cycling probe technology in real-time PCR. The efficiency of this method was confirmed in aquarium experiments, where the quantified proportion of native and non-native DNA in the water was well correlated with the biomass ratio of native and non-native genotypes. This method provided quantitative estimates for the proportion of native and non-native DNA in natural rivers and reservoirs, which allowed us to estimate the degree of invasion of non-native genotypes without catching and analysing individual fish. Our approach would dramatically facilitate the process of quantitatively monitoring the invasion of non-native conspecifics in aquatic ecosystems, thus revealing a promising method for risk assessment and management in biodiversity conservation. © 2015 John Wiley & Sons Ltd.
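
    Once the two genotype-specific signals are quantified, the proportion estimate is a ratio; a minimal sketch with invented copy numbers follows.

```python
# Hypothetical sketch: the native-genotype proportion follows directly
# from the two allele-specific quantities measured in the same water sample.
native_copies = 1.8e4        # native-SNP probe quantification, copies per litre (invented)
non_native_copies = 4.2e4    # non-native-SNP probe quantification (invented)

native_fraction = native_copies / (native_copies + non_native_copies)
print(f"estimated native genotype proportion: {native_fraction:.1%}")
```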

  1. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first, an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having reformulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
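
    The underlying optimization is a welfare-maximizing allocation under flow limits; the toy linear program below, solved with SciPy's HiGHS backend, shows the shape of the problem on an invented two-bid, one-line network (the paper's NDS solver and security constraints are far richer).

```python
# Hypothetical toy FTR auction: maximize bid-weighted awarded MW subject to
# a line flow limit expressed through power transfer distribution factors.
import numpy as np
from scipy.optimize import linprog

bids = np.array([28.0, 22.0])     # $/MW offered for two FTR requests (invented)
ptdf = np.array([[0.6, 0.45]])    # sensitivities of line flow to each award (invented)
limit = np.array([100.0])         # line thermal limit in MW (invented)

res = linprog(
    c=-bids,                                  # linprog minimizes, so negate welfare
    A_ub=np.vstack([ptdf, -ptdf]),            # keep flow within +/- limit
    b_ub=np.concatenate([limit, limit]),
    bounds=[(0.0, 120.0), (0.0, 120.0)],      # requested MW caps
    method="highs",
)
print("awarded MW:", res.x, "social welfare: $", -res.fun)
```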

  2. Correcting Gravitational Deformation at the Tianma Radio Telescope

    NASA Astrophysics Data System (ADS)

    Dong, Jian; Zhong, Weiye; Wang, Jinqing; Liu, Qinghui; Shen, Zhiqiang

    2018-04-01

    The primary reflector of the Tianma Radio Telescope (TMRT) distorts due to gravity, which dramatically reduces the aperture efficiency of high-frequency observations. A technique known as out-of-focus holography (OOF) has been developed to measure gravitational deformation. However, the TMRT has a shaped dual-reflector optical system, so the OOF technique cannot be used directly. An extended OOF (e-OOF) technique that can be used for a shaped telescope is proposed. A new calculation method is developed to calculate the extra phase and illumination. A new measurement strategy is proposed that uses only one feed, reduces the length of the scan pattern, and allows the telescope to scan smoothly at low speed. At the TMRT, the time required for each measurement is under 20 min, the achieved accuracy is approximately 50 μm, and the repeatability is sufficient. We have acquired a model for the gravitational deformation of the TMRT. After applying the model, there is a 150%-400% improvement in the aperture efficiency at low and high elevations. The model flattens the gain curve between 15°-80° elevations with an aperture efficiency of approximately 52%. The final weighted root-mean-square error is approximately 270 μm. The e-OOF technique reduces the constraints on the telescopes.

  3. Deception and Cognitive Load: Expanding Our Horizon with a Working Memory Model

    PubMed Central

    Sporer, Siegfried L.

    2016-01-01

    Recently, studies on deception and its detection have increased dramatically. Many of these studies rely on the “cognitive load approach” as the sole explanatory principle to understand deception. These studies have been exclusively on lies about negative actions (usually lies of suspects of [mock] crimes). Instead, we need to re-focus more generally on the cognitive processes involved in generating both lies and truths, not just on manipulations of cognitive load. Using Baddeley’s (2000, 2007, 2012) working memory model, which integrates verbal and visual processes in working memory with retrieval from long-term memory and control of action, not only verbal content cues but also nonverbal, paraverbal, and linguistic cues can be investigated within a single framework. The proposed model considers long-term semantic, episodic and autobiographical memory and their connections with working memory and action. It also incorporates ironic processes of mental control (Wegner, 1994, 2009), the role of scripts and schemata and retrieval cues and retrieval processes. Specific predictions of the model are outlined and support from selective studies is presented. The model is applicable to different types of reports, particularly about lies and truths about complex events, and to different modes of production (oral, hand-written, typed). Predictions regarding several moderator variables and methods to investigate them are proposed. PMID:27092090

  4. Deception and Cognitive Load: Expanding Our Horizon with a Working Memory Model.

    PubMed

    Sporer, Siegfried L

    2016-01-01

    Recently, studies on deception and its detection have increased dramatically. Many of these studies rely on the "cognitive load approach" as the sole explanatory principle to understand deception. These studies have been exclusively on lies about negative actions (usually lies of suspects of [mock] crimes). Instead, we need to re-focus more generally on the cognitive processes involved in generating both lies and truths, not just on manipulations of cognitive load. Using Baddeley's (2000, 2007, 2012) working memory model, which integrates verbal and visual processes in working memory with retrieval from long-term memory and control of action, not only verbal content cues but also nonverbal, paraverbal, and linguistic cues can be investigated within a single framework. The proposed model considers long-term semantic, episodic and autobiographical memory and their connections with working memory and action. It also incorporates ironic processes of mental control (Wegner, 1994, 2009), the role of scripts and schemata and retrieval cues and retrieval processes. Specific predictions of the model are outlined and support from selective studies is presented. The model is applicable to different types of reports, particularly about lies and truths about complex events, and to different modes of production (oral, hand-written, typed). Predictions regarding several moderator variables and methods to investigate them are proposed.

  5. Dispersion Engineering of High-Q Silicon Microresonators via Thermal Oxidation - Postprint

    DTIC Science & Technology

    2014-03-12

    microresonators, which benefit from dramatic cavity enhancement, enable intriguing functionalities such as ultralow-threshold parametric oscillation [9-11], octave... realization of a desired dispersion in practice is still a challenging problem. In this paper, we propose and demonstrate a simple but powerful ...for broad applications of nonlinear parametric processes. To show the power of this technique, we applied it to achieve highly efficient photon-pair

  6. Measuring refractive index and volume of liquid under high pressure with optical coherence tomography and light microscopy.

    PubMed

    Wang, Donglin; Yang, Kun; Zhou, Yin

    2016-03-20

    Measuring the refractive index and volume of a liquid under high pressure simultaneously is a big challenge. This paper proposes an alternative solution by combining optical coherence tomography with light microscopy. An experiment for a feasibility study was carried out on polydimethylsiloxane liquid in a diamond anvil cell. The refractive index of the sample increased dramatically as pressure was loaded, and the pressure-volume curve was also obtained.
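
    The core trick is that OCT reports optical thickness (index times physical thickness) while microscopy reports physical thickness, so the index follows by division; a sketch with invented readings follows.

```python
# Hypothetical sketch: refractive index from paired OCT and microscopy
# thickness readings of the same liquid layer under pressure.
optical_thickness_um = 742.0    # OCT optical path length, n * t (invented)
physical_thickness_um = 520.0   # microscope-measured thickness t (invented)

n = optical_thickness_um / physical_thickness_um
print(f"refractive index under load: {n:.3f}")
```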

  7. Selective Transfer Machine for Personalized Facial Action Unit Detection

    PubMed Central

    Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffery F.

    2014-01-01

    Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as the Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods in three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all. PMID:25242877

  8. Learning-based adaptive prescribed performance control of postcapture space robot-target combination without inertia identifications

    NASA Astrophysics Data System (ADS)

    Wei, Caisheng; Luo, Jianjun; Dai, Honghua; Bian, Zilin; Yuan, Jianping

    2018-05-01

    In this paper, a novel learning-based adaptive attitude takeover control method is investigated for the postcapture space robot-target combination with guaranteed prescribed performance in the presence of unknown inertial properties and external disturbances. First, a new static prescribed performance controller is developed to guarantee that all the involved attitude tracking errors are uniformly ultimately bounded, by quantitatively characterizing the transient and steady-state performance of the combination. Then, a learning-based supplementary adaptive strategy based on adaptive dynamic programming is introduced to improve the tracking performance of the static controller in terms of robustness and adaptiveness, utilizing only the input/output data of the combination. Compared with existing works, the prominent advantage is that the unknown inertial properties do not need to be identified in the development of the learning-based adaptive control law, which dramatically decreases the complexity and difficulty of the relevant controller design. Moreover, the transient and steady-state performance is guaranteed a priori by designer-specified performance functions without resorting to repeated tuning of the controller parameters. Finally, three groups of illustrative examples are employed to verify the effectiveness of the proposed control method.
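
    As background, prescribed-performance designs typically confine the tracking error inside a decaying envelope of the standard exponential form below (general form; the paper's designer-specified functions may differ).

```latex
% Common exponential performance envelope: the tracking error e(t) is kept
% inside a funnel that shrinks from rho_0 to the steady-state bound rho_inf.
-\delta\,\rho(t) < e(t) < \rho(t),
\qquad
\rho(t) = (\rho_0 - \rho_\infty)\, e^{-\ell t} + \rho_\infty,
\quad \rho_0 > \rho_\infty > 0,\; \ell > 0,\; 0 \le \delta \le 1
```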

  9. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, focusing on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on locally clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
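
    The detection stage can be pictured with a stock background subtractor; the sketch below uses OpenCV's MOG2 model as a generic stand-in for the paper's one-time background subtraction, with a hypothetical input file.

```python
# Hypothetical sketch: per-frame foreground mask via background subtraction,
# then candidate vehicle patches as sufficiently large connected components.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input sequence
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (127)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = [stats[i, :4]                       # x, y, w, h of each candidate
             for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] > 400]
cap.release()
```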

  10. Does the IMF vary with galaxy mass? The X-ray binary population of a key galaxy, NGC7457

    NASA Astrophysics Data System (ADS)

    Peacock, Mark

    2014-09-01

    We propose a 100 ks observation of NGC7457. The primary goal of this observation is to test for variations in the initial mass function (IMF). Many recent studies have proposed that the IMF varies systematically as a function of early-type galaxy mass. This has potentially dramatic consequences and must be confirmed. The number of LMXBs in a galaxy (per stellar luminosity) can be used to provide an independent test of this hypothesis (see Peacock et al. 2014). Unfortunately, only galaxies with intermediate to high masses currently have the data needed to perform this test. The proposed observation of the elliptical galaxy NGC7457 will detect an order of magnitude more LMXBs in a low-mass galaxy, hence providing the crucial constraint needed to significantly test for a variable IMF.

  11. Multiple opiate receptors: déjà vu all over again.

    PubMed

    Pasternak, Gavril W

    2004-01-01

    The concept of multiple opioid receptors has changed dramatically since their initial proposal by Martin nearly 40 years ago. Three major opioid receptor families have now been proposed: mu, kappa and delta. Most of the opioid analgesics used clinically selectively bind to mu opioid receptors. Yet, clinicians have long appreciated subtle, but significant, differences in their pharmacology. These observations suggested more than one mu opioid receptor mechanism of action and led us to propose multiple mu opioid receptors over 20 years ago based upon a range of pharmacological and receptor binding approaches. A mu opioid receptor, MOR-1, was cloned about a decade ago. More recent studies have now identified a number of splice variants of this clone. These splice variants may help explain the pharmacology of the mu opioids and open interesting directions for future opioid research.

  12. Real and virtual robotics in mathematics education at the school-university transition

    NASA Astrophysics Data System (ADS)

    Samuels, Peter; Haapasalo, Lenni

    2012-04-01

    LOGO and turtle graphics were an influential movement in primary school mathematics education in the 1980s and 1990s. Since then, technology has moved forward, both in terms of its sophistication and pedagogical potential; and learner experiences, preferences and ways of thinking have changed dramatically. Based on the authors' previous work and a literature review, this article revisits the subject of enhancing mathematics education through educational robotics kits and virtual robotic animations by proposing their simultaneous deployment at the school-university transition. The rationale for such an application is argued and an evaluation framework for these technologies is proposed. Two educational robotic kits and a virtual environment supporting robotic animations are evaluated both in terms of their feasibility of deployment and their educational effectiveness. Finally, the evaluation of learning experiences when deploying the proposed pedagogical approach is discussed.

  13. Chemically reduced graphene contains inherent metallic impurities present in parent natural and synthetic graphite.

    PubMed

    Ambrosi, Adriano; Chua, Chun Kiang; Khezri, Bahareh; Sofer, Zdeněk; Webster, Richard D; Pumera, Martin

    2012-08-07

    Graphene-related materials are at the forefront of nanomaterial research. One of the most common ways to prepare graphenes is to oxidize graphite (natural or synthetic) to graphite oxide and exfoliate it to graphene oxide, with consequent chemical reduction to chemically reduced graphene. Here, we show that both natural and synthetic graphite contain a large amount of metallic impurities that persist in the samples of graphite oxide after the oxidative treatment, and in chemically reduced graphene after the chemical reduction. We demonstrate that, despite a substantial elimination during the oxidative treatment of graphite samples, a significant amount of impurities associated with the chemically reduced graphene materials still remains and alters their electrochemical properties dramatically. We propose a method for the purification of graphenes based on thermal treatment at 1,000 °C in a chlorine atmosphere to reduce the effect of such impurities on the electrochemical properties. Our findings have important implications for the whole field of graphene research.

  14. Chemically reduced graphene contains inherent metallic impurities present in parent natural and synthetic graphite

    PubMed Central

    Ambrosi, Adriano; Chua, Chun Kiang; Khezri, Bahareh; Sofer, Zdeněk; Webster, Richard D.; Pumera, Martin

    2012-01-01

    Graphene-related materials are at the forefront of nanomaterial research. One of the most common ways to prepare graphenes is to oxidize graphite (natural or synthetic) to graphite oxide and exfoliate it to graphene oxide, with consequent chemical reduction to chemically reduced graphene. Here, we show that both natural and synthetic graphite contain a large amount of metallic impurities that persist in the samples of graphite oxide after the oxidative treatment, and in chemically reduced graphene after the chemical reduction. We demonstrate that, despite a substantial elimination during the oxidative treatment of graphite samples, a significant amount of impurities associated with the chemically reduced graphene materials still remains and alters their electrochemical properties dramatically. We propose a method for the purification of graphenes based on thermal treatment at 1,000 °C in a chlorine atmosphere to reduce the effect of such impurities on the electrochemical properties. Our findings have important implications for the whole field of graphene research. PMID:22826262

  15. A Novel Approach to Thermal Design of Solar Modules: Selective-Spectral and Radiative Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xingshu; Dubey, Rajiv; Chattopadhyay, Shashwata

    2016-11-21

    For commercial solar modules, up to 80% of the incoming sunlight may be dissipated as heat, potentially raising the temperature 20-30 degrees C higher than the ambient. In the long run, extreme self-heating may erode efficiency and shorten lifetime, thereby dramatically reducing the total energy output by almost ~10%. Therefore, it is critically important to develop effective and practical cooling methods to combat PV self-heating. In this paper, we explore two fundamental sources of PV self-heating, namely, sub-bandgap absorption and imperfect thermal radiation. The analysis suggests that we redesign the optical and thermal properties of the solar module to eliminate the parasitic absorption (selective-spectral cooling) and enhance the thermal emission to the cold cosmos (radiative cooling). The proposed technique should cool the module by ~10 degrees C, which would be reflected in a significant long-term energy gain (~3% to 8% over 25 years) for PV systems under different climatic conditions.

  16. Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.

    PubMed

    Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M

    2015-06-12

    In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem, which is a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution searching speed dramatically when we imposed some constraints on the fluctuations in its time series and it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
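
    For contrast with the well-known stochastic local search baseline mentioned above, a minimal WalkSAT-style loop is sketched below; the exploration parameter plays the role of the fluctuation intensity, though this toy is far simpler than AmoebaSAT-Brownian.

```python
# Hypothetical sketch of noisy stochastic local search for SAT: clauses are
# lists of signed integers (+v means variable v true, -v means false).
import random

def walksat(clauses, n_vars, exploration=0.3, max_flips=100_000):
    assign = [random.random() < 0.5 for _ in range(n_vars)]
    sat = lambda c: any(assign[abs(l) - 1] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                         # satisfying assignment found
        clause = random.choice(unsat)
        if random.random() < exploration:         # fluctuation-driven random flip
            v = abs(random.choice(clause)) - 1
        else:                                     # greedy flip: fewest clauses broken
            def broken_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return count
            v = min((abs(l) - 1 for l in clause), key=broken_after_flip)
        assign[v] = not assign[v]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```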

  17. A 7T Spine Array Based on Electric Dipole Transmitters

    PubMed Central

    Duan, Qi; Nair, Govind; Gudino, Natalia; de Zwart, Jacco A.; van Gelderen, Peter; Murphy-Boesch, Joe; Reich, Daniel S.; Duyn, Jeff H.; Merkle, Hellmut

    2015-01-01

    Purpose: In this work the feasibility of using an array of electric dipole antennas for RF transmission in spine MRI at high field is explored. Method: A 2-channel transmit array based on an electric dipole design was quantitatively optimized for 7T spine imaging and integrated with a receive array combining 8 loop coils. Using B1+ mapping, the transmit efficiency of the dipole array was compared to a design using quadrature loop pairs. The radio-frequency (RF) energy deposition for each array was measured using a home-built dielectric phantom and MR thermometry. The performance of the proposed array was qualitatively demonstrated in human studies. Results: The results indicate dramatically improved transmit efficiency for the dipole design as compared to the loop excitation. Up to 76% gain was achieved within the spinal region. Conclusion: For imaging of the spine, electric-dipole-based transmitters provide an attractive alternative to the traditional loop-based design. Easy integration with existing receive array technology facilitates practical use at high field. PMID:26190585

  18. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
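
    The simulation logic can be condensed to a beta-binomial cluster model; the sketch below follows the 67×3 design but with invented prevalence, correlation, and decision rule.

```python
# Hypothetical sketch: classification probability of a clustered LQAS design
# when cluster-level prevalence varies (beta-binomial intracluster correlation).
import numpy as np

rng = np.random.default_rng(3)
n_clusters, per_cluster, trials = 67, 3, 10_000   # the 67x3 scheme
p, icc = 0.10, 0.15                               # true GAM prevalence and ICC (invented)
decision_rule = 25                                # classify "high" if cases exceed this (invented)

# Beta parameters chosen so the intracluster correlation equals icc:
# for Beta(a, b), ICC = 1 / (a + b + 1).
a = p * (1 - icc) / icc
b = (1 - p) * (1 - icc) / icc
cluster_p = rng.beta(a, b, size=(trials, n_clusters))
cases = rng.binomial(per_cluster, cluster_p).sum(axis=1)
print("P(classified high prevalence):", np.mean(cases > decision_rule))
```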

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xingshu; Silverman, Timothy J.; Zhou, Zhiguang

    For commercial one-sun solar modules, up to 80% of the incoming sunlight may be dissipated as heat, potentially raising the temperature 20-30 °C higher than the ambient. In the long term, extreme self-heating erodes efficiency and shortens lifetime, thereby dramatically reducing the total energy output. Therefore, it is critically important to develop effective and practical (and preferably passive) cooling methods to reduce the operating temperature of photovoltaic (PV) modules. In this paper, we explore two fundamental (but often overlooked) origins of PV self-heating, namely, sub-bandgap absorption and imperfect thermal radiation. The analysis suggests that we redesign the optical properties of the solar module to eliminate parasitic absorption (selective-spectral cooling) and enhance thermal emission (radiative cooling). Comprehensive opto-electro-thermal simulation shows that the proposed techniques would cool one-sun terrestrial solar modules by up to 10 °C. As a result, this self-cooling would substantially extend the lifetime of solar modules, with a corresponding increase in energy yields and a reduced levelized cost of electricity.

  20. Influence of the model's degree of freedom on human body dynamics identification.

    PubMed

    Maita, Daichi; Venture, Gentiane

    2013-01-01

    In the fields of sports and rehabilitation, opportunities for using motion analysis of the human body have dramatically increased. To analyze the motion dynamics, a number of subject-specific parameters and measurements are required. For example, the contact force measurements and the inertial parameters of each segment of the human body are necessary to compute the joint torques. In this study, in order to perform accurate dynamic analysis, we propose to identify the inertial parameters of the human body and to evaluate the influence of the model's number of degrees of freedom (DoF) on the results. We use a method to estimate the inertial parameters without torque sensors, using the generalized coordinates of the base link, the joint angles, and external force information. We consider a 34-DoF model and a 58-DoF model, as well as the case when the human is manipulating a tool (here a tennis racket). We compare the obtained results in terms of contact force estimation.

  1. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
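The idea of representing partially measured branching ratios with Dirichlet-type distributions can be sketched as follows: the measured channel ratios are kept fixed, while the unmeasured hydrogen repartition within each channel is drawn from a flat Dirichlet. The channel values and sub-channel counts below are hypothetical, not Titan data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Measured: global branching ratios of three fragment channels whose
# heavy-atom products are resolved but whose H-atom repartition is not.
channel_br = np.array([0.55, 0.30, 0.15])

# Unmeasured: each channel splits into sub-channels that differ only in
# how hydrogen atoms are distributed among the fragments.  A flat
# Dirichlet expresses complete ignorance within each channel.
n_sub = [3, 2, 4]   # hypothetical numbers of H-repartition sub-channels

def sample_full_scheme():
    parts = [br * rng.dirichlet(np.ones(k))
             for br, k in zip(channel_br, n_sub)]
    return np.concatenate(parts)   # sums to 1 by construction

samples = np.array([sample_full_scheme() for _ in range(10_000)])
print(samples.mean(axis=0))   # mean branching ratio of each sub-channel
```

Propagating such samples through a chemistry model yields distributions, rather than point values, for the radical production rates.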

  2. A study on improvement of discharge characteristic by using a transformer in a capacitively coupled plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Cheol; Kim, Hyun-Jun; Lee, Hyo-Chang

In a plasma discharge system, the power loss in the powered line, matching network, and other transmission lines can affect discharge characteristics such as the power transfer efficiency, the voltage and current at the powered electrode, and the plasma density. In this paper, we propose a method to reduce power loss by using a step-down transformer mounted between the matching network and the powered electrode in a capacitively coupled argon plasma. This step-down transformer decreases the power loss by reducing the current flowing through the matching network and transmission line. As a result, the power transfer efficiency was increased by about 5%–10% with the step-down transformer. Moreover, the plasma density was dramatically increased compared to operation without the transformer. This can be understood by the increase in ohmic heating and the decrease in dc self-bias. By simply mounting a transformer, an improvement of discharge efficiency can be achieved in capacitively coupled plasmas.

  3. Knowledge-based probabilistic representations of branching ratios in chemical networks: the case of dissociative recombinations.

    PubMed

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.

  4. 3D And 4D Cloud Lifecycle Investigations Using Innovative Scanning Radar Analysis Methods. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    2017-04-23

With the vast upgrades to the ARM program radar measurement capabilities in 2010 and beyond, our ability to probe the 3D structure of clouds and associated precipitation has increased dramatically. This project builds on the PI's and co-I's expertise in the analysis of radar observations. The first research thrust aims to document the 3D morphological (as depicted by the radar reflectivity structure) and 3D dynamical (cloud-scale eddies) structure of boundary layer clouds. Unraveling the 3D dynamical structure of stratocumulus and shallow cumulus clouds requires decomposition of the environmental wind contribution and particle sedimentation velocity from the observed radial Doppler velocity. The second thrust proposes to unravel the mechanism of cumulus entrainment (location, scales) and its impact on microphysics utilizing radar measurements from the vertically pointing and new scanning radars at the ARM sites. The third research thrust requires the development of a cloud-tracking algorithm that monitors the properties of clouds.

  5. Signal enhancement and Patterson-search phasing for high-spatial-resolution coherent X-ray diffraction imaging of biological objects.

    PubMed

    Takayama, Yuki; Maki-Yonekura, Saori; Oroguchi, Tomotaka; Nakasako, Masayoshi; Yonekura, Koji

    2015-01-28

    In this decade coherent X-ray diffraction imaging has been demonstrated to reveal internal structures of whole biological cells and organelles. However, the spatial resolution is limited to several tens of nanometers due to the poor scattering power of biological samples. The challenge is to recover correct phase information from experimental diffraction patterns that have a low signal-to-noise ratio and unmeasurable lowest-resolution data. Here, we propose a method to extend spatial resolution by enhancing diffraction signals and by robust phasing. The weak diffraction signals from biological objects are enhanced by interference with strong waves from dispersed colloidal gold particles. The positions of the gold particles determined by Patterson analysis serve as the initial phase, and this dramatically improves reliability and convergence of image reconstruction by iterative phase retrieval. A set of calculations based on current experiments demonstrates that resolution is improved by a factor of two or more.
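The phasing strategy described here (strong reference scatterers whose positions seed the initial phase) can be mimicked in a toy error-reduction loop. The sketch below is a simplified stand-in under stated assumptions: a synthetic 2D object, a known support, and an initial phase taken directly from the gold-particle model in place of an actual Patterson search.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64

# Synthetic object: weak biological density plus a few strong gold balls.
obj = np.zeros((N, N))
obj[24:40, 24:40] = 0.1 * rng.random((16, 16))       # weak specimen
gold = np.zeros((N, N))
for y, x in [(10, 12), (50, 20), (30, 54)]:
    gold[y, x] = 5.0                                  # dispersed gold particles
intensity = np.abs(np.fft.fft2(obj + gold))**2        # measured pattern

support = (obj + gold) > 0                            # assumed known support
# Gold positions recovered from Patterson analysis provide the initial
# phase; here we use the phase of the gold-only model directly.
phase = np.angle(np.fft.fft2(gold))

g = np.real(np.fft.ifft2(np.sqrt(intensity) * np.exp(1j * phase)))
for _ in range(200):                                  # error-reduction loop
    G = np.fft.fft2(g)
    G = np.sqrt(intensity) * np.exp(1j * np.angle(G))   # Fourier constraint
    g = np.real(np.fft.ifft2(G))
    g = np.where(support & (g > 0), g, 0.0)             # support + positivity

print(np.linalg.norm(g - (obj + gold)) / np.linalg.norm(obj + gold))
```

Starting from the gold-derived phase, rather than a random one, is what stabilizes convergence in this kind of reconstruction.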

  6. Signal enhancement and Patterson-search phasing for high-spatial-resolution coherent X-ray diffraction imaging of biological objects

    PubMed Central

    Takayama, Yuki; Maki-Yonekura, Saori; Oroguchi, Tomotaka; Nakasako, Masayoshi; Yonekura, Koji

    2015-01-01

    In this decade coherent X-ray diffraction imaging has been demonstrated to reveal internal structures of whole biological cells and organelles. However, the spatial resolution is limited to several tens of nanometers due to the poor scattering power of biological samples. The challenge is to recover correct phase information from experimental diffraction patterns that have a low signal-to-noise ratio and unmeasurable lowest-resolution data. Here, we propose a method to extend spatial resolution by enhancing diffraction signals and by robust phasing. The weak diffraction signals from biological objects are enhanced by interference with strong waves from dispersed colloidal gold particles. The positions of the gold particles determined by Patterson analysis serve as the initial phase, and this dramatically improves reliability and convergence of image reconstruction by iterative phase retrieval. A set of calculations based on current experiments demonstrates that resolution is improved by a factor of two or more. PMID:25627480

  7. Electrochemical Evaluation of trans-Resveratrol Levels in Red Wine Based on the Interaction between Resveratrol and Graphene.

    PubMed

    Liu, Lantao; Zhou, Yanli; Kang, Yiyu; Huang, Haihong; Li, Congming; Xu, Maotian; Ye, Baoxian

    2017-01-01

trans-Resveratrol is often considered one of the quality standards of red wine, and the development of a sensitive and reliable method for monitoring trans-resveratrol levels in red wine is an urgent requirement for quality control. Here, a novel voltammetric approach was described for probing trans-resveratrol using a graphene-modified glassy carbon (GC) electrode. The proposed electrode was prepared by one-step electrodeposition of reduced graphene oxide (ERGO) at a GC electrode. Compared with the bare GC electrode, the graphene film introduced on the electrode surface dramatically improved the sensitivity of the sensor response due to the π–π interaction between the graphene and trans-resveratrol. The developed sensor exhibited a low detection limit of 0.2 μM, a wide linear range of 0.8–32 μM, and high stability. For the analysis of trans-resveratrol in red wine, the high anti-interference ability and the good recoveries indicated great potential for practical applications.

  8. Electrochemical Evaluation of trans-Resveratrol Levels in Red Wine Based on the Interaction between Resveratrol and Graphene

    PubMed Central

    Liu, Lantao; Kang, Yiyu; Huang, Haihong; Li, Congming; Ye, Baoxian

    2017-01-01

    trans-Resveratrol is often considered as one of the quality standards of red wine, and the development of a sensitive and reliable method for monitoring the trans-resveratrol levels in red wine is an urgent requirement for the quality control. Here, a novel voltammetric approach was described for probing trans-resveratrol using a graphene-modified glassy carbon (GC) electrode. The proposed electrode was prepared by one-step electrodeposition of reduced graphene oxide (ERGO) at a GC electrode. Compared with the bare GC electrode, the introduced graphene film on the electrode surface dramatically improved the sensitivity of the sensor response due to the π-π interaction between the graphene and trans-resveratrol. The developed sensor exhibited low detection limit of 0.2 μM with wide linear range of 0.8–32 μM and high stability. For the analysis of trans-resveratrol in red wine, the high anti-interference ability and the good recoveries indicated the great potential for practical applications. PMID:28819581

  9. Treatment of Adolescent Substance Use Disorders and Co-Occurring Internalizing Disorders: A Critical Review and Proposed Model

    PubMed Central

    Hulvershorn, Leslie A.; Quinn, Patrick D.; Scott, Eric L.

    2016-01-01

    Background The past several decades have seen dramatic growth in empirically supported treatments for adolescent substance use disorders (SUDs), yet even the most well-established approaches struggle to produce large or long-lasting improvements. These difficulties may stem, in part, from the high rates of comorbidity between SUDs and other psychiatric disorders. Method We critically reviewed the treatment outcome literature for adolescents with co-occurring SUDs and internalizing disorders. Results Our review identified components of existing treatments that might be included in an integrated, evidence-based approach to the treatment of SUDs and internalizing disorders. An effective program may involve careful assessment, inclusion of parents or guardians, and tailoring of interventions via a modular strategy. Conclusions The existing literature guides the development of a conceptual evidence-based, modular treatment model targeting adolescents with co-occurring internalizing and SUDs. With empirical study, such a model may better address treatment outcomes for both disorder types in adolescents. PMID:25973718

  10. Nonlinear dynamics of a rack-pinion-rack device powered by the Casimir force.

    PubMed

    Miri, MirFaez; Nekouie, Vahid; Golestanian, Ramin

    2010-01-01

    Using the lateral Casimir force-a manifestation of the quantum fluctuations of the electromagnetic field between objects with corrugated surfaces-as the main force transduction mechanism, a nanomechanical device with rich dynamical behaviors is proposed. The device is made of two parallel racks that are moving in the same direction and a pinion in the middle that couples with both racks via the noncontact lateral Casimir force. The built-in frustration in the device causes it to be very sensitive and react dramatically to minute changes in the geometrical parameters and initial conditions of the system. The noncontact nature of the proposed device could help with the ubiquitous wear problem in nanoscale mechanical systems.

  11. Unbiased methods for removing systematics from galaxy clustering measurements

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.

    2016-02-01

    Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
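A pixel-domain toy version of basic mode projection, the estimator the authors prove to be unbiased, is sketched below. Real analyses operate on angular power spectra and include the variance corrections discussed in the paper; the map, templates, and contamination amplitudes here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_templates = 10_000, 3

signal = rng.normal(size=n_pix)                   # true galaxy overdensity
T = rng.normal(size=(n_pix, n_templates))         # systematics template maps
alpha = np.array([0.5, -0.2, 0.1])                # unknown contamination
data = signal + T @ alpha + 0.1 * rng.normal(size=n_pix)

# Basic mode projection: remove from the data everything that lives in
# the span of the templates, i.e. apply P = I - T (T^T T)^-1 T^T.
coeffs, *_ = np.linalg.lstsq(T, data, rcond=None)
cleaned = data - T @ coeffs

print(np.corrcoef(cleaned, signal)[0, 1])   # close to 1: signal retained
print(np.corrcoef(cleaned, T[:, 0])[0, 1])  # close to 0: template removed
```

The cleaning inevitably removes the (small) part of the signal that happens to lie in the template span, which is the source of the increased estimator variance the paper quantifies.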

  12. Hubless satellite communications networks

    NASA Technical Reports Server (NTRS)

    Robinson, Peter Alan

    1994-01-01

    Frequency Comb Multiple Access (FCMA) is a new combined modulation and multiple access method which will allow cheap hubless Very Small Aperture Terminal (VSAT) networks to be constructed. Theoretical results show bandwidth efficiency and power efficiency improvements over other modulation and multiple access methods. Costs of the VSAT network are reduced dramatically since a hub station is not required.

  13. Changing Images of the Inclined Plane: A Case Study of a Revolution in American Science Education

    ERIC Educational Resources Information Center

    Turner, Steven C.

    2012-01-01

    Between 1880 and 1920 the way science was taught in American High Schools changed dramatically. The old "lecture/demonstration" method, where information was presented to essentially passive students, was replaced by the "laboratory" method, where students performed their own experiments in specially constructed student laboratories. National…

  14. Student Diversity Requires Different Approaches to College Teaching, Even in Math and Science.

    ERIC Educational Resources Information Center

    Nelson, Craig E.

    1996-01-01

    Asserts that traditional teaching methods are unintentionally biased towards the elite and against many non-traditional students. Outlines several easily accessible changes in teaching methods that have fostered dramatic changes in student performance with no change in standards. These approaches have proven effective even in the fields of…

  15. Rainstorms able to induce flash floods in a Mediterranean-climate region (Calabria, southern Italy)

    NASA Astrophysics Data System (ADS)

    Terranova, O. G.; Gariano, S. L.

    2014-03-01

Heavy rainstorms often induce flash flooding, one of the natural disasters most responsible for damage to man-made infrastructure and loss of lives, adversely affecting also the opportunities for socio-economic development of Mediterranean countries. The frequently dramatic damage caused by flash floods is often detected with sufficient accuracy by post-event surveys, but the rainfall causing it is still only roughly characterized. With the aim of improving the understanding of the temporal structure and spatial distribution of heavy rainstorms in the Mediterranean context, a statistical analysis was carried out in Calabria (southern Italy) concerning rainstorms that mainly induced flash floods, but also shallow landslides and debris flows. Thus, a method is proposed, based on the exceedance of heuristically predetermined threshold values of cumulated rainfall, maximum intensity, and kinetic energy of the rainfall event, to select and characterize the rainstorms able to induce flash floods in Mediterranean-climate countries. The obtained (heavy) rainstorms were then automatically classified and studied according to their structure in time, localization, and extension. Rainfall-runoff watershed models can consequently benefit from the enhanced identification of design storms, with a realistic time structure integrated with the results of the spatial analysis. A survey of flash flood events recorded in the last decades provides a preliminary validation of the method proposed to identify the heavy rainstorms and synthetically describe their characteristics. The notable size of the employed sample, including data with a very detailed time resolution from several rain gauges well distributed throughout the region, gives robustness to the obtained results.
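The selection rule is a simple conjunction of threshold exceedances, as in this sketch; the threshold values are illustrative placeholders, not the calibrated Calabrian ones.

```python
# Candidate rainfall events: (cumulative rainfall E [mm],
# maximum intensity I [mm/h], kinetic energy KE [MJ/ha]).
events = [
    {"id": "A", "E": 180.0, "I": 45.0, "KE": 4.2},
    {"id": "B", "E":  60.0, "I": 12.0, "KE": 0.9},
    {"id": "C", "E": 220.0, "I": 80.0, "KE": 6.8},
]

# Heuristically predetermined thresholds (illustrative values only):
E_MIN, I_MIN, KE_MIN = 100.0, 30.0, 2.0

heavy = [e for e in events
         if e["E"] >= E_MIN and e["I"] >= I_MIN and e["KE"] >= KE_MIN]
print([e["id"] for e in heavy])   # -> ['A', 'C']
```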

  16. Rainstorms able to induce flash floods in a Mediterranean-climate region (Calabria, southern Italy)

    NASA Astrophysics Data System (ADS)

    Terranova, O. G.; Gariano, S. L.

    2014-09-01

Heavy rainstorms often induce flash flooding, one of the natural disasters most responsible for damage to man-made infrastructure and loss of lives, also adversely affecting the opportunities for socio-economic development of Mediterranean countries. The frequently dramatic damage caused by flash floods is often detected, with sufficient accuracy, by post-event surveys, but the rainfall causing it is still only roughly characterized. With the aim of improving the understanding of the temporal structure and spatial distribution of heavy rainstorms in the Mediterranean context, a statistical analysis was carried out in Calabria (southern Italy) concerning rainstorms that mainly induced flash floods, but also shallow landslides and debris flows. Thus, a method is proposed, based on the exceedance of heuristically predetermined threshold values of cumulated rainfall, maximum intensity, and kinetic energy of the rainfall event, to select and characterize the rainstorms able to induce flash floods in Mediterranean-climate countries. The obtained (heavy) rainstorms were then automatically classified and studied according to their structure in time, localization, and extension. Rainfall-runoff watershed models can consequently benefit from the enhanced identification of design storms, with a realistic time structure integrated with the results of the spatial analysis. A survey of flash flood events recorded in the last decades provides a preliminary validation of the method proposed to identify the heavy rainstorms and synthetically describe their characteristics. The notable size of the employed sample, including data with a very detailed time resolution from several rain gauges well distributed throughout the region, gives robustness to the obtained results.

  17. A method for accelerated trait conversion in plant breeding.

    PubMed

    Lewis, Ramsey S; Kernodle, S P

    2009-05-01

    Backcrossing is often used in cultivar development to transfer one or a few genes to desired genetic backgrounds. The duration necessary to complete such 'trait conversions' is largely dependent upon generation times. Constitutive overexpression of the Arabidopsis thaliana gene FT (FLOWERING LOCUS T) induces early-flowering in many plants. Here, we used tobacco (Nicotiana tabacum L.) as a model system to propose and examine aspects of a modified backcross procedure where transgenic FT overexpression is used to reduce generation time and accelerate gene transfer. In this method, the breeder would select for an FT transgene insertion and the trait(s) of interest at each backcross generation except the last. In the final generation, selection would be conducted for the trait(s) of interest, but against FT, to generate the backcross-derived trait conversion. We demonstrate here that constitutive FT overexpression functions to dramatically reduce days-to-flower similarly in diverse tobacco genetic backgrounds. FT-containing plants flowered in an average of 39 days, in comparison with 87-138 days for non-FT plants. Two FT transgene insertions were found to segregate independently of several disease resistance genes often the focus of backcrossing in tobacco. In addition, no undesirable epigenetic effects on flowering time were observed once FT was segregated away. The proposed system would reduce the time required to complete a trait conversion in tobacco by nearly one-half. These features suggest the possible value of this modified backcrossing system for tobacco or other crop species where long generation times or photoperiod sensitivity may impede timely trait conversion.

  18. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
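The computational payoff comes from applying the factors in sequence instead of forming the full system matrix. A minimal sketch with synthetic sparse factors (the sizes and densities are arbitrary assumptions):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(5)
n_img, n_sino = 4096, 8192

# Factored system matrix A ~ B_sino @ P @ B_img, each factor sparse.
B_img  = sp.random(n_img,  n_img,  density=1e-3, random_state=0, format="csr")
P      = sp.random(n_sino, n_img,  density=1e-3, random_state=1, format="csr")
B_sino = sp.random(n_sino, n_sino, density=1e-3, random_state=2, format="csr")

x = rng.normal(size=n_img)           # image estimate

# Forward projection as three sparse mat-vecs; the dense equivalent
# A @ x would require forming an n_sino x n_img matrix explicitly.
y = B_sino @ (P @ (B_img @ x))

# Back projection uses the transposes in reverse order.
x_back = B_img.T @ (P.T @ (B_sino.T @ y))
print(y.shape, x_back.shape)
```

The same three-step structure maps naturally onto GPU kernels, since each factor fits in device memory even when the full product would not.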

  19. Date canning: a new approach for the long time preservation of date.

    PubMed

    Homayouni, Aziz; Azizi, Aslan; Keshtiban, Ata Khodavirdivand; Amini, Amir; Eslami, Ahad

    2015-04-01

Dramatic growth in date (Phoenix dactylifera L.) production makes it necessary to apply proper methods for preserving this nutritious fruit over long periods. Numerous methods have been used to achieve this goal in recent years, which can be classified into non-thermal (fumigation, ozonation, irradiation, and packaging) and thermal (heat treatment, cold storage, dehydration, jam, etc.) processing methods. In this paper these methods were reviewed and novel methods for date preservation were presented.

  20. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
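The convection-record idea admits a compact numerical sketch: integrate the instantaneous speed to map sample times to streamwise positions, resample onto a uniform spatial grid, and take the FFT to obtain a wavenumber spectrum directly. The synthetic velocity record below is an assumption standing in for laser Doppler data.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, T = 10_000.0, 10.0                    # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)

# Synthetic high-intensity velocity record: mean convection + fluctuations.
u = 1.0 + 0.8 * np.cumsum(rng.normal(size=t.size)) / np.sqrt(t.size)

# Convert the time record to a spatial record: each sample convects a
# distance ds = |u| dt past the probe, so s(t) = integral of |u| dt.
s = np.concatenate(([0.0], np.cumsum(np.abs(u[:-1]) * (1 / fs))))

# Resample the velocity onto a uniform spatial grid.
ds = s[-1] / s.size
s_uniform = np.arange(0, s[-1], ds)
u_spatial = np.interp(s_uniform, s, u)

# Wavenumber spectrum of the fluctuations, no Taylor's hypothesis needed.
fluct = u_spatial - u_spatial.mean()
E_k = np.abs(np.fft.rfft(fluct))**2
k = 2 * np.pi * np.fft.rfftfreq(fluct.size, d=ds)
print(k[1], E_k[1])
```

Because the mapping uses the measured instantaneous speed rather than a fixed convection velocity, it remains meaningful even where the mean convection velocity is negligible, the regime the paper targets.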

  1. Discovery of aminofurazan-azabenzimidazoles as inhibitors of Rho-kinase with high kinase selectivity and antihypertensive activity.

    PubMed

    Stavenger, Robert A; Cui, Haifeng; Dowdell, Sarah E; Franz, Robert G; Gaitanopoulos, Dimitri E; Goodman, Krista B; Hilfiker, Mark A; Ivy, Robert L; Leber, Jack D; Marino, Joseph P; Oh, Hye-Ja; Viet, Andrew Q; Xu, Weiwei; Ye, Guosen; Zhang, Daohua; Zhao, Yongdong; Jolivette, Larry J; Head, Martha S; Semus, Simon F; Elkins, Patricia A; Kirkpatrick, Robert B; Dul, Edward; Khandekar, Sanjay S; Yi, Tracey; Jung, David K; Wright, Lois L; Smith, Gary K; Behm, David J; Doe, Christopher P; Bentley, Ross; Chen, Zunxuan X; Hu, Erding; Lee, Dennis

    2007-01-11

    The discovery, proposed binding mode, and optimization of a novel class of Rho-kinase inhibitors are presented. Appropriate substitution on the 6-position of the azabenzimidazole core provided subnanomolar enzyme potency in vitro while dramatically improving selectivity over a panel of other kinases. Pharmacokinetic data was obtained for the most potent and selective examples and one (6n) has been shown to lower blood pressure in a rat model of hypertension.

  2. Drama Education in New Zealand: A Coming of Age? A Conceptualisation of the Development and Practice of Drama in the Curriculum as a Structured Improvisation, with New Zealand's Experience as a Case Study

    ERIC Educational Resources Information Center

    Greenwood, Janinka

    2009-01-01

    I propose a conceptualisation of drama in school education as improvisation within a framework that has a number of fixed but changing structures. I examine how the "drama in schooling" practice of one country, New Zealand, might be seen as a group improvisation in which, through dramatic negotiation, participants evolve their goals,…

  3. Skirt clouds associated with the Soufriere eruption of 17 April 1979.

    PubMed

    Barr, S

    1982-06-04

    A fortuitous and dramatic photograph of the Soufriere eruption column of 17 April 1979 displays a series of highly structured skirt clouds. The gentle distortion of thin, quasi-horizontal layers of moist air has been documented in meteorological situations. It is proposed that at St. Vincent subhorizontal layers of moist air were intensely deformed by the rapidly rising eruption column and were carried to higher altitudes, where they condensed to form the skirt clouds.

  4. The Role of the Omental Microenvironment in Ovarian Cancer Metastatic Colonization

    DTIC Science & Technology

    2010-08-01

Cancer cells which have adhered to the surface of the omentum stain positively for this proliferative marker, confirming their viability under our ex... 6 mice (Figure 1). Due to technical difficulties, we had to change some of the proposed immune cell markers for the FACS analysis. We are currently... for F4/80 (macrophage cell marker) was even more dramatic. Representative data from the macrophage localization in milky spots after i.p. injection of

  5. The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation.

    PubMed

    Ritschl, Ludwig; Kuntz, Jan; Fleischmann, Christof; Kachelrieß, Marc

    2016-05-01

In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT-like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan, followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. The proposed trajectory leads to reconstructions without limited-angle artifacts. Compared to the limited-angle reconstructions from 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited-angle scan. The method proposed here enables 3D imaging using C-arms with less than 180° rotation range, adding full 3D functionality to a C-arm device while retaining the handling comfort and usability of 2D imaging. This method has clear potential for clinical use, especially to meet the increasing demand for intraoperative 3D imaging.

  6. A novel method for surface defect inspection of optic cable with short-wave infrared illuminance

    NASA Astrophysics Data System (ADS)

    Chen, Xiaohong; Liu, Ning; You, Bo; Xiao, Bin

    2016-07-01

Intelligent on-line detection of cable quality is a crucial issue in optic cable factories, and defects on the surface of an optic cable can dramatically depress the cable grade. Manual inspection of optic cable quality cannot catch up with the development of the optic cable industry due to its low detection efficiency and huge labor cost. Therefore, real-time automated inspection is highly demanded by industry in order to replace the subjective and repetitive process of manual inspection. For this reason, automatic cable defect inspection has become a trend. In this paper, a novel method for surface defect inspection of optic cable with short-wave infrared illuminance is presented. The special condition of short-wave infrared can not only provide illumination compensation for weak illumination environments, but also avoid the exposure problem that arises with visible light illuminance and affects the accuracy of the inspection algorithm. A series of image processing algorithms are set up to analyze the cable image for the verification of the real-time performance and veracity of the detection method. Unlike some existing detection algorithms which concentrate on the characteristics of defects with an active search, the proposed method passively removes the non-defective areas of the image during image processing, which reduces a large amount of computation. The OTSU algorithm is used to convert the gray image to a binary image. Furthermore, a threshold window is designed to eliminate fake defects, where the threshold ε represents the minimum defect size considered. Besides, a new regional suppression method is proposed to deal with the edge burrs of the cable, which shows superior performance compared with the opening-closing operation of mathematical morphology in boundary processing. Experimental results on 10,000 samples show that the rates of missed detection and false detection are 2.35% and 0.78%, respectively, when ε equals 0.5 mm, and the average processing period for one frame image is 2.39 ms. All the improvements have been verified in the paper to show the ability of our inspection method for optic cable.
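A minimal sketch of the thresholding-plus-size-filter stage using OpenCV: Otsu's method binarizes the gray image, and connected components smaller than the ε window are discarded as fake defects. The image, pixel scale, and defect are synthetic assumptions, and the paper's regional suppression step is not reproduced here.

```python
import cv2
import numpy as np

def inspect(gray, mm_per_px=0.05, eps_mm=0.5):
    """Flag surface defects larger than eps_mm on a cable image."""
    # Otsu's method picks the global threshold from the gray histogram.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Connected components; drop blobs smaller than the eps window,
    # which removes fake defects such as residual edge burrs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    min_area = (eps_mm / mm_per_px) ** 2
    return [i for i in range(1, n)                 # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

gray = np.full((200, 600), 180, np.uint8)          # bright cable surface
gray[90:110, 300:330] = 40                          # dark synthetic defect
print(inspect(gray))                                # -> one defect label
```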

  7. A Move Towards Postmethod Pedagogy in the Iranian EFL Context: Panacea or More Pain?

    ERIC Educational Resources Information Center

    Safari, Parvin; Rashidi, Nasser

    2015-01-01

    Kumaravadivelu's (2003) introduction and development of "postmethod" led to the demise of methods and a dramatic change in the language teaching profession. In fact, it can be claimed that the arrival of postmethod pedagogy in language teaching might be the reason for the abandonment and replacement of method by context sensitive,…

  8. Geothermal Technology: A Smart Way to Lower Energy Bills

    ERIC Educational Resources Information Center

    Calahan, Scott

    2007-01-01

    Heating costs for both natural gas and oil have risen dramatically in recent years--and will likely continue to do so. Consequently, it is important that students learn not only about traditional heating technology, but also about the alternative methods that will surely grow in use in the coming years. One such method is geothermal. In this…

  9. A Spiral Step-by-Step Educational Method for Cultivating Competent Embedded System Engineers to Meet Industry Demands

    ERIC Educational Resources Information Center

Jing, Lei; Cheng, Zixue; Wang, Junbo; Zhou, Yinghui

    2011-01-01

    Embedded system technologies are undergoing dramatic change. Competent embedded system engineers are becoming a scarce resource in the industry. Given this, universities should revise their specialist education to meet industry demands. In this paper, a spirally tight-coupled step-by-step educational method, based on an analysis of industry…

  10. A multilevel reuse system with source separation process for printing and dyeing wastewater treatment: A case study.

    PubMed

    Wang, Rui; Jin, Xin; Wang, Ziyuan; Gu, Wantao; Wei, Zhechao; Huang, Yuanjie; Qiu, Zhuang; Jin, Pengkang

    2018-01-01

    This paper proposes a new system of multilevel reuse with source separation in printing and dyeing wastewater (PDWW) treatment in order to dramatically improve the water reuse rate to 35%. By analysing the characteristics of the sources and concentrations of pollutants produced in different printing and dyeing processes, special, highly, and less contaminated wastewaters (SCW, HCW, and LCW, respectively) were collected and treated separately. Specially, a large quantity of LCW was sequentially reused at multiple levels to meet the water quality requirements for different production processes. Based on this concept, a multilevel reuse system with a source separation process was established in a typical printing and dyeing enterprise. The water reuse rate increased dramatically to 62%, and the reclaimed water was reused in different printing and dyeing processes based on the water quality. This study provides promising leads in water management for wastewater reclamation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    PubMed

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling. Errors in parameter estimation then result. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
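The statistical power feedback idea can be sketched by scanning the sampling delay over a pulse period and keeping the delay that maximizes the mean power of the sampled data. The pulse shape, rates, and noise below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, f_rep = 1e9, 1e6                      # ADC rate and pulse repetition rate
n_per = int(fs / f_rep)                   # samples per pulse period
t = np.arange(n_per) / fs

pulse = np.exp(-((t - 300e-9) / 40e-9) ** 2)         # pulse peak at 300 ns
frames = (pulse * rng.uniform(0.9, 1.1, (2000, 1))
          + 0.01 * rng.normal(size=(2000, n_per)))   # noisy pulse train

# Statistical power feedback: scan the sampling delay and keep the one
# that maximizes the mean power of the sampled points.
powers = (frames ** 2).mean(axis=0)
best_delay = powers.argmax() / fs
print(f"optimal sampling delay: {best_delay * 1e9:.0f} ns")   # ~300 ns
```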

  12. Disruptive Influences on Research in Academic Pathology Departments: Proposed Changes to the Common Rule Governing Informed Consent for Research Use of Biospecimens and to Rules Governing Return of Research Results.

    PubMed

    Sobel, Mark E; Dreyfus, Jennifer C

    2017-01-01

    Academic pathology departments will be dramatically affected by proposed United States federal government regulatory initiatives. Pathology research will be substantially altered if proposed changes to the Common Rule (Code of Federal Regulations: Protection of Human Subjects title 45 CFR 46) and regulations governing the return of individual research results are approved and finalized, even more so now that the Precision Medicine initiative has been launched. Together, these changes are disruptive influences on academic pathology research as we know it, straining limited resources and compromising advances in diagnostic and academic pathology. Academic research pathologists will be challenged over the coming years and must demonstrate leadership to ensure the continued availability of and the ethical use of research pathology specimens. Copyright © 2017 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.

  13. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
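The paper's accelerated variants replace the FISTA gradient step with an OS-SART subproblem; for reference, a plain FISTA for an l1-regularized least-squares problem looks as follows. This is a generic sketch on synthetic data, not the authors' CBCT implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    tk = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2))
        y = x_new + ((tk - 1.0) / t_new) * (x_new - x)             # momentum step
        x, tk = x_new, t_new
    return x

A = rng.normal(size=(200, 400))
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=200)
print(np.linalg.norm(fista(A, b, lam=0.1) - x_true))
```

The momentum sequence `t_k` is what lifts the convergence rate from O(1/k) to O(1/k^2) relative to plain iterative shrinkage.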

  14. Assessment of Orbital-Optimized MP2.5 for Thermochemistry and Kinetics: Dramatic Failures of Standard Perturbation Theory Approaches for Aromatic Bond Dissociation Energies and Barrier Heights of Radical Reactions.

    PubMed

    Soydaş, Emine; Bozkaya, Uğur

    2015-04-14

An assessment of orbital-optimized MP2.5 (OMP2.5) [ Bozkaya, U.; Sherrill, C. D. J. Chem. Phys. 2014, 141, 204105 ] for thermochemistry and kinetics is presented. The OMP2.5 method is applied to closed- and open-shell reaction energies, barrier heights, and aromatic bond dissociation energies. The performance of OMP2.5 is compared with that of the MP2, OMP2, MP2.5, MP3, OMP3, CCSD, and CCSD(T) methods. For most of the test sets, the OMP2.5 method performs better than MP2.5 and CCSD, and provides accurate results. For barrier heights of radical reactions and aromatic bond dissociation energies, the OMP2.5-MP2.5, OMP2-MP2, and OMP3-MP3 differences become obvious. Especially for aromatic bond dissociation energies, standard perturbation theory (MP) approaches dramatically fail, providing mean absolute errors (MAEs) of 22.5 (MP2), 17.7 (MP2.5), and 12.8 (MP3) kcal mol⁻¹, while the MAE values of the orbital-optimized counterparts are 2.7, 2.4, and 2.4 kcal mol⁻¹, respectively. Hence, there are 5-8-fold reductions in errors when optimized orbitals are employed. Our results demonstrate that standard MP approaches dramatically fail when the reference wave function suffers from the spin-contamination problem. On the other hand, the OMP2.5 method can reduce spin contamination in the unrestricted Hartree-Fock (UHF) initial guess orbitals. For an overall evaluation, we conclude that the OMP2.5 method is very helpful not only for challenging open-shell systems and transition states but also for closed-shell molecules. Hence, one may prefer OMP2.5 over MP2.5 and CCSD as an O(N⁶) method, where N is the number of basis functions, for thermochemistry and kinetics. The cost of the OMP2.5 method is comparable with that of CCSD for energy computations. However, for analytic gradient computations, the OMP2.5 method is only half as expensive as CCSD.

  15. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  16. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Effect of Annealing Conditions on Properties of Sol-Gel Derived Al-Doped ZnO Thin Films

    NASA Astrophysics Data System (ADS)

    Gao, Mei-Zhen; Zhang, Feng; Liu, Jing; Sun, Hui-Na

    2009-08-01

Transparent conductive Al-doped ZnO (AZO) thin films are prepared on normal glass substrates by the sol-gel spin coating method. The effects of drying conditions, annealing temperature, and cooling rate on the structural, electrical, and optical properties of AZO films are investigated by x-ray diffraction, scanning electron microscopy, the four-point probe method, and UV-VIS spectrophotometry, respectively. The deposited films show a hexagonal wurtzite structure and high preferential c-axis orientation. As the drying temperature increases from 100°C to 300°C, the resistivity of the AZO films decreases dramatically. In contrast to the annealed films cooled in a furnace or in air, the resistivity of the annealed film cooled at -15°C is greatly reduced. Increasing the cooling rate dramatically increases the electrical conductivity of AZO films.

  17. Evaluation of the significance of abrupt changes in precipitation and runoff process in China

    NASA Astrophysics Data System (ADS)

    Xie, Ping; Wu, Ziyi; Sang, Yan-Fang; Gu, Haiting; Zhao, Yuxi; Singh, Vijay P.

    2018-05-01

Abrupt changes are an important manifestation of hydrological variability. How to accurately detect abrupt changes in hydrological time series and evaluate their significance is an important issue, but methods for dealing with them effectively are lacking. In this study, we propose an approach to evaluate the significance of abrupt changes in time series at five levels: no, weak, moderate, strong, and dramatic. The approach is based on a correlation coefficient index calculated between the original time series and its abrupt-change component. A larger correlation coefficient reflects a higher significance level of the abrupt change. Results of Monte-Carlo experiments verified the reliability of the proposed approach, and also indicated the great influence of the statistical characteristics of a time series on the significance level of its abrupt changes. The approach was derived from the relationship between the correlation coefficient index and abrupt change, and can estimate and grade the significance levels of abrupt changes in hydrological time series. Application of the proposed approach to ten major watersheds in China showed that abrupt changes mainly occurred in five watersheds in northern China, which have arid or semi-arid climates and severe shortages of water resources. Runoff processes in northern China were more sensitive to precipitation change than those in southern China. Although annual precipitation and surface water resources amount (SWRA) exhibited a harmonious relationship in most watersheds, abrupt changes in the latter were more significant. Compared with abrupt changes in annual precipitation, human activities contributed much more to the abrupt changes in the corresponding SWRA, except for the Northwest Inland River watershed.
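The index can be illustrated directly: build the abrupt-change (step) component from the sub-series means on either side of the change point, correlate it with the original series, and grade the result. The five grade labels come from the abstract; the numerical cut points below are hypothetical placeholders.

```python
import numpy as np

def abrupt_change_significance(x, tau):
    """Correlation between a series and its step (abrupt-change)
    component, graded into five illustrative significance levels."""
    step = np.where(np.arange(x.size) < tau, x[:tau].mean(), x[tau:].mean())
    r = abs(np.corrcoef(x, step)[0, 1])
    levels = [(0.2, "no"), (0.4, "weak"), (0.6, "moderate"),
              (0.8, "strong"), (1.01, "dramatic")]   # hypothetical cut points
    grade = next(name for cut, name in levels if r < cut)
    return r, grade

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(10, 1, 40), rng.normal(14, 1, 40)])
print(abrupt_change_significance(x, tau=40))   # a clear mean shift
```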

  18. Color management in the real world: sRGB, ICM2, ICC, ColorSync, and other attempts to make color management transparent

    NASA Astrophysics Data System (ADS)

    Stokes, Michael

    1998-07-01

    A uniformly adopted color standards infrastructure has a dramatic impact on any color imaging industry and technology. This presentation begins by framing the current color standards situation in a historical context. A series of similar appearing infrastructure adoptions in color publishing during the last fifty years are reviewed and compared to the current events. This historical review is followed by brief technical, business and marketing reviews of two of the more popular recent color standards proposals, sRGB and ICC, along with their operating system implementations in the Microsoft and Apple operating systems. The paper concludes with a summary of Hewlett- Packard Company's and Microsoft's proposed future direction.

  19. Coherent Pound-Drever-Hall technique for high resolution fiber optic strain sensor at very low light power

    NASA Astrophysics Data System (ADS)

    Wu, Mengxin; Liu, Qingwen; Chen, Jiageng; He, Zuyuan

    2017-04-01

The Pound-Drever-Hall (PDH) technique has been widely adopted for ultrahigh-resolution fiber-optic sensors, but its performance degenerates seriously as the light power drops. To solve this problem, we developed a coherent PDH technique for weak optical signal detection, with which the signal-to-noise ratio (SNR) of the demodulated PDH signal is dramatically improved. In the demonstrational experiments, a high-resolution fiber-optic sensor using the proposed technique is realized, and nanostrain-order strain resolution at a light power down to -43 dBm is achieved, which is about 15 dB lower compared with the classical PDH technique. The proposed coherent PDH technique has great potential in longer-distance and larger-scale sensor networks.

  20. Reduced dimensionality (3,2)D NMR experiments and their automated analysis: implications to high-throughput structural studies on proteins.

    PubMed

    Reddy, Jithender G; Kumar, Dinesh; Hosur, Ramakrishna V

    2015-02-01

Protein NMR spectroscopy has expanded dramatically over the last decade into a powerful tool for the study of protein structure, dynamics, and interactions. The primary requirement for all such investigations is sequence-specific resonance assignment. The demand now is to obtain this information as rapidly as possible and in all types of protein systems: stable/unstable, soluble/insoluble, small/big, structured/unstructured, and so on. In this context, we introduce here two reduced dimensionality experiments – (3,2)D-hNCOcanH and (3,2)D-hNcoCAnH – which enhance the previously described 2D NMR-based assignment methods quite significantly. Both experiments can be recorded in just about 2-3 h each and hence would be of immense value for high-throughput structural proteomics and drug discovery research. The applicability of the method has been demonstrated using the alpha-helical bovine apo calbindin-D9k P43M mutant (75 aa) protein. Automated assignment of these data using AUTOBA has been presented, which enhances the utility of these experiments. The backbone resonance assignments so derived are utilized to estimate secondary structures and the backbone fold using Web-based algorithms. Taken together, we believe that the method and the protocol proposed here can be used for routine high-throughput structural studies of proteins. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Ultrasonic Fingerprinting of Structural Materials: Spent Nuclear Fuel Containers Case-Study

    NASA Astrophysics Data System (ADS)

    Sednev, D.; Lider, A.; Demyanuk, D.; Kroening, M.; Salchak, Y.

Nowadays, NDT is mainly focused on safety purposes, but it also seems possible to apply these methods to support national and IAEA safeguards. The containment of spent fuel in storage casks could be dramatically improved by developing so-called "smart" spent fuel storage and transfer casks. Such casks would have tamper-indicating and monitoring/tracking features integrated directly into the cask design. The microstructure of the container material, as well as that of the dedicated weld seam joining the lid to the cask body, provides a unique fingerprint of the full container, which can be reproducibly scanned using an appropriate technique. The echo-sounder technique, the most commonly used method for material inspection, was chosen for this project. The main measured parameter is the acoustic noise reflected from artefacts in the material. The purpose is to obtain a structural fingerprint. Reference measurements and additional measurement results were compared. The obtained results verify the applicability of the structural fingerprint and the chosen inspection method. Successful authentication is demonstrated when the feature-point compliance level exceeds a given threshold, a level that differs considerably from the percentage of concurrent points obtained when authenticating against other containers. Since reproduction or duplication of the proposed unique identification characteristics is impossible at the current state of science and technology, this technique is considered able to identify interference with nuclear material movements with high accuracy.

  2. Electric field measurement in microwave discharge ion thruster with electro-optic probe.

    PubMed

    Ise, Toshiyuki; Tsukizaki, Ryudo; Togo, Hiroyoshi; Koizumi, Hiroyuki; Kuninaka, Hitoshi

    2012-12-01

In order to understand the internal phenomena in a microwave discharge ion thruster, it is important to measure the distribution of the microwave electric field inside the discharge chamber, which is directly related to the plasma production. In this study, we proposed a novel method of measuring a microwave electric field with an electro-optic (EO) probe based on the Pockels effect. The probe, including a cooling system, contains no metal and can be inserted into the discharge chamber with little disruption to the microwave distribution. This method enables measurement of the electric field profile under ion beam acceleration. We first verified the measurement with the EO probe by a comparison with a finite-difference time-domain numerical simulation of the microwave electric field in atmosphere. Second, we showed that the deviations of the reflected microwave power and the beam current caused by inserting the EO probe into the ion thruster under ion beam acceleration were less than 8%. Finally, we successfully demonstrated the measurement of the electric-field profile in the ion thruster under ion beam acceleration. These measurements show that the electric field distribution in the thruster changes dramatically under ion beam acceleration as the propellant mass flow rate increases. These results indicate that this new method using an EO probe can provide a useful guide for improving the propulsion of microwave discharge ion thrusters.

  3. Phase Transitions in a Model for Social Learning via the Internet

    NASA Astrophysics Data System (ADS)

    Bordogna, Clelia M.; Albano, Ezequiel V.

Based on concepts from educational psychology, sociology, and statistical physics, a mathematical model for a new type of social learning process that takes place when individuals interact via the Internet is proposed and studied. The noise of the interaction (misunderstandings, lack of well-organized participative activities, etc.) dramatically restricts the number of individuals that can be efficiently in mutual contact and drives phase transitions between "ordered states", in which the achievements of the individuals are satisfactory, and "disordered states" with negligible achievements.

  4. Pricing foreign equity option with stochastic volatility

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Xu, Weidong

    2015-11-01

In this paper we propose a general foreign equity option pricing framework that unifies the vast foreign equity option pricing literature and incorporates stochastic volatility into foreign equity option pricing. Under our framework, time-changed Lévy processes are used to model the price of the underlying assets of foreign equity options, and the closed-form pricing formula is obtained through the use of the characteristic function methodology. Numerical tests indicate that stochastic volatility has a dramatic effect on foreign equity option prices.

  5. a Landsat Time-Series Stacks Model for Detection of Cropland Change

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, J.; Zhang, J.

    2017-09-01

    Global, timely, accurate and cost-effective cropland monitoring with a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential for describing land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal differences, and are therefore likely to generate pseudo-changes. Here, we introduced and tested LTSM (Landsat time-series stacks model), an improvement of the previously proposed Continuous Change Detection and Classification (CCDC) approach that extracts spectral trajectories of land surface change from dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo-changes caused by phenology driven by seasonal patterns. The main idea of the method is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM then defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicated that the LTSM method correctly detected the "true changes" without overestimating the "false" ones, while CVA flagged "true change" pixels together with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37 %, with a kappa coefficient of 0.676.
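    As a concrete illustration of the per-pixel harmonic fit, the sketch below estimates a harmonic model for one pixel's reflectance series by least squares and flags a new observation as change when it departs from the prediction by more than a multiple of the residual RMSE. The design matrix, the change rule, and all values are simplifying assumptions; the paper's exact parameterization and thresholds may differ.

```python
import numpy as np

def harmonic_design(doy, period=365.25):
    """Design matrix with annual and semiannual harmonic terms (one reading
    of the 'two-term harmonic function'; the paper's form may differ)."""
    w = 2 * np.pi * np.asarray(doy, float) / period
    return np.column_stack([np.ones_like(w),
                            np.cos(w), np.sin(w),
                            np.cos(2 * w), np.sin(2 * w)])

def detect_change(doy, reflectance, doy_new, obs_new, k=3.0):
    """Fit the harmonic model to one pixel's series in one band; flag a new
    observation as change if it departs from the prediction by > k * RMSE."""
    X = harmonic_design(doy)
    coef, *_ = np.linalg.lstsq(X, np.asarray(reflectance, float), rcond=None)
    rmse = np.sqrt(np.mean((X @ coef - reflectance) ** 2))
    pred = harmonic_design([doy_new]) @ coef
    return abs(obs_new - pred[0]) > k * rmse

# Synthetic seasonal pixel: only a genuinely anomalous value is flagged.
rng = np.random.default_rng(1)
doy = np.arange(0, 365, 16.0)
refl = 0.3 + 0.1 * np.sin(2 * np.pi * doy / 365.25) + 0.005 * rng.standard_normal(doy.size)
print(detect_change(doy, refl, 400.0, 0.36))  # seasonal value -> False
print(detect_change(doy, refl, 400.0, 0.60))  # abrupt change  -> True
```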

  6. Bayesian probabilistic population projections for all countries

    PubMed Central

    Raftery, Adrian E.; Li, Nan; Ševčíková, Hana; Gerland, Patrick; Heilig, Gerhard K.

    2012-01-01

    Projections of countries’ future populations, broken down by age and sex, are widely used for planning and research. They are mostly done deterministically, but there is a widespread need for probabilistic projections. We propose a Bayesian method for probabilistic population projections for all countries. The total fertility rate and female and male life expectancies at birth are projected probabilistically using Bayesian hierarchical models estimated via Markov chain Monte Carlo using United Nations population data for all countries. These are then converted to age-specific rates and combined with a cohort component projection model. This yields probabilistic projections of any population quantity of interest. The method is illustrated for five countries at different demographic stages and of different continents and sizes. The method is validated by an out-of-sample experiment in which data from 1950–1990 are used for estimation and the period 1990–2010 is predicted. The method appears reasonably accurate and well calibrated for this period. The results suggest that the current United Nations high and low variants greatly underestimate uncertainty about the number of oldest old from about 2050, and that they underestimate uncertainty for high-fertility countries and overstate uncertainty for countries that have completed the demographic transition and whose fertility has started to recover towards replacement level, mostly in Europe. The results also indicate that the potential support ratio (persons aged 20–64 per person aged 65+) will almost certainly decline dramatically in most countries over the coming decades. PMID:22908249
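    The deterministic core of the approach, the cohort component projection step, can be sketched as a Leslie-matrix product; the paper's contribution is to wrap such steps in probabilistic projections of fertility and life expectancy estimated by MCMC, which is beyond a snippet. All numbers below are made up for illustration.

```python
import numpy as np

# Illustrative 3-age-group (0-19, 20-39, 40+) female population, in millions.
pop = np.array([10.0, 12.0, 8.0])

fertility = np.array([0.1, 0.9, 0.0])  # daughters per woman per 20-year step (made up)
survival = np.array([0.98, 0.95])      # survival into the next age group (made up)

# Leslie matrix: first row holds fertility, sub-diagonal holds survival.
L = np.zeros((3, 3))
L[0, :] = fertility
L[1, 0], L[2, 1] = survival

# One cohort-component projection step; the paper draws the fertility and
# survival inputs from Bayesian hierarchical model posteriors instead.
print(L @ pop)  # projected population by age group: [11.8, 9.8, 11.4]
```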

  7. Amoeba-Inspired Heuristic Search Dynamics for Exploring Chemical Reaction Paths.

    PubMed

    Aono, Masashi; Wakabayashi, Masamitsu

    2015-09-01

    We propose a nature-inspired model for simulating chemical reactions in a computationally resource-saving manner. The model was developed by extending our previously proposed heuristic search algorithm, called "AmoebaSAT" [Aono et al. 2013], which was inspired by the spatiotemporal dynamics of a single-celled amoeboid organism that exhibits sophisticated computing capabilities in adapting to its environment efficiently [Zhu et al. 2013]. AmoebaSAT is used for solving an NP-complete combinatorial optimization problem [Garey and Johnson 1979], "the satisfiability problem," and finds a constraint-satisfying solution dramatically faster than one of the fastest known conventional stochastic local search methods [Iwama and Tamaki 2004] for a class of randomly generated problem instances [ http://www.cs.ubc.ca/~hoos/5/benchm.html ]. In cases where the problem has more than one solution, AmoebaSAT exhibits dynamic transition behavior among a variety of the solutions. Inheriting these features of AmoebaSAT, we formulate "AmoebaChem," which explores a variety of metastable molecules in which several constraints determined by input atoms are satisfied, and generates dynamic transition processes among the metastable molecules. AmoebaChem and its developed forms will be applied to the study of the origins of life, to discover reaction paths by which expected or unexpected organic compounds may be formed via unknown unstable intermediates, and to estimate the likelihood of each of the discovered paths.
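    AmoebaSAT's amoeba-inspired update dynamics are not reproduced here; instead, the sketch below shows a minimal WalkSAT, representative of the stochastic local search family that the record uses as its baseline, to make the satisfiability setting concrete.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000, seed=0):
    """Minimal WalkSAT. Clauses are lists of nonzero ints: positive for a
    variable, negative for its negation (DIMACS-style)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 1..n
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]                    # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))        # random-walk move
        else:
            # Greedy move: flip the variable leaving the fewest unsat clauses.
            def broken(v):
                assign[v] = not assign[v]
                n = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```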

  8. A SPATIOTEMPORAL APPROACH FOR HIGH RESOLUTION TRAFFIC FLOW IMPUTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Lee; Chin, Shih-Miao; Hwang, Ho-Ling

    Along with the rapid development of Intelligent Transportation Systems (ITS), traffic data collection technologies have been evolving dramatically. The emergence of innovative data collection technologies such as Remote Traffic Microwave Sensor (RTMS), Bluetooth sensor, GPS-based Floating Car method, automated license plate recognition (ALPR) (1), etc., creates an explosion of traffic data, which brings transportation engineering into the new era of Big Data. However, despite the advance of technologies, the missing data issue is still inevitable and has posed great challenges for research such as traffic forecasting, real-time incident detection and management, dynamic route guidance, and massive evacuation optimization, because the degree of success of these endeavors depends on the timely availability of relatively complete and reasonably accurate traffic data. A thorough literature review suggests that most current imputation models, if not all, focus largely on the temporal nature of the traffic data and fail to consider the fact that traffic stream characteristics at a certain location are closely related to those at neighboring locations, and to utilize these correlations for data imputation. To this end, this paper presents a Kriging-based spatiotemporal data imputation approach that is able to fully utilize the spatiotemporal information underlying the traffic data. Imputation performance of the proposed approach was tested using simulated scenarios and achieved stable imputation accuracy. Moreover, the proposed Kriging imputation model is more flexible compared to current models.
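    A minimal sketch of the core idea, estimating a missing traffic record from spatiotemporal neighbors with kriging weights, is given below. The separable exponential covariance, its parameters, and the simple-kriging form are assumptions for illustration; the paper's variogram model and neighborhood selection may differ.

```python
import numpy as np

def st_cov(p1, p2, sill=1.0, range_s=2.0, range_t=4.0):
    """Separable exponential space-time covariance (an assumption; the
    paper's kriging model may use a different variogram)."""
    ds = abs(p1[0] - p2[0])   # distance between detector locations
    dt = abs(p1[1] - p2[1])   # time lag
    return sill * np.exp(-ds / range_s - dt / range_t)

def krige_impute(points, values, target):
    """Simple-kriging estimate of a missing value from spatiotemporal neighbors."""
    mean = np.mean(values)
    K = np.array([[st_cov(a, b) for b in points] for a in points])
    k = np.array([st_cov(a, target) for a in points])
    w = np.linalg.solve(K + 1e-9 * np.eye(len(points)), k)  # jitter for stability
    return mean + w @ (np.asarray(values) - mean)

# Flows observed at (station, time) pairs around a missing record at (1, 2).
pts = [(0, 2), (2, 2), (1, 1), (1, 3)]
flows = [1180.0, 1230.0, 1150.0, 1260.0]
print(krige_impute(pts, flows, (1, 2)))  # imputed flow near the neighbor average
```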

  9. Quantum communication and information processing

    NASA Astrophysics Data System (ADS)

    Beals, Travis Roland

    Quantum computers enable dramatically more efficient algorithms for solving certain classes of computational problems, but, in doing so, they create new problems. In particular, Shor's Algorithm allows for efficient cryptanalysis of many public-key cryptosystems. As public key cryptography is a critical component of present-day electronic commerce, it is crucial that a working, secure replacement be found. Quantum key distribution (QKD), first developed by C.H. Bennett and G. Brassard, offers a partial solution, but many challenges remain, both in terms of hardware limitations and in designing cryptographic protocols for a viable large-scale quantum communication infrastructure. In Part I, I investigate optical lattice-based approaches to quantum information processing. I look at details of a proposal for an optical lattice-based quantum computer, which could potentially be used for both quantum communications and for more sophisticated quantum information processing. In Part III, I propose a method for converting and storing photonic quantum bits in the internal state of periodically-spaced neutral atoms by generating and manipulating a photonic band gap and associated defect states. In Part II, I present a cryptographic protocol which allows for the extension of present-day QKD networks over much longer distances without the development of new hardware. I also present a second, related protocol which effectively solves the authentication problem faced by a large QKD network, thus making QKD a viable, information-theoretic secure replacement for public key cryptosystems.

  10. Philosophy and Methods for China's Vocational Education Curriculum Reforms in the Early Twenty-First Century

    ERIC Educational Resources Information Center

    Xu, Guoqing

    2014-01-01

    Curriculum reform is an important aspect of progress in China's vocational education in the twenty-first century. The past decade's round of reforms were unprecedented in China in terms of both the scope and depth of their impact. They have and will continue to dramatically alter the nation's vocational education curriculum and teaching methods.…

  11. Analysis of health trait data from on-farm computer systems in the U.S. II: Comparison of genomic analyses including two-stage and single-step methods

    USDA-ARS?s Scientific Manuscript database

    The development of genomic selection methodology, with accompanying substantial gains in reliability for low-heritability traits, may dramatically improve the feasibility of genetic improvement of dairy cow health. Many methods for genomic analysis have now been developed, including the “Bayesian Al...

  12. Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead.

    PubMed

    Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika

    2017-02-01

    Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.

  13. New method of 2-dimensional metrology using mask contouring

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka

    2008-10-01

    We have developed a new method of accurately profiling and measuring a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with conventional image processing methods for contour profiling, this edge detection method can create profiles with much higher accuracy, comparable to CD-SEM measurement of semiconductor device CDs. By utilizing such high-precision contour profiles, the method realizes two-dimensional metrology for fine patterns that have been difficult to measure conventionally. In this report, we introduce the algorithm in general, the experimental results and the application in practice. As the shrinkage of design rules for semiconductor devices has advanced further, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask-making cost have become a big concern to device manufacturers. That is to say, quality demands are becoming stringent because of the enormous growth in data that accompanies increasingly fine patterns in photomask manufacturing. As a result, massive numbers of simulated errors occur in mask inspection, which lengthens mask production and inspection periods, increases cost, and extends delivery time. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose a DFM solution using two-dimensional metrology for fine patterns.

  14. PaDe - The particle detection program

    NASA Astrophysics Data System (ADS)

    Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.

    2016-01-01

    This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko which were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images of meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO-System (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of the observed meteors. The PaDe program was written in Python 3.4. Its original intent is to find the trails of dust particles in space in the OSIRIS images. For that it determines the positions where each trail starts and ends. These were found by fitting the so-called error function (Andrews, 1998) to the two edges of the intensity profiles. The positions where the intensities fall to half maximum were taken as the beginning and end of the particle trail. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Unlike the standard method of finding the photometric center, our method is not influenced by any trails or wakes behind the meteor. This paper presents first results of this ongoing work.
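    The edge-position estimate can be sketched as a least-squares fit of an error-function profile: the fitted center is exactly the point where the intensity passes half maximum. The model form and the synthetic numbers below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, x0, w, b):
    """Error-function edge profile: b is the background, a the step height,
    x0 the edge position, w the edge width."""
    return b + 0.5 * a * (1 + erf((x - x0) / (w * np.sqrt(2))))

# Synthetic intensity profile across the leading edge of a trail.
rng = np.random.default_rng(2)
x = np.arange(0, 40.0)
y = edge_model(x, a=100.0, x0=18.3, w=2.0, b=10.0) + rng.normal(0, 1.0, x.size)

popt, _ = curve_fit(edge_model, x, y,
                    p0=[y.max() - y.min(), x.mean(), 1.0, y.min()])
# The fitted x0 is where the intensity passes half maximum, i.e. the edge.
print(f"leading-edge position: {popt[1]:.2f} px")  # sub-pixel, close to 18.3
```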

  15. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To acquire fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimization strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
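    The quadrature itself is straightforward to sketch. The snippet below evaluates the vertical gravity of a homogeneous rectangular prism by 3D GLQ; a Cartesian prism stands in for the tesseroids of the paper, and none of the paper's kernel-equivalence, adaptive-discretization, or OpenMP optimizations are included.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz_glq(bounds, rho, obs, n=8):
    """Vertical gravity of a homogeneous rectangular prism via 3D
    Gauss-Legendre quadrature (z positive down; a Cartesian stand-in
    for the tesseroids used in the paper)."""
    (x1, x2), (y1, y2), (z1, z2) = bounds
    t, w = leggauss(n)  # nodes/weights on [-1, 1]
    # Map nodes to each interval; the half-width is the Jacobian per axis.
    xs, wx = 0.5 * (x2 - x1) * t + 0.5 * (x1 + x2), 0.5 * (x2 - x1) * w
    ys, wy = 0.5 * (y2 - y1) * t + 0.5 * (y1 + y2), 0.5 * (y2 - y1) * w
    zs, wz = 0.5 * (z2 - z1) * t + 0.5 * (z1 + z2), 0.5 * (z2 - z1) * w
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    WX, WY, WZ = np.meshgrid(wx, wy, wz, indexing="ij")
    dx, dy, dz = X - obs[0], Y - obs[1], Z - obs[2]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return G * rho * np.sum(WX * WY * WZ * dz / r**3)  # gz point-mass kernel

# 1 km cube of density contrast 500 kg/m^3, observed 1 km above its top.
gz = prism_gz_glq(((-500, 500), (-500, 500), (1000, 2000)), 500.0, (0.0, 0.0, 0.0))
print(gz * 1e5, "mGal")  # roughly 1.5 mGal
```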

  16. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE PAGES

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    2018-01-01

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  17. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  18. A Novel Design Framework for Structures/Materials with Enhanced Mechanical Performance

    PubMed Central

    Liu, Jie; Fan, Xiaonan; Wen, Guilin; Qing, Qixiang; Wang, Hongxin; Zhao, Gang

    2018-01-01

    Structure/material requires simultaneous consideration of both its design and manufacturing processes to dramatically enhance its manufacturability, assembly and maintainability. In this work, a novel design framework for structural/material with a desired mechanical performance and compelling topological design properties achieved using origami techniques is presented. The framework comprises four procedures, including topological design, unfold, reduction manufacturing, and fold. The topological design method, i.e., the solid isotropic material penalization (SIMP) method, serves to optimize the structure in order to achieve the preferred mechanical characteristics, and the origami technique is exploited to allow the structure to be rapidly and easily fabricated. Topological design and unfold procedures can be conveniently completed in a computer; then, reduction manufacturing, i.e., cutting, is performed to remove materials from the unfolded flat plate; the final structure is obtained by folding out the plate from the previous procedure. A series of cantilevers, consisting of origami parallel creases and Miura-ori (usually regarded as a metamaterial) and made of paperboard, are designed with the least weight and the required stiffness by using the proposed framework. The findings here furnish an alternative design framework for engineering structures that could be better than the 3D-printing technique, especially for large structures made of thin metal materials. PMID:29642555

  19. Carrier mobility in mesoscale heterogeneous organic materials: Effects of crystallinity and anisotropy on efficient charge transport

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hajime; Shirasawa, Raku; Nakamoto, Mitsunori; Hattori, Shinnosuke; Tomiya, Shigetaka

    2017-07-01

    Charge transport in the mesoscale bulk heterojunctions (BHJs) of organic photovoltaic devices (OPVs) is studied using multiscale simulations in combination with molecular dynamics, the density functional theory, the molecular-level kinetic Monte Carlo (kMC) method, and the coarse-grained kMC method, which was developed to estimate mesoscale carrier mobility. The effects of the degree of crystallinity and the anisotropy of the conductivity of donors on hole mobility are studied for BHJ structures that consist of crystalline and amorphous pentacene grains that act as donors and amorphous C60 grains that act as acceptors. We find that the hole mobility varies dramatically with the degree of crystallinity of pentacene because it is largely restricted by a low-mobility amorphous region that occurs in the hole transport network. It was also found that the percolation threshold of crystalline pentacene is relatively high at approximately 0.6. This high percolation threshold is attributed to the 2D-like conductivity of crystalline pentacene, and the threshold is greatly improved to a value of approximately 0.3 using 3D-like conductive donors. We propose essential guidelines to show that it is critical to increase the degree of crystallinity and develop 3D conductive donors for efficient hole transport through percolative networks in the BHJs of OPVs.

  20. Polydimethylsiloxane-Based Superhydrophobic Surfaces on Steel Substrate: Fabrication, Reversibly Extreme Wettability and Oil-Water Separation.

    PubMed

    Su, Xiaojing; Li, Hongqiang; Lai, Xuejun; Zhang, Lin; Liang, Tao; Feng, Yuchun; Zeng, Xingrong

    2017-01-25

    Functional surfaces with reversibly switchable wettability and oil-water separation capability have attracted much interest in recent years, exerting an immense influence on fundamental research and industrial applications. This article proposes a facile method to fabricate superhydrophobic surfaces on steel substrates via electroless replacement deposition of copper sulfate (CuSO4) and UV curing of vinyl-terminated polydimethylsiloxane (PDMS). The PDMS-based superhydrophobic surfaces exhibited a water contact angle (WCA) close to 160° and a water sliding angle (WSA) lower than 5°, and showed outstanding chemical stability, maintaining superhydrophobicity after immersion in aqueous solutions with pH values from 1 to 13 for 12 h. Interestingly, the superhydrophobic surface could dramatically switch to the superhydrophilic state under UV irradiation and then gradually recover to a highly hydrophobic state with a WCA of 140° after dark storage. The underlying mechanism was also investigated by scanning electron microscopy, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy. Additionally, the PDMS-based steel mesh possessed high separation efficiency and excellent reusability in oil-water separation. Our studies provide a simple, fast, and economical fabrication method for wettability-transformable superhydrophobic surfaces with potential applications in microfluidics, the biomedical field, and oil spill cleanup.

  1. Salient object detection based on multi-scale contrast.

    PubMed

    Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long

    2018-05-01

    Due to the development of deep learning networks, salient object detection, which uses such networks to extract features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep learning networks, however, a dramatic increase of network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth and simultaneously mitigate the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine the features at the pixel level by a multi-scale feature correction method to avoid feature errors introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates features of multiple scales and levels but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
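    The residual trick referred to here is the standard identity-shortcut block of He et al.; a minimal PyTorch sketch follows (generic, not the paper's network).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the skip connection lets gradients bypass the
    convolutions, mitigating the training errors that come with depth."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```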

  2. A Novel Design Framework for Structures/Materials with Enhanced Mechanical Performance.

    PubMed

    Liu, Jie; Fan, Xiaonan; Wen, Guilin; Qing, Qixiang; Wang, Hongxin; Zhao, Gang

    2018-04-09

    Structure/material requires simultaneous consideration of both its design and manufacturing processes to dramatically enhance its manufacturability, assembly and maintainability. In this work, a novel design framework for structural/material with a desired mechanical performance and compelling topological design properties achieved using origami techniques is presented. The framework comprises four procedures, including topological design, unfold, reduction manufacturing, and fold. The topological design method, i.e., the solid isotropic material penalization (SIMP) method, serves to optimize the structure in order to achieve the preferred mechanical characteristics, and the origami technique is exploited to allow the structure to be rapidly and easily fabricated. Topological design and unfold procedures can be conveniently completed in a computer; then, reduction manufacturing, i.e., cutting, is performed to remove materials from the unfolded flat plate; the final structure is obtained by folding out the plate from the previous procedure. A series of cantilevers, consisting of origami parallel creases and Miura-ori (usually regarded as a metamaterial) and made of paperboard, are designed with the least weight and the required stiffness by using the proposed framework. The findings here furnish an alternative design framework for engineering structures that could be better than the 3D-printing technique, especially for large structures made of thin metal materials.

  3. Magnetic ordering induced giant optical property change in tetragonal BiFeO3

    NASA Astrophysics Data System (ADS)

    Tong, Wen-Yi; Ding, Hang-Chen; Gong, Shi Jing; Wan, Xiangang; Duan, Chun-Gang

    2015-12-01

    Magnetic ordering can have a significant influence on band structures, spin-dependent transport, and other important properties of materials. Its measurement, however, especially in the case of antiferromagnetic (AFM) ordering, is generally difficult to achieve. Here we demonstrate the feasibility of magnetic-ordering detection using a noncontact and nondestructive optical method. Taking tetragonal BiFeO3 (BFO) as an example and combining density functional theory calculations with tight-binding models, we find that when BFO changes from the C1-type to the G-type AFM phase, the top of the valence band shifts from the Z point to the Γ point, which makes the originally direct band gap become indirect. This can be explained by Slater-Koster parameters using the Harrison approach. The impact of magnetic ordering on band dispersion dramatically changes the optical properties. For the linear ones, the energy shift of the optical band gap can be as large as 0.4 eV. For the nonlinear ones, the change is even larger: the second-harmonic generation coefficient d33 of the G-AFM phase is more than 13 times smaller than that of the C1-AFM case. Finally, we propose a practical way to distinguish the two AFM phases of BFO using the optical method, which is of great importance in next-generation information storage technologies.

  4. A Data-Gathering Scheme with Joint Routing and Compressive Sensing Based on Modified Diffusion Wavelets in Wireless Sensor Networks.

    PubMed

    Gu, Xiangping; Zhou, Xiaofeng; Sun, Yanjing

    2018-02-28

    Compressive sensing (CS)-based data gathering is a promising method to reduce energy consumption in wireless sensor networks (WSNs). Traditional CS-based data-gathering approaches require a large number of sensor nodes to participate in each CS measurement task, resulting in high energy consumption, and do not guarantee load balance. In this paper, we propose a sparsifying analysis based on modified diffusion wavelets, which exploits the spatial correlation of sensor readings in WSNs. In particular, a novel data-gathering scheme with joint routing and CS is presented. A modified ant colony algorithm is adopted, in which next-hop node selection takes a node's residual energy and the path length into consideration simultaneously. Moreover, in order to speed up the coverage rate and avoid local optima, an improved pheromone impact factor is put forward. More importantly, a theoretical proof is given that the generated equivalent sensing matrix can satisfy the restricted isometry property (RIP). The simulation results demonstrate that the modified diffusion wavelet basis sparsifies the sensor signal and yields better reconstruction performance than the DFT. Furthermore, our data gathering with joint routing and CS can dramatically reduce the energy consumption of WSNs, balance the load, and prolong the network lifetime in comparison to state-of-the-art CS-based methods.
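    The CS decoding side can be illustrated with a generic sparse-recovery routine. The sketch below implements orthogonal matching pursuit, a standard CS decoder (not necessarily the one used in the paper), recovering a sparse signal from a few random measurements.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # best-matching atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 30, 3  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
x_hat = omp(Phi, Phi @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-8))  # True with high probability
```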

  5. Adaptive Greedy Dictionary Selection for Web Media Summarization.

    PubMed

    Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo

    2017-01-01

    Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis vectors from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the ℓ2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on gradient cues at each forward iteration and speeds up the process dramatically. In comparison with state-of-the-art dictionary selection models, our model is not only more effective and efficient, but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model; and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both of these tasks.

  6. Subsea Cable Tracking by Autonomous Underwater Vehicle with Magnetic Sensing Guidance.

    PubMed

    Xiang, Xianbo; Yu, Caoyang; Niu, Zemin; Zhang, Qin

    2016-08-20

    Changes in the seabed environment caused by natural disasters or human activities dramatically affect the life span of subsea buried cables. It is essential to track the cable route in order to inspect the condition of the buried cable and protect its surrounding seabed environment. The magnetic sensor is instrumental in guiding the remotely operated vehicle (ROV) to track and inspect the buried cable undersea. In this paper, a novel framework integrating an underwater cable localization method with a magnetic guidance and control algorithm is proposed, in order to enable automatic cable tracking by a three-degrees-of-freedom (3-DOF) under-actuated autonomous underwater vehicle (AUV) without human beings in the loop. The work relies on a passive magnetic sensing method to localize the subsea cable using two tri-axial magnetometers, and a new analytic formulation is presented to compute the heading deviation, horizontal offset and buried depth of the cable. With the magnetic localization, the cable tracking and inspection mission is elaborately constructed as a straight-line path-following control problem in the horizontal plane. A dedicated magnetic line-of-sight (LOS) guidance is built based on the relative geometric relationship between the vehicle and the cable, and the feedback linearizing technique is adopted to design a simplified cable tracking controller considering side-slip effects, such that the under-actuated vehicle is able to move towards the subsea cable and then inspect its buried environment, which further guides the environmental protection of the cable by setting prohibited fishing/anchoring zones and increasing the buried depth. Finally, numerical simulation results show the effectiveness of the proposed magnetic guidance and control algorithm for the envisioned subsea cable tracking and the potential protection of the seabed environment along the cable route.
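    The LOS guidance idea can be sketched in a few lines: steer toward a point a fixed lookahead distance ahead on the cable line, so the commanded heading is the path heading corrected by the cross-track error. This is the textbook LOS law; the paper's formulation additionally compensates for side-slip, and all numbers are illustrative.

```python
import numpy as np

def los_heading(vehicle_xy, cable_point, cable_heading, lookahead=5.0):
    """Line-of-sight guidance toward a straight cable route: aim at a point
    a fixed lookahead distance ahead on the path."""
    dx = vehicle_xy[0] - cable_point[0]
    dy = vehicle_xy[1] - cable_point[1]
    # Cross-track error: signed lateral distance from the cable line.
    e = -dx * np.sin(cable_heading) + dy * np.cos(cable_heading)
    return cable_heading - np.arctan2(e, lookahead)

# Vehicle 3 m to the side of a cable running due east (heading 0 rad):
psi_d = los_heading((10.0, 3.0), (0.0, 0.0), 0.0)
print(np.degrees(psi_d))  # about -31 degrees: steer back toward the cable
```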

  7. Subsea Cable Tracking by Autonomous Underwater Vehicle with Magnetic Sensing Guidance

    PubMed Central

    Xiang, Xianbo; Yu, Caoyang; Niu, Zemin; Zhang, Qin

    2016-01-01

    Changes in the seabed environment caused by natural disasters or human activities dramatically affect the life span of subsea buried cables. It is essential to track the cable route in order to inspect the condition of the buried cable and protect its surrounding seabed environment. The magnetic sensor is instrumental in guiding the remotely operated vehicle (ROV) to track and inspect the buried cable undersea. In this paper, a novel framework integrating an underwater cable localization method with a magnetic guidance and control algorithm is proposed, in order to enable automatic cable tracking by a three-degrees-of-freedom (3-DOF) under-actuated autonomous underwater vehicle (AUV) without human beings in the loop. The work relies on a passive magnetic sensing method to localize the subsea cable using two tri-axial magnetometers, and a new analytic formulation is presented to compute the heading deviation, horizontal offset and buried depth of the cable. With the magnetic localization, the cable tracking and inspection mission is elaborately constructed as a straight-line path-following control problem in the horizontal plane. A dedicated magnetic line-of-sight (LOS) guidance is built based on the relative geometric relationship between the vehicle and the cable, and the feedback linearizing technique is adopted to design a simplified cable tracking controller considering side-slip effects, such that the under-actuated vehicle is able to move towards the subsea cable and then inspect its buried environment, which further guides the environmental protection of the cable by setting prohibited fishing/anchoring zones and increasing the buried depth. Finally, numerical simulation results show the effectiveness of the proposed magnetic guidance and control algorithm for the envisioned subsea cable tracking and the potential protection of the seabed environment along the cable route. PMID:27556465

  8. Design process and preliminary psychometric study of a video game to detect cognitive impairment in senior adults

    PubMed Central

    Perez-Rodriguez, Roberto; Facal, David; Fernandez-Iglesias, Manuel J.; Anido-Rifon, Luis; Mouriño-Garcia, Marcos

    2017-01-01

    Introduction Assessment of episodic memory has traditionally been used to evaluate potential cognitive impairments in senior adults. Typically, episodic memory evaluation is based on personal interviews and pen-and-paper tests. This article presents the design, development and a preliminary validation of a novel digital game to assess episodic memory, intended to overcome the limitations of traditional methods, such as the cost of administration, its intrusive character, the lack of early detection capabilities, the lack of ecological validity, the learning effect and the existence of confounding factors. Materials and Methods Our proposal is based on the gamification of the California Verbal Learning Test (CVLT) and it has been designed to comply with the psychometric characteristics of reliability and validity. Two qualitative focus groups and a first pilot experiment were carried out to validate the proposal. Results A more ecological, non-intrusive and more easily administrable tool to perform cognitive assessment was developed. Initial evidence from the focus groups and the pilot experiment confirmed the developed game’s usability and offered promising results insofar as its psychometric validity is concerned. Moreover, the potential of this game for the cognitive classification of senior adults was confirmed, and administration time is dramatically reduced with respect to pen-and-paper tests. Limitations Additional research is needed to improve the resolution of the game for the identification of specific cognitive impairments, as well as to achieve a complete validation of the psychometric properties of the digital game. Conclusion Initial evidence shows that serious games can be used as an instrument to assess the cognitive status of senior adults, and even to predict the onset of mild cognitive impairments or Alzheimer’s disease. PMID:28674661

  9. Nonadiabatic dynamics in intense continuous wave laser fields and real-time observation of the associated wavepacket bifurcation in terms of spectrogram of induced photon emission.

    PubMed

    Mizuno, Yuta; Arasaki, Yasuki; Takatsuka, Kazuo

    2016-11-14

    We propose a theoretical principle to directly monitor the bifurcation of quantum wavepackets passing through nonadiabatic regions of a molecule that is placed in intense continuous wave (CW) laser fields. This idea makes use of the phenomenon of laser-driven photon emission from molecules that can undergo nonadiabatic transitions between ionic and covalent potential energy surfaces, like Li⁺F⁻ and LiF. The resultant photon emission spectra are of anomalous yet characteristic frequency and intensity, if pumped to an energy level in which the nonadiabatic region is accessible and placed in a CW laser field. The proposed method is designed to take the time-frequency spectrogram with an appropriate time-window from this photon emission to detect the time evolution of the frequency and intensity, which depends on the dynamics and location of the relevant nuclear wavepackets. This method is specifically designed for the study of dynamics in intense CW laser fields and is narrower in scope than other techniques for femtosecond chemical dynamics in vacuum. The following characteristic features of dynamics can be mapped onto the spectrogram: (1) the period of driven vibrational motion (temporally confined vibrational states in otherwise dissociative channels, the period and other states of which dramatically vary depending on the CW driving lasers applied), (2) the existence of multiple nuclear wavepackets running individually on the field-dressed potential energy surfaces, (3) the time scale of coherent interaction between the nuclear wavepackets running on ionic and covalent electronic states after their branching (the so-called coherence time in the terminology of the theory of nonadiabatic interaction), and so on.
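    The signal-processing step, a windowed time-frequency spectrogram whose ridge tracks the instantaneous emission frequency, can be sketched on a synthetic chirp standing in for the emission signal; the physical modeling of the wavepackets is of course not included.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic emission signal whose instantaneous frequency drifts in time,
# standing in for the laser-driven photon emission (illustrative only).
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.cos(2 * np.pi * (500 * t + 400 * t**2))  # chirp: 500 Hz -> 1300 Hz

# Time-frequency spectrogram with a finite window: the window length trades
# frequency resolution against time resolution of the dynamics.
f, tau, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)
ridge = f[np.argmax(Sxx, axis=0)]     # dominant emission frequency vs. time
print(ridge[[0, len(tau) // 2, -1]])  # rises from ~500 Hz toward ~1300 Hz
```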

  10. Nonadiabatic dynamics in intense continuous wave laser fields and real-time observation of the associated wavepacket bifurcation in terms of spectrogram of induced photon emission

    NASA Astrophysics Data System (ADS)

    Mizuno, Yuta; Arasaki, Yasuki; Takatsuka, Kazuo

    2016-11-01

    We propose a theoretical principle to directly monitor the bifurcation of quantum wavepackets passing through nonadiabatic regions of a molecule that is placed in intense continuous wave (CW) laser fields. This idea makes use of the phenomenon of laser-driven photon emission from molecules that can undergo nonadiabatic transitions between ionic and covalent potential energy surfaces, like Li⁺F⁻ and LiF. The resultant photon emission spectra are of anomalous yet characteristic frequency and intensity, if pumped to an energy level in which the nonadiabatic region is accessible and placed in a CW laser field. The proposed method is designed to take the time-frequency spectrogram with an appropriate time-window from this photon emission to detect the time evolution of the frequency and intensity, which depends on the dynamics and location of the relevant nuclear wavepackets. This method is specifically designed for the study of dynamics in intense CW laser fields and is narrower in scope than other techniques for femtosecond chemical dynamics in vacuum. The following characteristic features of dynamics can be mapped onto the spectrogram: (1) the period of driven vibrational motion (temporally confined vibrational states in otherwise dissociative channels, the period and other states of which dramatically vary depending on the CW driving lasers applied), (2) the existence of multiple nuclear wavepackets running individually on the field-dressed potential energy surfaces, (3) the time scale of coherent interaction between the nuclear wavepackets running on ionic and covalent electronic states after their branching (the so-called coherence time in the terminology of the theory of nonadiabatic interaction), and so on.

  11. Multibeam monopulse radar for airborne sense and avoid system

    NASA Astrophysics Data System (ADS)

    Gorwara, Ashok; Molchanov, Pavlo

    2016-10-01

    The multibeam monopulse radar for the Airborne Based Sense and Avoid (ABSAA) system concept is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. In the proposed system, a multibeam monopulse radar with an array of directional antennas is positioned on a small aircraft or Unmanned Aircraft System (UAS). Radar signals are simultaneously transmitted and received by multiple angle-shifted directional antennas with overlapping antenna patterns, covering the entire sky, 360° in both the horizontal and vertical planes. Digitizing the amplitude and phase of signals in separate directional antennas relative to reference signals provides high-accuracy, high-resolution range and azimuth measurement and allows recording, in real time, the amplitude and phase of signals reflected from non-cooperative aircraft. High-resolution range and azimuth measurement provides minimal tracking errors in both position and velocity of non-cooperative aircraft, as determined by the sampling frequency of the digitizer. High-speed sampling with a high-accuracy processor clock provides high-resolution phase/time-domain measurement even for directional antennas with a wide Field of View (FOV). Fourier transform (frequency-domain processing) of received radar signals provides signatures and dramatically increases the probability of detection for non-cooperative aircraft. Steering the transmitted power and the integration/correlation period of received reflected signals for separate antennas (directions) allows dramatically decreased ground clutter for low-altitude flights. An open-architecture, modular construction allows the combination of the radar sensor with Automatic Dependent Surveillance - Broadcast (ADS-B), electro-optic, and acoustic sensors.

  12. Psycho-informatics: Big Data shaping modern psychometrics.

    PubMed

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

    For the first time in history, it is possible to study human behavior on a great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics, both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprising (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One study attempts thereby to dynamically quantify the severity of major depression; the other investigates (mobile) Internet addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. DNA-BASED METHODS FOR MONITORING INVASIVE SPECIES: A REVIEW AND PROSPECTUS

    EPA Science Inventory

    The recent explosion of interest in DNA-based tools for species identification has prompted widespread speculation on the future availability of inexpensive, rapid and accurate means of identifying specimens and assessing biodiversity. One applied field that may benefit dramatic...

  14. Autism and Pervasive Developmental Disorders

    ERIC Educational Resources Information Center

    Volkmar, Fred R.; Lord, Catherine; Bailey, Anthony; Schultz, Robert T.; Klin, Ami

    2004-01-01

    The quantity and quality of research into autism and related conditions have increased dramatically in recent years. Consequently we selectively review key accomplishments and highlight directions for future research. More consistent approaches to diagnosis and more rigorous assessment methods have significantly advanced research, although the…

  15. Epistemic-based investigation of the probability of hazard scenarios using Bayesian network for the lifting operation of floating objects

    NASA Astrophysics Data System (ADS)

    Toroody, Ahmad Bahoo; Abaiee, Mohammad Mahdi; Gholamnia, Reza; Ketabdari, Mohammad Javad

    2016-09-01

    Owing to the increase in unprecedented accidents with new root causes in almost all operational areas, the importance of risk management has risen dramatically. Risk assessment, one of the most significant aspects of risk management, has a substantial impact on the system-safety level of organizations, industries, and operations. Effective risk assessment can be highly accurate if the causes of all kinds of failure and the interactions between them are considered. A combination of traditional risk assessment approaches and modern scientific probability methods can help in realizing better quantitative risk assessment methods. Most researchers face the problem of minimal field data with respect to the probability and frequency of each failure. Because of this limitation in the availability of epistemic knowledge, it is important to conduct epistemic estimations by applying Bayesian theory to identify plausible outcomes. In this paper, we propose an algorithm and demonstrate its application in a case study of a lightweight lifting operation in the Persian Gulf of Iran. First, we identify potential accident scenarios and present them in an event tree format. Next, excluding human error, we use the event tree to roughly estimate the prior probabilities of other hazard-promoting factors using a minimal amount of field data. We then use the Success Likelihood Index Method (SLIM) to calculate the probability of human error. On the basis of the proposed event tree, we use a Bayesian network of the provided scenarios to compensate for the lack of data. Finally, we determine the resulting probability of each event, based on its evidence, in the epistemic estimation format by building on the probabilities of the hazard-promoting factors and Bayesian theory. The study results indicate that despite the lack of available information on the operation of floating objects, a satisfactory result can be achieved using epistemic data.
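    Two of the building blocks, propagating probabilities along an event-tree branch and updating a scarce-data failure probability with a Bayesian (Beta-Bernoulli) prior, can be sketched as follows; every number is illustrative, not from the paper's field data.

```python
# Event-tree branch probabilities for a lifting-operation hazard
# (all values illustrative, not from the paper).
p_sling_failure = 0.02
p_human_error = 0.05          # would come from SLIM in the paper
p_drop_given_either = 0.30

# Scenario probability along one branch of the event tree.
p_drop = (1 - (1 - p_sling_failure) * (1 - p_human_error)) * p_drop_given_either
print(f"P(load drop) = {p_drop:.4f}")

# Epistemic (Bayesian) update of a sparse failure rate: Beta prior on the
# per-lift failure probability, updated with a few observed lifts.
a, b = 1.0, 49.0              # prior with mean 0.02, from generic data
failures, lifts = 1, 120      # scarce field evidence
posterior_mean = (a + failures) / (a + b + lifts)
print(f"posterior failure probability = {posterior_mean:.4f}")
```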

  16. Light-harvesting organic photoinitiators of polymerization.

    PubMed

    Lalevée, Jacques; Tehfe, Mohamad-Ali; Dumur, Frédéric; Gigmes, Didier; Graff, Bernadette; Morlet-Savary, Fabrice; Fouassier, Jean-Pierre

    2013-02-12

    Two new photoinitiators with unprecedented light absorption properties are proposed on the basis of a suitable truxene skeleton, where several UV photoinitiator (PI) units such as benzophenone and thioxanthone are introduced at the periphery and whose molecular orbitals (MOs) can be coupled with those of the PI units: a red-shifted absorption and a strong increase of the molecular extinction coefficients (by a factor of ≈20-1000) are found. These compounds are highly efficient light-harvesting photoinitiators. The scope and practicality of these photoinitiators of polymerization can be dramatically expanded; that is, both radical and cationic polymerization processes are accessible upon very soft irradiation conditions (halogen lamp, LED, etc.) thanks to the unique light absorption properties of the newly proposed structures. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. a Conceptual Framework for Indoor Mapping by Using Grammars

    NASA Astrophysics Data System (ADS)

    Hu, X.; Fan, H.; Zipf, A.; Shang, J.; Gu, F.

    2017-09-01

    Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely heavily on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework for representing the layout principles of research buildings by using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of sensor data required by traditional reconstruction approaches. In addition, we present more details of several core modules of the framework. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research toward the development of an approach for reconstructing semantic maps.

  18. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    Maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, thus preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over the existing 10 Gb/s infrastructure.

  19. U.S. Army Public Affairs Officers and Social Media Training Requirements

    DTIC Science & Technology

    2016-06-10

    Social media platforms have become an effective and efficient method used by U.S. Army organizations to deliver and communicate messages to … methods to directly communicate with various audiences. The dramatic impact of social media in the information environment has created a shift, and caused…

  20. Two RFID standard-based security protocols for healthcare environments.

    PubMed

    Picazo-Sanchez, Pablo; Bagheri, Nasour; Peris-Lopez, Pedro; Tapiador, Juan E

    2013-10-01

    Radio Frequency Identification (RFID) systems are widely used in access control, transportation, real-time inventory and asset management, automated payment systems, etc. Nevertheless, the use of this technology is almost unexplored in healthcare environments, where potential applications include patient monitoring, asset traceability and drug administration systems, to mention just a few. RFID technology can offer more intelligent systems and applications, but privacy and security issues have to be addressed before its adoption. This is even more dramatic in healthcare applications, where very sensitive information is at stake and patient safety is paramount. Wu et al. (J. Med. Syst. 37:19, 43) recently proposed a new RFID authentication protocol for healthcare environments. In this paper we show that this protocol puts the location privacy of tag holders at risk, which is a matter of gravest concern and ruins the security of the proposal. To facilitate the implementation of secure RFID-based solutions in the medical sector, we suggest two new applications (authentication and secure messaging) and propose solutions that, in contrast to previous proposals in this field, are fully based on ISO standards and NIST security recommendations.

  1. Reconfigurable multiport EPON repeater

    NASA Astrophysics Data System (ADS)

    Oishi, Masayuki; Inohara, Ryo; Agata, Akira; Horiuchi, Yukio

    2009-11-01

    An extended-reach EPON repeater is one of the solutions to effectively expand FTTH service areas. In this paper, we propose a reconfigurable multi-port EPON repeater for the effective accommodation of multiple ODNs with a single OLT line card. The proposed repeater, which has multiple ports on both the OLT and ODN sides and consists of TRs, BTRs with a CDR function, and a reconfigurable electrical matrix switch, can accommodate multiple ODNs on a single OLT line card by controlling the connections of the matrix switch. Although conventional EPON repeaters require a full set of OLT line cards to accommodate subscribers from the initial installation stage, the proposed repeater can dramatically reduce the number of required line cards, especially when the number of subscribers is less than half of the maximum number of registerable users per OLT. Numerical calculation results show that an extended-reach EPON system with the proposed repeater can save 17.5% of the initial installation cost compared with a conventional repeater, and can remain less expensive than conventional systems up to the maximum number of subscribers, especially when the percentage of ODNs in lightly populated areas is high.

  2. Recruiting post-doctoral fellows into global health research: selecting NIH Fogarty International Clinical Research Fellows.

    PubMed

    Heimburger, Douglas C; Warner, Tokesha L; Carothers, Catherine Lem; Blevins, Meridith; Thomas, Yolanda; Gardner, Pierce; Primack, Aron; Vermund, Sten H

    2014-08-01

    From 2008 to 2012, the National Institutes of Health (NIH) Fogarty International Clinical Research Fellows Program (FICRF) provided 1-year mentored research training at low- and middle-income country sites for American and international post-doctoral health professionals. We examined the FICRF applicant pool, proposed research topics, selection process, and characteristics of enrollees to assess trends in global health research interest and factors associated with applicant competitiveness. The majority (58%) of 67 US and 57 international Fellows were women, and 83% of Fellows had medical degrees. Most applicants were in clinical fellowships (41%) or residencies (24%). More applicants proposing infectious disease projects were supported (59%) than applicants proposing non-communicable disease (NCD) projects (41%), although projects that combined both topic areas were most successful (69%). The numbers of applicants proposing research on NCDs and the numbers of these applicants awarded fellowships rose dramatically over time. Funding provided to the FICRF varied significantly among NIH Institutes and Centers and was strongly associated with the research topics awarded. © The American Society of Tropical Medicine and Hygiene.

  3. Data assimilation in the low noise regime

    NASA Astrophysics Data System (ADS)

    Weare, J.; Vanden-Eijnden, E.

    2012-12-01

    On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such observations may be caused by an unusually large measurement error or may reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system, and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio, or Black Current, that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting, large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis, we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. In the accompanying figure, the sequence of observations (circles) is taken directly from one of our Kuroshio model's transition events from the small meander to the large meander. We tested two new algorithms (Algorithms 3 and 4 in the legend) motivated by our large deviations analysis, as well as a standard particle filter and an ensemble Kalman filter, with the parameters of each algorithm chosen so that their costs are comparable. The particle filter and the ensemble Kalman filter fail to accurately track the transition, whereas Algorithms 3 and 4 maintain accuracy (and smaller-scale resolution) throughout.
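
    The failure mode described above, weight collapse in a particle filter facing an unlikely observation, can be reproduced in a few lines. The sketch below runs a minimal bootstrap particle filter on a one-dimensional random walk; the dynamics, noise levels, and resampling threshold are illustrative assumptions and are unrelated to the Kuroshio model.

        import numpy as np

        # Minimal bootstrap particle filter on a 1-D random walk, showing the
        # weight collapse under an unlikely observation that motivates the
        # paper's rare-event strategies. All model parameters are made up.

        rng = np.random.default_rng(0)
        n_particles, sigma_dyn, sigma_obs = 500, 0.1, 0.1

        particles = rng.normal(0.0, 1.0, n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

        def assimilate(obs):
            global particles, weights
            particles = particles + rng.normal(0.0, sigma_dyn, n_particles)  # forecast
            weights *= np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2)   # update
            weights /= weights.sum()
            ess = 1.0 / np.sum(weights ** 2)       # effective sample size
            if ess < n_particles / 2:              # resample when degenerate
                idx = rng.choice(n_particles, n_particles, p=weights)
                particles, weights[:] = particles[idx], 1.0 / n_particles
            return ess

        print(assimilate(0.1))   # likely observation: ESS stays high
        print(assimilate(3.0))   # rare excursion: ESS collapses toward 1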

  4. Toward a Regulatory Framework for the Waterpipe.

    PubMed

    Salloum, Ramzi G; Asfar, Taghrid; Maziak, Wasim

    2016-10-01

    Waterpipe smoking has been dramatically increasing among youth worldwide and in the United States. Despite its general association with misperceptions of reduced harm, evidence suggests this is a harmful and dependence-inducing tobacco use method that represents a threat to public health. Waterpipe products continue to be generally unregulated, which likely has contributed to their spread. The Family Smoking Prevention and Tobacco Control Act of 2009 granted the US Food and Drug Administration (FDA) the authority to regulate waterpipe products, and the FDA finalized a rule extending its authority over waterpipe products in May 2016. This critical step in addressing the alarming increase in waterpipe smoking in the United States has created urgency for research to provide the evidence needed for effective regulatory initiatives for waterpipe products. We aim to stimulate such research by providing a framework that addresses the scope of waterpipe products and their unique context and use patterns. The proposed framework identifies regulatory targets for waterpipe product components (i.e., tobacco, charcoal, and device), the waterpipe café setting, and its marketing environment dominated by Internet promotion.

  5. Effect of interfacial composition on Ag-based Ohmic contact of GaN-based vertical light emitting diodes

    NASA Astrophysics Data System (ADS)

    Wu, Ning; Xiong, Zhihua; Qin, Zhenzhen

    2018-02-01

    By investigating the effect of a defective interface structure on Ag-based Ohmic contact of GaN-based vertical light-emitting diodes, we found a direct relationship between the interfacial composition and the Schottky barrier height of the Ag(111)/GaN(0001) interface. It was demonstrated that the Schottky barrier height of a defect-free Ag(111)/GaN(0001) interface was 2.221 eV, and it would be dramatically decreased to 0.375 eV with the introduction of one Ni atom and one Ga vacancy at the interface structure. It was found that the tunability of the Schottky barrier height can be attributed to charge accumulations around the interfacial defective regions and an unpinning of the Fermi level, which explains the experimental phenomenon of Ni-assisted annealing improving the p-type Ohmic contact characteristic. Lastly, we propose a new method of using Cu as an assisted metal to realize a novel Ag-based Ohmic contact. These results provide a guideline for the fabrication of high-quality Ag-based Ohmic contact of GaN-based vertical light-emitting diodes.

  6. Pilot-multiplexed continuous-variable quantum key distribution with a real local oscillator

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua

    2018-01-01

    We propose a pilot-multiplexed continuous-variable quantum key distribution (CVQKD) scheme based on a local local oscillator (LLO). Our scheme utilizes time-multiplexing and polarization-multiplexing techniques to dramatically isolate the quantum signal from the pilot, employs two heterodyne detectors to separately detect the signal and the pilot, and adopts a phase compensation method to almost eliminate the multifrequency phase jitter. In order to analyze the performance of our scheme, a general LLO noise model is constructed. Besides the phase noise and the modulation noise, the photon-leakage noise from the reference path and the quantization noise due to the analog-to-digital converter (ADC) are also considered, which are analyzed here for the first time in the LLO regime. Under such a general noise model, our scheme has a higher key rate and longer secure distance compared with the preexisting LLO schemes. Moreover, we also conduct an experiment to verify our pilot-multiplexed scheme. Results show that it maintains a low level of phase noise and is expected to obtain a 554 kbps secure key rate within a 15-km distance under the finite-size effect.

  7. Optics-based approach to thermal management of photovoltaics: Selective-spectral and radiative cooling

    DOE PAGES

    Sun, Xingshu; Silverman, Timothy J.; Zhou, Zhiguang; ...

    2017-01-20

    For commercial one-sun solar modules, up to 80% of the incoming sunlight may be dissipated as heat, potentially raising the temperature 20-30 °C higher than the ambient. In the long term, extreme self-heating erodes efficiency and shortens lifetime, thereby dramatically reducing the total energy output. Therefore, it is critically important to develop effective and practical (and preferably passive) cooling methods to reduce the operating temperature of photovoltaic (PV) modules. In this paper, we explore two fundamental (but often overlooked) origins of PV self-heating, namely, sub-bandgap absorption and imperfect thermal radiation. The analysis suggests that we redesign the optical properties of the solar module to eliminate parasitic absorption (selective-spectral cooling) and enhance thermal emission (radiative cooling). Comprehensive opto-electro-thermal simulation shows that the proposed techniques would cool one-sun terrestrial solar modules by up to 10 °C. As a result, this self-cooling would substantially extend the lifetime of solar modules, with a corresponding increase in energy yield and reduced levelized cost of electricity.

  8. Improvement of Energy Capacity with Vitamin C Treated Dual-Layered Graphene-Sulfur Cathodes in Lithium-Sulfur Batteries.

    PubMed

    Kim, Jin Won; Ocon, Joey D; Kim, Ho-Sung; Lee, Jaeyoung

    2015-09-07

    A graphene-based cathode design for lithium-sulfur batteries (LSB) that shows excellent electrochemical performance is proposed. The dual-layered cathode is composed of a sulfur active layer and a polysulfide absorption layer, and both layers are based on vitamin C treated graphene oxide at various degrees of reduction. By controlling the degree of reduction of the graphene, the dual-layered cathode can increase sulfur utilization dramatically owing to the uniform formation of nanosized sulfur particles, the chemical bonding of dissolved polysulfides on the oxygen-rich sulfur active layer, and the physisorption of free polysulfides on the absorption layer. This approach enables an LSB with a high specific capacity of over 600 mAh g(sulfur)^-1 after 100 cycles, even under a high current rate of 1C (1675 mA g(sulfur)^-1). An intriguing aspect of our work is the synthesis of a high-performance dual-layered cathode by a green chemistry method, which could be a promising approach to LSBs with high energy and power densities. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Molecular simulation of the effect of cholesterol on lipid-mediated protein-protein interactions.

    PubMed

    de Meyer, Frédérick J-M; Rodgers, Jocelyn M; Willems, Thomas F; Smit, Berend

    2010-12-01

    Experiments and molecular simulations have shown that the hydrophobic mismatch between proteins and membranes contributes significantly to lipid-mediated protein-protein interactions. In this article, we discuss the effect of cholesterol on lipid-mediated protein-protein interactions as a function of hydrophobic mismatch, protein diameter and protein cluster size, lipid tail length, and temperature. To do so, we study a mesoscopic model of a hydrated bilayer containing lipids and cholesterol in which proteins are embedded, with a hybrid dissipative particle dynamics-Monte Carlo method. We propose a mechanism by which cholesterol affects protein interactions: protein-induced, cholesterol-enriched, or cholesterol-depleted lipid shells surrounding the proteins affect the lipid-mediated protein-protein interactions. Our calculations of the potential of mean force between proteins and protein clusters show that the addition of cholesterol dramatically reduces repulsive lipid-mediated interactions between proteins (protein clusters) with positive mismatch, but does not affect attractive interactions between proteins with negative mismatch. Cholesterol has only a modest effect on the repulsive interactions between proteins with different mismatch. Copyright © 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  10. Crossing the implementation chasm: a proposal for bold action.

    PubMed

    Lorenzi, Nancy M; Novak, Laurie L; Weiss, Jacob B; Gadd, Cynthia S; Unertl, Kim M

    2008-01-01

    As health care organizations dramatically increase investment in information technology (IT) and the scope of their IT projects, implementation failures become critical events. Implementation failures cause stress on clinical units, increase risk to patients, and result in massive costs that are often not recoverable. At an estimated 28% success rate, the current level of investment defies management logic. This paper asserts that there are "chasms" in IT implementations that represent risky stages in the process. Contributors to the chasms are classified into four categories: design, management, organization, and assessment. The American College of Medical Informatics symposium participants recommend bold action to better understand problems and challenges in implementation and to improve the ability of organizations to bridge these implementation chasms. The bold action includes the creation of a Team Science for Implementation strategy that allows for participation from multiple institutions to address the long-standing and costly implementation issues. The outcomes of this endeavor will include a new focus on interdisciplinary research and an inter-organizational knowledge base of strategies and methods to optimize implementations and the subsequent achievement of organizational objectives.

  11. Hybrid ZnO/ZnS nanoforests as the electrode materials for high performance supercapacitor application.

    PubMed

    Zhang, Siwen; Yin, Bosi; Jiang, He; Qu, Fengyu; Umar, Ahmad; Wu, Xiang

    2015-02-07

    Heterostructured ZnO/ZnS nanoforests are prepared through a simple two-step thermal evaporation method at 650 °C and 1300 °C in a tube furnace under the flow of argon gas, respectively. A metal catalyst (Au) forming a binary alloy has been used in the process. The as-obtained ZnO/ZnS products are characterized using a series of techniques, including scanning electron microscopy (SEM), X-ray diffraction (XRD), energy dispersion X-ray spectroscopy (EDS), Raman spectroscopy and photoluminescence. A possible growth mechanism is tentatively proposed. The hybrid structures are also directly functionalized as supercapacitor (SC) electrodes without using any ancillary materials such as carbon black or binder. Results show that the as-synthesized ZnO/ZnS heterostructures exhibit a greatly reduced ultraviolet emission and dramatically enhanced green emission compared to pure ZnO nanorods. The SC data demonstrate a high specific capacitance of 217 mF cm^-2 at 1 mA cm^-2 and excellent cyclic performance with 82% capacity retention after 2000 cycles at a current density of 2.0 mA cm^-2.

  12. Toward a Regulatory Framework for the Waterpipe

    PubMed Central

    Salloum, Ramzi G.; Asfar, Taghrid

    2016-01-01

    Waterpipe smoking has been dramatically increasing among youth worldwide and in the United States. Despite its general association with misperceptions of reduced harm, evidence suggests this is a harmful and dependence-inducing tobacco use method that represents a threat to public health. Waterpipe products continue to be generally unregulated, which likely has contributed to their spread. The Family Smoking Prevention and Tobacco Control Act of 2009 granted the US Food and Drug Administration (FDA) the authority to regulate waterpipe products, and the FDA finalized a rule extending its authority over waterpipe products in May 2016. This critical step in addressing the alarming increase in waterpipe smoking in the United States has created urgency for research to provide the evidence needed for effective regulatory initiatives for waterpipe products. We aim to stimulate such research by providing a framework that addresses the scope of waterpipe products and their unique context and use patterns. The proposed framework identifies regulatory targets for waterpipe product components (i.e., tobacco, charcoal, and device), the waterpipe café setting, and its marketing environment dominated by Internet promotion. PMID:27552262

  13. Fumaric acid production in Saccharomyces cerevisiae by simultaneous use of oxidative and reductive routes.

    PubMed

    Xu, Guoqiang; Chen, Xiulai; Liu, Liming; Jiang, Linghuo

    2013-11-01

    In this study, the simultaneous use of reductive and oxidative routes to produce fumaric acid was explored. The strain FMME003 (Saccharomyces cerevisiae CEN.PK2-1CΔTHI2) exhibited the capability to accumulate pyruvate and was used for fumaric acid production. The fum1 mutant FMME004 could produce fumaric acid via the oxidative route, but the introduction of the reductive route derived from Rhizopus oryzae NRRL 1526 led to lower fumaric acid production. Analysis of the key factors associated with fumaric acid production revealed that pyruvate carboxylase had a low degree of control over the carbon flow to malic acid. The fumaric acid titer was improved dramatically when the heterologous gene RoPYC was overexpressed and 32 μg/L of biotin was added. Furthermore, under the optimal carbon/nitrogen ratio, the engineered strain FMME004-6 could produce up to 5.64 ± 0.16 g/L of fumaric acid. These results demonstrated that the proposed fermentative method is efficient for fumaric acid production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Fast Time Response Electromagnetic Particle Injection System for Disruption Mitigation

    NASA Astrophysics Data System (ADS)

    Raman, Roger; Lay, W.-S.; Jarboe, T. R.; Menard, J. E.; Ono, M.

    2017-10-01

    Predicting and controlling disruptions is an urgent issue for ITER. In this proposed method, a radiative payload consisting of micro spheres of Be, BN, B, or other acceptable low-Z materials would be injected inside the q = 2 surface for thermal and runaway electron mitigation. The radiative payload would be accelerated to the required velocities (0.2 to >1 km/s) in an Electromagnetic Particle Injector (EPI). An important advantage of the EPI system is that it could be positioned very close to the reactor vessel. This has the added benefit that the external field near a high-field tokamak dramatically improves the injector performance, while simultaneously reducing the system response time. An NSTX-U / DIII-D scale system has been tested off-line to verify the critical parameters - the projected system response time and attainable velocities. Both are consistent with the model calculations, giving confidence that an ITER-scale system could be built to ensure safety of the ITER device. This work is supported by U.S. DOE Contracts: DE-AC02-09CH11466, DE-FG02-99ER54519 AM08, and DE-SC0006757.

  15. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodson, R.; Rioja, M.; Imai, H.

    2013-06-15

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.

  16. Past Accomplishments and Future Challenges

    ERIC Educational Resources Information Center

    Danielson, Louis; Doolittle, Jennifer; Bradley, Renee

    2005-01-01

    Three broad issues continue to dramatically impact the education of children with specific learning disabilities (SLD): (1) the development and implementation of scientifically defensible methods of identification; (2) the development and implementation of scientific interventions to ensure that children with SLD have access to and make progress…

  17. Communication Technologies Promoting Educational Communities with Scholarship of Engagement

    ERIC Educational Resources Information Center

    Georgakopoulos, Alexia; Hawkins, Steven T.

    2013-01-01

    Purpose: This study aims to present Dramatic Problem Solving Facilitation Model (DPSFM) and Interactive Management (IM) as innovative alternative dispute resolution approaches that incorporate communication technologies in recording and analyzing data. DPSFM utilizes performance-based actions with facilitation methods to help participants design…

  18. Reconstruction-free sensitive wavefront sensor based on continuous position sensitive detectors.

    PubMed

    Godin, Thomas; Fromager, Michael; Cagniot, Emmanuel; Brunel, Marc; Aït-Ameur, Kamel

    2013-12-01

    We propose a new device that is able to perform highly sensitive wavefront measurements based on the use of continuous position sensitive detectors and without resorting to any reconstruction process. We demonstrate experimentally its ability to measure small wavefront distortions through the characterization of pump-induced refractive index changes in laser material. In addition, it is shown using computer-generated holograms that this device can detect phase discontinuities as well as improve the quality of sharp phase variations measurements. Results are compared to reference Shack-Hartmann measurements, and dramatic enhancements are obtained.

  19. Ductilizing bulk metallic glass composite by tailoring stacking fault energy.

    PubMed

    Wu, Y; Zhou, D Q; Song, W L; Wang, H; Zhang, Z Y; Ma, D; Wang, X L; Lu, Z P

    2012-12-14

    Martensitic transformation was successfully introduced to bulk metallic glasses as the reinforcement micromechanism. In this Letter, it was found that the twinning property of the reinforcing crystals can be dramatically improved by reducing the stacking fault energy through microalloying, which effectively alters the electron charge density redistribution on the slipping plane. The enhanced twinning propensity promotes the martensitic transformation of the reinforcing austenite and, consequently, improves plastic stability and the macroscopic tensile ductility. In addition, a general rule to identify effective microalloying elements based on their electronegativity and atomic size was proposed.

  20. Green Tunnel Construction Technology and Application

    NASA Astrophysics Data System (ADS)

    Zhang, J. L.; Shi, P. X.; Huang, J.; Li, H. G.; Zhou, X. Q.

    2018-05-01

    With the dramatic growth of urban tunnels in recent years, energy saving and environmental protection have received intensive attention in tunnel construction and operation. Drawing on the concept of green buildings, this paper proposes the concept of green tunnels. In connection with the key issues of tunnel design, construction, operation and maintenance, the major aspects of green tunnels, including prefabricated construction, noise control, ventilation and lighting energy saving, and digital intelligent maintenance, are discussed, and the future development of green tunnels is outlined with economic and social benefits as indicators.

  1. Direct diode-pumped Kerr-lens mode-locked Ti:sapphire laser

    PubMed Central

    Durfee, Charles G.; Storz, Tristan; Garlick, Jonathan; Hill, Steven; Squier, Jeff A.; Kirchner, Matthew; Taft, Greg; Shea, Kevin; Kapteyn, Henry; Murnane, Margaret; Backus, Sterling

    2012-01-01

    We describe a Ti:sapphire laser pumped directly with a pair of 1.2 W, 445 nm laser diodes. With over 30 mW average power at 800 nm and a measured pulse width of 15 fs, Kerr-lens mode-locked pulses are available with dramatically decreased pump cost. We propose a simple model to explain the observed highly stable Kerr-lens mode locking in spite of the fact that both the mode-locked and continuous-wave modes are smaller than the pump mode in the crystal. PMID:22714433

  2. A nickel catalyst for the addition of organoboronate esters to ketones and aldehydes.

    PubMed

    Bouffard, Jean; Itami, Kenichiro

    2009-10-01

    A Ni(cod)2/IPr catalyst promotes the intermolecular 1,2-addition of arylboronate esters to unactivated aldehydes and ketones. Diaryl, alkyl aryl, and dialkyl ketones show good reactivity under mild reaction conditions (≤80 °C, nonpolar solvents, no strong base or acid additives). A dramatic ligand effect favors either carbonyl addition (IPr) or C-OR cross-coupling (PCy3) with aryl ether substrates. A Ni(0)/Ni(II) catalytic cycle initiated by the oxidative cyclization of the carbonyl substrate is proposed.

  3. Avoided criticality and slow relaxation in frustrated two-dimensional models

    DOE PAGES

    Esterlis, Ilya; Kivelson, Steven A.; Tarjus, Gilles

    2017-10-23

    Here, frustration and the associated phenomenon of “avoided criticality” have been proposed as an explanation for the dramatic relaxation slowdown in glass-forming liquids. To test this, we have undertaken a Monte Carlo study of possibly the simplest such problem, the two-dimensional XY model with frustration corresponding to a small flux f per plaquette. At f = 0, there is a Berezinskii-Kosterlitz-Thouless transition at T*, but at any small but nonzero f, this transition is avoided and replaced (presumably) by a vortex-ordering transition at much lower temperatures. We thus have studied the evolution of the dynamics for small and moderate f as the system is cooled from above T* to below. Although we do find strongly temperature-dependent slowing of the dynamics as T crosses T* and that simultaneously the dynamics becomes more complex, neither effect is anywhere nearly as dramatic as the corresponding phenomena in glass-forming liquids. At the very least, this implies that the properties of supercooled liquids must depend on more than frustration and the existence of an avoided transition.

  4. The rising level of medical student debt: potential risk for a national default.

    PubMed

    Ariyan, S

    2000-04-01

    At the turn of the 20th century, mostly as a result of the Flexner report, medical education changed dramatically by establishing a scientific basis for the study of medicine within the institutions of the major universities. There have been major and dramatic changes in medicine during the past 80 years that have improved medical education in the United States, but these changes have also placed major economic strains on students who have educational debts. If medicine is a social responsibility to the public, then the public should share the responsibility of identifying and supporting new approaches to funding and financially managing the teaching of future physicians. There is no universal solution because there are various approaches institutions may take to structure these financial responsibilities. This article describes trends in medical student educational debt, identifies the financial needs of medical students, and proposes ways of addressing those needs to avert a possible national financial crisis among medical students. We must invest in medical students because they will be the leaders we need to help care for our society and our own families in the next century.

  5. Increased Gut Redox and Depletion of Anaerobic and Methanogenic Prokaryotes in Severe Acute Malnutrition

    PubMed Central

    Million, Matthieu; Tidjani Alou, Maryam; Khelaifia, Saber; Bachar, Dipankar; Lagier, Jean-Christophe; Dione, Niokhor; Brah, Souleymane; Hugon, Perrine; Lombard, Vincent; Armougom, Fabrice; Fromonot, Julien; Robert, Catherine; Michelle, Caroline; Diallo, Aldiouma; Fabre, Alexandre; Guieu, Régis; Sokhna, Cheikh; Henrissat, Bernard; Parola, Philippe; Raoult, Didier

    2016-01-01

    Severe acute malnutrition (SAM) is associated with inadequate diet, low levels of plasma antioxidants and gut microbiota alterations. The link between gut redox and microbial alterations, however, remains unexplored. By sequencing the gut microbiomes of 79 children of varying nutritional status from three centers in Senegal and Niger, we found a dramatic depletion of obligate anaerobes in malnutrition. This was confirmed in an individual patient data meta-analysis including 107 cases and 77 controls from 5 different African and Asian countries. Specifically, several species of the Bacteroidaceae, Eubacteriaceae, Lachnospiraceae and Ruminococcaceae families were consistently depleted while Enterococcus faecalis, Escherichia coli and Staphylococcus aureus were consistently enriched. Further analyses on our samples revealed increased fecal redox potential, decreased total bacterial number and dramatic Methanobrevibacter smithii depletion. Indeed, M. smithii was detected in more than half of the controls but in none of the cases. No causality was demonstrated but, based on our results, we propose a unifying theory linking microbiota specificity, lacking anaerobes and archaea, to low antioxidant nutrients, and lower food conversion. PMID:27183876

  6. Simplifying study of fever's dramatic relief of autistic behavior.

    PubMed

    Good, Peter

    2017-02-01

    Dramatic relief of autistic behavior by infectious fever continues to tantalize parents and practitioners, yet researchers still hesitate to study its physiology/biochemistry, fearing stress and heat of brain imaging, contagion, and fever's complexity. Yet what could be more revealing than a common event that virtually 'normalizes' autistic behavior for a time? This paper proposes study of three simplified scenarios: (1) improvements appearing hours before fever, (2) return of autistic behavior soon after fever, (3) improvements persisting long after fever. Each scenario limits some risk - and some explanation - inviting triangulation of decisive factor(s) in relief and recurrence. Return of autistic behavior after fever may be most revealing. The complex mechanisms that generated fever have all abated; simpler cooling mechanisms prevail - how many plausible explanations can there be? The decisive factor in fever's benefit is concluded to be water drawn/carried from brain myelin and astrocytes by osmolytes glutamine and taurine released from muscles and brain; the decisive factor in return of autistic behavior after fever is return of water. Copyright © 2016 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  7. Coarse cluster enhancing collaborative recommendation for social network systems

    NASA Astrophysics Data System (ADS)

    Zhao, Yao-Dong; Cai, Shi-Min; Tang, Ming; Shang, Min-Sheng

    2017-10-01

    Traditional collaborative filtering based recommender systems for social network systems place very high demands on time complexity, because similarities are computed for all pairs of users from their resource usage and annotation actions, which strongly suppresses recommendation speed. In this paper, to overcome this drawback, we propose a novel approach, namely coarse clustering, which partitions similar users and associated items at high speed to enhance user-based collaborative filtering, and then develop a fast collaborative user model for social tagging systems. The experimental results based on the Delicious dataset show that the proposed model is able to dramatically reduce the processing time by more than 90% and relatively improve the accuracy in comparison with ordinary user-based collaborative filtering, and is robust to the initial parameter. Most importantly, the proposed model can be conveniently extended by introducing more user information (e.g., profiles) and can practically be applied to large-scale social network systems to enhance recommendation speed without accuracy loss.
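
    A minimal sketch of the cluster-then-filter idea follows: restrict the neighbor search to a user's own coarse cluster so that similarities are never computed across all user pairs. KMeans stands in here for the paper's coarse-clustering step, and the toy rating matrix is random; both are assumptions for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics.pairwise import cosine_similarity

        # Cluster-enhanced user-based CF: similarities are computed only
        # inside a user's coarse cluster, not over all user pairs.
        rng = np.random.default_rng(1)
        ratings = (rng.random((200, 50)) > 0.9).astype(float)  # toy user-item matrix

        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(ratings)

        def recommend(user, top_k=5):
            members = np.flatnonzero(labels == labels[user])   # same coarse cluster
            sims = cosine_similarity(ratings[user:user + 1], ratings[members])[0]
            scores = sims @ ratings[members]                   # weighted item scores
            scores[ratings[user] > 0] = -np.inf                # drop already-seen items
            return np.argsort(scores)[::-1][:top_k]

        print(recommend(0))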

  8. Point Reward System: A Method of Assessment That Accommodates a Diversity of Student Abilities and Interests and Enhances Learning

    ERIC Educational Resources Information Center

    Derado, Josip; Garner, Mary L.; Tran, Thu-Hang

    2016-01-01

    Students' abilities and interests vary dramatically in the college mathematics classroom. How do we teach all of these students effectively? In this paper, we present the Point Reward System (PRS), a new method of assessment that addresses this problem. We designed the PRS with three main goals in mind: to increase the retention rates; to keep all…

  9. An Artificial Functional Family Filter in Homolog Searching in Next-generation Sequencing Metagenomics

    PubMed Central

    Du, Ruofei; Mercante, Donald; Fang, Zhide

    2013-01-01

    In functional metagenomics, BLAST homology search is a common method to classify metagenomic reads into protein/domain sequence families such as Clusters of Orthologous Groups of proteins (COGs) in order to quantify the abundance of each COG in the community. The resulting functional profile of the community is then used in downstream analysis to correlate changes in abundance with environmental perturbation, clinical variation, and so on. However, the short read length produced by next-generation sequencing technologies poses a barrier in this approach, essentially because similarity significance cannot be discerned when searching with short reads. Consequently, artificial functional families are produced, and those with a large number of assigned reads dramatically decrease the accuracy of the functional profile. No method was previously available to address this problem; we intend to fill this gap in this paper. We reveal that the BLAST similarity scores of homologues for short reads from COG protein members' coding sequences are distributed differently from the scores of reads derived elsewhere. We show that, by choosing an appropriate score cutoff, we are able to filter out most artificial families while preserving sufficient information to build the functional profile. We also show that, by the combined application of BLAST and RPS-BLAST, some artificial families with large read counts can be further identified after the score-cutoff filtration. Evaluated on three experimental metagenomic datasets with different coverages, the proposed method is robust against read coverage and consistently outperforms the E-value cutoff methods currently used in the literature. PMID:23516532
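
    The core of the proposed filter is a hard cutoff on the BLAST similarity score before counting reads into COG families. A minimal sketch, assuming BLAST tabular output (-outfmt 6, bit score in column 12) and an externally supplied subject-to-COG mapping; the cutoff value of 60 is a placeholder, not the threshold derived in the paper.

        import csv
        from collections import Counter

        # Drop read->COG assignments whose BLAST bit score falls below the
        # cutoff, then rebuild the COG abundance profile. The cutoff and the
        # subject-to-COG mapping are illustrative assumptions.
        SCORE_CUTOFF = 60.0

        def cog_profile(blast_tabular_path, subject_to_cog):
            profile = Counter()
            with open(blast_tabular_path) as fh:
                for row in csv.reader(fh, delimiter="\t"):   # BLAST -outfmt 6
                    subject, bitscore = row[1], float(row[11])
                    if bitscore >= SCORE_CUTOFF:             # filter artifacts
                        profile[subject_to_cog.get(subject, "unknown")] += 1
            return profile

        # usage: cog_profile("reads_vs_cogs.tsv", {"gi|123": "COG0001"})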

  10. Blood vessel classification into arteries and veins in retinal images

    NASA Astrophysics Data System (ADS)

    Kondermann, Claudia; Kondermann, Daniel; Yan, Michelle

    2007-03-01

    The prevalence of diabetes is expected to increase dramatically in coming years; already today it accounts for a major proportion of the health care budget in many countries. Diabetic retinopathy (DR), a microvascular complication very often seen in diabetes patients, is the most common cause of visual loss in the working-age population of developed countries today. Since the possibility of slowing or even stopping the progress of this disease depends on the early detection of DR, an automatic analysis of fundus images would be of great help to the ophthalmologist, due to the small size of the symptoms and the large number of patients. An important symptom of DR is abnormally wide veins, leading to an unusually low ratio of the average diameter of arteries to veins (AVR). Other diseases, such as high blood pressure or diseases of the pancreas, also have an abnormal AVR value as one symptom. To determine the AVR, a classification of vessels as arteries or veins is indispensable. To our knowledge, despite its importance, there have been only two previous approaches to vessel classification. We therefore propose an improved method. We compare two feature extraction methods and two classification methods based on support vector machines and neural networks. Given a hand-segmentation of vessels, our approach achieves 95.32% correctly classified vessel pixels. This value decreases by 10% on average if the result of a segmentation algorithm is used as the basis for classification.
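
    A hedged sketch of the pixel-level classification step is shown below, using an RBF-kernel support vector machine, one of the two classifier families the paper compares. The three-channel features and class-conditional distributions are synthetic stand-ins; real features would be extracted from the segmented fundus image.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Artery/vein pixel classification with an SVM on synthetic RGB-like
        # features; the class means and spreads are invented for illustration.
        rng = np.random.default_rng(2)
        n = 1000
        arteries = rng.normal([180, 60, 50], 15, (n, 3))   # brighter red channel
        veins = rng.normal([120, 40, 45], 15, (n, 3))      # darker vessels
        X = np.vstack([arteries, veins])
        y = np.array([0] * n + [1] * n)                    # 0 = artery, 1 = vein

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(Xtr, ytr)
        print(f"pixel accuracy: {clf.score(Xte, yte):.3f}")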

  11. Simultaneous determination of 30 ginsenosides in Panax ginseng preparations using ultra performance liquid chromatography

    PubMed Central

    Park, Hee-Won; In, Gyo; Han, Sung-Tai; Lee, Myoung-Woo; Kim, So-Young; Kim, Kyung-Tack; Cho, Byung-Goo; Han, Gyeong-Ho; Chang, Il-Moo

    2013-01-01

    A quick and simple method for simultaneous determination of the 30 ginsenosides (ginsenoside Ro, Rb1, Rb2, Rc, Rd, Re, Rf, Rg1, 20(S)-Rg2, 20(R)-Rg2, 20(S)-Rg3, 20(R)-Rg3, 20(S)-Rh1, 20(S)-Rh2, 20(R)-Rh2, F1, F2, F4, Ra1, Rg6, Rh4, Rk3, Rg5, Rk1, Rb3, Rk2, Rh3, compound Y, compound K, and notoginsenoside R1) in Panax ginseng preparations was developed and validated using an ultra performance liquid chromatography photodiode array detector. The separation of the 30 ginsenosides was efficiently achieved on an Acquity BEH C-18 column with gradient elution containing phosphoric acid. In particular, the chromatogram of ginsenoside Ro was dramatically enhanced by adding phosphoric acid. Under optimized conditions, the detection limits were 0.4 to 1.7 mg/L, and the calibration curves of the peak areas for the 30 ginsenosides were linear over three orders of magnitude with correlation coefficients greater than 0.999. The accuracy of the method was tested by recovery measurements of spiked samples, which yielded good results of 89% to 118%. Overall, the proposed method may be helpful in the development and quality control of P. ginseng preparations because of its wide range of applications owing to the simultaneous analysis of many kinds of ginsenosides. PMID:24235860

  12. Development of a multi-matrix LC-MS/MS method for urea quantitation and its application in human respiratory disease studies.

    PubMed

    Wang, Jianshuang; Gao, Yang; Dorshorst, Drew W; Cai, Fang; Bremer, Meire; Milanowski, Dennis; Staton, Tracy L; Cape, Stephanie S; Dean, Brian; Ding, Xiao

    2017-01-30

    In human respiratory disease studies, liquid samples such as nasal secretion (NS), lung epithelial lining fluid (ELF), or upper airway mucosal lining fluid (MLF) are frequently collected, but their volumes often remain unknown. The lack of volume information makes it hard to estimate the actual concentration of recovered active pharmaceutical ingredient or biomarkers. Urea has been proposed to serve as a sample volume marker because it can freely diffuse through most body compartments and is less affected by disease states. Here, we report an easy and reliable LC-MS/MS method for cross-matrix measurement of urea in serum, plasma, universal transfer medium (UTM), synthetic absorptive matrix elution buffer 1 (SAMe1) and synthetic absorptive matrix elution buffer 2 (SAMe2), which are commonly sampled in human respiratory disease studies. The method uses two stable-isotope-labeled urea isotopologues, [15N2]-urea and [13C,15N2]-urea, as the surrogate analyte and the internal standard, respectively. This approach provides the best measurement consistency across different matrices. The analyte extraction was individually optimized in each matrix. Specifically, in UTM, SAMe1 and SAMe2, the unique salting-out assisted liquid-liquid extraction (SALLE) not only dramatically reduces the matrix interferences but also improves the assay recovery. The use of a HILIC column largely increases the analyte retention. The typical run time is 3.6 min, which allows for high-throughput analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Permeability estimations and frictional flow features passing through porous media comprised of structured microbeads

    NASA Astrophysics Data System (ADS)

    Shin, C.

    2017-12-01

    Permeability estimation has been extensively researched in diverse fields; however, methods that suitably consider varying geometries and changes within the flow region (for example, hydraulic fractures closing over several years) are yet to be developed. Therefore, in the present study a new permeability estimation method is presented based on the generalized Darcy friction-flow relation, in particular by examining frictional flow parameters and the characteristics of their variations. For this examination, computational fluid dynamics (CFD) simulations of simple hydraulic fractures filled with five layers of structured microbeads, accompanied by geometry changes and flow transitions, are performed. It was confirmed that the main structures and shapes of each flow path are preserved even under geometry variations within the porous media; however, the scarcity and discontinuity of streamlines increase dramatically in the transient- and turbulent-flow regions. Quantitative and analytic examinations of the frictional flow features were also performed. Accordingly, modified frictional flow parameters were successfully presented as similarity parameters of porous flows. In conclusion, the generalized Darcy friction-flow relation and the friction equivalent permeability (FEP) equation were both modified using the similarity parameters. For verification, the FEP values of the other aperture models were estimated and confirmed to agree well with the original permeability values. Ultimately, the proposed and verified method is expected to efficiently estimate permeability variations in porous media with changing geometric factors and flow regions, including such instances as hydraulic fracture closings.
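
    For orientation, the baseline that the paper generalizes is the plain Darcy estimate k = Q * mu * L / (A * dP). The sketch below computes it for illustrative values; the friction-equivalent permeability (FEP) relation and the similarity parameters developed in the study are not reproduced here.

        # Plain Darcy permeability estimate, shown only as the baseline the
        # paper generalizes; input values are illustrative.

        def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
            """All quantities in SI units; returns permeability in m^2."""
            return flow_rate * viscosity * length / (area * pressure_drop)

        k = darcy_permeability(flow_rate=1e-8,      # m^3/s
                               viscosity=1e-3,      # Pa*s (water)
                               length=0.1,          # m
                               area=1e-4,           # m^2
                               pressure_drop=1e5)   # Pa
        print(f"k = {k:.2e} m^2")  # 1e-13 m^2, roughly 0.1 darcy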

  14. Clinical Interviewing with Older Adults

    ERIC Educational Resources Information Center

    Mohlman, Jan; Sirota, Karen Gainer; Papp, Laszlo A.; Staples, Alison M.; King, Arlene; Gorenstein, Ethan E.

    2012-01-01

    Over the next few decades the older adult population will increase dramatically, and prevalence rates of psychiatric disorders are also expected to increase in the elderly cohort. These demographic projections highlight the need for diagnostic instruments and methods that are specifically tailored to older adults. The current paper discusses the…

  15. Medical subject heading (MeSH) annotations illuminate maize genetics and evolution

    USDA-ARS?s Scientific Manuscript database

    In the modern era, high-density marker panels and/or whole-genome sequencing,coupled with advanced phenotyping pipelines and sophisticated statistical methods, have dramatically increased our ability to generate lists of candidate genes or regions that are putatively associated with phenotypes or pr...

  16. Diversity Training with a Dramatic Flair.

    ERIC Educational Resources Information Center

    Wilson, Jennifer B.

    1998-01-01

    Describes the "social action theater" approach to diversity training in colleges and universities. The method uses one-act plays to present diversity issues in a way that creates conflict among characters and promotes audience discussion. Issues in using social-action theater are examined, including benefits, determining level of campus…

  17. Drama as a Tool in Education.

    ERIC Educational Resources Information Center

    Hardy, Sister Marie Paula

    Psychological, aesthetic, and pedagogical justifications for using drama in education are discussed. An examination of Dorothy Heathcote's method and a practical discussion of moves in beginning and continuing dramatic activity are discussed. Mrs. Heathcote is a Drama Staff Tutor, Institute of Education, University of Newcastle-Upon-Tyne. The…

  18. Dramatic action: A theater-based paradigm for analyzing human interactions

    PubMed Central

    Raindel, Noa; Alon, Uri

    2018-01-01

    Existing approaches to describe social interactions consider emotional states or use ad-hoc descriptors for microanalysis of interactions. Such descriptors are different in each context thereby limiting comparisons, and can also mix facets of meaning such as emotional states, short term tactics and long-term goals. To develop a systematic set of concepts for second-by-second social interactions, we suggest a complementary approach based on practices employed in theater. Theater uses the concept of dramatic action, the effort that one makes to change the psychological state of another. Unlike states (e.g. emotions), dramatic actions aim to change states; unlike long-term goals or motivations, dramatic actions can last seconds. We defined a set of 22 basic dramatic action verbs using a lexical approach, such as ‘to threaten’–the effort to incite fear, and ‘to encourage’–the effort to inspire hope or confidence. We developed a set of visual cartoon stimuli for these basic dramatic actions, and find that people can reliably and reproducibly assign dramatic action verbs to these stimuli. We show that each dramatic action can be carried out with different emotions, indicating that the two constructs are distinct. We characterized a principal valence axis of dramatic actions. Finally, we re-analyzed three widely-used interaction coding systems in terms of dramatic actions, to suggest that dramatic actions might serve as a common vocabulary across research contexts. This study thus operationalizes and tests dramatic action as a potentially useful concept for research on social interaction, and in particular on influence tactics. PMID:29518101

  19. Autoregressive modeling for the spectral analysis of oceanographic data

    NASA Technical Reports Server (NTRS)

    Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.

    1989-01-01

    Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to spectral analysis is thus often inapplicable, or where applicable, it provides questionable results. Here, through comparative analysis with the FT for different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling to perform spectral analysis of gappy, finite-length series are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach as compared with that of the FT improves. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
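
    A minimal sketch of AR spectral estimation via the Yule-Walker equations is given below, applied to a 17-point series like the shortest case discussed above. The AR order (p = 4) and the test signal are illustrative assumptions; the point is that the AR spectrum can resolve a peak where a 17-point periodogram would struggle.

        import numpy as np

        # Yule-Walker AR fit and model spectrum for a very short record.
        def yule_walker(x, p):
            x = x - x.mean()
            r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                          for k in range(p + 1)])              # autocovariances
            R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
            a = np.linalg.solve(R, r[1:])                      # AR coefficients
            sigma2 = r[0] - np.dot(a, r[1:])                   # innovation variance
            return a, sigma2

        def ar_spectrum(a, sigma2, freqs):
            z = np.exp(-2j * np.pi * freqs)
            denom = 1 - sum(a[k] * z ** (k + 1) for k in range(len(a)))
            return sigma2 / np.abs(denom) ** 2

        rng = np.random.default_rng(3)
        t = np.arange(17)                                      # a very short record
        x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(17)
        a, s2 = yule_walker(x, p=4)
        f = np.linspace(0, 0.5, 256)
        print(f[np.argmax(ar_spectrum(a, s2, f))])             # peak should land near 0.2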

  20. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by a lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel, and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples the point-relaxation approach of the Data-Parallel Lower-Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multiprocessors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits greater robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers within a Newton method for the nonlinear CFD solutions, with the hybrid LU-SGS scheme used as a parallelizable preconditioner. Two iterative methods are tested: Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as with the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time accuracy, which allows the use of larger timesteps and results in CPU savings of 20-35%.

  1. Coupled Thermo-Hydro-Mechanical Numerical Framework for Simulating Unconventional Formations

    NASA Astrophysics Data System (ADS)

    Garipov, T. T.; White, J. A.; Lapene, A.; Tchelepi, H.

    2016-12-01

    Unconventional deposits are found in all world oil provinces. Modeling these systems is challenging, however, due to the complex thermo-hydro-mechanical processes that govern their behavior. As a motivating example, we consider in situ thermal processing of oil shale deposits. When oil shale is heated to sufficient temperatures, kerogen can be converted to oil and gas products over a relatively short timespan. This phase change dramatically impacts both the mechanical and hydrologic properties of the rock, leading to strongly coupled THMC interactions. Here, we present a numerical framework for simulating tightly coupled chemistry, geomechanics, and multiphase flow within a reservoir simulator (the AD-GPRS General Purpose Research Simulator). We model changes in the constitutive behavior of the rock using a thermoplasticity model that accounts for microstructural evolution. The multi-component, multiphase flow and transport processes of both mass and heat are modeled at the macroscopic (e.g., Darcy) scale. The phase compositions and properties are described by a cubic equation of state; Arrhenius-type chemical reactions are used to represent kerogen conversion. The system of partial differential equations is discretized using a combination of finite volumes and finite elements for the flow and mechanics problems, respectively. Fully implicit and sequentially implicit methods are used to solve the resulting nonlinear problem. The proposed framework is verified against available analytical and numerical benchmark cases. We demonstrate the efficiency, performance, and capabilities of the proposed simulation framework by analyzing near-well deformation in an oil shale formation.
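
    As a small illustration of the chemistry component, the sketch below evaluates a first-order Arrhenius conversion law of the form dX/dt = A exp(-Ea/RT)(1 - X), the general type used to represent kerogen conversion. The frequency factor, activation energy, and heating temperatures are placeholder values, not calibrated oil-shale kinetics from AD-GPRS.

        import math

        R = 8.314    # J/(mol K), gas constant
        A = 1e13     # 1/s, frequency factor (assumed)
        Ea = 2.2e5   # J/mol, activation energy (assumed)

        def conversion(T_kelvin, days):
            # Closed-form solution of dX/dt = k (1 - X) with X(0) = 0.
            k = A * math.exp(-Ea / (R * T_kelvin))   # Arrhenius rate constant
            return 1.0 - math.exp(-k * days * 86400.0)

        for T in (580.0, 620.0, 660.0):              # illustrative heating levels
            print(f"T = {T:.0f} K: X = {conversion(T, days=30):.3f}")

    Running this shows the strong temperature sensitivity that makes the coupling stiff: a few tens of kelvin separate negligible conversion from near-complete conversion over a 30-day heating period.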

  2. "Stereo Compton cameras" for the 3-D localization of radioisotopes

    NASA Astrophysics Data System (ADS)

    Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.

    2014-11-01

    The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there is a particularly urgent need to develop "gamma cameras" that can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of the two MPPC arrays located at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that the detection efficiency reaches up to 0.54%, more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays using two Compton cameras, enabling the 3-D positional measurement of radioactive isotopes for the first time. From single-point-source simulation data, we confirmed that the source position and distance could typically be determined to within 2 meters' accuracy, and that two or more sources are clearly separated by event selection in two-point-source simulation data.
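
    The DOI estimate from the pulse-height ratio can be sketched in a few lines. A simple linear light-sharing model is assumed below, so that depth is proportional to the fraction of light collected at one end; a real detector would use a calibrated response rather than this formula, and the crystal length is illustrative.

        # Depth-of-interaction (DOI) from the pulse heights of MPPC arrays at
        # the two ends of a scintillator; a linear light-sharing model is an
        # assumption here, not the calibrated response of the GAGG block.

        L = 30.0   # mm, crystal length (illustrative)

        def doi_from_pulse_heights(ph_top, ph_bottom):
            # Under the linear model, the bottom array's share of the total
            # light maps directly to depth z measured from the top face.
            return L * ph_bottom / (ph_top + ph_bottom)

        print(doi_from_pulse_heights(ph_top=80.0, ph_bottom=20.0))  # z = 6.0 mm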

  3. A multistage framework for dismount spectral verification in the VNIR

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton

    2013-05-01

    A multistage algorithm suite is proposed for a specific target detection/verification scenario, where a visible/near-infrared hyperspectral (HS) sample is assumed to be available as the only cue from a reference image frame. The target is a suspicious dismount. The suite first applies a biometric-based human skin detector to focus the attention of the search. Using as reference all of the bands in the spectral cue, the suite follows with a Bayesian Lasso inference stage designed to isolate pixels representing the specific material type cued by the user and worn by the human target (e.g., hat, jacket). In essence, the search focuses on testing material types near skin pixels. The third stage imposes an additional constraint through RGB color quantization and distance-metric checking, limiting the search even further to material types in the scene whose visible color is similar to the target's. The proposed cumulative-evidence strategy produced encouraging range-invariant results on real HS imagery, dramatically reducing the false alarm rate to zero on the example dataset. These results were in contrast to the results independently produced by each one of the suite's stages, as the spatial areas of each stage's high-false-alarm outcome were mutually exclusive in the imagery. These conclusions also apply to results produced by other standard methods, in particular the kernel SVDD (support vector data description) and the matched filter, as shown in the paper.

  4. Orbits in the T Tauri triple system observed with SPHERE

    NASA Astrophysics Data System (ADS)

    Köhler, R.; Kasper, M.; Herbst, T. M.; Ratzka, T.; Bertrang, G. H.-M.

    2016-03-01

    Aims: We present new astrometric measurements of the components in the T Tauri system and derive new orbits and masses. Methods: T Tauri was observed during the science verification time of the new extreme adaptive optics facility SPHERE at the VLT. We combine the new positions with recalibrated NACO-measurements and data from the literature. Model fits for the orbits of T Tau Sa and Sb around each other and around T Tau N yield orbital elements and individual masses of the stars Sa and Sb. Results: Our new orbit for T Tau Sa/Sb is in good agreement with other recent results, which indicates that enough of the orbit has been observed for a reliable fit. The total mass of T Tau S is 2.65 ± 0.11 M⊙. The mass ratio MSb:MSa is 0.25 ± 0.03, which yields individual masses of MSa = 2.12 ± 0.10 M⊙ and MSb = 0.53 ± 0.06 M⊙. If our current knowledge of the orbital motions is used to compute the position of the southern radio source in the T Tauri system, then we find no evidence of the proposed dramatic change in its path. Based on observations collected at the European Southern Observatory, Chile, proposals number 070.C-0162, 072.C-0593, 074.C-0699, 074.C-0396, 078.C-0386, 380.C-0179, 382.C-0324, 60.A-9363 and 60.A-9364.

  5. Non-supervised method for early forest fire detection and rapid mapping

    NASA Astrophysics Data System (ADS)

    Artés, Tomás; Boca, Roberto; Liberta, Giorgio; San-Miguel, Jesús

    2017-09-01

    Natural hazards are a challenge for society, and the scientific community has greatly increased its efforts in prevention and damage mitigation. The most important points for minimizing natural hazard damage are monitoring and prevention. This work focuses particularly on forest fires. This phenomenon depends on small-scale factors, and fire behavior is strongly related to the local weather. Forest fire spread forecasting is a complex task because of the scale of the phenomena, the uncertainty of the input data, and the time constraints of forest fire monitoring. Forest fire simulators have been improved with calibration techniques that reduce data uncertainty and take into account complex factors such as the atmosphere. Such techniques dramatically increase the computational cost in a context where the time available to provide a forecast is a hard constraint. Furthermore, early mapping of the fire becomes crucial to assess it. In this work, a non-supervised method for early forest fire detection and mapping is proposed. As its main sources, the method uses daily thermal anomalies from MODIS and VIIRS combined with a land cover map to identify and monitor forest fires with very few resources. The method relies on a clustering technique (the DBSCAN algorithm) and on filtering thermal anomalies to detect forest fires. In addition, a concave hull (alpha-shape algorithm) is applied to obtain a rapid, coarse mapping of the fire area. The method therefore lends itself to high-resolution forest fire rapid mapping based on satellite imagery, using the extent of each early fire detection, and points the way to automatic rapid mapping of fires at high resolution while processing as little data as possible.
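
    A minimal sketch of the detection stage follows: DBSCAN groups daily thermal-anomaly coordinates so that each dense cluster becomes one candidate fire, while isolated anomalies are labeled as noise and discarded. The eps and min_samples values and the hot-spot coordinates are illustrative, and a simple bounding box stands in for the alpha-shape hull used for rapid mapping.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Cluster daily thermal-anomaly coordinates; DBSCAN labels lone
        # anomalies as -1 (noise), and each remaining cluster is one fire.
        anomalies = np.array([                 # lon, lat of hot spots (invented)
            [2.10, 41.40], [2.11, 41.41], [2.12, 41.40], [2.11, 41.39],  # fire A
            [3.50, 42.00], [3.51, 42.01], [3.49, 42.00],                 # fire B
            [5.00, 40.00],                                               # isolated
        ])

        labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(anomalies)
        for fire_id in sorted(set(labels) - {-1}):
            pts = anomalies[labels == fire_id]
            print(f"fire {fire_id}: {len(pts)} anomalies, "
                  f"bbox {pts.min(axis=0)} - {pts.max(axis=0)}")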

  6. America's emphasis on welfare: is it children's welfare or corporate welfare?

    PubMed

    Raphel, Sally

    2003-01-01

    States are facing a collective budget shortfall of $40 billion and may have to cut millions from programs that serve children and families. The 2004 proposal exacerbates the state fiscal crisis. Because of linkages between federal and state taxes, federal tax cuts have added to the loss of state revenue available to assist with programs. In addition, the federal government is likely to cut human service funding to shore up its $157 billion deficit, and many non-profit agencies are seeing donations slow dramatically because of the lagging economy and stock market. It seems evident that the proposed federal budget cutbacks exacerbate state reductions in children's services, worsen the federal budget deficit, greatly increase the nation's debt, and mortgage America's future by passing the lack of investment in youth potential on to the next generation.

  7. When I know who "we" are, I can be "me": the primary role of cultural identity clarity for psychological well-being.

    PubMed

    Taylor, Donald M; Usborne, Esther

    2010-02-01

    Collective trauma, be it through colonization (e.g., Aboriginal Peoples), slavery (e.g., African Americans) or war, has a dramatic impact on the psychological well-being of each and every individual member of the collective. Thus, interventions are often conceptualized and delivered at the individual level with a view to minimizing the psychological disequilibrium of each individual. In contrast, we propose a theory of self that emphasizes the primacy of cultural identity for psychological well-being. We present a series of studies that illustrate the importance of cultural identity clarity for personal identity and for psychological well-being. Our theoretical model proposes that interventions aimed at clarifying cultural identity may play a constructive role in the promotion of the well-being of group members exposed to collective trauma.

  8. Low-frequency wideband vibration energy harvesting by using frequency up-conversion and quin-stable nonlinearity

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Zhang, Qichang; Wang, Wei

    2017-07-01

    This work presents models and experiments of an impact-driven, frequency up-converted wideband piezoelectric vibration energy harvester with a quintuple-well potential induced by the combined effect of magnetic nonlinearity and mechanical piecewise linearity. Analysis shows that interwell motions during the coupled vibration period increase electrical power output in comparison with conventional frequency up-conversion technology. In addition, the quintuple-well potential, with its shallower potential wells, extends the harvester's operating bandwidth to lower frequencies. Experiments demonstrate that the proposed approach can dramatically boost the measured power of the energy harvester by as much as 35 times, while its lower cut-off frequency is half that of a conventional counterpart. These results show that the proposed approach holds promise for powering portable wireless smart devices from low-intensity, low-frequency vibration sources.

  9. Credit Cards: What You Don't Know Can Cost You!

    ERIC Educational Resources Information Center

    Detweiler, Gerri

    1993-01-01

    The role of credit cards in personal finance has increased dramatically over the past two decades. Complex interest computation methods and additional fees often boost the price of credit card loans and help make credit cards the most profitable type of consumer loan for many lenders. (Author/JOW)
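
    One common interest computation of the kind alluded to here is the average daily balance method; a simplified sketch (assumed flat APR, ignoring grace periods and compounding subtleties):

    ```python
    # Average daily balance method, simplified: the balance is averaged over the
    # billing cycle, then charged at the daily periodic rate for the cycle length.
    daily_balances = [500.0] * 10 + [900.0] * 20     # balance rose mid-cycle
    apr = 0.199                                      # 19.9% annual percentage rate
    avg_balance = sum(daily_balances) / len(daily_balances)
    charge = avg_balance * (apr / 365) * len(daily_balances)
    print(round(avg_balance, 2), round(charge, 2))   # 766.67 and about 12.54
    ```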

  10. Overview of Computer Simulation Modeling Approaches and Methods

    Treesearch

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  11. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  12. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  13. DEVELOPMENT OF AN EPA METHOD FOR PERFLUOROCALKYL COMPOUNDS IN DRINKING WATER

    EPA Science Inventory

    Perfluoroalkyl compounds (PFCs) have been manufactured for over 50 years and their use has dramatically increased over the years. Due to their unique properties of repelling both water and oil, PFCs have been used in a wide variety of applications. In 2001, identification of or...

  14. Proficiency Forms and Vocational Pedagogical Principles

    ERIC Educational Resources Information Center

    Inglar, Tron

    2014-01-01

    This article is based on research on experiential learning and vocational teachers. The author describes his analysis of curricula for vocational teacher education and explains the education's purpose, content, and methods. In 1975, education dramatically changed from an academic tradition with dissemination of many disciplines to a holistic…

  15. Methods for transcriptomic analyses of the porcine host immune response: application to Salmonella infection using microarrays

    USDA-ARS?s Scientific Manuscript database

    Technological developments in both the collection and analysis of molecular genetic data over the past few years have provided new opportunities for an improved understanding of the global response to pathogen exposure. Such developments are particularly dramatic for scientists studying the pig, whe...

  16. BULK AND TEMPLATE-FREE SYNTHESIS OF SILVER NANOWIRES USING CAFFEINE AT ROOM TEMPERATURE

    EPA Science Inventory

    A simple eco-friendly one-pot method is described to synthesize bulk quantities of nanowires of silver (Ag) using caffeine without the need of reducing agent, surfactants, and/or large amounts of insoluble templates. Chemical reduction of silver salts with caffeine dramatically c...

  17. Competency-Based Teaching of Shakespeare: How to Master "King Lear"

    ERIC Educational Resources Information Center

    Ribes, Purificación

    2011-01-01

    Shakespeare's hypotext has invited so many hypertextual transformations over the last four hundred years that twenty-first century students deserve the chance of digging into this rich mine of information and dramatic possibilities. The practical approach of a competency-based teaching method offers great advantages over traditional practices in…

  18. COMPARISON OF QPCR METHODS FOR THE DETECTION OF VITELLOGENIN EXPRESSION IN FATHEAD MINNOWS

    EPA Science Inventory

    Male fathead minnows (FHM) normally express little if any of the egg yolk precursor protein vitellogenin (Vg). However, when exposed to estrogenic compounds such as 17a-ethynylestradiol (EE2), transcriptional levels of Vg rise dramatically and result in decreased fecundity and i...

  19. Accommodating Students' Sensory Learning Modalities in Online Formats

    ERIC Educational Resources Information Center

    Allison, Barbara N.; Rehm, Marsha L.

    2016-01-01

    Online classes have become a popular and viable method of educating students in both K-12 settings and higher education, including in family and consumer sciences (FCS) programs. Online learning dramatically affects the way students learn. This article addresses how online learning can accommodate the sensory learning modalities (sight, hearing,…

  20. The Element of Drama in Strategic Interaction.

    ERIC Educational Resources Information Center

    Di Pietro, Robert J.

    The strategic interaction method is based on the principle that dramatic tension is the essential ingredient in second language learning, but unlike the drama built on audience spectatorship, classroom drama builds within each student involved in the interaction. Students take scenarios, thematically cohesive events, and create their own dialog as…

  1. After the In-Service Course Challenges of Technology Integration

    ERIC Educational Resources Information Center

    Frederick, Gregory R.; Schweizer, Heidi; Lowe, Robert

    2006-01-01

    This case study chronicles one teacher's experience in the semester after an in-service course, Using Technology for Instruction and Assessment. Results suggest that success in the course and good intentions do not necessarily translate into dramatic change in methods or media of instruction. Student mobility and special needs, unexpected…

  2. Retinoblastoma-like RRB gene of arabidopsis thaliana

    DOEpatents

    Durfee, Tim; Feiler, Heidi; Gruissem, Wilhelm; Jenkins, Susan; Roe, Judith; Zambryski, Patricia

    2004-02-24

    This invention provides methods and compositions for altering the growth, organization, and differentiation of plant tissues. The invention is based on the discovery that, in plants, genetically altering the levels of Retinoblastoma-related gene (RRB) activity produces dramatic effects on the growth, proliferation, organization, and differentiation of plant meristem.

  3. The Alzheimer Disease Protective Mutation A2T Modulates Kinetic and Thermodynamic Properties of Amyloid-β (Aβ) Aggregation*

    PubMed Central

    Benilova, Iryna; Gallardo, Rodrigo; Ungureanu, Andreea-Alexandra; Castillo Cano, Virginia; Snellinx, An; Ramakers, Meine; Bartic, Carmen; Rousseau, Frederic; Schymkowitz, Joost; De Strooper, Bart

    2014-01-01

    Missense mutations in alanine 673 of the amyloid precursor protein (APP), which corresponds to the second alanine of the amyloid β (Aβ) sequence, have a dramatic impact on the risk for Alzheimer disease; A2V is causative, and A2T is protective. Assuming a crucial role of Aβ in neurodegeneration, we hypothesized that both the A2V and A2T mutations cause distinct changes in Aβ properties that may at least partially explain these completely different phenotypes. Using human APP-overexpressing primary neurons, we observed significantly decreased Aβ production in the A2T mutant along with enhanced Aβ generation in the A2V mutant, confirming earlier data from non-neuronal cell lines. More importantly, thioflavin T fluorescence assays revealed that the mutations, while having little effect on Aβ42 peptide aggregation, dramatically change the properties of the Aβ40 pool, with A2V accelerating and A2T delaying aggregation of the Aβ peptides. In line with the kinetic data, Aβ A2T demonstrated an increase in solubility at equilibrium, an effect that was also observed in all mixtures of the A2T mutant with wild-type Aβ40. We propose that, in addition to the reduced β-secretase cleavage of APP, the impaired propensity to aggregate may be part of the protective effect conferred by the A2T substitution. The interpretation of the protective effect of this mutation is thus much more complicated than previously proposed. PMID:25253695

  4. Insulin oedema and treatment-induced neuropathy occurring in a 20-year-old patient with Type 1 diabetes commenced on an insulin pump.

    PubMed

    Rothacker, K M; Kaye, J

    2014-01-01

    Oedema may occur following initiation or intensification of insulin therapy in patients with Type 1 and Type 2 diabetes. Mild oedema is thought to be not uncommon, but under-reported, whilst generalized oedema with involvement of serous cavities has rarely been described. Multiple pathogenic mechanisms have been proposed, including insulin-induced sodium and water retention. Patients at greater risk of insulin oedema include those with poor glycaemic control. Dramatic improvement in glycaemic control is also associated with sensory and autonomic neuropathy. We describe a case of generalized oedema occurring in a 20-year-old, low-body-weight patient with Type 1 diabetes and poor glycaemic control, 3 days following commencement of an insulin pump; blood sugars had dramatically improved with this treatment. Alternative causes of oedema were excluded. The oedema slowly improved with insulin dose reduction, higher blood sugar targets, and frusemide treatment. Subsequent to oedema resolution, the patient unfortunately developed generalized neuropathic pain, thought to be another manifestation of the rapid improvement in glycaemic control. Caution should be taken when a patient with poorly controlled diabetes undergoes an escalation in therapy that may dramatically improve their blood sugar levels, including the initiation of an insulin pump. Clinicians and patients should be aware of the potential risk of insulin oedema, treatment-induced neuropathy and worsening of diabetic retinopathy in the setting of rapid improvement in glycaemic control. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.

  5. Social insect genomes exhibit dramatic evolution in gene composition and regulation while preserving regulatory features linked to sociality

    PubMed Central

    Simola, Daniel F.; Wissler, Lothar; Donahue, Greg; Waterhouse, Robert M.; Helmkampf, Martin; Roux, Julien; Nygaard, Sanne; Glastad, Karl M.; Hagen, Darren E.; Viljakainen, Lumi; Reese, Justin T.; Hunt, Brendan G.; Graur, Dan; Elhaik, Eran; Kriventseva, Evgenia V.; Wen, Jiayu; Parker, Brian J.; Cash, Elizabeth; Privman, Eyal; Childers, Christopher P.; Muñoz-Torres, Monica C.; Boomsma, Jacobus J.; Bornberg-Bauer, Erich; Currie, Cameron R.; Elsik, Christine G.; Suen, Garret; Goodisman, Michael A.D.; Keller, Laurent; Liebig, Jürgen; Rawls, Alan; Reinberg, Danny; Smith, Chris D.; Smith, Chris R.; Tsutsui, Neil; Wurm, Yannick; Zdobnov, Evgeny M.; Berger, Shelley L.; Gadau, Jürgen

    2013-01-01

    Genomes of eusocial insects code for dramatic examples of phenotypic plasticity and social organization. We compared the genomes of seven ants, the honeybee, and various solitary insects to examine whether eusocial lineages share distinct features of genomic organization. Each ant lineage contains ∼4000 novel genes, but only 64 of these genes are conserved among all seven ants. Many gene families have been expanded in ants, notably those involved in chemical communication (e.g., desaturases and odorant receptors). Alignment of the ant genomes revealed reduced purifying selection compared with Drosophila without significantly reduced synteny. Correspondingly, ant genomes exhibit dramatic divergence of noncoding regulatory elements; however, extant conserved regions are enriched for novel noncoding RNAs and transcription factor–binding sites. Comparison of orthologous gene promoters between eusocial and solitary species revealed significant regulatory evolution in both cis (e.g., Creb) and trans (e.g., fork head) for nearly 2000 genes, many of which exhibit phenotypic plasticity. Our results emphasize that genomic changes can occur remarkably fast in ants, because two recently diverged leaf-cutter ant species exhibit faster accumulation of species-specific genes and greater divergence in regulatory elements compared with other ants or Drosophila. Thus, while the “socio-genomes” of ants and the honeybee are broadly characterized by a pervasive pattern of divergence in gene composition and regulation, they preserve lineage-specific regulatory features linked to eusociality. We propose that changes in gene regulation played a key role in the origins of insect eusociality, whereas changes in gene composition were more relevant for lineage-specific eusocial adaptations. PMID:23636946

  6. Characterizing Spatial and Temporal Patterns of Thermal Environment and Air Quality in Taipei Metropolitan Area

    NASA Astrophysics Data System (ADS)

    Juang, J. Y.; Sun, C. H.; Jiang, J. A.; Wen, T. H.

    2017-12-01

    The urban heat island (UHI) effect, driven by regional-to-global environmental changes, dramatic urbanization, and shifts in land-use composition, has become an important environmental issue in recent years. Over the past century, urban coverage in the Taipei Basin has increased dramatically, by a factor of ten. The strengthening of the UHI effect significantly raises the frequency of warm nights and strongly influences the thermal environment of residents in the Greater Taipei Metropolitan area. In addition, urban expansion, driven by dramatic increases in urban population and traffic loading, significantly degrades air quality and causes health issues in Taipei. The main objective of this study is to quantify and characterize the temporal and spatial distributions of the thermal environment and air quality in the Greater Taipei Metropolitan Area using monitoring data from the Central Weather Bureau and the Environmental Protection Administration. We also analyze the micro-scale distribution of physiological equivalent temperature in the metropolitan area, using observation data and quantitative simulation, to investigate how the thermal environment responds under different conditions. Furthermore, we establish a real-time mobile monitoring system based on a wireless sensor network to investigate the correlation between the thermal environment, air quality, and other environmental factors, and we propose to develop an early warning system for heat stress and air quality in the metropolitan area. The results of this study can be integrated into management and planning systems and provide important background information for the future development of a smart city in the metropolitan area.

  7. Analysis of Preconditioning and Relaxation Operators for the Discontinuous Galerkin Method Applied to Diffusion

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Shu, Chi-Wang

    2001-01-01

    The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator decreases dramatically as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time step.
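
    As a generic illustration of the preconditioning idea, not the paper's DG operator or its Fourier analysis, here is block Jacobi applied to a 1D second-difference diffusion matrix; the block size is arbitrary, and the point is only that the preconditioner tightens the spread of the spectrum:

    ```python
    import numpy as np

    n, b = 32, 4                                     # matrix size, block size
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))              # 1D second-difference operator
    P = np.zeros_like(A)
    for i in range(0, n, b):                         # invert each diagonal block
        P[i:i + b, i:i + b] = np.linalg.inv(A[i:i + b, i:i + b])

    for M, name in [(A, "A"), (P @ A, "P@A")]:
        ev = np.linalg.eigvals(M).real
        print(name, "eigenvalues in [%.3f, %.3f]" % (ev.min(), ev.max()))
    ```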

  8. High efficiency labeling of glycoproteins on living cells

    PubMed Central

    Zeng, Ying; Ramya, T. N. C.; Dirksen, Anouk; Dawson, Philip E.; Paulson, James C.

    2010-01-01

    We describe a simple method for efficiently labeling cell surface glycans on virtually any living animal cell. The method employs mild Periodate oxidation to generate an aldehyde on sialic acids, followed by Aniline-catalyzed oxime Ligation with a suitable tag (PAL). Aniline catalysis dramatically accelerates oxime ligation, allowing use of low concentrations of aminooxy-biotin at neutral pH to label the majority of cell surface glycoproteins while maintaining high cell viability. PMID:19234450

  9. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
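
    A minimal numpy sketch of the object-based idea, with assumed threshold and window parameters; the modules map loosely onto the comments, and local variance stands in for whatever focus measure the authors actually use:

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def oedof_merge(stack, fg_thresh=0.1, win=3):
        """Sketch of object-based EDoF: merge a focal stack by choosing, for
        each foreground pixel, the frame with the highest local variance
        (a simple focus measure); background pixels come from one frame."""
        stack = np.asarray(stack, dtype=float)          # (frames, H, W)
        fg = stack.std(axis=0) > fg_thresh              # object region identification
        focus = np.stack([generic_filter(f, np.var, size=win) for f in stack])
        best = focus.argmax(axis=0)                     # best-contrast frame per pixel
        merged = stack[0].copy()                        # background from frame 0
        rows, cols = np.where(fg)
        merged[rows, cols] = stack[best[rows, cols], rows, cols]   # detail merging
        return merged

    frames = np.random.default_rng(3).random((4, 32, 32))  # toy focal stack
    print(oedof_merge(frames).shape)                        # (32, 32)
    ```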

  10. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    PubMed Central

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the desired resolution and level of detail. Herein, an approach is proposed for quantitative, high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method. Compared with conventional techniques, the approach dramatically improves spatial resolution and reveals finer details within a region of interest of a sample larger than the field of view. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to three-dimensional compositional characterization of other materials. PMID:24763649

  11. TP53 dysfunction in CLL: Implications for prognosis and treatment.

    PubMed

    Te Raa, Gera D; Kater, Arnon P

    2016-03-01

    Despite the availability of novel targeted agents, TP53 defects remain the most important adverse prognostic factor in chronic lymphocytic leukemia (CLL). Detection of deletion of the TP53 locus (17p deletion) by fluorescent in situ hybridization (FISH) has become standard and is performed prior to every line of treatment, because the incidence increases dramatically as relapses occur. As monoallelic mutations of TP53 equally affect outcome, novel methods are being developed to improve detection of TP53 defects, including next-generation sequencing (NGS) and functional assays. TP53 defects strongly affect the outcome of immunochemotherapy and also alter response durations of tyrosine kinase inhibitors. Although BCR-targeting agents and Bcl-2 inhibitors have achieved durable responses in some patients with TP53 defects, long-term follow-up is currently lacking. In this review, the biological and clinical consequences of TP53 dysfunction as well as the applicability of currently available methods to detect TP53 defects are described. In addition, proposed novel therapeutic strategies specifically for patients with TP53 dysfunction are discussed. In summary, the only curative treatment option for TP53-defective CLL is still allogeneic hematopoietic stem cell transplantation. Other treatment strategies, such as rational combinations of agents with different (TP53-independent) targets, including kinase inhibitors and inhibitors of anti-apoptotic molecules but also immunomodulatory agents, need to be further explored. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A secure transmission scheme of streaming media based on the encrypted control message

    NASA Astrophysics Data System (ADS)

    Li, Bing; Jin, Zhigang; Shu, Yantai; Yu, Li

    2007-09-01

    As the use of streaming media applications has increased dramatically in recent years, streaming media security has become an important concern for protecting privacy. This paper proposes a new encryption scheme that exploits the characteristics of streaming media and addresses the disadvantages of existing methods: encrypt the control message in the streaming media at a high security level, and permute and confuse the non-control-message data according to the corresponding control message. Here the so-called control message refers to the key data of the streaming media, including the streaming media header, the header of each video frame, and the seed key. We encrypt the control message using a public-key encryption algorithm that provides a high security level, such as RSA. At the same time, we use the seed key to generate a key stream, from which the permutation list P corresponding to each GOP (group of pictures) is derived. The plaintext of the non-control message is XORed with the key stream to give an intermediate ciphertext, which is then permuted according to P. Decryption is the inverse of this process. We have set up a testbed for the above scheme and found it to be six to eight times faster than the conventional method. It can be applied not only between PCs but also between handheld devices.
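
    An illustrative sketch of the non-control-message path only, under stated assumptions: a SHA-256 counter construction stands in for the paper's key-stream generator, the permutation list P is derived deterministically from that stream, and the RSA encryption of the control message and seed key is omitted:

    ```python
    import hashlib

    def keystream(seed: bytes, n: int) -> bytes:
        """Key stream from the seed key (SHA-256 in counter mode, a stand-in;
        in the paper's scheme the seed key itself travels RSA-encrypted)."""
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def encrypt_gop(payload: bytes, seed: bytes) -> bytes:
        ks = keystream(seed, len(payload))
        mid = bytes(p ^ k for p, k in zip(payload, ks))           # XOR key stream
        perm = sorted(range(len(mid)), key=lambda i: (ks[i], i))  # list P from key
        return bytes(mid[i] for i in perm)                        # permute per GOP

    print(encrypt_gop(b"non-control GOP payload bytes", b"seed-key").hex())
    ```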

  13. Optimal statistical damage detection and classification in an experimental wind turbine blade using minimum instrumentation

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2017-04-01

    The increasing demand for carbon-neutral energy in a challenging economic environment is a driving factor for erecting ever larger wind turbines in harsh environments, using novel wind turbine blade (WTB) designs characterized by high flexibility and lower buckling capacity. To counteract the resulting increase in operation and maintenance costs, efficient structural health monitoring systems can be employed to prevent dramatic failures and to schedule maintenance actions according to the true structural state. This paper presents a novel methodology for classifying structural damage using vibrational responses from a single sensor. The method is based on statistical classification using Bayes' theorem and an advanced statistic that allows the performance to be controlled by varying the number of samples representing the current state. This is done for multivariate damage-sensitive features (DSFs) defined as partial autocorrelation coefficients (PACCs) estimated from vibrational responses, together with principal component analysis scores computed from the PACCs. Additionally, optimal DSFs are composed not only for damage classification but also for damage detection based on binary statistical hypothesis testing, with feature selection performed by a fast forward procedure. The method is applied to laboratory experiments on a small-scale WTB with wind-like excitation and non-destructive damage scenarios. The results demonstrate the advantages of the proposed procedure and are promising for future applications of vibration-based structural health monitoring of WTBs.
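
    A sketch of the feature-extraction step only, computing PACCs with statsmodels; the signals, lag count, and the downstream Bayesian classifier are assumptions or omissions here, not the authors' configuration:

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import pacf

    def pacc_features(signal, n_coeffs=10):
        """Damage-sensitive features: partial autocorrelation coefficients of
        a vibration response; lag 0 is dropped because it is always 1."""
        return pacf(signal, nlags=n_coeffs)[1:]

    rng = np.random.default_rng(1)
    t = np.arange(2048)
    healthy = np.sin(0.20 * t) + 0.1 * rng.standard_normal(t.size)
    damaged = np.sin(0.23 * t) + 0.1 * rng.standard_normal(t.size)  # shifted dynamics
    print(np.round(pacc_features(healthy, 4), 3))
    print(np.round(pacc_features(damaged, 4), 3))   # the PACCs separate the states
    ```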

  14. An enzyme-free and label-free surface plasmon resonance biosensor for ultrasensitive detection of fusion gene based on DNA self-assembly hydrogel with streptavidin encapsulation.

    PubMed

    Guo, Bin; Wen, Bo; Cheng, Wei; Zhou, Xiaoyan; Duan, Xiaolei; Zhao, Min; Xia, Qianfeng; Ding, Shijia

    2018-07-30

    In this research, an enzyme-free and label-free surface plasmon resonance (SPR) biosensing strategy has been developed for ultrasensitive detection of a fusion gene, based on heterogeneous target-triggered DNA self-assembly of an aptamer-based hydrogel with streptavidin (SA) encapsulation. In the presence of the target, the capture probes (Cp) immobilized on the chip surface capture the PML/RARα, forming a Cp-PML/RARα duplex. The aptamer-based network hydrogel nanostructure is then formed on the gold surface via target-triggered self-assembly of X-shaped polymers. Subsequently, SA is encapsulated into the hydrogel by specific binding of the SA aptamer, forming a complex of very high molecular weight. The developed strategy thus achieves dramatic enhancement of the SPR signal. Using the PML/RARα "S" subtype as a model analyte, the developed biosensing method can detect the target down to 45.22 fM, with a wide linear range from 100 fM to 10 nM. Moreover, the high-efficiency biosensing method shows excellent practical ability to identify clinical PCR products of PML/RARα. This proposed strategy thus presents a powerful platform for ultrasensitive detection of fusion genes and for early diagnosis and monitoring of disease. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Bayesian analysis of biogeography when the number of areas is large.

    PubMed

    Landis, Michael J; Matzke, Nicholas J; Moore, Brian R; Huelsenbeck, John P

    2013-11-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a "data-augmentation" approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea.
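
    A toy illustration of the mechanistic interpretation described above, simulating one dispersal/extinction history on a branch with exponential waiting times and rate-proportional event choice; the two-parameter model and the rates are hypothetical, far simpler than BayArea's:

    ```python
    import numpy as np

    def simulate_history(range0, rate_d, rate_e, t_max, rng):
        """One biogeographic history on a branch: each absent area can be
        gained (dispersal) at rate rate_d, each present area lost
        (extinction) at rate rate_e. Waiting times are exponential in the
        total rate; the event is chosen in proportion to its rate."""
        state, t, events = list(range0), 0.0, []
        while True:
            gains = [i for i, a in enumerate(state) if a == 0]
            losses = [i for i, a in enumerate(state) if a == 1]
            rates = np.array([rate_d] * len(gains) + [rate_e] * len(losses))
            t += rng.exponential(1.0 / rates.sum())
            if t > t_max:
                return events
            area = (gains + losses)[rng.choice(len(rates), p=rates / rates.sum())]
            state[area] ^= 1                      # flip presence/absence
            events.append((round(t, 3), area, tuple(state)))

    rng = np.random.default_rng(7)
    print(simulate_history([1, 0, 0], rate_d=0.5, rate_e=0.3, t_max=5.0, rng=rng))
    ```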

  16. Fabrication of dopamine-modified hyaluronic acid/chitosan multilayers on titanium alloy by layer-by-layer self-assembly for promoting osteoblast growth

    NASA Astrophysics Data System (ADS)

    Zhang, Xinming; Li, Zhaoyang; Yuan, Xubo; Cui, Zhenduo; Yang, Xianjin

    2013-11-01

    The bare inert surface of titanium (Ti) alloy typically causes early failures in implants. Layer-by-layer self-assembly is one of the simple methods for fabricating bioactive multilayer coatings on titanium implants. In this study, a dopamine-modified hyaluronic acid/chitosan (DHA/CHI) bioactive multilayer was built on the surface of Ti-24Nb-2Zr (TNZ) alloy. Zeta potential oscillated between -2 and 17 mV for DHA- and CHI-ending layers during the assembly process, respectively. The DHA/CHI multilayer considerably decreased the contact angle and dramatically improved the wettability of TNZ alloy. Atomic force microscopy results revealed a rough surface on the original TNZ alloy, while the surface became smoother and more homogeneous after the deposition of approximately 5 bilayers (TNZ/(DHA/CHI)5). X-ray photoelectron spectroscopy analysis indicated that the TNZ/(DHA/CHI)5 sample was completely covered by polyelectrolytes. Pre-osteoblast MC3T3-E1 cells were cultured on the original TNZ alloy and TNZ/(DHA/CHI)5 to evaluate the effects of DHA/CHI multilayer on osteoblast proliferation in vitro. The proliferation of osteoblasts on TNZ/(DHA/CHI)5 was significantly higher than that on the original TNZ alloy. The results of this study indicate that the proposed technique improves the biocompatibility of TNZ alloy and can serve as a potential modification method in orthopedic applications.

  17. Using Gamma ray and Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) to Evaluate Elemental Sequences in Cap-carbonates and Cap-like Carbonates of the Death Valley Region

    NASA Astrophysics Data System (ADS)

    Holter, S. A.; Theissen, K. M.; Hickson, T. A.; Bostick, B.

    2004-12-01

    The Snowball Earth theory of Hoffman et al. (1998) proposes dramatic post-glacial chemical weathering as large concentrations of carbon were removed from the atmosphere. This would result in a large input of terrigenous material into the oceans; hence, we might expect that carbonates formed under these conditions would demonstrate elevated K, U, Th levels in comparison to carbonates formed under more typical conditions. In January of 2004 we collected spectral gamma data (K, U, Th) and hand samples from cap carbonates (Noonday Dolomite) and cap-like carbonates (Beck Spring Dolomite) of the Death Valley region in order to explore elemental changes in post-snowball Earth oceans. Based on our spectral gamma results, Th/U ratio trends suggested variations in the oxidation state of the Precambrian ocean. We pursued further investigations of trace elements to ascertain the reliability of these results by using ICP-OES. A suite of 25 trace elements was measured, most notably including U and Th. The ICP-OES data not only allow us to compare elemental changes between cap-carbonates and cap-like carbonates, but they also allow for a comparison of optical emission spectrometry and hand held gamma spectrometry methods. Both methods show similar trends in U and Th values for both the cap-carbonates and cap-like carbonates.

  18. Cavitation and radicals drive the sonochemical synthesis of functional polymer spheres

    DOE PAGES

    Narayanan, Badri; Deshmukh, Sanket A.; Shrestha, Lok Kumar; ...

    2016-07-25

    Sonochemical synthesis can lead to a dramatic increase in the kinetics of formation of polymer spheres (templates for carbon spheres) compared to the modified Stöber silica method applied to produce analogous polymer spheres. Reactive molecular dynamics simulations of the sonochemical process indicate a significantly enhanced rate of polymer sphere formation starting from resorcinol and formaldehyde precursors. The associated chemical reaction kinetics enhancement due to sonication is postulated to arise from the localized lowering of atomic densities, localized heating, and generation of radicals due to cavitation collapse in aqueous systems. This dramatic increase in reaction rates translates into enhanced nucleation and growth of the polymer spheres. Finally, the results are of broad significance to understanding mechanisms of sonication-induced synthesis as well as technologies utilizing polymer spheres.

  19. Cavitation and radicals drive the sonochemical synthesis of functional polymer spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, Badri, E-mail: bnarayanan@anl.gov; Deshmukh, Sanket A.; Sankaranarayanan, Subramanian K. R. S., E-mail: ssankaranarayanan@anl.gov

    2016-07-25

    Sonochemical synthesis can lead to a dramatic increase in the kinetics of formation of polymer spheres (templates for carbon spheres) compared to the modified Stöber silica method applied to produce analogous polymer spheres. Reactive molecular dynamics simulations of the sonochemical process indicate a significantly enhanced rate of polymer sphere formation starting from resorcinol and formaldehyde precursors. The associated chemical reaction kinetics enhancement due to sonication is postulated to arise from the localized lowering of atomic densities, localized heating, and generation of radicals due to cavitation collapse in aqueous systems. This dramatic increase in reaction rates translates into enhanced nucleation and growth of the polymer spheres. The results are of broad significance to understanding mechanisms of sonication-induced synthesis as well as technologies utilizing polymer spheres.

  20. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
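
    A sketch of the response-surface strategy under stated assumptions: k perturbed runs of a stand-in routine, a least-squares quadratic fit, and a cheap Monte Carlo estimate on the surrogate standing in for the FPI step applied to the explicit polynomial form:

    ```python
    import numpy as np

    def expensive_response(x):                  # stand-in for the costly routine
        return 3.0 - x[0] ** 2 - 0.5 * x[1]

    rng = np.random.default_rng(2)
    X = rng.normal(size=(9, 2))                 # k = 9 perturbed input sets
    Y = np.array([expensive_response(x) for x in X])

    # Fit an approximating polynomial Y ~ c0 + c1.x + c2.x^2 (no cross terms)
    def basis(Z):
        return np.column_stack([np.ones(len(Z)), Z, Z ** 2])
    coef, *_ = np.linalg.lstsq(basis(X), Y, rcond=None)

    # Cheap failure-probability estimate on the explicit surrogate
    Xs = rng.normal(size=(200_000, 2))
    print("P(Y < 0) ~", ((basis(Xs) @ coef) < 0).mean())
    ```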
