Ong, Eng Teo; Lee, Heow Pueh; Lim, Kian Meng
2004-09-01
This article presents a fast algorithm for the efficient solution of the Helmholtz equation. The method is based on the translation theory of multipole expansions. Here, the speedup comes from the convolution nature of the translation operators, which can be evaluated rapidly using fast Fourier transform algorithms. Also, the computations of the translation operators are accelerated by using the recursive formulas developed recently by Gumerov and Duraiswami [SIAM J. Sci. Comput. 25, 1344-1381 (2003)]. It is demonstrated that the algorithm can produce good accuracy with a relatively low order of expansion. Efficiency analyses of the algorithm reveal that it has computational complexity of O(N^a), where a ranges from 1.05 to 1.24. However, this method requires substantially more memory to store the translation operators than the fast multipole method. Hence, despite its simplicity of implementation, this memory requirement may limit the application of the algorithm to very large-scale problems.
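The convolution-plus-FFT idea behind this speedup fits in a few lines. A minimal sketch (illustrative, not the authors' code), with toy sizes: applying a precomputed translation operator to a vector of expansion coefficients as an FFT-accelerated convolution, checked against the direct O(p^2) result.

```python
# Illustrative sketch: applying a precomputed translation operator to
# expansion coefficients is a convolution, which the FFT reduces from
# O(p^2) to O(p log p). Sizes and values are toy stand-ins.
import numpy as np

def translate_coefficients(coeffs, operator):
    """Apply a translation operator to expansion coefficients via FFT."""
    n = len(coeffs) + len(operator) - 1        # full linear-convolution length
    spectrum = np.fft.rfft(coeffs, n) * np.fft.rfft(operator, n)
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(0)
coeffs = rng.standard_normal(64)               # toy expansion coefficients
operator = rng.standard_normal(64)             # toy precomputed translation operator
assert np.allclose(translate_coefficients(coeffs, operator),
                   np.convolve(coeffs, operator))   # matches direct O(p^2) result
```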
Influenza-Like Illness Surveillance on Twitter through Automated Learning of Naïve Language
Gesualdo, Francesco; Stilo, Giovanni; Agricola, Eleonora; Gonfiantini, Michaela V.; Pandolfi, Elisabetta; Velardi, Paola; Tozzi, Alberto E.
2013-01-01
Twitter has the potential to be a timely and cost-effective source of data for syndromic surveillance. When speaking of an illness, Twitter users often report a combination of symptoms, rather than a suspected or final diagnosis, using naïve, everyday language. We developed a minimally trained algorithm that exploits the abundance of health-related web pages to identify all jargon expressions related to a specific technical term. We then translated an influenza case definition into a Boolean query, each symptom being described by a technical term and all related jargon expressions, as identified by the algorithm. Subsequently, we monitored all tweets that reported a combination of symptoms satisfying the case definition query. In order to geolocalize messages, we defined 3 localization strategies based on codes associated with each tweet. We found a high correlation coefficient between the trend of our influenza-positive tweets and ILI trends identified by US traditional surveillance systems. PMID:24324799
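For orientation, here is a toy sketch of the Boolean case-definition matching step described above; the symptom lexicons are hypothetical stand-ins for the jargon expressions the authors' algorithm learns from health-related web pages.

```python
# Toy sketch of the Boolean case-definition step. The lexicons below are
# hypothetical stand-ins for learned naive-language symptom expressions.
FEVER = {"fever", "temperature", "burning up"}
RESPIRATORY = {"cough", "coughing", "sore throat"}

def matches_case_definition(tweet: str) -> bool:
    text = tweet.lower()
    has_fever = any(term in text for term in FEVER)
    has_respiratory = any(term in text for term in RESPIRATORY)
    return has_fever and has_respiratory       # ILI-style: fever AND cough/sore throat

print(matches_case_definition("burning up and my sore throat is killing me"))  # True
print(matches_case_definition("got my flu shot today"))                        # False
```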
The Confusion Assessment Method (CAM): A Systematic Review of Current Usage
Wei, Leslie A.; Fearing, Michael A.; Sternberg, Eliezer J.; Inouye, Sharon K.
2008-01-01
Objectives: To examine the psychometric properties, adaptations, translations, and applications of the Confusion Assessment Method (CAM), a widely used instrument and diagnostic algorithm for the identification of delirium. Design: Systematic literature review. Setting: NA. Measurements: Electronic searches of PubMed, EMBASE, PsycINFO, CINAHL, AgeLine, and Google Scholar, augmented by reviews of reference listings, were conducted to identify original English-language articles utilizing the CAM from January 1, 1991 to December 31, 2006. Two reviewers independently abstracted key information from each article. Participants: NA. Results: Of 239 original articles, 10 (4%) were categorized as validation studies, 16 (7%) as adaptations, 12 (5%) as translations, and 222 (93%) as applications. Validation studies evaluated performance of the CAM against a reference standard. Results were combined across 7 high-quality studies (n=1071), demonstrating an overall sensitivity of 94% (95% confidence interval, CI, 91-97%) and specificity of 89% (95% CI, 85-94%). The CAM has been adapted for use in ICU, emergency, and institutional settings, and for scoring severity and subsyndromal delirium. The CAM has been translated into 10 languages where published articles are available. In application studies, CAM-rated delirium is most commonly used as a risk factor or outcome, but also as an intervention or reference standard. Conclusions: The CAM has helped to improve identification of delirium in clinical and research settings. To optimize performance, the CAM should be scored based on observations made during formal cognitive testing, and training is recommended. Future action is needed to optimize use of the CAM and to improve the recognition and management of delirium. PMID:18384586
Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul
2013-12-07
Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root-mean-square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were estimated together with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real-time measurement and adaptation to tumor rotation.
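With the correspondences of the three fiducial markers known, each ICP update reduces to a closed-form rigid fit. A minimal sketch of that step (illustrative, not the authors' implementation) using the SVD-based Kabsch solution:

```python
# Closed-form rigid (rotation + translation) fit between corresponding
# marker sets. P and Q hold marker coordinates as 3xN columns.
import numpy as np

def rigid_fit(P, Q):
    """Return R, t minimizing ||R @ P + t - Q||."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

theta = np.deg2rad(2.3)                        # e.g. a small RL rotation
R_true = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
P = np.array([[0.0, 10.0, 0.0], [0.0, 0.0, 10.0], [0.0, 0.0, 0.0]])  # 3 markers
Q = R_true @ P + np.array([[1.0], [2.0], [0.5]])
R, t = rigid_fit(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [[1.0], [2.0], [0.5]])
```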
Ionic regulation of the biosynthesis of NaK-ATPase subunits.
McDonough, A A; Tang, M J; Lescale-Matys, L
1990-07-01
In this review we have summarized the work of ourselves and others on ionic and hormonal regulation of synthesis of the sodium pump. No one central theme emerges from this summary. Rather, it appears that abundance can be regulated pre-translationally or post-translationally. As reviewed recently, regulation of the expression of the beta glycoprotein subunit, which has no described enzymatic function, can regulate holoenzyme expression. In the kidney this is exemplified by our studies in LLC-PK1 cells and proximal tubule cells, where pre-translational regulation of beta expression is key to increasing holoenzyme abundance, and also by the hypothyroid renal cortex, where post-translational regulation of beta protein abundance appears to affect the abundance of enzymatically active NaK-ATPase. Future studies in the field of ionic regulation of NaK-ATPase must be directed at elucidating the signals that mediate the response, and at how these signals alter the NaK-ATPase biosynthetic pathway, from expression of the alpha and beta genes through to turnover of the mature NaK-ATPase heterodimer.
Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Lomuscio, Alessio
2004-01-01
We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into Boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for Boolean formulae.
Translational bioinformatics: linking the molecular world to the clinical world.
Altman, R B
2012-06-01
Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
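As a concrete instance, the activity-selection problem mentioned above has the classic greedy solution in which an earliest-finishing activity dominates the alternatives; a short sketch:

```python
# Classic greedy for activity selection: sort by finish time and keep each
# activity compatible with the last one chosen. The dominance relation:
# an earliest-finishing activity dominates any alternative choice.
def select_activities(intervals):
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:               # compatible with schedule so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (6, 10)]))
# [(1, 4), (5, 7), (8, 11)]
```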
Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual
NASA Astrophysics Data System (ADS)
Medgyesimitschang, L. N.; Putnam, J. M.
1982-05-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.
Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems
2007-05-01
these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the...translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [Figure residue: FPGA design flow from MATLAB/C code through HDL, logic synthesis, netlist, and place-and-route to a bitstream, targeting hard and soft microprocessor cores.]
Fast decoder for local quantum codes using Groebner basis
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2013-03-01
Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n² log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.
Pohit, M; Sharma, J
2015-05-10
Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform gives a solution to this problem, but at the cost of losing the vital phase information from the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable to arbitrary object shifts and to rotations over the full 180° range.
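For comparison, the translation component alone can be recovered by standard phase correlation, a Fourier-domain relative of the paper's Fourier-slice approach; a minimal sketch (it does not handle rotation):

```python
# Phase correlation: the cross-power spectrum's phase encodes the shift,
# and its inverse FFT peaks at the displacement.
import numpy as np

def estimate_shift(moved, reference):
    """Estimate the integer (dy, dx) shift taking reference onto moved."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12                       # keep phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), moved.shape)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, moved.shape)]

image = np.random.default_rng(1).random((64, 64))
moved = np.roll(image, (5, -3), axis=(0, 1))
print(estimate_shift(moved, image))                      # [5, -3]
```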
Tramontano, A; Macchiato, M F
1986-01-01
An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by the presence of translated reading frames on the complementary DNA strand. PMID:3753761
Translation and integration of numerical atomic orbitals in linear molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinäsmäki, Sami, E-mail: sami.heinasmaki@gmail.com
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
BPF-type region-of-interest reconstruction for parallel translational computed tomography.
Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin
2017-01-01
The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture is named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the ROI images reconstructed from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets are demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
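The similarity measure being maximized can be written compactly. A minimal sketch (not the authors' code) estimating mutual information from the joint gray-level histogram of two volumes; a registration loop would perturb the transform parameters to maximize this value:

```python
# Mutual information from a joint histogram: high when intensities of the
# two volumes are statistically dependent (well aligned), near zero otherwise.
import numpy as np

def mutual_information(x, y, bins=32):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nonzero = pxy > 0                                    # avoid log(0)
    return np.sum(pxy[nonzero] * np.log(pxy[nonzero] / np.outer(px, py)[nonzero]))

rng = np.random.default_rng(0)
volume = rng.random((32, 32))
print(mutual_information(volume, volume))                # high: perfectly aligned
print(mutual_information(volume, rng.random((32, 32)))) # near zero: unrelated
```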
Analysis of Naïve Bayes Algorithm for Email Spam Filtering across Multiple Datasets
NASA Astrophysics Data System (ADS)
Fitriah Rusland, Nurul; Wahid, Norfaradilla; Kasim, Shahreen; Hafit, Hanayanti
2017-08-01
E-mail spam continues to be a problem on the Internet. Spammed e-mail may contain many copies of the same message, commercial advertisements, or other irrelevant posts such as pornographic content. In previous research, different filtering techniques were used to detect these e-mails, such as Random Forest, Naïve Bayes, Support Vector Machine (SVM), and Neural Network. In this research, we test the Naïve Bayes algorithm for e-mail spam filtering on two datasets, Spam Data and SPAMBASE [8], and evaluate its performance. Performance on the datasets is evaluated based on accuracy, recall, precision, and F-measure. We use the WEKA tool for the evaluation of the Naïve Bayes algorithm for e-mail spam filtering on both datasets. The results show that the type of e-mail and the number of instances in the dataset influence the performance of Naïve Bayes.
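A minimal sketch of the core experiment, with a hypothetical toy corpus in place of the Spam Data and SPAMBASE sets (the paper used WEKA; scikit-learn is shown here for brevity):

```python
# Bag-of-words features into multinomial Naive Bayes, scored with the same
# metric family the paper reports (precision/recall/F-measure).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB

train_texts = ["cheap meds buy now", "meeting at noon", "win cash now", "lunch tomorrow?"]
train_labels = [1, 0, 1, 0]                    # 1 = spam, 0 = ham (toy labels)
test_texts = ["buy cheap cash now", "see you at lunch"]
test_labels = [1, 0]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)
predictions = model.predict(vectorizer.transform(test_texts))
print(classification_report(test_labels, predictions))
```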
Three-dimensional dictionary-learning reconstruction of ²³Na MRI data.
Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2016-04-01
To reduce noise and artifacts in ²³Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial ²³Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo ²³Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo ²³Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled ²³Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
On-Demand Associative Cross-Language Information Retrieval
NASA Astrophysics Data System (ADS)
Geraldo, André Pinto; Moreira, Viviane P.; Gonçalves, Marcos A.
This paper proposes the use of algorithms for mining association rules as an approach for Cross-Language Information Retrieval. These algorithms have been widely used to analyse market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using queries in two distinct languages: Portuguese and Finnish to retrieve documents in English. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though these systems employ more complex Natural Language Processing techniques. The combination between machine translation and our approach yielded the best results, even outperforming the monolingual baseline.
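A toy sketch of the proposed mapping: each aligned sentence pair is treated as a market-basket "transaction" over source and target terms, and candidate translations are scored by confidence, as in association-rule mining. The tiny parallel corpus is made up for illustration:

```python
# Candidate translations of a source term are the target terms whose
# rule confidence (co-occurrence / source occurrences) clears a threshold.
from collections import Counter

pairs = [({"casa", "grande"}, {"big", "house"}),
         ({"casa", "velha"}, {"old", "house"}),
         ({"rio", "grande"}, {"big", "river"})]

def translation_candidates(source_term, min_confidence=0.5):
    occurrences, joint = 0, Counter()
    for source_terms, target_terms in pairs:
        if source_term in source_terms:
            occurrences += 1
            joint.update(target_terms)
    return {t: n / occurrences for t, n in joint.items()
            if n / occurrences >= min_confidence}

print(translation_candidates("casa"))          # "house" scores 1.0
```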
A Novel Latin Hypercube Algorithm via Translational Propagation
Pan, Guang; Ye, Pengcheng
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed from a simple initial block with a few points generated by the SLE algorithm as a building block. In fact, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while having acceptable space-filling and projective properties. PMID:25276844
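For orientation, a plain random Latin hypercube sampler showing the one-point-per-stratum structure that TPSLE designs also satisfy; the translational-propagation step itself (tiling a small optimized seed block across the design space) is not reproduced here:

```python
# Random Latin hypercube in d dimensions: in every dimension, exactly one
# sample falls in each of the n equal-width strata.
import numpy as np

def latin_hypercube(n, d, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty((n, d))
    for j in range(d):
        # one point per stratum, in shuffled order
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return samples

print(latin_hypercube(5, 2))                   # each column hits all 5 strata
```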
Chunk Alignment for Corpus-Based Machine Translation
ERIC Educational Resources Information Center
Kim, Jae Dong
2011-01-01
Since sub-sentential alignment is critically important to the translation quality of an Example-Based Machine Translation (EBMT) system, which operates by finding and combining phrase-level matches against the training examples, we developed a new alignment algorithm for the purpose of improving the EBMT system's performance. This new…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary
Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
Garcia-Bernabé, A; Lidón-Roger, J V; Sanchis, M J; Díaz-Calleja, R; del Castillo, L F
2015-10-01
The dielectric and mechanical spectroscopies of the acetate of cis- and trans-2-phenyl-5-hydroxymethyl-1,3-dioxane are reported in the frequency domain from 10^-2 to 10^6 Hz. This ester was selected for this study because its α relaxation predominates over the β relaxation, which can be neglected. The study determines an interconversion algorithm between dielectric and mechanical measurements, given by a relation between the rotational and translational complex viscosities. These viscosities were obtained from measurements of the complex dielectric permittivity and from dynamic mechanical analysis, respectively. The definitions of rotational and translational viscosities were evaluated by means of fractional calculus, using the fit parameters of the Havriliak-Negami empirical model obtained in the dielectric and mechanical characterization of the α relaxation. The interconversion algorithm is a generalization of the breakdown of the Stokes-Einstein-Debye relationship. It uses a power law with an exponent defined as the shape factor, which modifies the translational viscosity. Two other factors are introduced for the interconversion: a shift factor, which displaces the translational viscosity in the frequency domain, and a scale factor, which equalizes the values of the two viscosities. In this paper, the shape factor is identified as the ratio between the slopes of the moduli of the complex viscosities at high frequency. This is interpreted as the degree of kinetic coupling between the molecular rotational and translational movements. Alternatively, another interconversion algorithm is expressed by means of the dielectric and mechanical moduli.
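The Havriliak-Negami model referred to above can be evaluated directly; a minimal sketch with illustrative parameter values (not the fitted values from the paper):

```python
# Havriliak-Negami complex permittivity: eps* = eps_inf + d_eps / (1 + (i w tau)^a)^b,
# where a sets the breadth and b the asymmetry of the relaxation.
import numpy as np

def havriliak_negami(omega, eps_inf, delta_eps, tau, a, b):
    return eps_inf + delta_eps / (1 + (1j * omega * tau) ** a) ** b

frequency = np.logspace(-2, 6, 9)              # 10^-2 to 10^6 Hz, as in the study
omega = 2 * np.pi * frequency
eps = havriliak_negami(omega, eps_inf=3.0, delta_eps=5.0, tau=1e-3, a=0.8, b=0.6)
print(eps.real)                                # storage component
print(-eps.imag)                               # loss component
```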
Surkis, Alisa; Hogle, Janice A; DiazGranados, Deborah; Hunt, Joe D; Mazmanian, Paul E; Connors, Emily; Westaby, Kate; Whipple, Elizabeth C; Adamus, Trisha; Mueller, Meridith; Aphinyanaphongs, Yindalon
2016-08-05
Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program and of translational research in general is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications. Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined T1/T2 and T3/T4 categories due to low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier. The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. Very good performance was achieved for the classifiers as represented by the area under the receiver operating curves (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provide a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
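A skeleton of the classification pipeline described above, with toy data standing in for the coded CTSA publication set and a generic TF-IDF + linear classifier standing in for the authors' feature sets and algorithms; AUC is the metric the study reports:

```python
# Text classification along the translational spectrum (toy version):
# TF-IDF features, logistic regression, AUC as the performance measure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

abstracts = ["mouse model of receptor signaling", "phase II trial of drug safety",
             "county-level vaccination outcomes", "protein structure determination"]
labels = [0, 1, 1, 0]                          # toy coding: 0 = T0, 1 = T1-T4

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(abstracts, labels)
scores = classifier.predict_proba(abstracts)[:, 1]
print(roc_auc_score(labels, scores))           # AUC (on training data; toy only)
```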
Assessment of the information content of patterns: an algorithm
NASA Astrophysics Data System (ADS)
Daemi, M. Farhang; Beurle, R. L.
1991-12-01
A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.
Translations on Eastern Europe, Scientific Affairs, Number 603
1978-10-11
CONTENTS: BULGARIA — Achievements in Developing High-Yield Plant Varieties Outlined (Khristo Daskalov; SPISANIE NA BULGARSKATA AKADEMIYA NA NAUKITE in Bulgarian, No 3, 1978, pp 5-10). Excerpt: "...nations of the world, our varieties Yubileyna and Sadovo-1 were in first place in terms of the yield, and this was a great accomplishment for our..."
Commercial scale cucumber fermentations brined with calcium chloride instead of sodium chloride
USDA-ARS?s Scientific Manuscript database
Development of low-salt cucumber fermentation processes presents opportunities to reduce the amount of sodium chloride (NaCl) that reaches fresh water streams from industrial activities. The objective of this research was to translate cucumber fermentation brined with calcium chloride instead of NaCl...
Ganapathiraju, Madhavi K; Orii, Naoki
2013-08-30
Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation to interpret its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective the choice of which of these predictions is to be studied further is made based on factors like availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and the scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by a local view of feasibility. In short, data-analytic algorithms analyze big data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict the biomedical impact of protein-protein interactions (PPIs), which is estimated by the number of future publications that cite the paper that originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. The proposed class of algorithms, namely inference-analytic algorithms, is necessary to ensure that resources are invested in translating those computational outcomes that promise maximum biological impact. Application of this concept to predict the biomedical impact of PPIs illustrates not only the concept, but also the challenges in designing these algorithms.
Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study
ERIC Educational Resources Information Center
Cer, Daniel
2011-01-01
The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…
Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.
Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme
2014-03-01
Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms have traditionally been used for image re-alignment via translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module named SIFT_PyOCL has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment on both processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
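A sketch of SIFT-based frame alignment in the same spirit, shown with OpenCV's CPU SIFT (opencv-python >= 4.4) rather than the OpenCL implementation; estimateAffinePartial2D restricts the fit to rotation, scale and translation:

```python
# Align one frame onto a reference by matching SIFT keypoints and fitting a
# partial affine transform. Inputs are 8-bit grayscale images.
import cv2
import numpy as np

def align(moving, reference):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # robust (RANSAC) fit
    return cv2.warpAffine(moving, M, reference.shape[::-1])
```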
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botschwina, P.; Meyer, W.; Hertel, I.V.
Potential energy surfaces have been calculated for the four lowest electronic states of Na(3²S, 3²P) + H₂(¹Σg⁺) by means of the RHF-SCF and PNO-CEPA methods. For the so-called quenching process of Na(3²P) by H₂ at low initial translational energies (E-VRT energy transfer), the energetically most favorable path occurs in C₂v symmetry, since at intermediate Na-H₂ separations the A ²B₂ potential energy surface is attractive. From the CEPA calculations, the crossing point of minimal energy between the X ²A₁ and A ²B₂ surfaces is obtained at R_c = 3.57 a.u. and r_c = 2.17 a.u., with an energy difference to the asymptotic limit (R = ∞, r = r_e) of -0.06 eV. It is thus classically accessible without any initial translational energy, but at low initial translational energies (approx. 0.1 eV) quenching will be efficient only for arrangements of the collision partners close to C₂v symmetry. There is little indication of an avoided crossing with an ionic intermediate correlating asymptotically with Na⁺ and H₂⁻, as was assumed in previous discussions of the quenching process. The dependence of the total quenching cross sections on the initial translational energy is discussed by means of the "absorbing sphere" model, taking the initial zero-point vibrational energy of the hydrogen molecule into account. New experimental data on the product channel distribution in H₂ for center-of-mass forward scattering are presented. The final vibrational states v' = 3, 2, 1, and 0 of H₂ are populated to about 26%, 61%, 13%, and 0%, respectively. The observed distributions in H₂ (and D₂) may be rationalized by simple dynamic considerations on the basis of the calculated surfaces.
Sensor Network Localization by Eigenvector Synchronization Over the Euclidean Group
CUCURINGU, MIHAI; LIPMAN, YARON; SINGER, AMIT
2013-01-01
We present a new approach to localization of sensors from noisy measurements of a subset of their Euclidean distances. Our algorithm starts by finding, embedding, and aligning uniquely realizable subsets of neighboring sensors called patches. In the noise-free case, each patch agrees with its global positioning up to an unknown rigid motion of translation, rotation, and possibly reflection. The reflections and rotations are estimated using the recently developed eigenvector synchronization algorithm, while the translations are estimated by solving an overdetermined linear system. The algorithm is scalable as the number of nodes increases and can be implemented in a distributed fashion. Extensive numerical experiments show that it compares favorably to other existing algorithms in terms of robustness to noise, sparse connectivity, and running time. While our approach is applicable to higher dimensions, in the current article, we focus on the two-dimensional case. PMID:23946700
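The translation step described above reduces to linear least squares: once patch rotations are fixed, every edge measurement gives an equation t_j - t_i = d_ij. A toy sketch with three sensors (the edge data are illustrative):

```python
# Global translations from pairwise offsets via an overdetermined solve;
# the solution is defined only up to a global shift, so anchor one node.
import numpy as np

edges = [(0, 1, (1.0, 0.0)),                   # (i, j, measured t_j - t_i)
         (1, 2, (0.0, 1.0)),
         (0, 2, (1.0, 1.0))]
n_sensors = 3

A = np.zeros((len(edges), n_sensors))
b = np.array([d for _, _, d in edges])
for row, (i, j, _) in enumerate(edges):
    A[row, i], A[row, j] = -1.0, 1.0
t, *_ = np.linalg.lstsq(A, b, rcond=None)      # minimum-norm solution
print(t - t[0])                                # anchor sensor 0 at the origin
```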
Fundamental limits of reconstruction-based superresolution algorithms under local translation.
Lin, Zhouchen; Shum, Heung-Yeung
2004-01-01
Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, improvement in resolution using such a technique is very limited in practice. This makes it important to study the question: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, called reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms, under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from a conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that are sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.
NASA Astrophysics Data System (ADS)
Medgyesi-Mitschang, L. N.; Putnam, J. M.
1980-04-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience are presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For more detailed understanding of the workings of the codes, special cross referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings, summarized in the present volume.
Predicting translational deformity following opening-wedge osteotomy for lower limb realignment.
Barksfield, Richard C; Monsell, Fergal P
2015-11-01
An opening-wedge osteotomy is well recognised for the management of limb deformity and requires an understanding of the principles of geometry. Translation at the osteotomy is needed when the osteotomy is performed away from the centre of rotation of angulation (CORA), and the amount of translation varies with the distance from the CORA. This translation enables the proximal and distal axes on either side of the proposed osteotomy to realign. We developed two experimental models to establish whether the amount of translation required (based on the translation deformity created) can be predicted using simple trigonometry. A predictive algorithm was derived where translational deformity was predicted as 2(tan α × d), where α represents 50% of the desired angular correction and d is the distance of the desired osteotomy site from the CORA. A simulated model was developed using the TraumaCad online digital software suite (Brainlab AG, Germany). Osteotomies were simulated in the distal femur, proximal tibia and distal tibia for nine sets of lower limb scanograms at incremental distances from the CORA, and the resulting translational deformity was recorded. There was strong correlation between the distance of the osteotomy from the CORA and the simulated translation deformity for distal femoral deformities (correlation coefficient 0.99, p < 0.0001), proximal tibial deformities (correlation coefficient 0.93-0.99, p < 0.0001) and distal tibial deformities (correlation coefficient 0.99, p < 0.0001). There was excellent agreement between the predictive algorithm and the simulated translational deformity for all nine simulations (correlation coefficient 0.93-0.99, p < 0.0001). Translational deformity following corrective osteotomy for lower limb deformity can be anticipated and predicted based upon the angular correction and the distance between the planned osteotomy site and the CORA.
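The abstract's predictive rule as a one-line function; the example values are illustrative only:

```python
# Predicted translational deformity 2(tan alpha x d): it grows with the
# distance d from the CORA and with half the desired correction angle.
import math

def predicted_translation(correction_deg, d_mm):
    alpha = math.radians(correction_deg / 2)   # alpha = 50% of the correction
    return 2 * math.tan(alpha) * d_mm

print(predicted_translation(20.0, 30.0))       # about 10.6 mm of translation
```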
The PlusCal Algorithm Language
NASA Astrophysics Data System (ADS)
Lamport, Leslie
Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.
Surgical motion characterization in simulated needle insertion procedures
NASA Astrophysics Data System (ADS)
Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor
2012-02-01
PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
2014-09-01
to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system that... (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developed image reconstruction algorithms for USCT, (iii) developed...
Improved numerical methods for infinite spin chains with long-range interactions
NASA Astrophysics Data System (ADS)
Nebendahl, V.; Dür, W.
2013-02-01
We present several improvements of the infinite matrix product state (iMPS) algorithm for finding ground states of one-dimensional quantum systems with long-range interactions. As a main ingredient, we introduce the superposed multioptimization method, which allows an efficient optimization of exponentially many MPS of different lengths at different sites all in one step. Here, the algorithm becomes protected against position-dependent effects as caused by spontaneously broken translational invariance. So far, these have been a major obstacle to convergence for the iMPS algorithm if no prior knowledge of the system's translational symmetry is available. Further, we investigate some more general methods to speed up calculations and improve convergence, which may also be of interest in a much broader context. As a more special problem, we also look into translationally invariant states close to an invariance-breaking phase transition and show how to avoid convergence into wrong local minima for such systems. Finally, we apply these methods to polar bosons with long-range interactions. We calculate several detailed Devil's staircases with the corresponding phase diagrams and investigate some supersolid properties.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, compensating for the disadvantages of each, improving the efficiency of the algorithm and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving SAT problems.
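A sketch of the translation described above: a CNF instance becomes the objective "number of unsatisfied clauses", whose minimum of zero corresponds to a satisfying assignment. A trivial random-restart search stands in here for the paper's hybrid differential evolution / hill-climbing optimizer:

```python
# SAT as optimization: minimize the count of unsatisfied clauses.
import random

clauses = [(1, -2, 3), (-1, 2), (2, -3)]       # positive = var, negative = NOT var

def unsatisfied(assignment):                   # assignment: dict var -> bool
    def holds(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return sum(not any(holds(lit) for lit in clause) for clause in clauses)

random.seed(0)
best = min(({v: random.random() < 0.5 for v in (1, 2, 3)} for _ in range(100)),
           key=unsatisfied)
print(best, unsatisfied(best))                 # 0 unsatisfied clauses => SAT
```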
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
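A toy illustration of the observer idea on a finite recorded trace, here hard-coding a check of the response property G(request -> F grant) rather than building the automaton the paper's translation would produce:

```python
# Finite-trace check of G(request -> F grant): every request must be
# followed by a grant before the trace ends.
def requests_eventually_granted(trace):
    pending = False
    for state in trace:                        # each state: set of true propositions
        if "request" in state:
            pending = True
        if "grant" in state:
            pending = False
    return not pending                         # no request left unanswered

print(requests_eventually_granted([{"request"}, set(), {"grant"}]))  # True
print(requests_eventually_granted([{"request"}, set()]))             # False
```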
Image fusion using sparse overcomplete feature dictionaries
Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt
2015-10-06
Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
Translations on Eastern Europe Scientific Affairs No. 538
1977-03-16
CONTENTS: BULGARIA — ...of the Earth With Satellites Described (Kiril B. Serafimov; SPISANIE NA BULGARSKATA AKADEMIYA NA NAUKITE, No 2, 1976); Earth Station for Collection of Space Information Opened (Kiril B. Serafimov; ZEMEDELSKO ZNAME, 27 Jan 77); Laser Used in Communication Equipment (Vladimir Atanasov; TEKHNICHESKO DELO, 5 Feb 77); Cooperation With USSR in Computer Production (Dimitur Dimitrov; TEKHNICHESKO DELO, 12 Feb 77).
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
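A minimal self-training loop in the spirit of the semisupervised SVM (not the authors' batch-mode incremental algorithm): confident predictions on unlabeled trials are promoted to labels and the classifier is refit. The data are synthetic:

```python
# Self-training SVM: grow the labeled set from confident predictions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal([[0, 0]] * 10 + [[3, 3]] * 10)    # small labeled set
y_labeled = np.array([0] * 10 + [1] * 10)
X_unlabeled = rng.normal([[0, 0]] * 50 + [[3, 3]] * 50)  # large unlabeled set

for _ in range(3):                                       # a few rounds suffice here
    svm = SVC(probability=True).fit(X_labeled, y_labeled)
    proba = svm.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.9
    if not confident.any():
        break
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]

print(svm.score(X_labeled, y_labeled))                   # fit on the augmented set
```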
ATR architecture for multisensor fusion
NASA Astrophysics Data System (ADS)
Hamilton, Mark K.; Kipp, Teresa A.
1996-06-01
The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential in platform-based automatic target identification (ATI). In this case, the term identification is used to mean being able to tell the difference between two military vehicles -- e.g., the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the case of single sensor FLIR it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry which directly translates into the target silhouette in the FLIR realm. In addition, it allows structure that is not currently well understood (i.e., adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR as well as the multi-sensor FLIR and laser radar.
Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco
2008-09-01
This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved the result. The two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors in the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
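For concreteness, here is a sketch of the image-entropy focus measure that the burst derivative is compared against: a well-focused ISAR image concentrates energy and therefore has lower entropy. The normalization of image energy to a probability mass function is an assumption of this sketch.

    # Entropy focus metric: lower entropy indicates a better-focused image.
    import numpy as np

    def image_entropy(img):
        p = np.abs(img).ravel()
        p = p / p.sum()                     # normalize intensities to a pmf
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    sharp = np.zeros((64, 64)); sharp[30:34, 30:34] = 1.0   # concentrated energy
    blurred = np.ones((64, 64))                             # spread-out energy
    print(image_entropy(sharp) < image_entropy(blurred))    # True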
A Palmprint Recognition Algorithm Using Phase-Only Correlation
NASA Astrophysics Data System (ADS)
Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
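The core POC computation is compact enough to sketch directly: form the cross-power spectrum of the two images, discard its magnitude, and read the translation off the correlation peak. This follows the standard POC definition; the test images here are synthetic.

    # Minimal phase-only correlation (POC) between two images.
    import numpy as np

    def poc_shift(f, g):
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        R = F * np.conj(G)
        R /= np.abs(R) + 1e-12              # keep phase only
        r = np.fft.ifft2(R).real            # correlation surface
        peak = np.unravel_index(np.argmax(r), r.shape)
        return peak, r[peak]                # offset and matching score

    rng = np.random.default_rng(0)
    f = rng.random((64, 64))
    g = np.roll(f, (5, 9), axis=(0, 1))     # translate by (5, 9), wraparound
    print(poc_shift(g, f)[0])               # (5, 9)

In practice the similarity score is the height of the correlation peak, which is near 1 for matching palmprints and close to 0 for non-matching ones.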
Genetic algorithms using SISAL parallel programming language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejada, S.
1994-05-06
Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
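Since SISAL compilers are no longer widely available, the same GA skeleton (fitness evaluation, crossover, mutation) is sketched here in Python on a stand-in bit-counting problem; note that the per-individual fitness, crossover, and mutation maps are exactly the data-parallel steps the paper targets.

    # Minimal generational GA; the bit-counting fitness is a stand-in problem.
    import random
    random.seed(0)

    def fitness(ind):                 # maximize the number of 1-bits
        return sum(ind)

    def crossover(a, b):              # single-point crossover
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(ind, rate=0.02):       # per-gene bit-flip mutation
        return [1 - g if random.random() < rate else g for g in ind]

    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:20]            # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(20)]
    print(max(map(fitness, pop)))     # approaches 32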
Code of Federal Regulations, 2011 CFR
2011-04-01
... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
Code of Federal Regulations, 2013 CFR
2013-04-01
... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
Code of Federal Regulations, 2014 CFR
2014-04-01
... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
Code of Federal Regulations, 2012 CFR
2012-04-01
... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
Code of Federal Regulations, 2010 CFR
2010-04-01
... (FARS) will be translated into estimated observed seat belt use rates using an algorithm that relates... 133, June, 1994. B. The algorithm is as follows: u = (−.221794 + √(.049193 + .410769F)) / .456410 Where... change in the FARS-based observed seat belt use rate (derived from the above algorithm) between the two...
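Reading the regulation's formula with the radical spanning both terms, which the squared constant confirms (.221794 squared is .049193, the quadratic-formula discriminant form), a worked evaluation looks like this; the sample F values are illustrative.

    # Worked evaluation of the CFR conversion: F is the FARS-based fraction,
    # u the estimated observed seat belt use rate.
    import math

    def estimated_belt_use(F):
        return (-0.221794 + math.sqrt(0.049193 + 0.410769 * F)) / 0.456410

    for F in (0.4, 0.5, 0.6):
        print(F, round(estimated_belt_use(F), 3))   # e.g. F=0.5 -> u=0.620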
Mugavero, Michael J; May, Margaret; Harris, Ross; Saag, Michael S; Costagliola, Dominique; Egger, Matthias; Phillips, Andrew; Günthard, Huldrych F; Dabis, Francois; Hogg, Robert; de Wolf, Frank; Fatkenheuer, Gerd; Gill, M John; Justice, Amy; D'Arminio Monforte, Antonella; Lampe, Fiona; Miró, Jose M; Staszewski, Schlomo; Sterne, Jonathan A C
2008-11-30
To determine whether differences in short-term virologic failure among commonly used antiretroviral therapy (ART) regimens translate to differences in clinical events in antiretroviral-naïve patients initiating ART. Observational cohort study of patients initiating ART between January 2000 and December 2005. The Antiretroviral Therapy Cohort Collaboration (ART-CC) is a collaboration of 15 HIV cohort studies from Canada, Europe, and the United States. A total of 13 546 antiretroviral-naïve HIV-positive patients initiating ART with efavirenz, nevirapine, lopinavir/ritonavir, nelfinavir, or abacavir as third drugs in combination with a zidovudine and lamivudine nucleoside reverse transcriptase inhibitor backbone. Short-term (24-week) virologic failure (>500 copies/ml) and clinical events within 2 years of ART initiation (incident AIDS-defining event, death, and a composite measure of these two outcomes). Compared with efavirenz as initial third drug, short-term virologic failure was more common with all other third drugs evaluated; nevirapine (adjusted odds ratio = 1.87, 95% confidence interval (CI) = 1.58-2.22), lopinavir/ritonavir (1.32, 95% CI = 1.12-1.57), nelfinavir (3.20, 95% CI = 2.74-3.74), and abacavir (2.13, 95% CI = 1.82-2.50). However, the rate of clinical events within 2 years of ART initiation appeared higher only with nevirapine (adjusted hazard ratio for composite outcome measure 1.27, 95% CI = 1.04-1.56) and abacavir (1.22, 95% CI = 1.00-1.48). Among antiretroviral-naïve patients initiating therapy, between-ART-regimen differences in short-term virologic failure do not necessarily translate to differences in clinical outcomes. Our results should be interpreted with caution because of the possibility of residual confounding by indication.
NASA Astrophysics Data System (ADS)
Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang
2013-11-01
Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. A planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and a translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and a physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, with a reduction of roughly 2- to 3-fold (up to 100-fold) in root mean square image error. The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angle of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation-invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights feeding the output units affected by the loss.
Array architectures for iterative algorithms
NASA Technical Reports Server (NTRS)
Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas
1987-01-01
Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
NASA Astrophysics Data System (ADS)
Nabavi, N.
2018-07-01
The author investigates monitoring methods for fine adjustment of a previously proposed on-chip architecture for frequency multiplication and translation of harmonics by design. Digital signal processing (DSP) algorithms are utilized to optimize the functionality of the microwave photonic integrated circuit toward automated frequency multiplication. The implemented DSP algorithms are based on the discrete Fourier transform and on optimization methods (greedy and gradient-based algorithms), which are analytically derived and numerically compared on accuracy and speed-of-convergence criteria.
1988-03-31
In addition to radar operation and data-collection activities, a large data-analysis effort has been under way in support of automatic wind-shear detection algorithm development. Contents include: Data Reduction and Algorithm Development (General-Purpose Software; Concurrent Computer Systems; Sun Workstations; Radar Data Analysis: Algorithm Verification, Other Studies, Translations, Outside Distributions; Mesonet/LLWAS Data Analysis: 1985 Data, …).
Discovering new materials and new phenomena with evolutionary algorithms
NASA Astrophysics Data System (ADS)
Oganov, Artem
Thanks to powerful evolutionary algorithms, in particular the USPEX method, it is now possible to predict both the stable compounds and their crystal structures at arbitrary conditions, given just the set of chemical elements. Recent developments include major increases of efficiency and extensions to low-dimensional systems and molecular crystals (which allowed large structures to be handled easily, e.g. Mg(BH4)2 and H2O-H2) and new techniques called evolutionary metadynamics and Mendelevian search. Some of the results that I will discuss include: 1. Theoretical and experimental evidence for a new partially ionic phase of boron, γ-B, and an insulating and optically transparent form of sodium. 2. Predicted stability of "impossible" chemical compounds that become stable under pressure - e.g. Na3Cl, Na2Cl, Na3Cl2, NaCl3, NaCl7, Mg3O2 and MgO2. 3. Novel surface phases (e.g. boron surface reconstructions). 4. Novel dielectric polymers, and novel permanent magnets confirmed by experiment and ready for applications. 5. Prediction of new ultrahard materials and computational proof that diamond is the hardest possible material.
Learning receptor positions from imperfectly known motions
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1990-01-01
An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.
NASA Astrophysics Data System (ADS)
Weijers, Jan-Willem; Derudder, Veerle; Janssens, Sven; Petré, Frederik; Bourdoux, André
2006-12-01
To assess the performance of forthcoming 4th-generation wireless local area networks, the algorithmic functionality is usually modelled using a high-level mathematical software package, for instance, Matlab. In order to validate the modelling assumptions against the real physical world, the high-level functional model needs to be translated into a prototype. A systematic system design methodology proves very valuable, since it avoids, or at least reduces, numerous design iterations. In this paper, we propose a novel Matlab-to-hardware design flow, which allows the algorithmic functionality to be mapped onto the target prototyping platform in a systematic and reproducible way. The proposed design flow is partly manual and partly tool assisted. It is shown that the proposed design flow allows the same testbench to be used throughout the whole design flow and avoids time-consuming and error-prone intermediate translation steps.
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm that is capable of exactly reconstructing a region-of-interest image from data containing transverse truncations.
Prediction of Effective Drug Combinations by an Improved Naïve Bayesian Algorithm.
Bai, Li-Yue; Dai, Hao; Xu, Qin; Junaid, Muhammad; Peng, Shao-Liang; Zhu, Xiaolei; Xiong, Yi; Wei, Dong-Qing
2018-02-05
Drug combinatorial therapy is a promising strategy for combating complex diseases due to its fewer side effects, lower toxicity and better efficacy. However, it is not feasible to determine all the effective drug combinations in the vast space of possible combinations given the increasing number of approved drugs in the market, since the experimental methods for identification of effective drug combinations are both labor- and time-consuming. In this study, we conducted a systematic analysis of various types of features to characterize pairs of drugs. These features included information about the targets of the drugs, the pathways in which the target protein of a drug was involved, side effects of drugs, metabolic enzymes of the drugs, and drug transporters. The latter two features (metabolic enzymes and drug transporters) were related to the metabolism and transportation properties of drugs, which were not analyzed or used in previous studies. Then, we devised a novel improved naïve Bayesian algorithm to construct classification models to predict effective drug combinations by using the individual types of features mentioned above. Our results indicated that the performance of our proposed method was indeed better than the naïve Bayesian algorithm and other conventional classification algorithms such as support vector machine and K-nearest neighbor.
Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian
2017-06-01
There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Landau singularities from the amplituhedron
Dennen, T.; Prlina, I.; Spradlin, M.; ...
2017-06-28
We propose a simple geometric algorithm for determining the complete set of branch points of amplitudes in planar N = 4 super-Yang-Mills theory directly from the amplituhedron, without resorting to any particular representation in terms of local Feynman integrals. This represents a step towards translating integrands directly into integrals. In particular, the algorithm provides information about the symbol alphabets of general amplitudes. We illustrate the algorithm applied to the one- and two-loop MHV amplitudes.
NASA Astrophysics Data System (ADS)
Firdaus; Arkeman, Y.; Buono, A.; Hermadi, I.
2017-01-01
Translating satellite imagery into useful data for decision making has usually been done manually by humans. In this research, we translate satellite imagery using artificial intelligence methods, specifically a convolutional neural network and a genetic algorithm, into useful data for decision making, especially for precision agriculture and agroindustry. We focus on producing a sustainable land-use plan with 3 objectives. The first is maximizing the economic factor, the second is minimizing CO2 emission, and the last is minimizing land degradation. Results show that the artificial intelligence method can produce good Pareto optimum solutions in a short time.
Transcultural Endocrinology: Adapting Type-2 Diabetes Guidelines on a Global Scale.
Nieto-Martínez, Ramfis; González-Rivas, Juan P; Florez, Hermes; Mechanick, Jeffrey I
2016-12-01
Type-2 diabetes (T2D) needs to be prevented and treated effectively to reduce its burden and consequences. White papers, such as evidence-based clinical practice guidelines (CPG) and their more portable versions, clinical practice algorithms and clinical checklists, may improve clinical decision-making and diabetes outcomes. However, CPG are underused and poorly validated. Protocols that translate and implement these CPG are needed. This review presents the global dimension of T2D, details the importance of white papers in the transculturalization process, compares relevant international CPG, analyzes cultural variables, and summarizes translation strategies that can improve care. Specific protocols and algorithmic tools are provided. Copyright © 2016 Elsevier Inc. All rights reserved.
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
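A minimal Metropolis sampler of the kind such a lesson builds toward, targeting a standard normal density; the proposal width and chain length are arbitrary choices.

    # Basic Metropolis algorithm: propose a local move, accept with
    # probability min(1, p(x')/p(x)); the chain's samples approximate the
    # target density (standard normal here, used unnormalized).
    import math, random
    random.seed(1)

    def target(x):                      # unnormalized N(0,1) density
        return math.exp(-0.5 * x * x)

    x, samples = 0.0, []
    for _ in range(50000):
        proposal = x + random.gauss(0.0, 1.0)
        if random.random() < min(1.0, target(proposal) / target(x)):
            x = proposal                # accept; otherwise stay put
        samples.append(x)

    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(round(mean, 2), round(var, 2))   # close to 0 and 1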
Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.
Moore, G W; Berman, J J
1991-01-01
Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions.
Object-oriented controlled-vocabulary translator using TRANSOFT + HyperPAD.
Moore, G. W.; Berman, J. J.
1991-01-01
Automated coding of surgical pathology reports is demonstrated. This public-domain translation software operates on surgical pathology files, extracting diagnoses and assigning codes in a controlled medical vocabulary, such as SNOMED. Context-sensitive translation algorithms are employed, and syntactically correct diagnostic items are produced that are matched with controlled vocabulary. English-language surgical pathology reports, accessioned over one year at the Baltimore Veterans Affairs Medical Center, were translated. With an interface to a larger hospital information system, all natural language pathology reports are automatically rendered as topography and morphology codes. This translator frees the pathologist from the time-intensive task of personally coding each report, and may be used to flag certain diagnostic categories that require specific quality assurance actions. PMID:1807773
SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugimoto, S; Inoue, T; Kurokawa, C
Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of the dynamic tumor tracking has been reported to be excellent. In addition to the irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple phases of the motion have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using the linear transformation between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using the combination of translation and rotation matrices. The coordinate system where the radiation source is at the origin and the beam axis lies along the z axis was adopted. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center relating to the gimbal swings, and the translation from the gimbal center to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC). The convolution/superposition algorithm was used. The dose calculations with and without gimbal swings were performed for the 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for the finite gimbal swing was implemented. The calculation time was moderate.
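A sketch of the matrix composition described in the Methods: translate from the source to the gimbal rotation center, apply the two gimbal swings, and translate back, all in homogeneous coordinates. The distance and angles below are illustrative assumptions, not Vero4DRT specifications.

    # Compose translation -> two rotations -> back-translation (read
    # right-to-left when applied to a point in the source frame).
    import numpy as np

    def translation(dz):
        T = np.eye(4); T[2, 3] = dz; return T

    def rot_x(a):                       # tilt about the x axis
        R = np.eye(4)
        R[1, 1], R[1, 2] = np.cos(a), -np.sin(a)
        R[2, 1], R[2, 2] = np.sin(a),  np.cos(a)
        return R

    def rot_y(a):                       # pan about the y axis
        R = np.eye(4)
        R[0, 0], R[0, 2] = np.cos(a),  np.sin(a)
        R[2, 0], R[2, 2] = -np.sin(a), np.cos(a)
        return R

    d = 1000.0                          # source-to-center distance (mm), assumed
    pan, tilt = np.radians(2.0), np.radians(1.0)
    M = translation(-d) @ rot_y(pan) @ rot_x(tilt) @ translation(d)
    point = np.array([0.0, 0.0, 500.0, 1.0])   # a point along the beam axis
    print(M @ point)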
Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang
2009-04-15
In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
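The k-space decoupling rests on the Fourier shift theorem: a translation is a pure linear phase ramp in k-space, while a rotation rotates k-space, so the two can be estimated and corrected independently. A minimal check of the translation half:

    # Translate an image by multiplying its spectrum with a phase ramp.
    import numpy as np

    def kspace_translate(img, dy, dx):
        ny, nx = img.shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        return np.fft.ifft2(np.fft.fft2(img) * phase).real

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = kspace_translate(img, 3, 7)
    print(np.allclose(shifted, np.roll(img, (3, 7), axis=(0, 1))))  # True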
Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul
2011-12-19
Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.
ATR-FTIR spectroscopy for the determination of Na4EDTA in detergent aqueous solutions.
Suárez, Leticia; García, Roberto; Riera, Francisco A; Diez, María A
2013-10-15
Fourier transform infrared spectroscopy in the attenuated total reflectance mode (ATR-FTIR) combined with partial least squares (PLS) algorithms was used to design calibration and prediction models for a wide range of tetrasodium ethylenediaminetetraacetate (Na4EDTA) concentrations (0.1 to 28% w/w) in aqueous solutions. The spectra obtained using air and water as a background medium were tested for the best fit. The PLS models designed afforded a sufficient level of precision and accuracy to allow even very small amounts of Na4EDTA to be determined. A root mean square error of nearly 0.37 for the validation set was obtained. Over a concentration range below 5% w/w, the values estimated from a combination of ATR-FTIR spectroscopy and a PLS algorithm model were similar to those obtained from an HPLC analysis of NaFeEDTA complexes and subsequent detection by UV absorbance. However, the lowest detection limit for Na4EDTA concentrations afforded by this spectroscopic/chemometric method was 0.3% w/w. The PLS model was successfully used as a rapid and simple method to quantify Na4EDTA in aqueous solutions of industrial detergents as an alternative to HPLC-UV analysis, which involves time-consuming dilution and complexation processes. © 2013 Elsevier B.V. All rights reserved.
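A sketch of the PLS calibration step on synthetic "spectra" using scikit-learn's PLSRegression; the band shape, noise level, and component count are assumptions for illustration, not the paper's settings.

    # Absorbance band scales with concentration; PLS recovers the mapping.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavenumbers = np.linspace(0, 1, 200)
    band = np.exp(-((wavenumbers - 0.4) ** 2) / 0.002)   # analyte band
    conc = rng.uniform(0.1, 28.0, size=60)               # % w/w, study's range
    X = conc[:, None] * band + 0.01 * rng.standard_normal((60, 200))

    pls = PLSRegression(n_components=3)
    pls.fit(X[:45], conc[:45])                           # calibration set
    pred = pls.predict(X[45:]).ravel()                   # prediction set
    rmse = np.sqrt(np.mean((pred - conc[45:]) ** 2))
    print(round(rmse, 3))                                # small held-out RMSE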
A Novel Feature Selection Technique for Text Classification Using Naïve Bayes.
Dey Sarkar, Subhajit; Goswami, Saptarsi; Agarwal, Aman; Aktar, Javed
2014-01-01
With the proliferation of unstructured data, text classification or text categorization has found many applications in topic classification, sentiment analysis, authorship identification, spam detection, and so on. There are many classification algorithms available. Naïve Bayes remains one of the oldest and most popular classifiers. On one hand, implementation of naïve Bayes is simple; on the other hand, it also requires less training data. From the literature review, it is found that naïve Bayes performs poorly compared to other classifiers in text classification, which makes the naïve Bayes classifier unusable in spite of the simplicity and intuitiveness of the model. In this paper, we propose a two-step feature selection method, based first on univariate feature selection and then on feature clustering, where the univariate step reduces the search space and clustering then selects relatively independent feature sets. We demonstrate the effectiveness of our method by a thorough evaluation and comparison over 13 datasets. The performance improvement thus achieved makes naïve Bayes comparable or superior to other classifiers. The proposed algorithm is shown to outperform other traditional methods like greedy-search-based wrappers or CFS.
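A hedged sketch of the two-step idea: univariate scoring first shrinks the search space, then clustering among the survivors picks relatively independent representatives. The scorer, cluster count, and representative choice below are assumptions, not the paper's exact procedure.

    # Step 1: univariate selection; step 2: cluster correlated survivors and
    # keep one feature per cluster.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.cluster import AgglomerativeClustering

    X, y = make_classification(n_samples=400, n_features=100,
                               n_informative=10, random_state=0)

    sel = SelectKBest(f_classif, k=30).fit(X, y)      # step 1: top 30 features
    idx = np.where(sel.get_support())[0]

    # Step 2: cluster features whose correlation profiles are similar.
    corr = np.abs(np.corrcoef(X[:, idx], rowvar=False))
    labels = AgglomerativeClustering(n_clusters=10).fit_predict(corr)
    chosen = [idx[np.where(labels == c)[0][0]] for c in range(10)]
    print(sorted(chosen))                             # 10 quasi-independent features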
Hydrogen sulfide ameliorates aging-associated changes in the kidney.
Lee, Hak Joo; Feliers, Denis; Barnes, Jeffrey L; Oh, Sae; Choudhury, Goutam Ghosh; Diaz, Vivian; Galvan, Veronica; Strong, Randy; Nelson, James; Salmon, Adam; Kevil, Christopher G; Kasinath, Balakuntalam S
2018-04-01
Aging is associated with replacement of normal kidney parenchyma by fibrosis. Because hydrogen sulfide (H2S) ameliorates kidney fibrosis in disease models, we examined its status in the aging kidney. In the first study, we examined kidney cortical H2S metabolism and signaling pathways related to synthesis of proteins including matrix proteins in young and old male C57BL/6 mice. In old mice, increase in renal cortical content of matrix protein involved in fibrosis was associated with decreased H2S generation and AMPK activity, and activation of insulin receptor (IR)/IRS-2-Akt-mTORC1-mRNA translation signaling axis that can lead to increase in protein synthesis. In the second study, we randomized 18-19 month-old male C57BL/6 mice to receive 30 μmol/L sodium hydrosulfide (NaHS) in drinking water vs. water alone (control) for 5 months. Administration of NaHS increased plasma free sulfide levels. NaHS inhibited the increase in kidney cortical content of matrix proteins involved in fibrosis and ameliorated glomerulosclerosis. NaHS restored AMPK activity and inhibited activation of IR/IRS-2-Akt-mTORC1-mRNA translation axis. NaHS inhibited age-related increase in kidney cortical content of p21, IL-1β, and IL-6, components of the senescence-associated secretory phenotype. NaHS abolished increase in urinary albumin excretion seen in control mice and reduced serum cystatin C levels suggesting improved glomerular clearance function. We conclude that aging-induced changes in the kidney are associated with H2S deficiency. Administration of H2S ameliorates aging-induced kidney changes probably by inhibiting signaling pathways leading to matrix protein synthesis.
Faleiros, Rogério Oliveira; Furriel, Rosa P M; McNamara, John Campbell
2017-10-01
Palaemonid shrimps exhibit numerous adaptive strategies, both in their life cycles and in biochemical, physiological, morphological and behavioral characteristics that reflect the wide variety of habitats in which they occur, including species that are of particular interest when analyzing adaptive osmoregulatory strategies. The present investigation evaluates the short- (hours) and long-term (days) time courses of responses of two palaemonid shrimps from separate yet overlapping osmotic niches, Palaemon northropi (marine) and Macrobrachium acanthurus (diadromous, fresh water), to differential salinity challenges at distinct levels of structural organization: (i) transcriptional, analyzing quantitative expression of gill mRNAs that encode for subunits of the Na+/K+-ATPase and V(H+)-ATPase ion transporters; (ii) translational, examining the kinetic behavior of gill Na+/K+-ATPase specific activity; and (iii) systemic, accompanying consequent adjustment of hemolymph osmolality. Palaemon northropi is an excellent hyper-hypo-osmoregulator in dilute and concentrated seawater, respectively. Macrobrachium acanthurus is a strong hyper-regulator in fresh water and hypo-regulates hemolymph osmolality and particularly [Cl-] in brackish water. Hemolymph hyper-regulation in fresh water (Macrobrachium acanthurus) and dilute seawater (Palaemon northropi) is underlain by augmented expression of both the gill Na+/K+-ATPase and V(H+)-ATPase. In contrast, in neither species is hypo-regulation sustained by changes in Na+/K+-ATPase mRNA expression levels, but rather by regulating enzyme specific activity. The integrated time course of Na+/K+- and V(H+)-ATPase expression and Na+/K+-ATPase activity in the gills of these palaemonid shrimps during acclimation to different salinities reveals versatility in their levels of regulation, and in the roles of these ion transporting pumps in sustaining processes of hyper- and hypo-osmotic and chloride regulation. Copyright © 2017 Elsevier Inc. All rights reserved.
Bertrand, Olivier J. N.; Lindemann, Jens P.; Egelhaaf, Martin
2015-01-01
Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth-structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects. PMID:26583771
Almeida, Maria de Lourdes de; Peres, Aida Maris; Ferreira, Maria Manuela Frederico; Mantovani, Maria de Fátima
2017-06-05
To perform the translation and cultural adaptation of the document named Marco Regional de Competencias Esenciales en Salud Pública para los Recursos Humanos en Salud de la Región de las Américas (Regional Framework of Core Competencies in Public Health for Health Human Resources in the Region of the Americas) from Spanish to Brazilian Portuguese. A methodological study comprising the following phases: authorization for translation; initial translation; synthesis of translations and consensus; back-translation and formation of an expert committee. In the translation of domain names, there was no difference in 66.7% (N = 4); in the translation of domain descriptions and competencies there were divergences in 100% of them (N = 6, N = 56). A consensus of more than 80% was obtained in the translation, with improvement by the expert committee through changes of words and expressions to approximate meanings to the Brazilian context. The translated and adapted document has the potential of application in research, and use in the practice of collective/public health care in Brazil.
State-based verification of RTCP-nets with nuXmv
NASA Astrophysics Data System (ADS)
Biernacka, Agnieszka; Biernacki, Jerzy; Szpyrka, Marcin
2015-12-01
The paper presents an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into nuXmv state machines. The approach enables users to verify RTCP-nets with model checking techniques provided by the nuXmv tool. Full details of the algorithm are presented and an illustrative example of the approach's usefulness is provided.
High pressure structural stability of the Na-Te system
NASA Astrophysics Data System (ADS)
Wang, Youchun; Tian, Fubo; Li, Da; Duan, Defang; Xie, Hui; Liu, Bingbing; Zhou, Qiang; Cui, Tian
2018-03-01
The ab initio evolutionary algorithm is used to search for all thermodynamically stable Na-Te compounds at extreme pressure. In our calculations, several new structures are discovered at high pressure, namely Imma Na2Te, Pmmm NaTe, Imma Na8Te2 and P4/mmm NaTe3. Like the known structures of Na2Te (Fm-3m, Pnma and P63/mmc), the Pmmm NaTe, Imma Na8Te2 and P4/mmm NaTe3 structures also show semiconducting behavior, with band gaps that decrease as pressure increases. However, we find that the band gap of the Imma Na2Te structure increases with pressure. We presume that this may be caused by the increased splitting between the Te p states and the Na s, Na p and Te d states. Furthermore, we think that the strong hybridization between the Na p state and the Te d state results in the band gap increasing with pressure.
NASA Astrophysics Data System (ADS)
Williams, Godfried B.
2005-03-01
This paper demonstrates a novel idea for transforming statistical image data to text using an autoassociative, unsupervised artificial neural network together with iconic image maps based on a shape-and-texture genetic algorithm, the underlying concepts translating the image data to text. Full details of the experiments can be accessed at http://www.uel.ac.uk/seis/applications/.
NASA Technical Reports Server (NTRS)
Hruska, S. I.; Dalke, A.; Ferguson, J. J.; Lacher, R. C.
1991-01-01
Rule-based expert systems may be structurally and functionally mapped onto a special class of neural networks called expert networks. This mapping lends itself to the adaptation of connectionist learning strategies for the expert networks. A parsing algorithm to translate C Language Integrated Production System (CLIPS) rules into a network of interconnected assertion and operation nodes has been developed. The translation of CLIPS rules to an expert network and back again is illustrated. Measures of uncertainty similar to those used in MYCIN-like systems are introduced into the CLIPS system, and techniques for combining and firing nodes in the network based on rule firing with these certainty factors in the expert system are presented. Several learning algorithms are under study which automate the process of attaching certainty factors to rules.
Should the parameters of a BCI translation algorithm be continually adapted?
McFarland, Dennis J; Sarnacki, William A; Wolpaw, Jonathan R
2011-07-15
People with or without motor disabilities can learn to control sensorimotor rhythms (SMRs) recorded from the scalp to move a computer cursor in one or more dimensions or can use the P300 event-related potential as a control signal to make discrete selections. Data collected from individuals using an SMR-based or P300-based BCI were evaluated offline to estimate the impact on performance of continually adapting the parameters of the translation algorithm during BCI operation. The performance of the SMR-based BCI was enhanced by adaptive updating of the feature weights or adaptive normalization of the features. In contrast, P300 performance did not benefit from either of these procedures. Copyright © 2011 Elsevier B.V. All rights reserved.
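One simple form of the adaptive normalization evaluated in such studies can be sketched as an exponentially weighted running z-score of a control feature; the smoothing constant and the drifting synthetic feature below are assumptions.

    # Continually re-normalize a control feature whose baseline drifts over
    # a session, as SMR amplitudes often do.
    import random
    random.seed(0)

    class AdaptiveNormalizer:
        def __init__(self, alpha=0.01):
            self.alpha, self.mean, self.var = alpha, 0.0, 1.0

        def update(self, x):
            self.mean += self.alpha * (x - self.mean)
            self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
            return (x - self.mean) / (self.var ** 0.5 + 1e-12)

    norm = AdaptiveNormalizer()
    z = [norm.update(5.0 + 0.01 * t + random.gauss(0, 1)) for t in range(1000)]
    print([round(v, 2) for v in z[-3:]])   # stays near zero despite the drift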
The Dostoevsky Machine in Georgetown: scientific translation in the Cold War.
Gordin, Michael D
2016-04-01
Machine Translation (MT) is now ubiquitous in discussions of translation. The roots of this phenomenon - first publicly unveiled in the so-called 'Georgetown-IBM Experiment' on 9 January 1954 - not only displayed the technological utopianism still associated with dreams of a universal computer translator, but were also deeply enmeshed in the political pressures of the Cold War and a dominating conception of scientific writing as both the goal of machine translation and its method. Machine translation was created, in part, as a solution to a perceived crisis sparked by the massive expansion of Soviet science. Scientific prose was also perceived as linguistically simpler, and so served as the model for how to turn a language into a series of algorithms. This paper follows the rise of the Georgetown program - the largest single program in the world - from 1954 to the (as it turns out, temporary) collapse of MT in 1964.
Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections
NASA Astrophysics Data System (ADS)
Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon
2016-06-01
This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first identifies the sufficiency of angular coverage in each 2D point of the CT image to calculate the Hilbert transform from the local “virtual” trajectory which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia
2018-03-01
Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANN) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure this algorithm's ability by applying it to text classification. The classification task herein is done by considering the content of sentiment in a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F-1 score, while the best feature extraction technique for improving the modelling result is the bigram.
Thin NaI(Tl) crystals to enhance the detection sensitivity for molten 241Am sources.
Peura, Pauli; Bélanger-Champagne, Camille; Eerola, Paula; Dendooven, Peter; Huhtalo, Eero
2018-04-26
A thin 5-mm NaI(Tl) scintillator detector was tested with the goal of enhancing the detection efficiency of 241Am gamma and X rays for steelworks operations. The performance of a thin (5 mm) NaI(Tl) detector was compared with a standard 76.2-mm thick NaI(Tl) detector. The 5-mm thick detector crystal results in a 55% smaller background rate at 60 keV compared with the thicker detector, translating into the ability to detect 30% weaker 241Am sources. For a 5 mm thick and 76.2 mm diameter NaI detector in the ladle car tunnel at Outokumpu Tornio Works, the minimum activity of a molten 241Am source that can be detected in 5 s with 95% probability is 9 MBq. Copyright © 2018 Elsevier Ltd. All rights reserved.
Prototype for Meta-Algorithmic, Content-Aware Image Analysis
2015-03-01
Final technical report, University of Virginia, March 2015 (contract FA8750-12-C-0181, program element 62305E). Several approaches were studied in detail and their results on a sample dataset are presented. Subject terms: Image Analysis, Computer Vision, Content.
Sai, Linwei; Tang, Lingli; Zhao, Jijun; Wang, Jun; Kumar, Vijay
2011-11-14
The ground state structures of neutral and anionic Na(n)Si(m) clusters (1 ≤ n ≤ 3, 1 ≤ m ≤ 11) have been determined using a genetic algorithm incorporated in a first-principles total-energy code. The size dependence of the structural and electronic properties is discussed in detail. It is found that the lowest-energy structures of Na(n)Si(m) clusters resemble those of the pure Si clusters. Interestingly, Na atoms in neutral Na(n)Si(m) clusters are usually well separated by the Si(m) skeleton, whereas Na atoms can form Na-Na bonds in some anionic clusters. The ionization potentials, adiabatic electron affinities, and photoelectron spectra are also calculated, and the results compare well with the experimental data. © 2011 American Institute of Physics.
Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib
2017-10-01
Algal-model-based multi-objective optimization using an elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK fertilizer. Optimization problems involving two- and three-objective functions were solved simultaneously. The objective functions are: maximization of algal biomass and lipid productivity with minimization of cultivation time and cost. Time-variant light intensity and temperature, together with the NPK-fertilizer, NaCl and NaHCO3 loadings, are the important decision variables. An algal model involving Monod/Andrews adsorption kinetics and the Droop model with internal nutrient cell quota was used for the optimization studies. Sets of non-dominated (equally good) Pareto optimal solutions were obtained for the problems studied. It was observed that time-variant optimal light intensity and temperature trajectories, together with optimum NPK-fertilizer, NaCl and NaHCO3 concentrations, have a significant influence in improving biomass and lipid productivity while minimizing cultivation time and cost. The proposed optimization studies may be helpful for implementing the control strategy in scale-up operation. Copyright © 2017 Elsevier Ltd. All rights reserved.
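The core selection step of an elitist non-dominated sorting genetic algorithm is Pareto dominance. A minimal sketch, with invented toy objective vectors standing in for (cultivation time, negated productivity); the full NSGA machinery (non-dominated ranking of the whole population, crowding distance, inheritance) is not reproduced:

```python
# Sketch: Pareto non-dominated sorting, the selection core of NSGA-style
# multi-objective optimizers. All objectives here are minimized.
def dominates(a, b):
    """True if solution a dominates b (no worse in all, strictly better in one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy objectives: (cultivation time, -biomass productivity)
points = [(5.0, -1.2), (4.0, -0.9), (6.0, -1.5), (4.0, -1.2)]
print(pareto_front(points))   # (4.0, -1.2) and (6.0, -1.5) survive
```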
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5k instruction cycles (12 instructions per pixel) per frame. At a 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
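The I2A estimates global motion by modeling the current frame as a first-order blend of shifted reference frames and solving a 2 × 2 least-squares system, which is why it fits in such a small DSP budget. A numpy sketch under that assumption (valid for roughly sub-pixel to one-pixel inter-frame motion; sign convention follows d = cur − ref):

```python
import numpy as np

def i2a_global_shift(ref, cur):
    """Image interpolation algorithm (I2A): fit cur ≈ ref + a*fx + b*fy,
    where fx, fy are central differences of ref, and solve for (a, b)."""
    fx = (np.roll(ref, -1, axis=1) - np.roll(ref, 1, axis=1)) / 2.0
    fy = (np.roll(ref, -1, axis=0) - np.roll(ref, 1, axis=0)) / 2.0
    d = cur - ref
    A = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])
    rhs = np.array([np.sum(fx * d), np.sum(fy * d)])
    return np.linalg.solve(A, rhs)          # (a, b) in pixels

y, x = np.mgrid[0:64, 0:64]
ref = np.sin(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
cur = np.roll(ref, 1, axis=1)               # scene moved one pixel along x
print(i2a_global_shift(ref, cur))           # close to (-1, 0)
```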
Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi
2017-11-02
Various kinds of data mining algorithms are continuously proposed with the development of related disciplines. Their applicable scopes and performances differ. Hence, finding a suitable algorithm for a dataset is becoming important for biomedical researchers seeking to solve practical problems promptly. In this paper, seven widely used algorithms, namely, C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 top-click UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to the least class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced datasets of multi-class tasks. Simple algorithms, such as naïve Bayes and the logistic regression model, are suitable for small datasets with high correlation between the task and the other, non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept at balanced small datasets of binary-class tasks. No algorithm can maintain the best performance on all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
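A sketch of this kind of benchmarking loop with scikit-learn. The entropy-criterion decision tree is only a C4.5-like stand-in (scikit-learn implements CART), and the built-in breast cancer data substitutes for the 12 UCI datasets:

```python
# Sketch: cross-validated comparison of several classifiers on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
models = {
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "kNN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(),
    "logistic regression": LogisticRegression(max_iter=5000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy
    print(f"{name:20s} {scores.mean():.3f}")
```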
Hücker, Sarah M.; Ardern, Zachary; Goldberg, Tatyana; Schafferhans, Andrea; Bernhofer, Michael; Vestergaard, Gisle; Nelson, Chase W.; Schloter, Michael; Rost, Burkhard; Scherer, Siegfried
2017-01-01
In the past, short protein-coding genes were often disregarded by genome annotation pipelines. Transcriptome sequencing (RNAseq) signals outside of annotated genes have usually been interpreted to indicate either ncRNA or pervasive transcription. Therefore, in addition to the transcriptome, the translatome (RIBOseq) of the enteric pathogen Escherichia coli O157:H7 strain Sakai was determined at two optimal growth conditions and a severe stress condition combining low temperature and high osmotic pressure. All intergenic open reading frames potentially encoding a protein of ≥ 30 amino acids were investigated with regard to coverage by transcription and translation signals and their translatability expressed by the ribosomal coverage value. This led to the discovery of 465 unique, putative novel genes not yet annotated in this E. coli strain, which are evenly distributed over both DNA strands of the genome. For 255 of the novel genes, annotated homologs in other bacteria were found, and a machine-learning algorithm, trained on small protein-coding E. coli genes, predicted that 89% of these translated open reading frames represent bona fide genes. The remaining 210 putative novel genes without annotated homologs were compared to the 255 novel genes with homologs and to 250 short annotated genes of this E. coli strain. All three groups turned out to be similar with respect to their translatability distribution, fractions of differentially regulated genes, secondary structure composition, and the distribution of evolutionary constraint, suggesting that both novel groups represent legitimate genes. However, the machine-learning algorithm only recognized a small fraction of the 210 genes without annotated homologs. It is possible that these genes represent a novel group of genes, which have unusual features dissimilar to the genes of the machine-learning algorithm training set. PMID:28902868
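A sketch of how a ribosomal coverage value might be computed, assuming it is the ratio of RIBOseq to RNAseq read densities over an ORF; the paper's exact normalization is not quoted here:

```python
# Sketch: ribosomal coverage value (RCV) as translatome/transcriptome
# read-density ratio for one ORF. Counts and length are illustrative.
def rcv(ribo_reads, rna_reads, orf_len_nt):
    ribo_density = ribo_reads / orf_len_nt
    rna_density = rna_reads / orf_len_nt
    return ribo_density / rna_density if rna_density > 0 else float("nan")

print(rcv(ribo_reads=120, rna_reads=300, orf_len_nt=90))   # 0.4
```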
NASA Astrophysics Data System (ADS)
Medgyesimitschang, L. N.; Putnam, J. M.
1982-05-01
A general analytical formulation, based on the method of moments (MM) is described for solving electromagnetic problems associated with off-surface (wire) and aperture radiators on finite-length cylinders of arbitrary cross section, denoted in this report as bodies of translation (BOT). This class of bodies can be used to model structures with noncircular cross sections such as wings, fins and aircraft fuselages.
NASA Astrophysics Data System (ADS)
Ernawati; Carnia, E.; Supriatna, A. K.
2018-03-01
Eigenvalues and eigenvectors in max-plus algebra play the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for understanding the dynamics of systems such as train scheduling, production system scheduling and the scheduling of learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors of the protein translation process. In this paper an eigenvector formula is given for ribosome dynamics during mRNA translation by using the Kleene star algorithm, in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix B_λ^⊗n of the model. Among the important properties, it always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, λ = τ_in, and that column is the eigenvector of the model corresponding to λ.
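A sketch of the max-plus machinery involved, where ⊕ is max and ⊗ is +. The eigenvector property used below (columns of (A − λ)* for nodes on a critical cycle are eigenvectors of A) is standard max-plus theory; the paper's protein-translation matrix B_λ itself is not reproduced, and the example matrix is invented:

```python
import numpy as np

NEG_INF = -np.inf                         # the max-plus "zero" element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_ij = max_k (A_ik + B_kj)."""
    C = np.full((A.shape[0], B.shape[1]), NEG_INF)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def kleene_star(A):
    """A* = E ⊕ A ⊕ A⊗2 ⊕ ... ⊕ A⊗(n-1); finite when max cycle mean ≤ 0."""
    n = A.shape[0]
    E = np.full((n, n), NEG_INF)
    np.fill_diagonal(E, 0.0)              # max-plus identity matrix
    S, P = E.copy(), E.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

A = np.array([[1.0, 3.0], [2.0, 1.0]])
lam = 2.5                                  # max cycle mean: max(1, (3+2)/2)
print(kleene_star(A - lam))                # each column is an eigenvector of A
```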
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
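Of the four methods, phase correlation is the most compact to sketch: the translation appears as the peak of the inverse FFT of the normalized cross-power spectrum. A minimal version for integer shifts (subpixel refinements and the other three algorithms are not shown):

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer (dy, dx) translation between two images from
    the peak of the inverse FFT of the normalized cross-power spectrum."""
    F, G = np.fft.fft2(ref), np.fft.fft2(mov)
    R = F * np.conj(G)
    R /= np.maximum(np.abs(R), 1e-12)      # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:             # wrap large shifts to negatives
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation(shifted, img))     # (3, -5)
```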
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
Translational informatics: an industry perspective.
Cantor, Michael N
2012-01-01
Translational informatics (TI) is extremely important for the pharmaceutical industry, especially as the bar for regulatory approval of new medications is set higher and higher. This paper will explore three specific areas in the drug development lifecycle, from tools developed by precompetitive consortia to standardized clinical data collection to the effective delivery of medications using clinical decision support, in which TI has a major role to play. Advancing TI will require investment in new tools and algorithms, as well as ensuring that translational issues are addressed early in the design process of informatics projects, and also given higher weight in funding or publication decisions. Ultimately, the source of translational tools and differences between academia and industry are secondary, as long as they move towards the shared goal of improving health.
MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes.
Zhu, Huaiqiu; Hu, Gang-Qing; Yang, Yi-Fan; Wang, Jin; She, Zhen-Su
2007-03-16
Despite remarkable success in the computational prediction of genes in Bacteria and Archaea, the lack of a comprehensive understanding of prokaryotic gene structures prevents further elucidation of differences among genomes. It therefore remains of interest to develop new ab initio algorithms which not only accurately predict genes, but also facilitate comparative studies of prokaryotic genomes. This paper describes a new prokaryotic gene-finding algorithm based on a comprehensive statistical model of protein-coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence, and the latter comprises several relevant features related to translation initiation. They are combined to form a so-called Multivariate Entropy Distance (MED) algorithm, MED 2.0, that incorporates several strategies in an iterative program. The iterations enable a non-supervised learning process and yield a set of genome-specific parameters for the gene structure before the prediction of genes is made. Results of extensive tests show that MED 2.0 achieves competitively high performance in gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to the existing gene finders and the current GenBank annotation.
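The entropy-density idea can be illustrated with normalized Shannon entropy contributions of base frequencies. This is a simplified stand-in: the paper's actual EDP is defined over coding-sequence statistics and differs in detail.

```python
# Sketch: a toy "entropy density profile"-style vector for a DNA string,
# giving each base's normalized contribution to the Shannon entropy.
from collections import Counter
from math import log

def entropy_density_profile(seq):
    n = len(seq)
    counts = Counter(seq)
    H = -sum((c / n) * log(c / n) for c in counts.values())   # total entropy
    return [(-(counts[b] / n) * log(counts[b] / n) / H) if counts[b] else 0.0
            for b in "ACGT"]               # entries sum to 1 over present bases

print(entropy_density_profile("ATGGCGTACGTTGCAACGT"))
```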
Nascent chain-monitored remodeling of the Sec machinery for salinity adaptation of marine bacteria
Ishii, Eiji; Chiba, Shinobu; Hashimoto, Narimasa; Kojima, Seiji; Homma, Michio; Ito, Koreaki; Akiyama, Yoshinori; Mori, Hiroyuki
2015-01-01
SecDF interacts with the SecYEG translocon in bacteria and enhances protein export in a proton-motive-force-dependent manner. Vibrio alginolyticus, a marine-estuarine bacterium, contains two SecDF paralogs, V.SecDF1 and V.SecDF2. Here, we show that the export-enhancing function of V.SecDF1 requires Na+ instead of H+, whereas V.SecDF2 is Na+-independent, presumably requiring H+. In accord with the cation-preference difference, V.SecDF2 was only expressed under limited Na+ concentrations whereas V.SecDF1 was constitutive. However, it is not the decreased concentration of Na+ per se that the bacterium senses to up-regulate the V.SecDF2 expression, because marked up-regulation of the V.SecDF2 synthesis was observed irrespective of Na+ concentrations under certain genetic/physiological conditions: (i) when the secDF1VA gene was deleted and (ii) whenever the Sec export machinery was inhibited. VemP (Vibrio export monitoring polypeptide), a secretory polypeptide encoded by the upstream ORF of secDF2VA, plays the primary role in this regulation by undergoing regulated translational elongation arrest, which leads to unfolding of the Shine–Dalgarno sequence for translation of secDF2VA. Genetic analysis of V. alginolyticus established that the VemP-mediated regulation of SecDF2 is essential for the survival of this marine bacterium in low-salinity environments. These results reveal that a class of marine bacteria exploits nascent-chain ribosome interactions to optimize their protein export pathways to propagate efficiently under different ionic environments that they face in their life cycles. PMID:26392525
Benndorf, Matthias; Burnside, Elizabeth S; Herda, Christoph; Langer, Mathias; Kotter, Elmar
2015-08-01
Lesions detected at mammography are described with a highly standardized terminology: the breast imaging-reporting and data system (BI-RADS) lexicon. Up to now, no validated semantic computer-assisted classification algorithm exists to interactively link combinations of morphological descriptors from the lexicon to a probabilistic risk estimate of malignancy. The authors therefore aim at the external validation of the mammographic mass diagnosis (MMassDx) algorithm. A classification algorithm like MMassDx must perform well in a variety of clinical circumstances and in datasets that were not used to generate the algorithm in order to ultimately become accepted in clinical routine. The MMassDx algorithm uses a naïve Bayes network and calculates post-test probabilities of malignancy based on two distinct sets of variables: (a) BI-RADS descriptors and age ("descriptor model") and (b) BI-RADS descriptors, age, and BI-RADS assessment categories ("inclusive model"). The authors evaluate both the MMassDx (descriptor) and MMassDx (inclusive) models using two large publicly available datasets of mammographic mass lesions: the digital database for screening mammography (DDSM) dataset, which contains two subsets from the same examinations (a medio-lateral oblique (MLO) view and a cranio-caudal (CC) view dataset), and the mammographic mass (MM) dataset. The DDSM contains 1220 mass lesions and the MM dataset contains 961 mass lesions. The authors evaluate discriminative performance using the area under the receiver-operating-characteristic curve (AUC) and compare this to the BI-RADS assessment categories alone (i.e., the clinical performance) using the DeLong method. The authors also evaluate whether the assigned probabilistic risk estimates reflect the lesions' true risk of malignancy using calibration curves. The authors demonstrate that the MMassDx algorithms show good discriminatory performance. AUC for the MMassDx (descriptor) model in the DDSM data is 0.876/0.895 (MLO/CC view) and AUC for the MMassDx (inclusive) model in the DDSM data is 0.891/0.900 (MLO/CC view). AUC for the MMassDx (descriptor) model in the MM data is 0.862 and AUC for the MMassDx (inclusive) model in the MM data is 0.900. In all scenarios, MMassDx performs significantly better than clinical performance, P < 0.05 each. The authors furthermore demonstrate that the MMassDx algorithm systematically underestimates the risk of malignancy in the DDSM and MM datasets, especially when low probabilities of malignancy are assigned. The authors' results reveal that the MMassDx algorithms have good discriminatory performance but less accurate calibration when tested on two independent validation datasets. Improvement in calibration and testing in a prospective clinical population will be important steps in the pursuit of translation of these algorithms to the clinic.
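For intuition, a minimal sketch of how a naïve Bayes model turns categorical descriptors into a post-test probability of malignancy. The descriptor names, likelihoods, and prior below are invented for illustration and are not the MMassDx parameters:

```python
# Sketch: naive Bayes post-test probability from categorical descriptors.
def posterior_malignant(features, likelihoods, prior=0.3):
    """features: name -> value; likelihoods[name][value] = (P(v|M), P(v|B))."""
    pm, pb = prior, 1.0 - prior
    for name, value in features.items():
        lm, lb = likelihoods[name][value]     # conditional independence
        pm, pb = pm * lm, pb * lb
    return pm / (pm + pb)

likelihoods = {
    "margin": {"spiculated": (0.60, 0.05), "circumscribed": (0.05, 0.55)},
    "shape":  {"irregular":  (0.55, 0.15), "oval":          (0.10, 0.50)},
}
print(posterior_malignant({"margin": "spiculated", "shape": "irregular"},
                          likelihoods))       # high post-test probability
```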
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Zhang, Ye; Qin, Yuliang
2018-01-01
Imaging, classification, and recognition techniques for ballistic targets in midcourse have always been a focus of research in the radar field for military applications. However, the high-velocity translation of ballistic targets causes the range profile and the Doppler spectrum to shift, slope, and fold, effects that are especially severe in the terahertz region. Therefore, a two-step translation compensation method based on envelope alignment is presented. The rough compensation is based on the traditional envelope alignment algorithm in inverse synthetic aperture radar imaging, and the fine compensation is supported by distance fitting. Then, a wideband imaging radar system with a carrier frequency of 0.32 THz is introduced, and an experiment on a precession missile model is carried out. After translation compensation with the proposed method, the range profile and the micro-Doppler distributions unaffected by translation are obtained, providing an important foundation for high-resolution imaging and micro-Doppler extraction in terahertz radar.
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency and show that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
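For contrast with the parallel algorithms, a compact sequential arc consistency routine in the AC-3 style (the support-counting bookkeeping of AC-4, which yields the optimal bound mentioned above, is not reproduced):

```python
# Sketch: AC-3-style arc consistency for a binary constraint network.
from collections import deque

def ac3(domains, constraints):
    """domains: var -> set of labels; constraints: (x, y) -> predicate(a, b).
    Deletes labels with no support until every arc is consistent."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        revised = {a for a in domains[x]
                   if not any(pred(a, b) for b in domains[y])}
        if revised:
            domains[x] -= revised
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return domains

doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lambda a, b: a < b, ("Y", "X"): lambda a, b: a > b}
print(ac3(doms, cons))    # X loses 3, Y loses 1
```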
Yue, Qingxi; Zhen, Hong; Huang, Ming; Zheng, Xi; Feng, Lixing; Jiang, Baohong; Yang, Min; Wu, Wanying; Liu, Xuan; Guo, Dean
2016-01-01
Although the possibility of developing cardiac steroids/cardiac glycosides as novel cancer therapeutic agents has been recognized, the mechanism of their anticancer activity is still not clear enough. Toad venom extract containing bufadienolides, which belong to the cardiac steroids, has long been used in traditional Chinese medicine for cancer therapy in China. The cytotoxicity of arenobufagin, a bufadienolide isolated from toad venom, on human cervical carcinoma HeLa cells was examined. The protein expression profiles of control HeLa cells and HeLa cells treated with arenobufagin for 48 h were analyzed using two-dimensional electrophoresis. Differentially expressed proteins in HeLa cells treated with arenobufagin were identified, and the pathways related to these proteins were mapped from the KEGG database. Computational molecular docking was performed to verify the binding of arenobufagin to Na, K-ATPase. The effects of arenobufagin on the Na, K-ATPase activity and proteasome activity of HeLa cells were checked. The protein-protein interaction network between Na, K-ATPase and the proteasome was constructed, and the expression of the possible intermediate proteins ataxin-1 and translationally-controlled tumor protein in HeLa cells treated with arenobufagin was then checked. Arenobufagin induced apoptosis and G2/M cell cycle arrest in HeLa cells. The cytotoxic effect of arenobufagin was associated with 25 differentially expressed proteins, including proteasome-related proteins, calcium ion binding-related proteins, oxidative stress-related proteins, metabolism-related enzymes and others. The results of computational molecular docking revealed that arenobufagin binds in the cavity formed by the transmembrane alpha subunits of Na, K-ATPase, blocking the pathway of extracellular Na+/K+ cation exchange and inhibiting the function of ion exchange. Arenobufagin inhibited the activity of Na, K-ATPase and the proteasome, decreased the expression of the Na, K-ATPase α1 and α3 subunits and increased the expression of WEE1 in HeLa cells. Antibodies against the Na, K-ATPase α1 and α3 subunits, alone or combined with arenobufagin, also inhibited the activity of the proteasome. Furthermore, flow cytometry analysis showed that the expression of the possible intermediate proteins ataxin-1 and translationally-controlled tumor protein was increased in HeLa cells treated with arenobufagin. These results indicated that arenobufagin might directly bind with the Na, K-ATPase α1 and α3 subunits and that the inhibitory effect of arenobufagin on the proteasomal activity of HeLa cells might be related to its binding with Na, K-ATPase. PMID:27428326
2013-01-01
Introduction: In mammals, internal Na+ homeostasis is maintained through Na+ reabsorption via a variety of Na+ transport proteins with mutually compensating functions, which are expressed in different segments of the nephrons. In zebrafish, Na+ homeostasis is achieved mainly through the skin/gill ionocytes, namely Na+/H+ exchanger (NHE3b)-expressing H+-ATPase rich (HR) cells and Na+-Cl- cotransporter (NCC)-expressing NCC cells, which are functionally homologous to mammalian proximal and distal convoluted tubular cells, respectively. The present study aimed to investigate whether or not the functions of HR and NCC ionocytes are differentially regulated to compensate for disruptions of internal Na+ homeostasis and if the cell differentiation of the ionocytes is involved in this regulation pathway.

Results: Translational knockdown of ncc caused an increase in HR cell number and a resulting augmentation of Na+ uptake in zebrafish larvae, while NHE3b loss-of-function caused an increase in NCC cell number with a concomitant recovery of Na+ absorption. Environmental acid stress suppressed nhe3b expression in HR cells and decreased Na+ content, which was followed by up-regulation of NCC cells accompanied by recovery of Na+ content. Moreover, knockdown of ncc resulted in a significant decrease of Na+ content in acid-acclimated zebrafish.

Conclusions: These results provide evidence that HR and NCC cells exhibit functional redundancy in Na+ absorption, similar to the regulatory mechanisms in mammalian kidney, and suggest this functional redundancy is a critical strategy used by zebrafish to survive in a harsh environment that disturbs body fluid Na+ homeostasis. PMID:23924428
Violation of the zero-force theorem in the time-dependent Krieger-Li-Iafrate approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundt, Michael; Kuemmel, Stephan; Leeuwen, Robert van
2007-05-15
We demonstrate that the time-dependent Krieger-Li-Iafrate approximation in combination with the exchange-only functional violates the zero-force theorem. By analyzing the time-dependent dipole moment of Na₅ and Na₉⁺, we furthermore show that this can lead to an unphysical self-excitation of the system depending on the system properties and the excitation strength. Analytical aspects, especially the connection between the zero-force theorem and the generalized-translation invariance of the potential, are discussed.
Algorithms for Data Intensive Applications on Intelligent and Smart Memories
2003-03-01
The memory hierarchy as well as the Translation Lookaside Buffer (TLB) affect the effectiveness of cache-friendly optimizations. These penalties vary among processors and cause large variations in the effectiveness of cache performance optimizations. The area of graph problems is fundamental in a wide variety of …
Evaluation of a treatment-based classification algorithm for low back pain: a cross-sectional study.
Stanton, Tasha R; Fritz, Julie M; Hancock, Mark J; Latimer, Jane; Maher, Christopher G; Wand, Benedict M; Parent, Eric C
2011-04-01
Several studies have investigated criteria for classifying patients with low back pain (LBP) into treatment-based subgroups. A comprehensive algorithm was created to translate these criteria into a clinical decision-making guide. This study investigated the translation of the individual subgroup criteria into a comprehensive algorithm by studying the prevalence of patients meeting the criteria for each treatment subgroup and the reliability of the classification. This was a cross-sectional, observational study. Two hundred fifty patients with acute or subacute LBP were recruited from the United States and Australia to participate in the study. Trained physical therapists performed standardized assessments on all participants. The researchers used these findings to classify participants into subgroups. Thirty-one participants were reassessed to determine interrater reliability of the algorithm decision. Based on individual subgroup criteria, 25.2% (95% confidence interval [CI]=19.8%-30.6%) of the participants did not meet the criteria for any subgroup, 49.6% (95% CI=43.4%-55.8%) of the participants met the criteria for only one subgroup, and 25.2% (95% CI=19.8%-30.6%) of the participants met the criteria for more than one subgroup. The most common combination of subgroups was manipulation + specific exercise (68.4% of the participants who met the criteria for 2 subgroups). Reliability of the algorithm decision was moderate (kappa=0.52, 95% CI=0.27-0.77, percentage of agreement=67%). Due to a relatively small patient sample, reliability estimates are somewhat imprecise. These findings provide important clinical data to guide future research and revisions to the algorithm. The finding that 25% of the participants met the criteria for more than one subgroup has important implications for the sequencing of treatments in the algorithm. Likewise, the finding that 25% of the participants did not meet the criteria for any subgroup provides important information regarding potential revisions to the algorithm's bottom table (which guides unclear classifications). Reliability of the algorithm is sufficient for clinical use.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques, such as the expectation-maximization (EM) approach to the maximum-likelihood algorithm, can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on currently available high-performance processors: the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to significantly reduce the computational burden compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
Walimbe, Vivek; Shekhar, Raj
2006-12-01
We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
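The similarity measure driving both the global and subvolume alignments is normalized mutual information. A sketch using the common joint-histogram estimate NMI(A, B) = (H(A) + H(B)) / H(A, B); the blending of current and prior mutual histograms described above is omitted:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI from a joint intensity histogram of two images or volumes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(p):
        p = p[p > 0]                      # ignore empty histogram cells
        return -np.sum(p * np.log(p))
    return (H(px) + H(py)) / H(pxy.ravel())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(normalized_mutual_information(a, a))                     # 2.0: identical
print(normalized_mutual_information(a, rng.random((64, 64))))  # near 1.0
```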
McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David
2018-07-01
A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper will focus on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may have utility also for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. It also presents arguments that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model control of basic behavioral processes that is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.
Hershberg, Julie A; Rose, Dorian K; Tilson, Julie K; Brutsch, Bettina; Correa, Anita; Gallichio, Joann; McLeod, Molly; Moore, Craig; Wu, Sam; Duncan, Pamela W; Behrman, Andrea L
2017-01-01
Despite efforts to translate knowledge into clinical practice, barriers often arise in adapting the strict protocols of a randomized, controlled trial (RCT) to the individual patient. The Locomotor Experience Applied Post-Stroke (LEAPS) RCT demonstrated equal effectiveness of 2 intervention protocols for walking recovery poststroke; both protocols were more effective than usual care physical therapy. The purpose of this article was to provide knowledge-translation tools to facilitate implementation of the LEAPS RCT protocols into clinical practice. Participants from 2 of the trial's intervention arms: (1) early Locomotor Training Program (LTP) and (2) Home Exercise Program (HEP) were chosen for case presentation. The two cases illustrate how the protocols are used in synergy with individual patient presentations and clinical expertise. Decision algorithms and guidelines for progression represent the interface between implementation of an RCT standardized intervention protocol and clinical decision-making. In each case, the participant presents with a distinct clinical challenge that the therapist addresses by integrating the participant's unique presentation with the therapist's expertise while maintaining fidelity to the LEAPS protocol. Both participants progressed through an increasingly challenging intervention despite their own unique presentation. Decision algorithms and exercise progression for the LTP and HEP protocols facilitate translation of the RCT protocol to the real world of clinical practice. The two case examples to facilitate translation of the LEAPS RCT into clinical practice by enhancing understanding of the protocols, their progression, and their application to individual participants.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A147).
Noise analysis of genome-scale protein synthesis using a discrete computational model of translation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch; Swiss Institute of Bioinformatics
2015-07-28
Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
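To make the discrete, mechanistic picture concrete, here is a minimal Gillespie-style simulation of ribosomes initiating, stepping with excluded volume, and terminating on a single mRNA. The rate constants and footprint are invented, and the paper's genome-scale features (many mRNA species, a shared free-ribosome pool) are omitted:

```python
import random

def simulate(codons=60, k_init=0.5, k_elong=5.0, footprint=10, t_end=500.0):
    """Gillespie simulation of translation on one mRNA; returns mean rate."""
    random.seed(1)
    ribos, proteins, t = [], 0, 0.0        # ribosome positions, leader first
    while t < t_end:
        rates = []
        for i, pos in enumerate(ribos):    # elongation/termination events
            blocked = i > 0 and ribos[i - 1] - pos <= footprint
            if not blocked:
                rates.append((k_elong, ("step", i)))
        if not ribos or ribos[-1] > footprint:
            rates.append((k_init, ("init", None)))   # start region is clear
        total = sum(r for r, _ in rates)
        t += random.expovariate(total)     # exponential waiting time
        pick, acc = random.uniform(0, total), 0.0
        for r, (kind, i) in rates:         # choose event proportional to rate
            acc += r
            if pick <= acc:
                if kind == "init":
                    ribos.append(0)
                elif ribos[i] == codons - 1:
                    ribos.pop(i)           # terminate: release a protein
                    proteins += 1
                else:
                    ribos[i] += 1
                break
    return proteins / t_end

print(simulate())                          # proteins per unit time
```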
Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon
2012-01-01
The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are updated based on a proportional-integral-derivative (PID) controller. In a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, improved updating procedure and performance, good robustness, and runs faster than Kohonen's SOM. PMID:22481977
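For orientation, a sketch of the classical Kohonen update that MIL-SOM modifies; the PID-controlled learning-rate scheme itself is not reproduced here, so treat this as the baseline rule only:

```python
import numpy as np

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen SOM step: weights (rows, cols, dim), sample x (dim,)."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))                # neighborhood kernel
    return weights + lr * h[..., None] * (x - weights)

w = np.random.default_rng(0).random((5, 5, 2))
w = som_update(w, np.array([0.2, 0.8]))
```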
Translation of one high-level language to another: COBOL to ADA, an example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.A.
1986-01-01
This dissertation discusses the difficulties encountered in, and explores possible solutions to, the task of automatically converting programs written in one HLL, COBOL, into programs written in another HLL, Ada, while maintaining readability. This paper presents at least one set of techniques and algorithms to solve many of the problems that were encountered. The differing view of records is solved by isolating those instances where it is a problem, then using the RENAMES option of Ada. Several solutions for the decimal-arithmetic translation are discussed. One method used is to emulate COBOL arithmetic in an arithmetic package. Another partial solution suggested is to convert the values to decimal-scaled integers and use modular arithmetic. Conversion to fixed-point type and floating-point type are the third and fourth methods. The work of another researcher, Bobby Othmer, is utilized to correct any unstructured code, to remap statements not directly translatable such as ALTER, and to pull together isolated code sections. Algorithms are then presented to convert this restructured COBOL code into Ada code with local variables, parameters, and packages. The input/output requirements are partially met by mapping them to a series of procedure calls that interface with Ada's standard input-output package. Several examples are given of hand translations of COBOL programs. In addition, a possibly new method is shown for measuring the readability of programs.
Jung, Heejung; Baek, Gahyun; Kim, Jaai; Shin, Seung Gu; Lee, Changsoo
2016-01-01
The effects of mild-temperature thermochemical pretreatments with HCl or NaOH on the solubilization and biomethanation of Ulva biomass were assessed. Within the explored region (0-0.2 M HCl/NaOH, 60-90°C), both methods were effective for solubilization (about a 2-fold increase in the proportion of soluble organics), particularly under high-temperature and high-chemical-dose conditions. However, increased solubilization did not translate into enhanced biogas production for either method. Response surface analysis statistically revealed that HCl or NaOH addition enhances the degree of solubilization while adversely affecting methanation. The thermal-only treatment at the upper-limit temperature (90°C) was estimated to maximize biogas production for both methods, suggesting limited potential of HCl/NaOH treatment for enhanced Ulva biomethanation. Compared to HCl, NaOH had much stronger positive and negative effects on solubilization and methanation, respectively. Methanosaeta was likely the dominant methanogen group in all trials. Bacterial community structure varied among the trials according primarily to HCl/NaOH addition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Assessment of various supervised learning algorithms using different performance metrics
NASA Astrophysics Data System (ADS)
Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.
2017-11-01
Our work presents a comparison of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms considered in the following work are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity and prevalence.
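All of the listed quantities derive from the binary confusion matrix. A small sketch, taking the G-measure as the geometric mean of precision and recall (one common definition; the counts are illustrative):

```python
# Sketch: the listed performance metrics from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    p, n = tp + fn, fp + tn                 # actual positives / negatives
    tpr = tp / p                            # true positive rate (recall)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (p + n)
    return dict(
        accuracy=accuracy,
        f_measure=2 * precision * tpr / (precision + tpr),
        g_measure=(precision * tpr) ** 0.5,
        precision=precision,
        misclassification_rate=1 - accuracy,
        false_positive_rate=fp / n,
        true_positive_rate=tpr,
        specificity=tn / n,
        prevalence=p / (p + n),
    )

print(metrics(tp=40, fp=10, fn=5, tn=45))
```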
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiland, W.; Tittes, U.; Hertel, I.V.
Angular distributions for the electronic to vibrational, rotational and translational energy (E-VRT) transfer process Na*(3p) + H₂, D₂ → Na(3s) + H₂(v′, j′) with product energy analysis have been measured for the first time. The differential cross sections are forward peaked, constant but small between 35° and 160°, and very slightly increasing at 180°. The observations can be qualitatively understood by a simple model for the particle motion on the attractive A²B₂ excited-state surface with a hop to the repulsive X²A₁ ground state.
NASA Astrophysics Data System (ADS)
Avetisyan, H.; Bruna, O.; Holub, J.
2016-11-01
Numerous techniques and algorithms are dedicated to extracting emotions from input data. In our investigation it was found that emotion-detection approaches can be classified into the following 3 types: keyword-based/lexicon-based, learning-based, and hybrid. The most commonly used techniques, such as the keyword-spotting method, Support Vector Machines, the Naïve Bayes Classifier, Hidden Markov Models and hybrid algorithms, have shown impressive results in this sphere and can reach more than 90% detection accuracy.
Torres, Heloísa de Carvalho; Chaves, Fernanda Figueredo; da Silva, Daniel Dutra Romualdo; Bosco, Adriana Aparecida; Gabriel, Beatriz Diniz; Reis, Ilka Afonso; Rodrigues, Júlia Santos Nunes; Pagano, Adriana Silvina
2016-01-01
Objective: to translate, adapt and validate the contents of the Diabetes Medical Management Plan for the Brazilian context. This protocol was developed by the American Diabetes Association and guides the procedure of educators for the care of children and adolescents with diabetes in schools. Method: this methodological study was conducted in four stages: initial translation, synthesis of the initial translation, back translation and content validation by an expert committee composed of 94 specialists (29 applied linguists and 65 health professionals), who evaluated the translated version through an online questionnaire. The level of agreement among the judges was calculated based on the Content Validity Index. Data were exported into the R program for statistical analysis. Results: the evaluation of the instrument showed good agreement between the judges of the Health and Applied Linguistics areas, with mean content validity indices of 0.9 and 0.89, respectively, and slight variability of the index between groups (difference of less than 0.01). The items in the translated version that were evaluated as unsatisfactory by the judges were reformulated based on the considerations of the professionals of each group. Conclusion: a Brazilian version of the Diabetes Medical Management Plan was constructed, called the Plano de Manejo do Diabetes na Escola. PMID:27508911
A hybrid approach to select features and classify diseases based on medical data
NASA Astrophysics Data System (ADS)
AbdelLatif, Hisham; Luo, Jiawei
2018-03-01
Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases based on three medical datasets: the Arrhythmia, Breast cancer, and Hepatitis datasets. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses k-means clustering with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms, decision tree, Naïve Bayes, and Support Vector Machines, and applied the medical datasets directly to these algorithms. Our methodology gave a much better classification accuracy of 98% on the Arrhythmia dataset, 92% on the Breast cancer dataset and 88% on the Hepatitis dataset, compared to using the medical data directly with decision tree, Naïve Bayes, and Support Vector Machines. The ROC curve and precision achieved with K-ANOVA-SVM were also better than those of the other algorithms.
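A sketch of the ANOVA-based feature selection feeding an SVM, assuming scikit-learn; the k-means preprocessing stage is omitted, k = 10 is an arbitrary choice, and scikit-learn's built-in copy of the breast cancer data stands in for the clinical datasets:

```python
# Sketch: ANOVA F-test feature selection followed by an SVM classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(SelectKBest(f_classif, k=10), SVC())
print(cross_val_score(model, X, y, cv=5).mean())   # 5-fold accuracy
```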
NASA Astrophysics Data System (ADS)
Ghosh, Karabi
2017-02-01
We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor, f = 1/(1 + βcΔtσ_P), the author derived the modified Fleck factor g = 1/(1 + βcΔtσ_P − min[σ_P′(aT_r⁴ − aT⁴)cΔt/(ρC_V), 0]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4aT³/(ρC_V), σ_P is the Planck opacity, T_r is the radiation temperature, and the derivative of the Planck opacity with respect to the material temperature is σ_P′ = dσ_P/dT.
Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M.; Chute, Christopher G.; Pathak, Jyotishman
2012-01-01
With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation, and subsequent implementation is a required step for this process. To address this need, the current study investigates open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using Apache Foundation’s Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system. PMID:23304325
Automatic Data Distribution for CFD Applications on Structured Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Yan, Jerry
2000-01-01
Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of the methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). Algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity with the exception of some graph algorithms having complexity O(n^4) in the worst case.
Automatic Data Distribution for CFD Applications on Structured Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Yan, Jerry
1999-01-01
Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of the methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). Algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity with the exception of some graph algorithms having complexity O(n^4) in the worst case.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of the critical dimension in integrated circuits impels the development of resolution enhancement techniques for low-k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for process variations. However, lithography systems with larger NA (NA > 0.6) are predominant at current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
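The optimization loop itself is generic steepest descent; a schematic sketch in Python/NumPy, with grad_fn standing in for the vector-imaging-model gradient derived in the paper (its derivation is the hard part and is not reproduced here):

import numpy as np

def optimize_mask(mask0, grad_fn, step=0.1, iters=200):
    """Steepest-descent update of a pixelated mask; grad_fn(mask) must return
    the gradient of the process-window-aware cost with respect to each pixel."""
    mask = mask0.copy()
    for _ in range(iters):
        mask = mask - step * grad_fn(mask)
        mask = np.clip(mask, 0.0, 1.0)  # keep pixel transmission physical
    return mask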
Hetero-association for pattern translation
NASA Astrophysics Data System (ADS)
Yu, Francis T. S.; Lu, Thomas T.; Yang, Xiangyang
1991-09-01
A hetero-association neural network using an interpattern association algorithm is presented. By using simple logical rules, hetero-association memory can be constructed based on the association between the input-output reference patterns. For optical implementation, a compact size liquid crystal television neural network is used. Translations between the English letters and the Chinese characters as well as Arabic and Chinese numerics are demonstrated. The authors have shown that the hetero-association model can perform more effectively in comparison to the Hopfield model in retrieving large numbers of similar patterns.
Araújo, Ricardo de A
2010-12-01
This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Fiorini, M.; Frezza, O.; Lonardo, A.; Lamanna, G.; Lo Cicero, F.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Tosoratto, L.; Vicini, P.
2016-03-01
A GPU-based low level (L0) trigger is currently integrated in the experimental setup of the RICH detector of the NA62 experiment to assess the feasibility of building more refined physics-related trigger primitives and thus improve the trigger discriminating power. To ensure the real-time operation of the system, a dedicated data transport mechanism has been implemented: an FPGA-based Network Interface Card (NaNet-10) receives data from detectors and forwards them with low, predictable latency to the memory of the GPU performing the trigger algorithms. Results of the ring-shaped hit patterns reconstruction will be reported and discussed.
Wavefront sensing with a thin diffuser
NASA Astrophysics Data System (ADS)
Berto, Pascal; Rigneault, Hervé; Guillon, Marc
2017-12-01
We propose and implement a broadband, compact, and low-cost wavefront sensing scheme by simply placing a thin diffuser in the close vicinity of a camera. The local wavefront gradient is determined from the local translation of the speckle pattern. The translation vector map is computed using a fast diffeomorphic image registration algorithm and integrated to reconstruct the wavefront profile. The simple translation of speckle grains under local wavefront tip/tilt is ensured by the so-called "memory effect" of the diffuser. Quantitative wavefront measurements are experimentally demonstrated both for the first few Zernike polynomials and for phase-imaging applications requiring high resolution. We finally provide a theoretical description of the resolution limit, which is supported experimentally.
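The integration step can be done in closed form in the Fourier domain; a sketch of one standard least-squares integrator (Frankot-Chellappa style, assuming NumPy and unit pixel pitch; a stand-in for the authors' implementation, which is not specified in the abstract):

import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares wavefront from its x/y gradient maps via Fourier integration."""
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx).reshape(1, nx)
    fy = np.fft.fftfreq(ny).reshape(ny, 1)
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = 2j * np.pi * (fx**2 + fy**2)
    denom[0, 0] = 1.0               # avoid dividing by zero at the DC term
    W = (fx * Gx + fy * Gy) / denom
    W[0, 0] = 0.0                   # piston (mean level) is unobservable from gradients
    return np.real(np.fft.ifft2(W))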
Chen, Wen; Chen, Xudong; Sheppard, Colin J R
2011-10-10
In this paper, we propose a method using structured-illumination-based diffractive imaging with a laterally-translated phase grating for optical double-image cryptography. An optical cryptosystem is designed, and multiple random phase-only masks are placed in the optical path. When a phase grating is laterally translated just before the plaintexts, several diffraction intensity patterns (i.e., ciphertexts) can be correspondingly obtained. During image decryption, an iterative retrieval algorithm is developed to extract plaintexts from the ciphertexts. In addition, security and advantages of the proposed method are analyzed. Feasibility and effectiveness of the proposed method are demonstrated by numerical simulation results. © 2011 Optical Society of America
Ostrem, James A.; Olson, Steve W.; Schmitt, Jürgen M.; Bohnert, Hans J.
1987-01-01
Mesembryanthemum crystallinum responds to salt stress by switching from C3 photosynthesis to Crassulacean acid metabolism (CAM). During this transition the activity of phosphoenolpyruvate carboxylase (PEPCase) increases in soluble protein extracts from leaf tissue. We monitored CAM induction in plants irrigated with 0.5 molar NaCl for 5 days during the fourth, fifth, and sixth week after germination. Our results indicate that the age of the plant influenced the response to salt stress. There was no increase in PEPCase protein or PEPCase enzyme activity when plants were irrigated with 0.5 molar NaCl during the fourth and fifth week after germination. However, PEPCase activity increased within 2 to 3 days when plants were salt stressed during the sixth week after germination. Immunoblot analysis with anti-PEPCase antibodies showed that PEPCase synthesis was induced in both expanded leaves and in newly developing axillary shoot tissue. The increase in PEPCase protein was paralleled by an increase in PEPCase mRNA as assayed by immunoprecipitation of PEPCase from the in vitro translation products of RNA from salt-stressed plants. These results demonstrate that salinity increased the level of PEPCase in leaf and shoot tissue via a stress-induced increase in the steady-state level of translatable mRNA for this enzyme. PMID:16665596
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
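The Reverse Cuthill-McKee baseline is available off the shelf; a small sketch (assuming SciPy, a random symmetric pattern, and a simple lower-profile definition of the envelope) shows the kind of before/after comparison reported above:

import numpy as np
from scipy.sparse import csr_matrix, eye
from scipy.sparse.csgraph import reverse_cuthill_mckee

def envelope_size(A):
    """Envelope = sum over rows i of (i - first nonzero column index)."""
    A = csr_matrix(A)
    return sum(int(i - A.indices[A.indptr[i]:A.indptr[i + 1]].min())
               for i in range(A.shape[0]))

rng = np.random.default_rng(0)
P = csr_matrix((rng.random((200, 200)) < 0.02).astype(int))
A = csr_matrix(((P + P.T + eye(200)) > 0).astype(int))  # symmetric pattern, full diagonal
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]
print("envelope before:", envelope_size(A), "after RCM:", envelope_size(B))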
Na, Dokyun; Lee, Doheon
2010-10-15
RBSDesigner predicts the translation efficiency of existing mRNA sequences and designs synthetic ribosome binding sites (RBSs) for a given coding sequence (CDS) to yield a desired level of protein expression. The program implements the mathematical model for translation initiation described in Na et al. (Mathematical modeling of translation initiation for the estimation of its efficiency to computationally design mRNA sequences with a desired expression level in prokaryotes. BMC Syst. Biol., 4, 71). The program additionally incorporates the effect on translation efficiency of the spacer length between a Shine-Dalgarno (SD) sequence and an AUG codon, which is crucial for the incorporation of fMet-tRNA into the ribosome. RBSDesigner provides a graphical user interface (GUI) for the convenient design of synthetic RBSs. RBSDesigner is written in Python and Microsoft Visual Basic 6.0 and is publicly available as precompiled stand-alone software on the web (http://rbs.kaist.ac.kr). dhlee@kaist.ac.kr
O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...
1995-01-01
Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; however, these algorithms are designed only for nominal imaging parameters, without sufficient attention to process variations due to aberrations, defocus and dose variation. Moreover, the effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering the scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
Crossed beam (E-VRT) energy transfer experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertel, I.V.; Hofmann, H.; Rost, K.A.
A molecular crossed beam apparatus which has been developed to perform electronic-to-vibrational, rotational, translational (E-V,R,T) energy transfer studies is described. Its capabilities are illustrated on the basis of a number of energy transfer spectra obtained for collision systems of the type Na* + Mol(ν, j) → Na + Mol(ν′, j′), where Na* represents a laser-excited sodium atom and Mol a diatomic or polyatomic molecule. Because of the lack of reliable dynamic theories on quenching processes, statistical approaches such as the "linearly forced harmonic oscillator" and "prior distributions" have been used to model the experimental spectra. The agreement is found to be satisfactory, so even such simple statistics may be useful to describe (E-V,R,T) energy transfer processes in collision systems with small molecules.
NASA Astrophysics Data System (ADS)
Licciardi, A.; Piana Agostinetti, N.
2016-06-01
Information about seismic anisotropy is embedded in the variation of the amplitude of the Ps pulses as a function of the azimuth, on both the Radial and the Transverse components of teleseismic receiver functions (RF). We develop a semi-automatic method to constrain the presence and the depth of anisotropic layers beneath a single seismic broad-band station. An algorithm is specifically designed to avoid trial and error methods and subjective crustal parametrizations in RF inversions, providing a suitable tool for large-size data set analysis. The algorithm couples together information extracted from a 1-D VS profile and from a harmonic decomposition analysis of the RF data set. This information is used to determine the number of anisotropic layers and their approximate position at depth, which, in turn, can be used to, for example, narrow the search boundaries for layer thickness and S-wave velocity in a subsequent parameter space search. Here, the output of the algorithm is used to invert an RF data set by means of the Neighbourhood Algorithm (NA). To test our methodology, we apply the algorithm to both synthetic and observed data. We make use of synthetic RF with correlated Gaussian noise to investigate the resolution power for multiple and thin (1-3 km) anisotropic layers in the crust. The algorithm successfully identifies the number and position of anisotropic layers at depth prior to the NA inversion step. In the NA inversion, the strength of anisotropy and the orientation of the symmetry axis are correctly retrieved. Then, the method is applied to field measurements from station BUDO in the Tibetan Plateau. Two consecutive layers of anisotropy are automatically identified with our method in the first 25-30 km of the crust. The data are then inverted with the retrieved parametrization. The direction of the anisotropic axis in the uppermost layer correlates well with the orientation of the major planar structure in the area. The deeper anisotropic layer is associated with an older phase of crustal deformation. Our results are compared with previous anisotropic RF studies at the same station, showing strong similarities.
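The harmonic-decomposition step amounts to regressing the azimuthal variation of RF amplitude onto low-order cos/sin terms at each delay time (the k = 1, 2 harmonics carry the dipping-interface/anisotropy signal); a minimal sketch with illustrative names and normalization:

import numpy as np

def harmonic_decomposition(az_deg, amp, kmax=2):
    """Least-squares fit of amp(az) = c0 + sum_k [a_k cos(k az) + b_k sin(k az)].
    Returns [c0, a1, b1, ..., a_kmax, b_kmax]."""
    az = np.radians(az_deg)
    cols = [np.ones_like(az)]
    for k in range(1, kmax + 1):
        cols += [np.cos(k * az), np.sin(k * az)]
    G = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(G, np.asarray(amp), rcond=None)
    return coef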
Image processing via VLSI: A concept paper
NASA Technical Reports Server (NTRS)
Nathan, R.
1982-01-01
Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general-purpose computer; if the geometry and filter algorithms are implemented in VLSI, the processing-rate bottleneck would be significantly relieved. A development effort is described that identifies the image processing functions that limit present systems in meeting future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general-purpose digital computer.
Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods
NASA Technical Reports Server (NTRS)
Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon
2010-01-01
A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
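As a deliberately simplified illustration of what "disaggregation" means here (a toy, not the paper's algorithm, which respects the optimized flow counts), an aggregate delay assigned to a flow can be spread over its member flights:

def disaggregate(aggregate_delay_min, flights):
    """Toy flow-to-flight disaggregation: share an aggregate delay equally
    among the flights of a flow, in first-scheduled-first-served order."""
    flights = sorted(flights, key=lambda f: f["sched_dep"])
    share = aggregate_delay_min / max(len(flights), 1)
    return [dict(f, assigned_delay=share) for f in flights]

print(disaggregate(120, [{"id": "AAL12", "sched_dep": 600},
                         {"id": "UAL34", "sched_dep": 615}]))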
Classical simulation of infinite-size quantum lattice systems in two spatial dimensions.
Jordan, J; Orús, R; Vidal, G; Verstraete, F; Cirac, J I
2008-12-19
We present an algorithm to simulate two-dimensional quantum lattice systems in the thermodynamic limit. Our approach builds on the projected entangled-pair state algorithm for finite lattice systems [F. Verstraete and J. I. Cirac, arxiv:cond-mat/0407066] and the infinite time-evolving block decimation algorithm for infinite one-dimensional lattice systems [G. Vidal, Phys. Rev. Lett. 98, 070201 (2007); doi:10.1103/PhysRevLett.98.070201]. The present algorithm allows for the computation of the ground state and the simulation of time evolution in infinite two-dimensional systems that are invariant under translations. We demonstrate its performance by obtaining the ground state of the quantum Ising model and analyzing its second order quantum phase transition.
A Novel Handwritten Letter Recognizer Using Enhanced Evolutionary Neural Network
NASA Astrophysics Data System (ADS)
Mahmoudi, Fariborz; Mirzashaeri, Mohsen; Shahamatnia, Ehsan; Faridnia, Saed
This paper introduces a novel design for handwritten letter recognition that employs a hybrid back-propagation neural network with an enhanced evolutionary algorithm. The input representation fed to the neural network follows a new approach that is invariant to translation, rotation, and scaling of the input letters. The evolutionary algorithm is used for the global search of the search space and the back-propagation algorithm is used for the local search. The results have been computed by implementing this approach for recognizing 26 English capital letters in the handwriting of different people. The computational results show that the neural network reaches very satisfying results with relatively scarce input data, and a promising improvement in the convergence of the hybrid evolutionary back-propagation algorithm is exhibited.
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
Mono-isotope Prediction for Mass Spectra Using Bayes Network.
Li, Hui; Liu, Chunmei; Rwebangira, Mugizi Robert; Burge, Legand
2014-12-01
Mass spectrometry is one of the widely utilized important methods to study protein functions and components. The challenge of mono-isotope pattern recognition from large-scale protein mass spectral data needs computational algorithms and tools to speed up the analysis and improve the analytic results. We utilized a naïve Bayes network as the classifier, with the assumption that the selected features are independent, to predict mono-isotope patterns from mass spectrometry. Mono-isotopes detected from validated theoretical spectra were used as prior information in the Bayes method. Three main features extracted from the dataset were employed as independent variables in our model. The application of the proposed algorithm to the public Mo dataset demonstrates that our naïve Bayes classifier is advantageous over existing methods in both accuracy and sensitivity.
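The classifier itself is the textbook Gaussian naïve Bayes; a sketch with synthetic stand-ins for the three features (assuming scikit-learn; feature semantics and labels are placeholders, not the paper's definitions):

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                    # three features per candidate peak cluster
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic mono-isotope labels

clf = GaussianNB().fit(X[:400], y[:400])         # NB assumes feature independence
print("held-out accuracy:", clf.score(X[400:], y[400:]))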
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
System theory in industrial patient monitoring: an overview.
Baura, G D
2004-01-01
Patient monitoring refers to the continuous observation of repeating events of physiologic function to guide therapy or to monitor the effectiveness of interventions, and is used primarily in the intensive care unit and operating room. Commonly processed signals are the electrocardiogram, intraarterial blood pressure, arterial saturation of oxygen, and cardiac output. To this day, the majority of physiologic waveform processing in patient monitors is conducted using heuristic curve fitting. However, in the early 1990s, a few enterprising engineers and physicians began using system theory to improve their core processing. Applications included improvement of signal-to-noise ratio, either due to low signal levels or motion artifact, and improvement in feature detection. The goal of this mini-symposium is to review the early work in this emerging field, which has led to technologic breakthroughs. In this overview talk, the process of system theory algorithm research and development is discussed. Research for industrial monitors involves substantial data collection, with some data used for algorithm training and the remainder used for validation. Once the algorithms are validated, they are translated into detailed specifications. Development then translates these specifications into DSP code. The DSP code is verified and validated per the Good Manufacturing Practices mandated by the FDA.
Niu, X; Zhu, J K; Narasimhan, M L; Bressan, R A; Hasegawa, P M
1993-01-01
An Atriplex nummularia L. cDNA probe encoding the partial sequence of an isoform of the plasma-membrane H(+)-ATPase was isolated, and used to characterize the NaCl regulation of mRNA accumulation in cultured cells of this halophyte. The peptide (477 amino acids) translated from the open reading frame has the highest sequence homology to the Nicotiana plumbaginifolia plasma-membrane H(+)-ATPase isoform pma4 (greater than 80% identity), and the probe detected a transcript of approximately 3.7 kb on Northern blots of both total and poly(A)+ RNA. The mRNA levels were comparable in unadapted cells, adapted cells (cells adapted to and growing in 342 mM NaCl) and deadapted cells (cells previously adapted to 342 mM NaCl that are now growing without salt). Increased mRNA abundance was detected in deadapted cells within 24 h after exposure to NaCl but not in unadapted cells with similar salt treatments. The NaCl up-regulation of message abundance in deadapted cells was subject to developmental control. Analogous to what has been reported for glycophytes, the plasma-membrane H(+)-ATPases are encoded by a multigene family in this halophyte.
Programming and Tuning a Quantum Annealing Device to Solve Real World Problems
NASA Astrophysics Data System (ADS)
Perdomo-Ortiz, Alejandro; O'Gorman, Bryan; Fluegemann, Joseph; Smelyanskiy, Vadim
2015-03-01
Solving real-world applications with quantum algorithms requires overcoming several challenges, ranging from translating the computational problem at hand to the quantum-machine language to tuning parameters of the quantum algorithm that have a significant impact on the performance of the device. In this talk, we discuss these challenges, strategies developed to enhance performance, and also a more efficient implementation of several applications. Although we will focus on applications of interest to NASA's Quantum Artificial Intelligence Laboratory, the methods and concepts presented here apply to a broader family of hard discrete optimization problems, including those that occur in many machine-learning algorithms.
Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry
2009-11-14
The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
A method for digital image registration using a mathematical programming technique
NASA Technical Reports Server (NTRS)
Yao, S. S.
1973-01-01
A new algorithm based on a nonlinear programming technique to correct the geometrical distortions of one digital image with respect to another is discussed. This algorithm promises to be superior to existing ones in that it is capable of treating localized differential scaling, translational and rotational errors over the whole image plane. A series of piece-wise 'rubber-sheet' approximations are used, constrained in such a manner that a smooth approximation over the entire image can be obtained. The theoretical derivation is included. The result of using the algorithm to register four channel S065 Apollo IX digitized photography over Imperial Valley, California, is discussed in detail.
NASA Astrophysics Data System (ADS)
Hering, Julian; Waller, Erik H.; von Freymann, Georg
2017-02-01
Since a large number of optical systems and devices are based on differently shaped focal intensity distributions (point-spread functions, PSFs), the PSF's quality is crucial for the application's performance. For example, optical tweezers, optical potentials for trapping of ultracold atoms, as well as stimulated-emission-depletion (STED) based microscopy and lithography rely on precisely controlled intensity distributions. However, especially in high numerical aperture (NA) systems, such complex laser modes are easily distorted by aberrations, leading to performance losses. Although different approaches addressing phase retrieval algorithms have recently been presented [1-3], fast and automated aberration compensation for a broad variety of complex shaped PSFs in high NA systems is still missing. Here, we report on a Gerchberg-Saxton [4] based algorithm (GSA) for automated aberration correction of arbitrary PSFs, especially for high NA systems. Deviations between the desired target intensity distribution and the three-dimensionally (3D) scanned experimental focal intensity distribution are used to calculate a correction phase pattern. The target phase distribution plus the correction pattern are displayed on a phase-only spatial light modulator (SLM). Focused by a high NA objective, experimental 3D scans of several intensity distributions allow for characterization of the algorithm's performance: aberrations are reliably identified and compensated within less than 10 iterations. References: 1. B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase-retrieved pupil functions in wide-field fluorescence microscopy," J. Microscopy 216(1), 32-48 (2004). 2. A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and M. Ritsch-Marte, "Wavefront correction of spatial light modulators using an optical vortex image," Opt. Express 15(9), 5801-5808 (2007). 3. A. Jesacher and M. J. Booth, "Parallel direct laser writing in three dimensions with spatially dependent aberration correction," Opt. Express 18(20), 21090-21099 (2010). 4. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35(2), 237-246 (1972).
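For reference, the core two-plane Gerchberg-Saxton iteration [4] that such correction loops build on can be written in a few lines (NumPy sketch; the paper's algorithm extends this with 3D focal scans and a correction phase pattern, which is not reproduced here):

import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=50):
    """Recover a pupil phase so that |FFT(source_amp * exp(i*phase))| approximates target_amp."""
    phase = 2 * np.pi * np.random.rand(*source_amp.shape)
    for _ in range(iters):
        focal = np.fft.fft2(source_amp * np.exp(1j * phase))
        focal = target_amp * np.exp(1j * np.angle(focal))  # impose the target amplitude
        phase = np.angle(np.fft.ifft2(focal))              # keep the source amplitude
    return phase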
Superquantile Regression: Theory, Algorithms, and Applications
2014-12-01
[Table fragments from the report: "Example C: Stack loss data scatterplot matrix," together with pieces of coefficient tables comparing least-squares and quantile regression fits (columns c0, caf, cwt, cac and adjusted R̄² values) for the stack loss data. The full tables are not recoverable from this excerpt.]
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
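A minimal constant-velocity Kalman filter of the kind used for the prediction step (NumPy sketch; the matrices and noise levels are illustrative, and the update is simply skipped while the object is occluded):

import numpy as np

F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: x, y, vx, vy
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # position-only measurement
Q, R = 1e-3 * np.eye(4), 1e-1 * np.eye(2)

def kalman_step(x, P, z=None):
    x, P = F @ x, F @ P @ F.T + Q                    # predict (always)
    if z is not None:                                # update only when the object is visible
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P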
Restoration algorithms for imaging through atmospheric turbulence
2017-02-18
the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed... with w_i(ξ) = G_σ(|F(v_i)(ξ)|^p) / Σ_{j=1}^{M} G_σ(|F(v_j)(ξ)|^p), where F denotes the Fourier transform (ξ are the frequencies) and G_σ is a Gaussian filter of... a combination of the SIFT [26] and ORSA [14] algorithms) in order to remove affine transformations (translations, rotations and homothety). The authors...
Cone-beam reconstruction for the two-circles-plus-one-line trajectory
NASA Astrophysics Data System (ADS)
Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming
2012-05-01
The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.
A computational procedure for large rotational motions in multibody dynamics
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
A computational procedure suitable for the solution of equations of motion for multibody systems is presented. The present procedure adopts a differential partitioning of the translational motions and the rotational motions. The translational equations of motion are then treated by either a conventional explicit or an implicit direct integration method. A principal feature of this procedure is a nonlinearly implicit algorithm for updating rotations via the Euler four-parameter representation. This procedure is applied to the rolling of a sphere through a specific trajectory, which shows that it yields robust solutions.
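The Euler four-parameter representation is the unit quaternion; a simple explicit sketch of the rotational update it enables (the paper's scheme is nonlinearly implicit, so this only illustrates the underlying kinematics):

import numpy as np

def update_quaternion(q, omega, dt):
    """One explicit step of q' = q + (dt/2) * Omega(omega) @ q, then renormalize.
    q = [q0, q1, q2, q3] (scalar first); omega = body angular rate [wx, wy, wz]."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)   # enforce the unit-norm (Euler parameter) constraint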
A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.
Kothari, Cartik R; Payne, Philip R O
2015-01-01
In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.
Electrophoretic Deformation of Individual Transfer RNA Molecules Reveals Their Identity.
Henley, Robert Y; Ashcroft, Brian Alan; Farrell, Ian; Cooperman, Barry S; Lindsay, Stuart M; Wanunu, Meni
2016-01-13
It has been hypothesized that the ribosome gains additional fidelity during protein translation by probing structural differences in tRNA species. We measure the translocation kinetics of different tRNA species through ∼3 nm diameter synthetic nanopores. Each tRNA species varies in the time scale with which it is deformed from equilibrium, as in the translocation step of protein translation. Using machine-learning algorithms, we can differentiate among five tRNA species, analyze the ratios of tRNA binary mixtures, and distinguish tRNA isoacceptors.
An Accelerated Recursive Doubling Algorithm for Block Tridiagonal Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K
2014-01-01
Block tridiagonal systems of linear equations arise in a wide variety of scientific and engineering applications. The recursive doubling algorithm is a well-known prefix computation-based numerical algorithm that requires O(M^3(N/P + log P)) work to compute the solution of a block tridiagonal system with N block rows and block size M on P processors. In real-world applications, solutions of tridiagonal systems are most often sought with multiple, often hundreds and thousands, of different right hand sides but with the same tridiagonal matrix. Here, we show that a recursive doubling algorithm is sub-optimal when computing solutions of block tridiagonal systems with multiple right hand sides and present a novel algorithm, called the accelerated recursive doubling algorithm, that delivers O(R) improvement when solving block tridiagonal systems with R distinct right hand sides. Since R is typically about 100-1000, this improvement translates to very significant speedups in practice. Detailed complexity analyses of the new algorithm with empirical confirmation of runtime improvements are presented. To the best of our knowledge, this algorithm has not been reported before in the literature.
Pose estimation for augmented reality applications using genetic algorithm.
Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen
2005-12-01
This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes to the selected point features of the object. Instead of only searching for the pose as in the existing work, our algorithm, at the same time, searches for a set containing the most reliable feature points in the process. This mismatch filtering strategy successfully makes the algorithm more robust under the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to that of existing algorithms. Our approach outperformed Lowe's method and the other two genetic algorithms under the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object. It is shown that the proposed method is applicable to augmented reality applications.
Variations of water's local-structure induced by solvation of NaCl
NASA Astrophysics Data System (ADS)
Gu, Bin; Zhang, Feng-Shou; Huang, Yu-Gai; Fang, Xia
2010-03-01
Research on the structure of water and the changes induced by solutes is of enduring interest. The changes of the local structure of liquid water induced by NaCl solute under ambient conditions are studied and presented quantitatively with some order parameters and visualized with 2-body and 3-body correlation functions. The results show that, after the NaCl is solvated, the translational order t of water is decreased due to the suppression of the second hydration shells around H2O molecules; the tetrahedral order q of water is also decreased and the peak of its distribution moves from 0.76 to 0.5. In addition, the orientational freedom k and the diffusion coefficient D of water molecules are reduced because of newly formed hydrogen-bonding structures between water and the solvated ions.
Barbosa, Rommel Melgaço; Nacano, Letícia Ramos; Freitas, Rodolfo; Batista, Bruno Lemos; Barbosa, Fernando
2014-09-01
This article aims to evaluate 2 machine learning algorithms, decision trees and naïve Bayes (NB), for egg classification (free-range eggs compared with battery eggs). The database used for the study consisted of 15 chemical elements (As, Ba, Cd, Co, Cs, Cu, Fe, Mg, Mn, Mo, Pb, Se, Sr, V, and Zn) determined in 52 eggs samples (20 free-range and 32 battery eggs) by inductively coupled plasma mass spectrometry. Our results demonstrated that decision trees and NB associated with the mineral contents of eggs provide a high level of accuracy (above 80% and 90%, respectively) for classification between free-range and battery eggs and can be used as an alternative method for adulteration evaluation. © 2014 Institute of Food Technologists®
Geography and the Properties of Surfaces. The Sandwich Theorem - A Basic One for Geography.
The report discusses the nature of the Sandwich Theorem and its relationship to Geography, and provides an algorithm and a complete program to achieve 'solutions.' Also included is a translation of one work of Hugo Steinhaus. (Author)
Validation of energy-weighted algorithm for radiation portal monitor using plastic scintillator.
Lee, Hyun Cheol; Shin, Wook-Geun; Park, Hyo Jun; Yoo, Do Hyun; Choi, Chang-Il; Park, Chang-Su; Kim, Hong-Suk; Min, Chul Hee
2016-01-01
To prevent illicit trafficking of radionuclides, radiation portal monitor (RPM) systems employing plastic scintillators have been used in ports and airports. However, their poor energy resolution makes the discrimination of radioactive materials inaccurate. In this study, an energy-weighted algorithm was validated for discriminating (133)Ba, (22)Na, (137)Cs, and (60)Co using a plastic scintillator. The Compton edges of the energy spectra were converted to peaks based on the algorithm. The peaks show a maximum error of 6% relative to the theoretical Compton edge. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wee, Leonard; Hackett, Sara Lyons; Jones, Andrew; Lim, Tee Sin; Harper, Christopher Stirling
2013-01-01
This study evaluated the agreement of fiducial marker localization between two modalities — an electronic portal imaging device (EPID) and cone‐beam computed tomography (CBCT) — using a low‐dose, half‐rotation scanning protocol. Twenty‐five prostate cancer patients with implanted fiducial markers were enrolled. Before each daily treatment, EPID and half‐rotation CBCT images were acquired. Translational shifts were computed for each modality and two marker‐matching algorithms, seed‐chamfer and grey‐value, were performed for each set of CBCT images. The localization offsets, and systematic and random errors from both modalities were computed. Localization performances for both modalities were compared using Bland‐Altman limits of agreement (LoA) analysis, Deming regression analysis, and Cohen's kappa inter‐rater analysis. The differences in the systematic and random errors between the modalities were within 0.2 mm in all directions. The LoA analysis revealed a 95% agreement limit of the modalities of 2 to 3.5 mm in any given translational direction. Deming regression analysis demonstrated that constant biases existed in the shifts computed by the modalities in the superior–inferior (SI) direction, but no significant proportional biases were identified in any direction. Cohen's kappa analysis showed good agreement between the modalities in prescribing translational corrections of the couch at 3 and 5 mm action levels. Images obtained from EPID and half‐rotation CBCT showed acceptable agreement for registration of fiducial markers. The seed‐chamfer algorithm for tracking of fiducial markers in CBCT datasets yielded better agreement than the grey‐value matching algorithm with EPID‐based registration. PACS numbers: 87.55.km, 87.55.Qr PMID:23835391
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
RKF45] and Adams variable step-size predictor-corrector methods). While such algorithms are usually used to numerically solve differential... verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive... (Authors: Dr. Dimitri N. Mavris; Dr. Yongchang Li)
Papež, Václav; Denaxas, Spiros; Hemingway, Harry
2017-01-01
Electronic Health Records are electronic data generated during or as a byproduct of routine patient care. Structured, semi-structured and unstructured EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the development of precision medicine approaches at scale. A main EHR use-case is defining phenotyping algorithms that identify disease status, onset and severity. Phenotyping algorithms utilize diagnoses, prescriptions, laboratory tests, symptoms and other elements in order to identify patients with or without a specific trait. No common standardized, structured, computable format exists for storing phenotyping algorithms. The majority of algorithms are stored as human-readable descriptive text documents making their translation to code challenging due to their inherent complexity and hinders their sharing and re-use across the community. In this paper, we evaluate the two key Semantic Web Technologies, the Web Ontology Language and the Resource Description Framework, for enabling computable representations of EHR-driven phenotyping algorithms.
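A minimal sketch of what an RDF-encoded phenotype criterion could look like (Python with rdflib; the namespace and property names are hypothetical, not a published vocabulary):

from rdflib import Graph, Literal, Namespace, RDF

PH = Namespace("http://example.org/phenotype#")  # hypothetical vocabulary

g = Graph()
algo, crit = PH["Type2DiabetesCase"], PH["HbA1cCriterion"]
g.add((algo, RDF.type, PH.PhenotypeAlgorithm))
g.add((algo, PH.hasCriterion, crit))
g.add((crit, PH.labTest, Literal("HbA1c")))
g.add((crit, PH.operator, Literal(">=")))
g.add((crit, PH.threshold, Literal(6.5)))
print(g.serialize(format="turtle"))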
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm designed specifically for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Peissig, Peggy L; Denny, Joshua C; Kho, Abel N; Miller, Aaron; Pathak, Jyotishman
2012-01-01
The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria.
Translational medicine: science or wishful thinking?
Wehling, Martin
2008-01-01
"Translational medicine" as a fashionable term is being increasingly used to describe the wish of biomedical researchers to ultimately help patients. Despite increased efforts and investments into R&D, the output of novel medicines has been declining dramatically over the past years. Improvement of translation is thought to become a remedy as one of the reasons for this widening gap between input and output is the difficult transition between preclinical ("basic") and clinical stages in the R&D process. Animal experiments, test tube analyses and early human trials do simply not reflect the patient situation well enough to reliably predict efficacy and safety of a novel compound or device. This goal, however, can only be achieved if the translational processes are scientifically backed up by robust methods some of which still need to be developed. This mainly relates to biomarker development and predictivity assessment, biostatistical methods, smart and accelerated early human study designs and decision algorithms among other features. It is therefore claimed that a new science needs to be developed called 'translational science in medicine'. PMID:18559092
CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.
Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R
2011-05-01
NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats of various NMR data processing tools. This lack of standardization is cumbersome, as researchers must perform their own file conversion in order to switch between processing tools, and it also restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and introduces two key algorithmic improvements. The first is translation of NMR spectral data (time and frequency domain) to a single in-memory data model, which allows the addition of new file formats with two converter modules, a reader and a writer, instead of writing a separate converter to each existing format. Secondly, the use of layout descriptors allows a single FID data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimum user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
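The architectural point (N readers plus N writers into one in-memory model, instead of N(N-1) pairwise converters) can be sketched as a small registry; names here are illustrative only, not the CONNJUR API:

READERS, WRITERS = {}, {}

def reader(fmt):
    """Register a function that parses one format into the in-memory model."""
    def deco(fn):
        READERS[fmt] = fn
        return fn
    return deco

def writer(fmt):
    """Register a function that serializes the in-memory model to one format."""
    def deco(fn):
        WRITERS[fmt] = fn
        return fn
    return deco

def convert(path_in, fmt_in, path_out, fmt_out):
    spectrum = READERS[fmt_in](path_in)    # any format -> single data model
    WRITERS[fmt_out](spectrum, path_out)   # single data model -> any format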
Iris recognition using image moments and k-means algorithm.
Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed
2014-01-01
This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.
Iris Recognition Using Image Moments and k-Means Algorithm
Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed
2014-01-01
This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%. PMID:24977221
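A compact sketch of the moment-plus-clustering pipeline described in the abstract above (assuming scikit-image and scikit-learn; the random arrays stand in for unwrapped iris images):

import numpy as np
from skimage.measure import moments_central, moments_hu, moments_normalized
from sklearn.cluster import KMeans

def hu_features(gray):
    """Translation/scale/rotation-invariant Hu moments of a grayscale image."""
    return moments_hu(moments_normalized(moments_central(gray)))

imgs = [np.random.rand(64, 256) for _ in range(30)]          # placeholder iris strips
X = np.array([hu_features(im) for im in imgs])
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
query = hu_features(np.random.rand(64, 256))
print("nearest centroid:", km.predict(query[None, :])[0])    # Euclidean assignment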
Doubling down on peptide phosphorylation as a variable mass modification
USDA-ARS?s Scientific Manuscript database
Some mass spectrometrists believe that searching for variable post-translational modifications like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false positive rates. The basis for this is the premise that the al...
A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.
Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena
2011-01-01
Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
Synthetic aperture tomographic phase microscopy for 3D imaging of live cells in translational motion
Lue, Niyom; Choi, Wonshik; Popescu, Gabriel; Badizadegan, Kamran; Dasari, Ramachandra R.; Feld, Michael S.
2009-01-01
We present a technique for 3D imaging of live cells in translational motion without the need for axial scanning of the objective lens. A set of transmitted electric field images of cells at successive points of transverse translation is taken with focused beam illumination. Based on Huygens' principle, angular plane waves are synthesized from the E-field images of a focused beam. For a set of synthesized angular plane waves, we apply a filtered back-projection algorithm and obtain 3D maps of the refractive index of live cells. This technique, which we refer to as synthetic aperture tomographic phase microscopy, can potentially be combined with flow cytometry or microfluidic devices, and will enable high-throughput acquisition of quantitative refractive index data from large numbers of cells. PMID:18825263
A method to track rotational motion for use in single-molecule biophysics.
Lipfert, Jan; Kerssemakers, Jacob J W; Rojer, Maylon; Dekker, Nynke H
2011-10-01
The double helical nature of DNA links many cellular processes such as DNA replication, transcription, and repair to rotational motion and the accumulation of torsional strain. Magnetic tweezers (MTs) are a single-molecule technique that enables the application of precisely calibrated stretching forces to nucleic acid tethers and to control their rotational motion. However, conventional magnetic tweezers do not directly monitor rotation or measure torque. Here, we describe a method to directly measure rotational motion of particles in MT. The method relies on attaching small, non-magnetic beads to the magnetic beads to act as fiducial markers for rotational tracking. CCD images of the beads are analyzed with a tracking algorithm specifically designed to minimize crosstalk between translational and rotational motion: first, the in-plane center position of the magnetic bead is determined with a kernel-based tracker, while subsequently the height and rotation angle of the bead are determined via correlation-based algorithms. Evaluation of the tracking algorithm using both simulated images and recorded images of surface-immobilized beads demonstrates a rotational resolution of 0.1°, while maintaining a translational resolution of 1-2 nm. Example traces of the rotational fluctuations exhibited by DNA-tethered beads confined in magnetic potentials of varying stiffness demonstrate the robustness of the method and the potential for simultaneous tracking of multiple beads. Our rotation tracking algorithm enables the extension of MTs to magnetic torque tweezers (MTT) to directly measure the torque in single molecules. In addition, we envision uses of the algorithm in a range of biophysical measurements, including further extensions of MT, tethered particle motion, and optical trapping measurements.
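A hedged sketch of the correlation-based rotation-tracking idea: compare the current bead image against rotated copies of a reference and keep the angle with the highest normalized correlation. The paper's 0.1° resolution would come from a finer angle grid or peak interpolation, both omitted here.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_angle(ref, img, angles=np.arange(-30.0, 30.0, 0.5)):
    """Angle (deg) whose rotated reference best correlates with img."""
    v = (img - img.mean()).ravel()
    v /= np.linalg.norm(v)
    best, best_angle = -np.inf, 0.0
    for a in angles:
        r = rotate(ref, a, reshape=False, order=1)
        u = (r - r.mean()).ravel()
        u /= np.linalg.norm(u)
        if u @ v > best:                       # normalized cross-correlation
            best, best_angle = u @ v, a
    return best_angle

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                     # stand-in for a bead image
print(rotation_angle(ref, rotate(ref, 12.0, reshape=False, order=1)))  # ~12
```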
Multiscale registration algorithm for alignment of meshes
NASA Astrophysics Data System (ADS)
Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.
2004-03-01
Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. The algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair, a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points, and using the same rotation and translation values the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated; otherwise the above process is continued with cubes of smaller and smaller sizes. At each finer resolution, the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
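Once corresponding points are found, the rotation and translation between scans can be computed in closed form. Below is a sketch of the standard SVD-based (Kabsch) solution, assuming one-to-one correspondences P[i] <-> Q[i]; this is a generic solver, not necessarily the exact one used in the paper.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares R, t such that Q is approximately (R @ P.T).T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
P = rng.random((50, 3))                        # surrogate corresponding points
R0, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R0 *= np.linalg.det(R0)                        # force a proper rotation (det = +1)
Q = P @ R0.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(P, Q)
print(np.allclose(R, R0), t)                   # True, recovered translation
```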
Spectral mapping tools from the earth sciences applied to spectral microscopy data.
Harris, A Thomas
2006-08-01
Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures. (c) 2006 International Society for Analytical Cytology.
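The basic linear spectral unmixing model can be sketched in a few lines: each pixel spectrum is modeled as a non-negative combination of endmember signatures. This is the generic LSU idea, not ENVI's implementation; the synthetic cube and endmembers below are stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(cube, E):
    """cube: (rows, cols, bands); E: (bands, m) endmembers -> (rows, cols, m)."""
    r, c, b = cube.shape
    # non-negative least squares per pixel spectrum
    A = np.array([nnls(E, p)[0] for p in cube.reshape(-1, b)])
    return A.reshape(r, c, E.shape[1])

rng = np.random.default_rng(0)
E = rng.random((30, 3))                          # 30 bands, 3 endmember spectra
truth = rng.dirichlet(np.ones(3), size=(16, 16)) # abundances summing to 1
maps = unmix(truth @ E.T, E)                     # recover abundance maps
print(np.allclose(maps, truth, atol=1e-6))
```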
Li, Chun; Sun, Jinwei; Qi, Xiaoxi; Liu, Libo
2015-01-01
The viability of Lactobacillus bulgaricus in freeze-drying is of significant commercial interest to dairy industries. In this study, L. bulgaricus demonstrated a significantly improved (p < 0.05) survival rate during freeze-drying when subjected to a pre-stress period under the condition of 2% (w/v) NaCl for 2 h in the late growth phase. The main energy source for the life activity of lactic acid bacteria is related to the glycolytic pathway. To investigate this stress-related viability improvement in L. bulgaricus, the activities and corresponding genes of key enzymes in glycolysis during 2% NaCl stress were studied. NaCl stress significantly enhanced (p < 0.05) glucose utilization. The activities of the glycolytic enzymes (phosphofructokinase, pyruvate kinase, and lactate dehydrogenase) decreased during freeze-drying, and NaCl stress was found to improve the activities of these enzymes before and after freeze-drying. However, a transcriptional analysis of the corresponding genes suggested that the effect of NaCl stress on the expression of the pfk2 gene was not obvious. The increased survival of freeze-dried L. bulgaricus cells under NaCl stress might therefore be due to changes only in the activity or translation level of these enzymes under different environmental conditions, with no relation to their mRNA transcription levels.
Mijatovic, Tatjana; Kiss, Robert
2013-03-01
Many cancer patients fail to respond to chemotherapy because of the intrinsic resistance of their cancer to pro-apoptotic stimuli or the acquisition of the multidrug resistant phenotype during chronic treatment. Previous data from our groups and from others point to the sodium/potassium pump (the Na+/K+-ATPase, i.e., NaK) with its highly specific ligands (i.e., cardiotonic steroids) as a new target for combating cancers associated with dismal prognoses, including gliomas, melanomas, non-small cell lung cancers, renal cell carcinomas, and colon cancers. Cardiotonic steroid-mediated Na+/K+-ATPase targeting could circumvent various resistance pathways. The most probable pathways include the involvement of Na+/K+-ATPase β subunits in invasion features and Na+/K+-ATPase α subunits in chemosensitisation by specific cardiotonic steroid-mediated apoptosis and anoïkis-sensitisation; the regulation of the expression of multidrug resistant-related genes; post-translational regulation, including glycosylation and ubiquitinylation of multidrug resistant-related proteins; c-Myc downregulation; hypoxia-inducible factor downregulation; NF-κB downregulation and deactivation; the inhibition of the glycolytic pathway with a reduction of intra-cellular ATP levels and an induction of non-apoptotic cell death. The aims of this review are to examine the various molecular pathways by which the NaK targeting can be more deleterious to biologically aggressive cancer cells than to normal cells. Georg Thieme Verlag KG Stuttgart · New York.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures including the mean out degree and variance of out degrees can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
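A sketch of the time-lagged ordinal partition network construction as described: delay-embed the series, map each window to its ordinal pattern (a node), and add an edge for each temporal succession of patterns. The embedding dimension and lag below are illustrative, not the paper's values.

```python
import numpy as np
import networkx as nx

def ordinal_partition_network(x, dim=4, tau=8):
    """Map a scalar time series to a directed network of ordinal patterns."""
    patterns = []
    for i in range(len(x) - (dim - 1) * tau):
        window = x[i : i + dim * tau : tau]   # time-lagged embedding vector
        patterns.append(tuple(np.argsort(window)))
    G = nx.DiGraph()
    for a, b in zip(patterns[:-1], patterns[1:]):
        G.add_edge(a, b)                      # one edge per temporal succession
    return G

t = np.linspace(0, 200, 10000)
x = np.sin(t) + 0.5 * np.sin(2.1 * t)         # toy quasi-periodic signal
G = ordinal_partition_network(x)
# simple measures of the kind used to track dynamical change:
print(G.number_of_nodes(), np.mean([d for _, d in G.out_degree()]))
```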
O'Doherty, Jim; Henricson, Joakim; Falk, Magnus; Anderson, Chris D
2013-11-01
In tissue viability imaging (TiVi), an assessment method for skin erythema, correct orientation of skin position from provocation to assessment optimizes data interpretation. Image processing algorithms could compensate for the effects of skin translation, torsion and rotation realigning assessment images to the position of the skin at provocation. A reference image of a divergent, UVB phototest was acquired, as well as test images at varying levels of translation, rotation and torsion. Using 12 skin markers, an algorithm was applied to restore the distorted test images to the reference image. The algorithm corrected torsion and rotation up to approximately 35 degrees. The radius of the erythemal reaction and average value of the input image closely matched that of the reference image's 'true value'. The image 'de-warping' procedure improves the robustness of the response image evaluation in a clinical research setting and opens the possibility of the correction of possibly flawed images performed away from the laboratory setting by the subject/patient themselves. This opportunity may increase the use of photo-testing and, by extension, other late response skin testing where the necessity of a return assessment visit is a disincentive to performance of the test. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
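The marker-based "de-warping" can be sketched with scikit-image: estimate a projective transform from the marker positions in the test image to their reference positions, then resample the test image into the reference frame. The marker coordinates and simulated distortion below are hypothetical, not from the study.

```python
import numpy as np
from skimage.data import camera
from skimage.transform import estimate_transform, warp

reference = camera() / 255.0                   # stand-in for the reference photo
# hypothetical (x, y) positions of 12 skin markers in the reference image
dst = np.array([[50, 60], [450, 40], [480, 470], [40, 440],
                [250, 30], [30, 250], [470, 250], [250, 480],
                [150, 150], [350, 150], [350, 350], [150, 350]], float)
rng = np.random.default_rng(0)
src = dst + rng.normal(0, 4, dst.shape)        # markers found in the test image

tform = estimate_transform('projective', src, dst)
test_image = warp(reference, tform)            # synthesize a distorted test image
restored = warp(test_image, tform.inverse)     # de-warp back to the reference frame
```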
High-resolution molecular-beam spectroscopy of NaCN and Na13CN
NASA Astrophysics Data System (ADS)
van Vaals, J. J.; Meerts, W. Leo; Dymanus, A.
The sodium cyanide molecule was studied by molecular-beam electric-resonance spectroscopy in the microwave region. We used the seeded-beam technique to produce a supersonic beam with strong translational, rotational and vibrational cooling. In the frequency range 9.5-40 GHz we observed and identified 186 hyperfine transitions for NaCN and 107 for Na13CN, in 20 and 16 rotational transitions, respectively, all in the ground vibrational state. The rotational constants, five quartic centrifugal distortion constants and three sextic centrifugal distortion constants of NaCN were determined; the rotational constants are A″ = 57921.954(7) MHz, B″ = 8369.312(2) MHz, and C″ = 7272.712(2) MHz. All quadrupole and several spin-rotation coupling constants for the hyperfine interaction were evaluated. The quadrupole coupling constants (in MHz) for NaCN are eQq_aa(Na) = -5.344(5), eQq_bb(Na) - eQq_cc(Na) = 2.397(7), eQq_aa(N) = 2.148(4), and eQq_bb(N) - eQq_cc(N) = -4.142(5). From these constants and those of Na13CN we have determined the principal components of the quadrupole coupling tensor for sodium and nitrogen. The structure of sodium cyanide evaluated from the rotational constants of NaCN and Na13CN was found to be T-shaped, similar to the structure of KCN but completely different from the linear isocyanide configuration of LiNC. The effective structural parameters for sodium cyanide in the ground vibrational state are rCN = 1.170(4) Å, rNaC = 2.379(15) Å, and rNaN = 2.233(15) Å, in gratifying agreement with ab initio calculations. Both the geometrical structure and the hyperfine coupling justify the conclusion that the CN group in gaseous sodium cyanide can approximately be considered as a free CN- ion.
An improved finger-vein recognition algorithm based on template matching
NASA Astrophysics Data System (ADS)
Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping
2016-10-01
Finger-vein recognition has become one of the most popular biometric identification methods, and the investigation of recognition algorithms has always been the key point in this field. So far, many applicable algorithms have been developed. However, some problems remain in practice: variance in finger position may lead to image distortion and shifting, and matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct for an oblique finger. During feature extraction, a local adaptive threshold method is adopted. As regards the matching scores, we optimize the translation offsets as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation conditions.
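A hedged sketch of template matching over translation offsets in the spirit of the Miura matching score mentioned above: slide the binarized input vein pattern over the registered template and keep the best overlap ratio. The shift range and the exact score definition are illustrative choices, not the paper's.

```python
import numpy as np

def miura_match(template, query, max_shift=20):
    """template, query: equal-shape binary vein images; returns best overlap score."""
    h, w = template.shape
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            t = template[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            q = query[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            overlap = np.logical_and(t, q).sum()
            denom = t.sum() + q.sum()
            if denom and 2.0 * overlap / denom > best:   # Dice-style overlap ratio
                best = 2.0 * overlap / denom
    return best

rng = np.random.default_rng(0)
template = rng.random((80, 80)) > 0.7
query = np.roll(template, (3, -5), axis=(0, 1))   # translated copy of the pattern
print(miura_match(template, query))               # 1.0 at the recovered offset
```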
Full-Spectrum-Analysis Isotope ID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee; Thoreson, Gregory G.
2017-06-28
FSAIsotopeID analyzes gamma ray spectra to identify radioactive isotopes (radionuclides). The algorithm fits the entire spectrum with combinations of pre-computed templates for a comprehensive set of radionuclides with varying thicknesses and compositions of shielding materials. The isotope identification algorithm is suitable for the analysis of spectra collected by gamma-ray sensors ranging from medium-resolution detectors, such as NaI, to high-resolution detectors, such as HPGe. In addition to analyzing static measurements, the isotope identification algorithm is applied to radiation search applications. The search subroutine maintains a running background spectrum that is passed to the isotope identification algorithm, and it also selects temporal integration periods that optimize the responsiveness and sensitivity. Gain stabilization is supported for both types of applications.
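The template-fitting idea can be shown in miniature (this is not the FSAIsotopeID code): express a measured spectrum as a non-negative combination of precomputed radionuclide templates and report the dominant contributors. The templates, isotope names, and test spectrum below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

def fit_templates(spectrum, templates, names):
    """templates: (n_channels, n_templates); returns (name, weight) by weight."""
    coeffs, _ = nnls(templates, spectrum)     # non-negative template weights
    order = np.argsort(coeffs)[::-1]
    return [(names[i], coeffs[i]) for i in order if coeffs[i] > 0]

rng = np.random.default_rng(0)
templates = rng.random((1024, 6))             # stand-ins for shielded-source templates
names = ["Cs-137", "Co-60", "Ba-133", "Am-241", "K-40", "background"]
spectrum = 3.0 * templates[:, 0] + 0.5 * templates[:, 5]
print(fit_templates(spectrum, templates, names))   # Cs-137, then background
```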
Solving the Mechanism of Na+/H+ Antiporters Using Molecular Dynamics Simulations
NASA Astrophysics Data System (ADS)
Dotson, David L.
Na+/H+ antiporters are vital membrane proteins for cell homeostasis, transporting Na+ ions in exchange for H+ across the lipid bilayer. In humans, dysfunction of these transporters is implicated in hypertension, heart failure, epilepsy, and autism, making them well-established drug targets. Although experimental structures for bacterial homologs of the human Na+/H+ antiporters have been obtained, the detailed mechanism for ion transport is still not well understood. The most well-studied of these transporters, Escherichia coli NhaA, known to transport 2 H+ for every Na+ extruded, was recently shown to bind H+ and Na+ at the same binding site, for which the two ion species compete. Using molecular dynamics simulations, the work presented in this dissertation shows that Na+ binding disrupts a previously unidentified salt bridge between two conserved residues, suggesting that one of these residues, Lys300, may participate directly in transport of H+. This work also demonstrates that the conformational change required for ion translocation in a homolog of NhaA, Thermus thermophilus NapA, thought by some to involve only small helical movements at the ion binding site, is a large-scale, rigid-body movement of the core domain relative to the dimerization domain. This elevator-like transport mechanism translates a bound Na+ up to 10 Å across the membrane. These findings constitute a major shift in the prevailing thought on the mechanism of these transporters, and serve as an exciting launchpad for new developments toward understanding that mechanism in detail.
Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei
2014-09-01
In this paper, the back-propagation (BP) algorithm is used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection from observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm is then evaluated and compared with other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracies. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
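The comparison described can be reproduced in outline with scikit-learn: a feed-forward network trained by back-propagation versus a Naïve Bayes classifier on the same feature matrix. The digits dataset below is only a stand-in for the sensor-event features, which are not available here.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)    # stand-in for motion-sensor feature vectors
bp_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)

print("BP network :", cross_val_score(bp_net, X, y, cv=5).mean())
print("Naive Bayes:", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```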
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still some problems in handling scale variations, object occlusion, fast motion, and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to re-acquire the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.
Comments on Samal and Henderson: Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swain, M.J.
Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used, and give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven, it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polynomial log time unless P = NC.
A Lightweight Hierarchical Activity Recognition Framework Using Smartphone Sensors
Han, Manhyung; Bang, Jae Hun; Nugent, Chris; McClean, Sally; Lee, Sungyoung
2014-01-01
Activity recognition for the purposes of recognizing a user's intentions using multimodal sensors is becoming a widely researched topic largely based on the prevalence of the smartphone. Previous studies have reported the difficulty in recognizing life-logs by only using a smartphone due to the challenges with activity modeling and real-time recognition. In addition, recognizing life-logs is difficult due to the absence of an established framework which enables the use of different sources of sensor data. In this paper, we propose a smartphone-based Hierarchical Activity Recognition Framework which extends the Naïve Bayes approach for the processing of activity modeling and real-time activity recognition. The proposed algorithm demonstrates higher accuracy than the Naïve Bayes approach and also enables the recognition of a user's activities within a mobile environment. The proposed algorithm has the ability to classify fifteen activities with an average classification accuracy of 92.96%. PMID:25184486
Delay-based virtual congestion control in multi-tenant datacenters
NASA Astrophysics Data System (ADS)
Liu, Yuxin; Zhu, Danhong; Zhang, Dong
2018-03-01
With the evolution of cloud computing and virtualization, congestion control for virtual datacenters has become a basic issue for multi-tenant datacenter transmission. To address the fairness conflict among the heterogeneous congestion control algorithms of multiple tenants, this paper proposes a delay-based virtual congestion control scheme that uniformly translates the tenants' heterogeneous congestion control into delay-based feedback by introducing a translation layer in the hypervisor, modifying the three-way handshake for explicit feedback and packet-loss feedback, and throttling the receive window. The simulation results show that delay-based virtual congestion control can effectively solve the unfairness of heterogeneous feedback congestion control algorithms.
A decoupled recursive approach for constrained flexible multibody system dynamics
NASA Technical Reports Server (NTRS)
Lai, Hao-Jan; Kim, Sung-Soo; Haug, Edward J.; Bae, Dae-Sung
1989-01-01
A variational-vector calculus approach is employed to derive a recursive formulation for dynamic analysis of flexible multibody systems. Kinematic relationships for adjacent flexible bodies are derived in a companion paper, using a state vector notation that represents translational and rotational components simultaneously. Cartesian generalized coordinates are assigned for all body and joint reference frames to explicitly formulate the deformation kinematics under the small-deformation assumption, and an efficient recursive algorithm for flexible dynamics is developed. Dynamic analysis of a closed loop robot is performed to illustrate the efficiency of the algorithm.
Predicting Sepsis Risk Using the "Sniffer" Algorithm in the Electronic Medical Record.
Olenick, Evelyn M; Zimbro, Kathie S; DʼLima, Gabrielle M; Ver Schneider, Patricia; Jones, Danielle
The Sepsis "Sniffer" Algorithm (SSA) has merit as a digital sepsis alert but should be considered an adjunct to versus an alternative for the Nurse Screening Tool (NST), given lower specificity and positive predictive value. The SSA reduced the risk of incorrectly categorizing patients at low risk for sepsis, detected sepsis high risk in half the time, and reduced redundant NST screens by 70% and manual screening hours by 64% to 72%. Preserving nurse hours expended on manual sepsis alerts may translate into time directed toward other patient priorities.
1986-08-01
[OCR-damaged front matter of a report approved for public release: "A New Method of Synthetic Aperture Radar Image Formation", University of Illinois at Urbana-Champaign, 1985. The recoverable abstract fragment states that the Convolution Back-Projection (CBP) algorithm, a widely used technique in Computer Aided Tomography (CAT), is applied in this work to reconstruct an image from a set of projections.]
Crowley, Rebecca S; Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael
2010-01-01
The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei
2018-04-01
Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting these objects is a critical problem. Due to the powerful feature extraction and representation capability of deep learning, integrated frameworks that combine deep learning based region proposal generation and object detection have greatly improved the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, because of the translation invariance introduced by the convolution operations in a convolutional neural network (CNN), the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. This dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and it causes position accuracy problems in multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the region proposal generation and object detection integrated framework for HSR remote sensing imagery object detection, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, to resolve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.
Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie
2018-01-01
Translation is a key regulatory step, linking transcriptome and proteome. Two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, on both the transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index, and translational efficiency) can be calculated to quantitatively evaluate translation initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analyses. TranslatomeDB also allows users to upload their own datasets and utilize the identical unified pipeline to analyze their data. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, releasing biologists from searching, analyzing and comparing huge sequencing datasets without local computational power. PMID:29106630
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered look, which achieves a coherent-looking summary, and extracts the most relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words to obtain word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
A study on the performance comparison of metaheuristic algorithms on the learning of neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2017-08-01
The learning or training process of neural networks entails the task of finding the most optimal set of parameters, which includes translation vectors, dilation parameter, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of genetic algorithm half a century ago, the last decade witnessed the explosion of a variety of novel metaheuristic algorithms, such as harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey in the literature of machine learning gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to the others, whereas some others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate if a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and harmony search algorithm are considered. The algorithms are incorporated in the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported in the works done by previous researchers. Several recommendations are given, which include the need of statistical analysis to verify the results and further theoretical works to support the obtained empirical results.
di Pietro, Magali; Vialaret, Jérôme; Li, Guo-Wei; Hem, Sonia; Prado, Karine; Rossignol, Michel; Maurel, Christophe; Santoni, Véronique
2013-12-01
In plants, aquaporins play a crucial role in regulating root water transport in response to environmental and physiological cues. Controls achieved at the post-translational level are thought to be of critical importance for regulating aquaporin function. To investigate the general molecular mechanisms involved, we performed, using the model species Arabidopsis, a comprehensive proteomic analysis of root aquaporins in a large set of physiological contexts. We identified nine physiological treatments that modulate root hydraulics in time frames of minutes (NO and H2O2 treatments), hours (mannitol and NaCl treatments, exposure to darkness and reversal with sucrose, phosphate supply to phosphate-starved roots), or days (phosphate or nitrogen starvation). All treatments induced inhibition of root water transport except for sucrose supply to dark-grown plants and phosphate resupply to phosphate-starved plants, which had opposing effects. Using a robust label-free quantitative proteomic methodology, we identified 12 of 13 plasma membrane intrinsic protein (PIP) aquaporin isoforms, 4 of the 10 tonoplast intrinsic protein isoforms, and a diversity of post-translational modifications including phosphorylation, methylation, deamidation, and acetylation. A total of 55 aquaporin peptides displayed significant changes after treatments and enabled the identification of specific and as yet unknown patterns of response to stimuli. The data show that the regulation of PIP and tonoplast intrinsic protein abundance was involved in response to a few treatments (i.e. NaCl, NO, and nitrate starvation), whereas changes in the phosphorylation status of PIP aquaporins were positively correlated to changes in root hydraulic conductivity in the whole set of treatments. The identification of in vivo deamidated forms of aquaporins and their stimulus-induced changes in abundance may reflect a new mechanism of aquaporin regulation. The overall work provides deep insights into the in vivo post-translational events triggered by environmental constraints and their possible role in regulating plant water status.
Translational-circular scanning for magneto-acoustic tomography with current injection.
Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng
2016-01-27
Magneto-acoustic tomography with current injection involves using electrical impedance imaging technology. To explore potential applications in imaging biological tissue and to enhance image quality, a new scan mode for the transducer is proposed that is based on translational and circular scanning to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed with respect to this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of this scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify the simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image correlates with the conductivity at the boundaries of the sample, but image results change depending on the distance and angular aspect of the transducer; in general, as angle and distance decrease, the image quality improves. Moreover, the experimental data confirmed this correlation. The acoustic-source images resulting from the alternative scanning mode have yielded the outline of a phantom medium. This scan mode enables improvements in the sensitivity of the detecting unit and a change to a transducer array that would improve the efficiency and accuracy of acoustic-source images.
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
Supercooling of aqueous NaCl and KCl solutions under acoustic levitation.
Lü, Y J; Wei, B
2006-10-14
The supercooling capability of aqueous NaCl and KCl solutions is investigated in a containerless state by using the acoustic levitation method. The supercooling of water is obviously enhanced by the alkali metal ions and increases linearly with concentration. Furthermore, the supercooling depends on the nature of the ions and is 2-3 K larger for NaCl solution than for KCl solution in the present concentration range. Molecular dynamics simulations are performed to reveal the intrinsic correlation between supercoolability and microstructure. The translational and orientational order parameters are applied to quantitatively demonstrate the effect of ionic concentration on the hydrogen-bond network and ice melting point. The disrupted hydrogen-bond structure essentially determines the concentration dependence of supercooling. On the other hand, the introduced acoustic pressure suppresses the increase of supercooling by promoting the growth and coalescence of microbubbles, the effective nucleation catalysts, in water. However, the dissolved ions can weaken this effect, and the degree varies with the ion type. This results in the different supercoolability of NaCl and KCl solutions under acoustic levitation conditions.
Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei
2016-03-11
This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets at short range using an impulse radar. According to the requirements for high-speed target measurement at short range, this paper establishes a single-input multiple-output (SIMO) antenna array and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
Classification of posture maintenance data with fuzzy clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1992-01-01
Sensory inputs from the visual, vestibular, and proprioreceptive systems are integrated by the central nervous system to maintain postural equilibrium. Sustained exposure to microgravity causes neurosensory adaptation during spaceflight, which results in decreased postural stability until readaptation occurs upon return to the terrestrial environment. Data which simulate sensory inputs under various sensory organization test (SOT) conditions were collected in conjunction with Johnson Space Center postural control studies using a tilt-translation device (TTD). The University of West Florida applied the fuzzy c-means (FCM) clustering algorithms to this data with a view towards identifying various states and stages of subjects experiencing such changes. Feature analysis, time step analysis, pooling data, response of the subjects, and the algorithms used are discussed.
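For reference, a compact NumPy implementation of the standard fuzzy c-means update equations; this is a generic sketch of the FCM family, not the code applied to the posture data.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows of U sum to 1
    for _ in range(n_iter):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)    # approximately the two cluster means
```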
Quantum gates with controlled adiabatic evolutions
NASA Astrophysics Data System (ADS)
Hen, Itay
2015-02-01
We introduce a class of quantum adiabatic evolutions that we claim may be interpreted as the equivalents of the unitary gates of the quantum gate model. We argue that these gates form a universal set and may therefore be used as building blocks in the construction of arbitrary "adiabatic circuits," analogously to the manner in which gates are used in the circuit model. One implication of the above construction is that arbitrary classical boolean circuits as well as gate model circuits may be directly translated to adiabatic algorithms with no additional resources or complexities. We show that while these adiabatic algorithms fail to exhibit certain aspects of the inherent fault tolerance of traditional quantum adiabatic algorithms, they may have certain other experimental advantages acting as quantum gates.
A portable approach for PIC on emerging architectures
NASA Astrophysics Data System (ADS)
Decyk, Viktor
2016-03-01
A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low level vector (SIMD) processing, middle level shared memory parallel programming, and high level distributed memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C, so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”
Schroeder, Christopher L.; Hartmann, Mitra J. Z.
2012-01-01
When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641
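The curvature-based prediction can be sketched with finite differences: from radial contact distances across the whisker row, estimate local slope and curvature and extrapolate the next expected contact distance. The spacing and distance values below are illustrative, not measurements from the paper.

```python
import numpy as np

def predict_next_contact(r, s):
    """r: radial distances at the last three whiskers, spaced s apart."""
    slope = (r[-1] - r[-2]) / s                      # local object slope
    curvature = (r[-1] - 2 * r[-2] + r[-3]) / s**2   # local object curvature
    # second-order Taylor extrapolation one whisker spacing ahead
    return r[-1] + slope * s + 0.5 * curvature * s**2

r = np.array([10.0, 9.6, 9.4])    # example contact distances (arbitrary units)
print(predict_next_contact(r, s=1.0))
```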
Registration of 3D spectral OCT volumes combining ICP with a graph-based approach
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan
2012-02-01
The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow-up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
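The second (depth) step can be sketched as follows: the cost of shifting each A-scan is the MSE between the two scans at that shift, and a dynamic program with a smoothness penalty between neighboring A-scans stands in for the paper's graph search. Shift range and penalty weight are illustrative.

```python
import numpy as np

def axial_shifts(vol1, vol2, max_shift=10, smooth=0.5):
    """vol1, vol2: (n_ascans, depth) already aligned in XY; shift per A-scan."""
    shifts = np.arange(-max_shift, max_shift + 1)
    n, d = vol1.shape
    cost = np.empty((n, len(shifts)))
    for j, s in enumerate(shifts):                   # MSE cost at each shift
        a = vol1[:, max(0, s):d + min(0, s)]
        b = vol2[:, max(0, -s):d + min(0, -s)]
        cost[:, j] = ((a - b) ** 2).mean(axis=1)
    # dynamic programming with |delta shift| penalty between neighbors
    pen = smooth * np.abs(shifts[:, None] - shifts[None, :])
    acc, back = cost[0], []
    for i in range(1, n):
        tot = acc[:, None] + pen                     # prev-shift rows, cur-shift cols
        back.append(tot.argmin(axis=0))
        acc = cost[i] + tot.min(axis=0)
    j = int(acc.argmin())
    path = [j]
    for bp in reversed(back):
        j = int(bp[j]); path.append(j)
    return shifts[np.array(path[::-1])]

rng = np.random.default_rng(0)
base = np.cumsum(rng.random((64, 200)), axis=1)      # smooth synthetic A-scans
print(axial_shifts(base, np.roll(base, 4, axis=1))[:5])   # -4 at every A-scan
```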
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Signorini, Sergio
2002-01-01
Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperature (SST) and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified due to different gas transfer algorithms, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global ocean sea-air flux. An error as small as ±0.2 in SSS translates into a ±43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
2003-01-01
NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented. The Systematic Image-Based Optical Alignment (SIBOA) Testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...
1995-01-01
In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
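For readers unfamiliar with the algorithm itself, the seven-multiplication recursion is shown below in NumPy. Note that the article's contribution is a nonrecursive tensor product formulation; this recursive sketch only illustrates the underlying arithmetic.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply square 2^n x 2^n matrices with Strassen's seven products."""
    n = A.shape[0]
    if n <= leaf:                        # fall back to ordinary multiply
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A, B = rng.random((256, 256)), rng.random((256, 256))
print(np.allclose(strassen(A, B), A @ B))   # True
```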
1983-10-01
[OCR-damaged report front matter: "Concurrency Control Algorithms", Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte, Computer Corporation of America. Approved for public release; releasable to the general public, including foreign nations, through the National Technical Information Service (NTIS). RADC-TR-83-226, Vol II (of three).]
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To solve the issue that available fusion methods cannot self-adaptively adjust fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes the objective function with GSDA so as to obtain a higher-resolution RS image. As discussed above, the bullet points of the text are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• This article presents the GSDA algorithm for the self-adaptive adjustment of fusion rules.
• This text puts forward the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.
Rotation of a synchronous viscoelastic shell
NASA Astrophysics Data System (ADS)
Noyelles, Benoît
2018-03-01
Several natural satellites of the giant planets have shown evidence of a global internal ocean, coated by a thin, icy crust. This crust is probably viscoelastic, which would alter its rotational response. This response would translate into several rotational quantities, i.e. the obliquity, and the librations at different frequencies, for which the crustal elasticity reacts differently. This study aims at modelling the global response of the viscoelastic crust. For that, I derive the time-dependence of the tensor of inertia, which I combine with the time evolution of the rotational quantities, thanks to an iterative algorithm. This algorithm combines numerical simulations of the rotation with a digital filtering of the resulting tensor of inertia. The algorithm works very well in the elastic case, provided the problem is not resonant. However, considering tidal dissipation adds different phase lags to the oscillating contributions, which challenge the convergence of the algorithm.
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
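To make the stability-certificate idea concrete: once the decentralized gains are frozen at an iteration, the linearized Poincaré map A is a fixed matrix, and exponential orbital stability reduces to a discrete-time Lyapunov condition. The sketch below checks that condition with scipy; the paper itself solves richer BMI/LMI programs, so this is a simplified stand-in, not the published algorithm.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def certify_poincare_stability(A, tol=1e-9):
    # Certify exponential stability of a fixed point of the Poincare map:
    # find P > 0 with A^T P A - P = -I (possible iff spectral radius < 1).
    if np.max(np.abs(np.linalg.eigvals(A))) >= 1.0:
        return None                     # orbit not exponentially stable
    P = solve_discrete_lyapunov(A.T, np.eye(A.shape[0]))
    assert np.min(np.linalg.eigvalsh((P + P.T) / 2.0)) > tol
    return P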
Hamed, Kaveh Akbari; Gregg, Robert D
2017-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. The prototype of the system of mobile robot orientation is described.
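A hedged sketch of the decoupling idea: points of a range scan are mapped into the (theta, rho) line-parameter space, where a rotation of the robot appears as a pure shift along the theta axis (modulo pi), so it can be estimated on its own by circularly correlating the theta marginals of two scans. The binning choices are illustrative.

import numpy as np

def hough_theta_profile(points, n_theta=180, n_rho=128, rho_max=10.0):
    # Accumulate lines in (theta, rho) space; return best line strength
    # per angle (the theta marginal used for rotation estimation).
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                       0, n_rho - 1)
        acc[np.arange(n_theta), bins] += 1
    return acc.max(axis=1)

def estimate_rotation(scan_a, scan_b, n_theta=180):
    ha = hough_theta_profile(scan_a, n_theta)
    hb = hough_theta_profile(scan_b, n_theta)
    scores = [np.dot(ha, np.roll(hb, s)) for s in range(n_theta)]
    return np.argmax(scores) * np.pi / n_theta   # rotation (rad, mod pi)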
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:28959117
Rossetti, Paolo; Bondia, Jorge; Vehí, Josep; Fanelli, Carmine G.
2010-01-01
Evaluation of metabolic control of diabetic people has been classically performed by measuring glucose concentrations in blood samples. Due to the potential improvement it offers in diabetes care, continuous glucose monitoring (CGM) in the subcutaneous tissue is gaining popularity among both patients and physicians. However, devices for CGM measure glucose concentration in compartments other than blood, usually the interstitial space. This means that CGM devices need calibration against blood glucose values, and the accuracy of the estimation of blood glucose will also depend on the calibration algorithm. The complexity of the relationship between glucose dynamics in blood and in the interstitial space contrasts with the simplistic approach of the calibration algorithms currently implemented in commercial CGM devices, translating into suboptimal accuracy. The present review will analyze the issue of calibration algorithms for CGM, focusing exclusively on the commercially available glucose sensors. PMID:22163505
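For orientation, the 'simplistic approach' criticized here is essentially a linear fit between the raw sensor signal and a few paired blood glucose references. A minimal sketch of that baseline, with made-up example values; real devices add filtering and drift handling:

import numpy as np

def fit_linear_calibration(raw_at_calibration, reference_bg):
    # Least-squares line through (raw signal, fingerstick glucose) pairs.
    slope, intercept = np.polyfit(raw_at_calibration, reference_bg, deg=1)
    return slope, intercept

def estimate_bg(raw_signal, slope, intercept):
    return slope * np.asarray(raw_signal) + intercept

# e.g. two fingersticks: s, i = fit_linear_calibration([12.1, 18.4], [90.0, 160.0])
# then bg = estimate_bg(raw_stream, s, i)

Such a static mapping ignores the blood-to-interstitium lag discussed in the review, which is one source of the suboptimal accuracy.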
Fast prediction of RNA-RNA interaction using heuristic algorithm.
Montaseri, Soheila
2015-01-01
Interaction between two RNA molecules plays a crucial role in many medical and biological processes such as gene expression regulation. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. Several algorithms have been developed to predict the structure of the RNA-RNA interaction. High computational time is a common challenge in most of the presented algorithms. In this context, a heuristic method is introduced to accurately predict the interaction between two RNAs based on minimum free energy (MFE). This algorithm uses a few dot matrices for finding the secondary structure of each RNA and the binding sites between the two RNAs. Furthermore, a parallel version of this method is presented. We describe the algorithm's concurrency and parallelism for a multicore chip. The proposed algorithm has been tested on several datasets including CopA-CopT, R1inv-R2inv, Tar-Tar*, DIS-DIS, and IncRNA54-RepZ in Escherichia coli bacteria. The method has high validity and efficiency, and it runs in low computational time in comparison to other approaches.
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
NASA Technical Reports Server (NTRS)
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. The researchers describe a parallel algorithm, implemented on the MPP, for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
Na Mele Ho Ona Auao (The Songs That Instruct). Hawaiian Studies Music Resource Book.
ERIC Educational Resources Information Center
Hawaii State Dept. of Education, Honolulu. Office of Instructional Services.
This music resource book is a compilation of traditional Hawaiian mele (songs) for use as a tool in music instruction and as a means to educate students in both the Hawaiian language and in various aspects of Hawaiian culture. Music and words are provided for each song as well as an English translation. The first section is comprised of songs or…
Rapid activation of gill Na+,K+-ATPase in the euryhaline teleost Fundulus heteroclitus
Mancera, J.M.; McCormick, S.D.
2000-01-01
The rapid activation of gill Na+,K+-ATPase was analyzed in the mummichog (Fundulus heteroclitus) and Atlantic salmon (Salmo salar) transferred from low salinity (0.1 ppt) to high salinity (25-35 ppt). In parr and presmolt, Salmo salar gill Na+,K+-ATPase activity started to increase 3 days after transfer. Exposure of Fundulus heteroclitus to 35 ppt seawater (SW) induced a rise in gill Na+,K+-ATPase activity 3 hr after transfer. After 12 hr, the values dropped to initial levels but showed a second significant increase 3 days after transfer. The absence of detergent in the enzyme assay resulted in lower values of gill Na+,K+-ATPase, and the rapid increase after transfer to SW was not observed. Na+,K+-ATPase activity of gill filaments in vitro for 3 hr increased proportionally to the osmolality of the culture medium (600 mosm/kg > 500 mosm/kg > 300 mosm/kg). Osmolality of 800 mosm/kg resulted in lower gill Na+,K+-ATPase activity relative to 600 mosm/kg. Increasing medium osmolality to 600 mosm/kg with mannitol also increased gill Na+,K+-ATPase. Cycloheximide inhibited the increase in gill Na+,K+-ATPase activity observed in hyperosmotic medium in a dose-dependent manner (10^-4 M > 10^-5 M > 10^-6 M). Actinomycin D or bumetanide in the culture (doses of 10^-4 M, 10^-5 M, and 10^-6 M) did not affect gill Na+,K+-ATPase. Injection of fish with actinomycin D prior to gill organ culture, however, prevented the increase in gill Na+,K+-ATPase activity in hyperosmotic media. The results show a very rapid and transitory increase in gill Na+,K+-ATPase activity in the first hours after the transfer of Fundulus heteroclitus to SW that is dependent on translational and transcriptional processes. (C) 2000 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schramm, Georg, E-mail: georg.schramm@kuleuven.be; Maus, Jens; Hofheinz, Frank
Purpose: MR-based attenuation correction (MRAC) in routine clinical whole-body positron emission tomography and magnetic resonance imaging (PET/MRI) is based on tissue type segmentation. Due to the lack of MR signal in cortical bone and the varying signal of spongeous bone, standard whole-body segmentation-based MRAC ignores the higher attenuation of bone compared to that of soft tissue (MRAC_nobone). The authors aim to quantify and reduce the bias introduced by MRAC_nobone in the standard uptake value (SUV) of spinal and pelvic lesions in 20 PET/MRI examinations with [18F]NaF. Methods: The authors reconstructed 20 PET/MR [18F]NaF patient data sets acquired with a Philips Ingenuity TF PET/MRI. The PET raw data were reconstructed with two different attenuation images. First, the authors used the vendor-provided MRAC algorithm that ignores the higher attenuation of bone to reconstruct PET_nobone. Second, the authors used a threshold-based algorithm developed in their group to automatically segment bone structures in the [18F]NaF PET images. Subsequently, an attenuation coefficient of 0.11 cm^-1 was assigned to the segmented bone regions in the MRI-based attenuation image (MRAC_bone), which was used to reconstruct PET_bone. The automatic bone segmentation algorithm was validated in six PET/CT [18F]NaF examinations. Relative SUVmean and SUVmax differences between PET_bone and PET_nobone of 8 pelvic and 41 spinal lesions, and of other regions such as lung, liver, and bladder, were calculated. By varying the assigned bone attenuation coefficient from 0.11 to 0.13 cm^-1, the authors investigated its influence on the reconstructed SUVs of the lesions. Results: The comparison of [18F]NaF-based and CT-based bone segmentation in the six PET/CT patients showed a Dice similarity of 0.7 with a true positive rate of 0.72 and a false discovery rate of 0.33. The [18F]NaF-based bone segmentation worked well in the pelvis and spine. However, it showed artifacts in the skull and in the extremities. The analysis of the 20 [18F]NaF PET/MRI examinations revealed relative SUVmax differences between PET_nobone and PET_bone of (−8.8% ± 2.7%, p = 0.01) and (−8.1% ± 1.9%, p = 2.4 × 10^-8) in pelvic and spinal lesions, respectively. A maximum SUVmax underestimation of −13.7% was found in a lesion in the third cervical vertebra. The averaged SUVmean differences in volumes of interest in lung, liver, and bladder were below 3%. The average SUVmax differences in pelvic and spinal lesions increased from −9% to −18% and −8% to −17%, respectively, when increasing the assigned bone attenuation coefficient from 0.11 to 0.13 cm^-1. Conclusions: The developed automatic [18F]NaF PET-based bone segmentation makes it possible to include the higher bone attenuation in whole-body MRAC and thus improves quantification accuracy for pelvic and spinal lesions in [18F]NaF PET/MRI examinations. In nonbone structures (e.g., lung, liver, and bladder), MRAC_nobone yields clinically acceptable accuracy.
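A hedged sketch of the threshold-based idea: bone is segmented directly from the [18F]NaF PET image (the tracer accumulates in bone), and segmented voxels receive a fixed attenuation coefficient in the MR-based attenuation map. The threshold value below is a placeholder, not the study's parameter.

import numpy as np

def add_bone_to_mrac(pet_image, mrac_map, uptake_threshold=5.0,
                     mu_bone=0.11):             # cm^-1, as in the study
    bone_mask = pet_image > uptake_threshold    # high NaF uptake => bone
    corrected = mrac_map.copy()
    corrected[bone_mask] = np.maximum(corrected[bone_mask], mu_bone)
    return corrected, bone_mask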
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and the structure was reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.
NASA Astrophysics Data System (ADS)
Qiu, Zhi-cheng; Wang, Bin; Zhang, Xian-min; Han, Jian-da
2013-04-01
This study presents a novel translating piezoelectric flexible manipulator driven by a rodless cylinder. Simultaneous positioning control and vibration suppression of the flexible manipulator is accomplished by using a hybrid driving scheme composed of the pneumatic cylinder and a piezoelectric actuator. The pulse code modulation (PCM) method is utilized for the cylinder. First, the system dynamics model is derived, and its standard multiple input multiple output (MIMO) state-space representation is provided. Second, a composite proportional derivative (PD) control algorithm and a direct adaptive fuzzy control method are designed for the MIMO system. A time delay compensation algorithm and bandstop and low-pass filters are also utilized, taking into account the control hysteresis and the high-frequency modal vibration caused by the long stroke of the cylinder, gas compression and nonlinear factors of the pneumatic system. The convergence of the closed loop system is analyzed. Finally, the experimental apparatus is constructed and experiments are conducted. The effectiveness of the designed controllers and the hybrid driving scheme is verified through simulation and experimental comparison studies. The numerical simulation and experimental results demonstrate that the proposed scheme of employing the pneumatic drive and piezoelectric actuator can suppress the vibration and achieve the desired positioning location simultaneously. Furthermore, the adopted adaptive fuzzy control algorithm can significantly enhance the control performance.
Bioinformatics in translational drug discovery.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
2017-08-31
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).
AUTOMATIC GENERATION OF FFT FOR TRANSLATIONS OF MULTIPOLE EXPANSIONS IN SPHERICAL HARMONICS
Mirkovic, Dragan; Pettitt, B. Montgomery; Johnsson, S. Lennart
2009-01-01
The fast multipole method (FMM) is an efficient algorithm for calculating electrostatic interactions in molecular simulations and a promising alternative to Ewald summation methods. Translation of multipole expansion in spherical harmonics is the most important operation of the fast multipole method and the fast Fourier transform (FFT) acceleration of this operation is among the fastest methods of improving its performance. The technique relies on highly optimized implementation of fast Fourier transform routines for the desired expansion sizes, which need to incorporate the knowledge of symmetries and zero elements in the input arrays. Here a method is presented for automatic generation of such, highly optimized, routines. PMID:19763233
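The speedup rests on the fact that a translation operator acting as a discrete convolution on an array of expansion coefficients can be applied in O(N log N) with FFTs instead of O(N^2) directly. A generic numpy sketch of that step (not the generated, symmetry-aware routines of the paper):

import numpy as np

def translate_via_fft(coeffs, kernel):
    # Linear (non-circular) convolution of coefficient arrays via FFT;
    # zero padding to a power of two avoids wrap-around.
    n = len(coeffs) + len(kernel) - 1
    nfft = 1 << (n - 1).bit_length()
    out = np.fft.ifft(np.fft.fft(coeffs, nfft) * np.fft.fft(kernel, nfft))
    return out[:n]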
Wavelets for sign language translation
NASA Astrophysics Data System (ADS)
Wilson, Beth J.; Anspach, Gretel
1993-10-01
Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.
Nonintegrable Schrodinger discrete breathers.
Gómez-Gardeñes, J; Floría, L M; Peyrard, M; Bishop, A R
2004-12-01
In an extensive numerical investigation of nonintegrable translational motion of discrete breathers in nonlinear Schrödinger lattices, we have used a regularized Newton algorithm to continue these solutions from the limit of the integrable Ablowitz-Ladik lattice. These solutions are shown to be a superposition of a localized moving core and an excited extended state (background) to which the localized moving pulse is spatially asymptotic. The background is a linear combination of small amplitude nonlinear resonant plane waves and it plays an essential role in the energy balance governing the translational motion of the localized core. Perturbative collective variable theory predictions are critically analyzed in the light of the numerical results.
Control of wavepacket dynamics in mixed alkali metal clusters by optimally shaped fs pulses
NASA Astrophysics Data System (ADS)
Bartelt, A.; Minemoto, S.; Lupulescu, C.; Vajda, Š.; Wöste, L.
We have performed adaptive feedback optimization of phase-shaped femtosecond laser pulses to control the wavepacket dynamics of small mixed alkali-metal clusters. An optimization algorithm based on Evolutionary Strategies was used to maximize the ion intensities. The optimized pulses for NaK and Na2K converged to pulse trains consisting of numerous peaks. The timing of the elements of the pulse trains corresponds to integer and half integer numbers of the vibrational periods of the molecules, reflecting the wavepacket dynamics in their excited states.
The Approximability of Learning and Constraint Satisfaction Problems
2010-10-07
further improved this result to NP ⊆ naPCP_{1,3/4+ε}(O(log(n)),3). Around the same time, Zwick [141] showed that naPCP_{1,5/8}(O(log(n)),3) ⊆ BPP by giving a randomized polynomial-time 5/8-approximation algorithm for satisfiable 3CSP. Therefore, unless NP ⊆ BPP, the best s must be bigger than 5/8. Zwick... BPP [141]. We think that Question 5.1.2 addresses an important missing part in understanding the 3-query PCP systems. In addition, as is mentioned the
TIP: protein backtranslation aided by genetic algorithms.
Moreira, Andrés; Maass, Alejandro
2004-09-01
Several applications require the backtranslation of a protein sequence into a nucleic acid sequence. The degeneracy of the genetic code makes this process ambiguous; moreover, not every translation is equally viable. The usual answer is to mimic the codon usage of the target species; however, this does not capture all the relevant features of the 'genomic styles' of different taxa. The program TIP ('Traducción Inversa de Proteínas', Spanish for 'inverse translation of proteins') applies genetic algorithms to improve the backtranslation by minimizing the difference of some coding statistics with respect to their average value in the target. http://www.cmm.uchile.cl/genoma/tip/
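As context, the baseline TIP improves on is plain codon-usage mimicry: for each residue, sample a synonymous codon in proportion to the target species' usage table. A minimal sketch with a placeholder table (not real usage data, and not TIP's genetic-algorithm refinement):

import random

CODON_USAGE = {                  # residue -> {codon: relative frequency}
    'M': {'ATG': 1.00},
    'K': {'AAA': 0.74, 'AAG': 0.26},
    'F': {'TTT': 0.58, 'TTC': 0.42},
}

def backtranslate(protein, usage=CODON_USAGE, seed=0):
    rng = random.Random(seed)
    dna = []
    for aa in protein:
        codons, freqs = zip(*usage[aa].items())
        dna.append(rng.choices(codons, weights=freqs, k=1)[0])
    return ''.join(dna)

# backtranslate('MKF') -> e.g. 'ATGAAATTT'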
On-orbit flight control algorithm description
NASA Technical Reports Server (NTRS)
1975-01-01
Algorithms are presented for rotational and translational control of the space shuttle orbiter in the orbital mission phases, which are external tank separation, orbit insertion, on-orbit and de-orbit. The program provides a versatile control system structure while maintaining uniform communications with other programs, sensors, and control effectors by using an executive routine/functional subroutine format. Software functional requirements are described using block diagrams where feasible, along with input/output tables, and the software implementation of each function is presented in equations and structured flow charts. Included are a glossary of all symbols used to define the requirements, and an appendix of supportive material.
Teaching iSTART to Understand Spanish
ERIC Educational Resources Information Center
Dascalu, Mihai; Jacovina, Matthew E.; Soto, Christian M.; Allen, Laura K.; Dai, Jianmin; Guerrero, Tricia A.; McNamara, Danielle S.
2017-01-01
iSTART is a web-based reading comprehension tutor. A recent translation of iSTART from English to Spanish has made the system available to a new audience. In this paper, we outline several challenges that arose during the development process, specifically focusing on the algorithms that drive the feedback. Several iSTART activities encourage…
A comparative study of surface waves inversion techniques at strong motion recording sites in Greece
Pelekis, Panagiotis C.; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.
2015-01-01
The surface wave method was used for the estimation of the Vs vs. depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
Improved plutonium identification and characterization results with NaI(Tl) detector using ASEDRA
NASA Astrophysics Data System (ADS)
Detwiler, R.; Sjoden, G.; Baciak, J.; LaVigne, E.
2008-04-01
The ASEDRA algorithm (Advanced Synthetically Enhanced Detector Resolution Algorithm) is a tool developed at the University of Florida to synthetically enhance the photopeaks resolved from the characteristically poor-resolution spectra collected at room temperature from a scintillator crystal-photomultiplier detector, such as a NaI(Tl) system. This work reports on the analysis of a side-by-side test comparing the identification capabilities of ASEDRA applied to a NaI(Tl) detector with HPGe results for a plutonium-beryllium (PuBe) source containing approximately 47-year-old weapons-grade plutonium (WGPu), a test case of real-world interest with complex spectra including plutonium isotopes and 241Am decay products. The analysis included a comparison of the photopeaks identified and photopeak energies between the ASEDRA and HPGe detector systems, and the known energies of the plutonium isotopes. ASEDRA's performance in peak area accuracy, also important in isotope identification as well as plutonium quality and age determination, was evaluated for key energy lines by comparing the observed relative ratios of peak areas, adjusted for efficiency and attenuation due to source shielding, to the predicted ratios from known energy line branching and source isotopics. The results show that ASEDRA identified over 20 lines also found by the HPGe and directly correlated to WGPu energies.
NASA Astrophysics Data System (ADS)
Deyati, Avisek; Bagewadi, Shweta; Senger, Philipp; Hofmann-Apitius, Martin; Novac, Natalia
2015-01-01
miRNA plays an important role in tumourigenesis by regulating the expression of oncogenes and tumour suppressors; it thus affects cell proliferation and differentiation, apoptosis, invasion and angiogenesis. miRNAs are potential biomarkers for diagnosis, prognosis and therapies of different forms of cancer. However, the relationship between the response of cancer patients to targeted therapy and the resulting modifications of the miRNA transcriptome in the context of pathway regulation is poorly understood. With ever-increasing pathway and miRNA-mRNA interaction databases, freely available mRNA and miRNA expression data from multiple cancer therapies have produced an unprecedented opportunity to decipher the role of miRNAs in the early prediction of therapeutic efficacy in disease. Efficient translation of -omics data and accumulated knowledge to clinical decision-making is of paramount scientific and public health interest. Well-structured translational algorithms are needed to bridge the gap from databases to decisions. Herein, we present a novel SMARTmiR algorithm to prospectively predict the role of miRNA as a therapeutic biomarker for an anti-EGFR monoclonal antibody, i.e. cetuximab, treatment in colorectal cancer.
R3D: Reduction Package for Integral Field Spectroscopy
NASA Astrophysics Data System (ADS)
Sánchez, Sebastián. F.
2011-06-01
R3D was developed to reduce fiber-based integral field spectroscopy (IFS) data. The package comprises a set of command-line routines adapted for each of these steps, suitable for creating pipelines. The routines have been tested against simulations, and against real data from various integral field spectrographs (PMAS, PPAK, GMOS, VIMOS and INTEGRAL). Particular attention is paid to the treatment of cross-talk. R3D unifies the reduction techniques for the different IFS instruments into a single one, in order to allow the general public to reduce data from different instruments in a homogeneous, consistent and simple way. Although still in its prototyping phase, it has proved useful for reducing PMAS (both in the Larr and the PPAK modes), VIMOS and INTEGRAL data. The current version has been coded in Perl, using PDL, in order to speed up the algorithm testing phase. Most of the time-critical algorithms have been translated to C, and it is our intention to translate all of them. However, even in this phase R3D is fast enough to produce valuable science frames in reasonable time.
Siegler, Jason C; Marshall, Paul W M; Bishop, David; Shaw, Greg; Green, Simon
2016-12-01
A large proportion of empirical research and reviews investigating the ergogenic potential of sodium bicarbonate (NaHCO3) supplementation have focused predominately on performance outcomes and only speculate about underlying mechanisms responsible for any benefit. The aim of this review was to critically evaluate the influence of NaHCO3 supplementation on mechanisms associated with skeletal muscle fatigue as it translates directly to exercise performance. Mechanistic links between skeletal muscle fatigue, proton accumulation (or metabolic acidosis) and NaHCO3 supplementation have been identified to provide a more targeted, evidence-based approach to direct future research, as well as provide practitioners with a contemporary perspective on the potential applications and limitations of this supplement. The mechanisms identified have been broadly categorised under the sections 'Whole-body Metabolism', 'Muscle Physiology' and 'Motor Pathways', and when possible, the performance outcomes of these studies are contextualized within an integrative framework of whole-body exercise where other factors such as task demand (e.g. large vs. small muscle groups), cardio-pulmonary and neural control mechanisms may outweigh any localised influence of NaHCO3. Finally, the 'Performance Applications' section provides further interpretation for the practitioner founded on the mechanistic evidence provided in this review and other relevant, applied NaHCO3 performance-related studies.
Giani, Jorge F.; Janjulia, Tea; Kamat, Nikhil; Seth, Dale M.; Blackwell, Wendell-Lamar B.; Shah, Kandarp H.; Shen, Xiao Z.; Fuchs, Sebastien; Delpire, Eric; Toblli, Jorge E.; Bernstein, Kenneth E.; McDonough, Alicia A.
2014-01-01
The kidney is an important source of angiotensin-converting enzyme (ACE) in many species, including humans. However, the specific effects of local ACE on renal function and, by extension, BP control are not completely understood. We previously showed that mice lacking renal ACE are resistant to the hypertension induced by angiotensin II infusion. Here, we examined the responses of these mice to the low-systemic angiotensin II hypertensive model of nitric oxide synthesis inhibition with L-NAME. In contrast to wild-type mice, mice without renal ACE did not develop hypertension, had lower renal angiotensin II levels, and enhanced natriuresis in response to L-NAME. During L-NAME treatment, the absence of renal ACE was associated with blunted GFR responses; greater reductions in abundance of proximal tubule Na+/H+ exchanger 3, Na+/Pi co-transporter 2, phosphorylated Na+/K+/Cl− cotransporter, and phosphorylated Na+/Cl− cotransporter; and greater reductions in abundance and processing of the γ isoform of the epithelial Na+ channel. In summary, the presence of ACE in renal tissue facilitates angiotensin II accumulation, GFR reductions, and changes in the expression levels and post-translational modification of sodium transporters that are obligatory for sodium retention and hypertension in response to nitric oxide synthesis inhibition. PMID:25012170
Liu, Wanting; Xiang, Lunping; Zheng, Tingkai; Jin, Jingjie; Zhang, Gong
2018-01-04
Translation is a key regulatory step, linking the transcriptome and the proteome. Two major methods of translatome investigation are RNC-seq (sequencing of translating mRNA) and Ribo-seq (ribosome profiling). To facilitate the investigation of translation, we built a comprehensive database, TranslatomeDB (http://www.translatomedb.net/), which provides collection and integrated analysis of published and user-generated translatome sequencing data. The current version includes 2453 Ribo-seq, 10 RNC-seq and their 1394 corresponding mRNA-seq datasets in 13 species. The database emphasizes analysis functions in addition to the dataset collections. Differential gene expression (DGE) analysis can be performed between any two datasets of the same species and type, at both the transcriptome and translatome levels. The translation indices (translation ratio, elongation velocity index and translational efficiency) can be calculated to quantitatively evaluate translation initiation efficiency and elongation velocity. All datasets were analyzed using a unified, robust, accurate and experimentally verifiable pipeline based on the FANSe3 mapping algorithm and edgeR for DGE analyses. TranslatomeDB also allows users to upload their own datasets and utilize the identical unified pipeline to analyze their data. We believe that TranslatomeDB is a comprehensive platform and knowledgebase for translatome and proteome research, freeing biologists from complex searching, analysis and comparison of huge sequencing datasets without needing local computational power. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
An extensive assessment of network alignment algorithms for comparison of brain connectomes.
Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario
2017-06-06
Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in the neurosciences. The modeling and analysis of connectomes is therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network; this process is referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and to align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures. We also evaluated the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes. The methodology has been tested on several brain datasets.
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transformation method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
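A hedged numpy sketch of the measurement step described above: the cross-power spectrum of two frames has phase 2*pi*(fx*dx + fy*dy), so fitting that phase to a plane by least squares yields the sub-pixel shift (dx, dy). As written it is valid only for small shifts, where the phase does not wrap; the low-frequency restriction is an illustrative choice.

import numpy as np

def measure_shift(ref, frame, keep=8):
    # Cross-power spectrum and its phase.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    phase = np.angle(cross)
    fy = np.fft.fftfreq(ref.shape[0])
    fx = np.fft.fftfreq(ref.shape[1])
    FX, FY = np.meshgrid(fx, fy)
    # Keep only the lowest spatial frequencies, where wrapping is unlikely.
    sel = (np.abs(FX) < keep / ref.shape[1]) & (np.abs(FY) < keep / ref.shape[0])
    A = 2.0 * np.pi * np.column_stack([FX[sel], FY[sel]])
    (dx, dy), *_ = np.linalg.lstsq(A, phase[sel], rcond=None)
    return dx, dy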
Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei
2016-01-01
This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets at short range using an impulse radar. According to the requirements for high-speed target measurement at short range, this paper establishes a single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372
Hücker, Sarah Maria; Simon, Svenja; Scherer, Siegfried; Neuhaus, Klaus
2017-01-01
The enteric pathogen Escherichia coli O157:H7 Sakai (EHEC) is able to grow at lower temperatures compared to commensal E. coli. Growth under environmental conditions presents complex challenges different from those in a host. EHEC was grown at 37°C and at 14°C with 4% NaCl, a combination of cold and osmotic stress as present in the food chain. Comparison of RNAseq and RIBOseq data provided a snapshot of ongoing transcription and translation, differentiating transcriptional and post-transcriptional gene regulation, respectively. Indeed, cold and osmotic stress related genes are simultaneously regulated at both levels, but translational regulation clearly dominates. Special emphasis was given to genes regulated by RNA secondary structures in their 5' UTRs, such as RNA thermometers and riboswitches, or genes controlled by small RNAs encoded in trans. The results reveal large differences in gene expression between short-time shock and adaptation to combined cold and osmotic stress. Whereas the majority of cold shock proteins, such as CspA, are translationally downregulated after adaptation, many osmotic stress genes are still significantly upregulated, mainly translationally, but several also transcriptionally. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ishikawa, Ken; Watanabe, Miki; Kuroita, Toshihiro; Uchiyama, Ikuo; Bujnicki, Janusz M; Kawakami, Bunsei; Tanokura, Masaru; Kobayashi, Ichizo
2005-07-21
To search for restriction endonucleases, we used a novel plant-based cell-free translation procedure that bypasses the toxicity of these enzymes. To identify candidate genes, the related genomes of the hyperthermophilic archaea Pyrococcus abyssi and Pyrococcus horikoshii were compared. In line with the selfish mobile gene hypothesis for restriction-modification systems, apparent genome rearrangement around putative restriction genes served as a selecting criterion. Several candidate restriction genes were identified and then amplified in such a way that they were removed from their own translation signal. During their cloning into a plasmid, the genes became connected with a plant translation signal. After in vitro transcription by T7 RNA polymerase, the mRNAs were separated from the template DNA and translated in a wheat-germ-based cell-free protein synthesis system. The resulting solution could be directly assayed for restriction activity. We identified two deoxyribonucleases. The novel enzyme was denoted as PabI, purified and found to recognize 5'-GTAC and leave a 3'-TA overhang (5'-GTA/C), a novel restriction enzyme-generated terminus. PabI is active up to 90 degrees C and optimally active at a pH of around 6 and in NaCl concentrations ranging from 100 to 200 mM. We predict that it has a novel 3D structure.
Data depth based clustering analysis
Jeong, Myeong-Hun; Cai, Yaping; Sullivan, Clair J.; ...
2016-01-01
Here, this paper proposes a new algorithm for identifying patterns within data, based on data depth. Such a clustering analysis has enormous potential to discover previously unknown insights from existing data sets. Many clustering algorithms already exist for this purpose. However, most algorithms are not affine invariant. Therefore, they must operate with different parameters after the data sets are rotated, scaled, or translated. Further, most clustering algorithms, based on Euclidean distance, can be sensitive to noise because they have no global perspective. Parameter selection also significantly affects the clustering results of each algorithm. Unlike many existing clustering algorithms, the proposed algorithm, called data depth based clustering analysis (DBCA), is able to detect coherent clusters after the data sets are affine transformed, without changing a parameter. It is also robust to noise because using data depth can measure the centrality and outlyingness of the underlying data. Further, it can generate relatively stable clusters by varying the parameter. The experimental comparison with the leading state-of-the-art alternatives demonstrates that the proposed algorithm outperforms DBSCAN and HDBSCAN in terms of affine invariance, and exceeds or matches the robustness to noise of DBSCAN or HDBSCAN. The robustness to parameter selection is also demonstrated through the case study of clustering twitter data.
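For intuition about the affine-invariance ingredient, here is a sketch of Mahalanobis depth, one classical depth notion that scores centrality identically under any affine transform of the data; DBCA's actual depth choice may differ.

import numpy as np

def mahalanobis_depth(X):
    # Depth near 1 for central points, near 0 for outlying points;
    # unchanged if X is rotated, scaled, or translated.
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
    return 1.0 / (1.0 + d2)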
Generalized algebraic scene-based nonuniformity correction algorithm.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2005-02-01
A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
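A hedged one-dimensional sketch of the algebraic idea (not the published two-dimensional algorithm): with a known one-pixel shift between frames, the same scene sample is seen by two neighboring detectors, so differencing aligned samples cancels the scene and leaves bias differences, which cumulative-sum to the bias up to a constant offset.

import numpy as np

def estimate_bias_1d(frame_a, frame_b):
    # Model: frame_a[x] = s[x] + b[x],  frame_b[x] = s[x+1] + b[x].
    db = frame_a[1:] - frame_b[:-1]           # equals b[x+1] - b[x]
    bias = np.concatenate([[0.0], np.cumsum(db)])
    return bias - bias.mean()                 # bias known up to an offset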
A Global Approach to the Optimal Trajectory Based on an Improved Ant Colony Algorithm for Cold Spray
NASA Astrophysics Data System (ADS)
Cai, Zhenhua; Chen, Tingyang; Zeng, Chunnian; Guo, Xueping; Lian, Huijuan; Zheng, You; Wei, Xiaoxu
2016-12-01
This paper is concerned with finding a global approach to obtain the shortest complete coverage trajectory on complex surfaces for cold spray applications. A slicing algorithm is employed to decompose the free-form complex surface into several small pieces of simple topological type. The problem of finding the optimal arrangement of the pieces is translated into a generalized traveling salesman problem (GTSP). Owing to its high searching capability and convergence performance, an improved ant colony algorithm is then used to solve the GTSP. Through off-line simulation, a robot trajectory is generated based on the optimized result. The approach is applied to coat real components with a complex surface by using the cold spray system with copper as the spraying material.
SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).
Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J
2012-06-01
To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm on the CPU and GPU was benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU performed the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting using different algorithms to accuracy gain. We also conduct experiments on classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
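A minimal scikit-learn sketch of the kind of hybrid ensemble analyzed, mixing decision-tree and naïve Bayes members under soft voting; the library and dataset are used purely for illustration, and the paper's construction may differ.

from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
hybrid = VotingClassifier(
    estimators=[('dt', DecisionTreeClassifier(random_state=0)),
                ('nb', GaussianNB())],
    voting='soft')                    # average predicted probabilities
print(cross_val_score(hybrid, X, y, cv=5).mean())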
DOT National Transportation Integrated Search
2016-08-09
The AASHTO codes for Load Resistance Factored Design (LRFD) regarding shallow bridge foundations : and walls have been implemented into a set of spreadsheet algorithms to facilitate the calculations of bearing : capacity and footing settlements on na...
McKiernan, C J; Friedlander, M
1999-12-31
The retinal rod Na(+)/Ca(2+),K(+) exchanger (RodX) is a polytopic membrane protein found in photoreceptor outer segments where it is the principal extruder of Ca(2+) ions during light adaptation. We have examined the role of the N-terminal 65 amino acids in targeting, translocation, and integration of the RodX using an in vitro translation/translocation system. cDNAs encoding human RodX and bovine RodX through the first transmembrane domain were correctly targeted and integrated into microsomal membranes; deletion of the N-terminal 65 amino acids (aa) resulted in a translation product that was not targeted or integrated. Deletion of the first 65 aa had no effect on membrane targeting of full-length RodX, but the N-terminal hydrophilic domain no longer translocated. Chimeric constructs encoding the first 65 aa of bovine RodX fused to globin were translocated across microsomal membranes, demonstrating that the sequence could function heterologously. Studies of fresh bovine retinal extracts demonstrated that the first 65 aa are present in the native protein. These data demonstrate that the first 65 aa of RodX constitute an uncleaved signal sequence required for the efficient membrane targeting and proper membrane integration of RodX.
Stock, Christian; Pedersen, Stine Falsig
2017-04-01
Acidosis is characteristic of the solid tumor microenvironment. Tumor cells, because they are highly proliferative and anabolic, have greatly elevated metabolic acid production. To sustain a normal cytosolic pH homeostasis they therefore need to either extrude excess protons or to neutralize them by importing HCO3-, in both cases causing extracellular acidification in the poorly perfused tissue microenvironment. The Na+/H+ exchanger isoform 1 (NHE1) is a ubiquitously expressed acid-extruding membrane transport protein, and upregulation of its expression and/or activity is commonly correlated with tumor malignancy. The present review discusses current evidence on how altered pH homeostasis, and in particular NHE1, contributes to tumor cell motility, invasion, proliferation, and growth and facilitates evasion of chemotherapeutic cell death. We summarize data from in vitro studies, 2D-, 3D- and organotypic cell culture, animal models and human tissue, which collectively point to pH-regulation in general, and NHE1 in particular, as potential targets in combination chemotherapy. Finally, we discuss the possible pitfalls, side effects and cellular escape mechanisms that need to be considered in the process of translating the plethora of basic research data into a clinical setting. Copyright © 2016 Elsevier Ltd. All rights reserved.
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We will discuss the use of online parallel computing on GPUs for synchronous low level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. Latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup.
NASA Astrophysics Data System (ADS)
Cai, X.
2017-12-01
To investigate gravity wave (GW) perturbations in the midlatitude mesopause region during boreal equinox, 433 h of continuous Na lidar full diurnal cycle temperature measurements in September between 2011 and 2015 are utilized to derive the monthly profiles of GW-induced temperature variance, T'^2, and the potential energy density (PED). Taken at Utah State University (42° N, 112° W), these lidar measurements reveal severe GW dissipation near 90 km, where both parameters drop to their minima (~20 K^2 and ~50 m^2/s^2, respectively). The study also shows that GWs with periods of 3-5 h dominate the midlatitude mesopause region during the summer-winter transition. To derive precise temperature perturbations, a new tide removal algorithm suitable for all ground-based observations is developed to de-trend the lidar temperature measurements and to isolate GW-induced perturbations. It removes the tidal perturbations completely and provides the most accurate GW perturbations for ground-based observations. This algorithm is validated by comparing the true GW perturbations in the latest mesoscale-resolving Whole Atmosphere Community Climate Model (WACCM) with those derived from the WACCM local outputs by applying the newly developed tide removal algorithm.
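The paper's tide removal algorithm is not detailed in this abstract; as a hedged baseline sketch only, a common alternative is a least-squares fit of the diurnal tidal harmonics (24, 12, 8 and 6 h) to a full-diurnal-cycle series at each altitude, subtracted to leave the GW residual.

import numpy as np

def remove_tides(t_hours, temperature, periods=(24.0, 12.0, 8.0, 6.0)):
    # Fit mean + tidal harmonics by least squares; return the residual.
    cols = [np.ones_like(t_hours)]
    for P in periods:
        w = 2.0 * np.pi / P
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, temperature, rcond=None)
    return temperature - A @ coeffs           # GW-induced perturbations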
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
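TINGe's exact B-spline formulation is not reproduced here; the sketch below uses linear (order-2) B-spline bin weights and hypothetical function names to illustrate the general idea of soft-binned MI estimation whose per-sample cost is constant, hence linear time in the number of samples:

```python
import numpy as np

def _spline_weights(x, n_bins):
    """Fractional bin memberships via linear B-splines: each sample
    spreads its unit weight over two adjacent bins. Rows sum to 1."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-12) * (n_bins - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, n_bins - 1)
    frac = x - lo
    W = np.zeros((x.size, n_bins))
    W[np.arange(x.size), lo] += 1.0 - frac
    W[np.arange(x.size), hi] += frac
    return W

def bspline_mi(x, y, n_bins=10):
    """MI estimate between two expression profiles from soft histograms."""
    Wx, Wy = _spline_weights(x, n_bins), _spline_weights(y, n_bins)
    px, py = Wx.mean(axis=0), Wy.mean(axis=0)      # soft marginals
    pxy = Wx.T @ Wy / x.size                       # soft joint histogram
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
print(bspline_mi(a, a + 0.3 * rng.normal(size=500)))  # correlated pair
```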
The History and Contributions of the Diabetes in Pregnancy Study Group of North America (1997-2015).
Rosen, Julie A; Langer, Oded; Reece, E Albert; Miodovnik, Menachem
2016-11-01
The Diabetes in Pregnancy Study Group of North America (DPSG-NA) was founded in 1997 in San Antonio, Texas, out of the recognition that the field of maternal-fetal medicine should support and conduct research to address the specialized needs of pregnant women with type 1, type 2, or gestational diabetes mellitus. Since its inception, the DPSG-NA meetings have become a vehicle for the dissemination of data, gathered through collaboration among basic, translational, and clinical researchers and care centers, both in the United States and abroad. Although the meetings cover a range of topics related to diabetes in pregnancy, they have often highlighted a major, timely issue. Utilizing presentations, roundtable discussions, and debates, members of the DPSG-NA discussed the latest research, treatments, and approaches to significantly improve the health and wellbeing of pregnant women with diabetes and their offspring. The following commentary highlights the major contributions of each meeting. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Optimizing doped libraries by using genetic algorithms
NASA Astrophysics Data System (ADS)
Tomandl, Dirk; Schober, Andreas; Schwienhorst, Andreas
1997-01-01
The insertion of random sequences into protein-encoding genes in combination with biological selection techniques has become a valuable tool in the design of molecules that have useful and possibly novel properties. By employing highly effective screening protocols, a functional and unique structure that had not been anticipated can be distinguished among a huge collection of inactive molecules that together represent all possible amino acid combinations. This technique is severely limited by its restriction to a library of manageable size. One approach for limiting the size of a mutant library relies on 'doping schemes', where subsets of amino acids are generated that reveal only certain combinations of amino acids in a protein sequence. Three mononucleotide mixtures for each codon concerned must be designed, such that the resulting codons that are assembled during chemical gene synthesis represent the desired amino acid mixture on the level of the translated protein. In this paper we present a doping algorithm that 'reverse translates' a desired mixture of certain amino acids into three mixtures of mononucleotides. The algorithm is designed to optimally bias these mixtures towards the codons of choice. This approach combines a genetic algorithm with local optimization strategies based on the downhill simplex method. Disparate relative representations of all amino acids (and stop codons) within a target set can be generated. Optional weighting factors are employed to emphasize the frequencies of certain amino acids and their codon usage, and to compensate for reaction rates of different mononucleotide building blocks (synthons) during chemical DNA synthesis. The effect of statistical errors that accompany an experimental realization of calculated nucleotide mixtures on the generated mixtures of amino acids is simulated. These simulations show that the robustness of different optima with respect to small deviations from calculated values depends on their concomitant fitness. Furthermore, the calculations probe the fitness landscape locally and allow a preliminary assessment of its structure.
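A genetic algorithm for this problem needs a fitness function that "forward translates" three candidate mononucleotide mixtures into the amino acid distribution they imply. A minimal sketch of that evaluation step (the function names and the squared-error fitness are assumptions; the paper's optional weighting factors are omitted):

```python
from itertools import product

BASES = "TCAG"
# standard genetic code in TCAG codon order; '*' marks stop codons
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TO_AA = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

def amino_acid_distribution(mix1, mix2, mix3):
    """mix1..3: dicts base -> fraction at the three codon positions.
    Returns the implied amino acid (and stop, '*') frequency distribution."""
    dist = {}
    for codon, aa in CODON_TO_AA.items():
        p = mix1[codon[0]] * mix2[codon[1]] * mix3[codon[2]]
        dist[aa] = dist.get(aa, 0.0) + p
    return dist

def fitness(mixes, target):
    """Squared distance between realized and target amino acid
    frequencies; a GA would minimize this over the 3 x 4 fractions."""
    dist = amino_acid_distribution(*mixes)
    return sum((dist.get(aa, 0.0) - f) ** 2 for aa, f in target.items())

uniform = {b: 0.25 for b in BASES}
print(fitness((uniform, uniform, uniform), {"S": 0.5, "R": 0.5}))
```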
Cohesive Energies of Some Transition Metal Compounds Using Embedded Clusters
NASA Astrophysics Data System (ADS)
Press, Mehernosh Rustom
The molecular-clusters approach to electronic structure calculation is especially well-suited to the study of properties that depend primarily on the local environment of a system, especially those with no translational symmetry, e.g. systems with defects and structural deformations. The presence of the rest of the crystal environment can be accounted for approximately by embedding the cluster in a self-consistent crystal potential. This thesis makes a contribution in the area of investigating the capability of embedded molecular-clusters to yield reliable bulk structural properties. To this end, an algorithm for calculating the cohesive energies of clusters within the discrete-variational X-alpha LCAO-MO formulation is set up and verified on simple solids: Li, Na, Cu and LiF. We then use this formulation to study transition metal compounds, for which the interesting physics lies in local lattice defects, foreign impurities and structural deformations. In a self-consistent calculation of the lattice energies and stability of defect clusters in wustite, Fe1-xO, corner-sharing aggregates of the 4:1 defect are identified as the most stable defect configurations due to efficient compensation of the cluster charge. The intercalation properties of layered transition-metal dichalcogenides continue to be a fertile experimental working area, backed by comparatively little theoretical study. We find that intercalation of ZrS2 with Na perturbs the valence energy level structure sufficiently to induce a more ionic Zr-S bond, a narrowing of the optical gap and filling of the lowest unoccupied host lattice orbitals with the electron donated by Na. Fe intercalation in ZrS2 is accommodated via a strong Fe-S bond, impurity-like band levels in the optical gap of the host and hybridization-driven compression and lowering of the conduction band energy levels. The piezoelectric cuprous halides, CuCl and CuBr, exhibit a host of intriguing properties due to a filled and very active d10 shell at the Fermi energy. A self-consistent calculation via energy minimization of the internal strain in these compounds shows both Cu-halide bonds to be very rigid with little charge delocalization under strain. Piezoelectric response is calculated in terms of effective charges and quadrupolar moments, eT and ΔQ.
Do group 1 metal salts form deep eutectic solvents?
Abbott, A P; D'Agostino, C; Davis, S J; Gladden, L F; Mantle, M D
2016-09-14
Mixtures of metal salts such as ZnCl2, AlCl3 and CrCl3·6H2O form eutectic mixtures with complexing agents, such as urea. The aim of this research was to see if alkali metal salts also formed eutectics in the same way. It is shown that only a limited number of sodium salts form homogeneous liquids at ambient temperatures, and then only with glycerol. None of these mixtures showed eutectic behaviour, but the liquids showed physical properties similar to the group of mixtures classified as deep eutectic solvents. This study focussed on four sodium salts: NaBr, NaOAc, NaOAc·3H2O and Na2B4O7·10H2O. The ionic conductivity and viscosity of these salts with glycerol were studied, and it was found that, unlike previous studies of quaternary ammonium salts with glycerol, where the salt decreased the viscosity, most of the sodium salts increased the viscosity. This suggests that sodium salts have a structure-making effect on glycerol. This phenomenon is probably due to the high charge density of Na+, which coordinates to the glycerol. 1H and 23Na NMR diffusion and relaxation methods have been used to understand the molecular dynamics in the glycerol-salt mixtures, and to probe the effect of water on some of these systems. The results reveal a complex dynamic behaviour of the different species within these liquids. Generally, the translational dynamics of the 1H species, probed by means of PFG NMR diffusion coefficients, is in line with the viscosity of these liquids. However, 1H and 23Na T1 relaxation measurements suggest that the Na-containing species also play a crucial role in the structure of the liquids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perk, T; Bradshaw, T; Muzahir, S
2014-06-15
Purpose: [F-18]NaF PET can be used to image bone metastases; however, tracer uptake in degenerative joint disease (DJD) often appears similar to metastases. This study aims to develop and compare different machine learning algorithms to automatically identify regions of [F-18]NaF scans that correspond to DJD. Methods: 10 metastatic prostate cancer patients received whole body [F-18]NaF PET/CT scans prior to treatment. Image segmentation resulted in 852 ROIs, 69 of which were identified by a nuclear medicine physician as DJD. For all ROIs, various PET and CT textural features were computed. ROIs were divided into training and testing sets used to train eight different machine learning classifiers. Classifiers were evaluated based on receiver operating characteristic area under the curve (AUC), sensitivity, specificity, and positive predictive value (PPV). We also assessed the added value of including CT features in addition to PET features for training classifiers. Results: The training set consisted of 37 DJD ROIs with 475 non-DJD ROIs, and the testing set consisted of 32 DJD ROIs with 308 non-DJD ROIs. Of all classifiers, generalized linear models (GLM), decision forests (DF), and support vector machines (SVM) had the best performance. AUCs of GLM (0.929), DF (0.921), and SVM (0.889) were significantly higher than the other models (p<0.001). GLM and DF, overall, had the best sensitivity, specificity, and PPV, and gave a significantly better performance (p<0.01) than all other models. PET/CT GLM classifiers had higher AUC than just PET or just CT. GLMs built using PET/CT information had superior or comparable sensitivities, specificities and PPVs to just PET or just CT. Conclusion: Machine learning algorithms trained with PET/CT features were able to identify some cases of DJD. GLM outperformed the other classification algorithms. Using PET and CT information together was shown to be superior to using PET or CT features alone. Research supported by the Prostate Cancer Foundation.
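A minimal sketch of the evaluation pattern described here, with logistic regression standing in for the GLM and synthetic stand-in features (the data, feature counts and split below are hypothetical, not the study's): PET-only and PET+CT feature sets are compared by test-set AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X_pet, X_ct: textural features per ROI; y: 1 for DJD (all synthetic)
rng = np.random.default_rng(0)
n = 840
X_pet = rng.normal(size=(n, 8))
X_ct = rng.normal(size=(n, 8))
y = (X_pet[:, 0] + 0.5 * X_ct[:, 0] + rng.normal(size=n) > 1).astype(int)

train, test = slice(0, 512), slice(512, n)
for name, X in [("PET only", X_pet), ("PET+CT", np.hstack([X_pet, X_ct]))]:
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")   # adding CT features raises AUC
```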
Locus ceruleus control of state-dependent gene expression.
Cirelli, Chiara; Tononi, Giulio
2004-06-09
Wakefulness and sleep are accompanied by changes in behavior and neural activity, as well as by the upregulation of different functional categories of genes. However, the mechanisms responsible for such state-dependent changes in gene expression are unknown. Here we investigate to what extent state-dependent changes in gene expression depend on the central noradrenergic (NA) system, which is active in wakefulness and reduces its firing during sleep. We measured the levels of approximately 5000 transcripts expressed in the cerebral cortex of control rats and in rats pretreated with DSP-4 [N-(2-chloroethyl)-N-ethyl-2-bromobenzylamine], a neurotoxin that removes the noradrenergic innervation of the cortex. We found that NA depletion reduces the expression of approximately 20% of known wakefulness-related transcripts. Most of these transcripts are involved in synaptic plasticity and in the cellular response to stress. In contrast, NA depletion increased the expression of the sleep-related gene encoding the translation elongation factor 2. These results indicate that the activity of the central NA system during wakefulness modulates neuronal transcription to favor synaptic potentiation and counteract cellular stress, whereas its inactivity during sleep may play a permissive role to enhance brain protein synthesis.
Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing
ERIC Educational Resources Information Center
Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel
2016-01-01
Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
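The translation rules themselves are not given in the truncated abstract; the flavor of the method can be conveyed by a minimal example, assumed here for illustration, in which a linear-recursive function (one recursive call per activation) is rewritten as an equivalent iterative one:

```python
# Linear recursion: one recursive call per activation.
def sum_list_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_list_rec(xs[1:])

# Mechanical translation: the additions pending on the way back up
# the recursion become an accumulator updated inside a loop.
def sum_list_iter(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

assert sum_list_rec([1, 2, 3]) == sum_list_iter([1, 2, 3]) == 6
```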
Developing fire management mixes for fire program planning
Armando González-Cabán; Patricia B. Shinkle; Thomas J. Mills
1986-01-01
Evaluating economic efficiency of fire management program options requires information on the firefighting inputs, such as vehicles and crews, that would be needed to execute the program option selected. An algorithm was developed to translate automatically dollars allocated to type of firefighting inputs to numbers of units, using a set of weights for a specific fire...
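The abstract only summarizes the algorithm, but the dollars-to-units translation it describes can be sketched as follows (the weight and unit-cost figures are hypothetical, chosen purely for illustration):

```python
def dollars_to_units(budget, weights, unit_costs):
    """Translate a fire program budget into numbers of firefighting
    inputs: split the budget by the weight vector, then convert each
    share into whole units at that input's unit cost.

    weights    : fraction of budget per input type (sums to 1)
    unit_costs : dollars per unit (e.g., per engine or crew)
    """
    return {k: int(budget * weights[k] // unit_costs[k]) for k in weights}

mix = dollars_to_units(
    1_000_000,
    weights={"engines": 0.5, "crews": 0.3, "dozers": 0.2},
    unit_costs={"engines": 120_000, "crews": 45_000, "dozers": 200_000},
)
print(mix)   # {'engines': 4, 'crews': 6, 'dozers': 1}
```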
NASA Astrophysics Data System (ADS)
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
A dual-processor multi-frequency implementation of the FINDS algorithm
NASA Technical Reports Server (NTRS)
Godiwala, Pankaj M.; Caglayan, Alper K.
1987-01-01
This report presents a parallel processing implementation of the FINDS (Fault Inferring Nonlinear Detection System) algorithm on a dual processor configured target flight computer. First, a filter initialization scheme is presented which allows the no-fail filter (NFF) states to be initialized using the first iteration of the flight data. A modified failure isolation strategy, compatible with the new failure detection strategy reported earlier, is discussed and the performance of the new FDI algorithm is analyzed using flight recorded data from the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. The results show that low level MLS, IMU, and IAS sensor failures are detected and isolated instantaneously, while accelerometer and rate gyro failures continue to take comparatively longer to detect and isolate. The parallel implementation is accomplished by partitioning the FINDS algorithm into two parts: one based on the translational dynamics and the other based on the rotational kinematics. Finally, a multi-rate implementation of the algorithm is presented, yielding significantly lower execution times with acceptable estimation and FDI performance.
NASA Astrophysics Data System (ADS)
Khan, Asif; Ryoo, Chang-Kyung; Kim, Heung Soo
2017-04-01
This paper presents a comparative study of different classification algorithms for the classification of various types of inter-ply delaminations in smart composite laminates. Improved layerwise theory is used to model delamination at different interfaces along the thickness and longitudinal directions of the smart composite laminate. The input-output data obtained through a surface-bonded piezoelectric sensor and actuator are analyzed by the system identification algorithm to obtain the system parameters. The identified parameters for the healthy and delaminated structure are supplied as input data to the classification algorithms. The classification algorithms considered in this study are ZeroR, Classification via regression, Naïve Bayes, Multilayer Perceptron, Sequential Minimal Optimization, Multiclass-Classifier, and Decision tree (J48). The open source software Waikato Environment for Knowledge Analysis (WEKA) is used to evaluate the classification performance of the classifiers mentioned above via 75-25 holdout and leave-one-sample-out cross-validation, in terms of classification accuracy, precision, recall, kappa statistic and ROC area.
Comparative study of classification algorithms for immunosignaturing data
2012-01-01
Background High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear as being completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data. Results We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy. Conclusions ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties. PMID:22720696
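A minimal sketch of this kind of classifier bake-off in Python (scikit-learn classifiers stand in for the study's 17 algorithms, and the peptide-intensity matrix below is a synthetic stand-in):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 300))      # stand-in for peptide intensities
y = rng.integers(0, 2, size=120)
X[y == 1, :25] += 0.8                # weak signal spread over many features

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```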
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB_SW = NB_BI-GRAM = SVM had very high performance (0.93 overall sensitivity/positive predictive value, i.e. high accuracy) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as we have done here, utilizing readily available off-the-shelf machine learning techniques and resulting in only a fraction of narratives that require manual review. Human-machine ensemble methods are likely to improve performance over total manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
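The agreement-based filtering rule is simple enough to sketch directly (the label strings below are hypothetical): the machine-assigned code is kept only when the three classifiers in the ensemble agree, and disagreements are routed to a human coder.

```python
def triple_ensemble(nb_sw_pred, nb_bigram_pred, svm_pred):
    """Agreement filter over three per-narrative predictions.
    Returns (code, needs_review): the machine-assigned event code
    is accepted only when all three classifiers agree."""
    if nb_sw_pred == nb_bigram_pred == svm_pred:
        return nb_sw_pred, False
    return None, True

# narratives where the classifiers disagree go to manual review
preds = [("FALL", "FALL", "FALL"), ("FALL", "STRUCK_BY", "FALL")]
for p in preds:
    code, review = triple_ensemble(*p)
    print(code, "-> manual review" if review else "-> auto-coded")
```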
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l∞ norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l∞ algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
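A minimal sketch of the min-max formulation as a linear program (scipy's linprog is used instead of the paper's simplex implementation, and B u = d is treated as exactly attainable, which simplifies away the tracking-error term; B, d and the limits below are made up):

```python
import numpy as np
from scipy.optimize import linprog

def minmax_allocate(B, d, u_max):
    """Minimize the largest actuator deflection as a fraction of its
    limit, subject to B @ u = d. Decision vector z = [u_1..u_n, s];
    minimize s with |u_i| <= s * u_max_i."""
    m, n = B.shape
    c = np.r_[np.zeros(n), 1.0]               # objective: minimize s
    A_eq = np.c_[B, np.zeros(m)]              # B u = d
    A_ub = np.zeros((2 * n, n + 1))
    for i in range(n):
        A_ub[2*i, i], A_ub[2*i, n] = 1.0, -u_max[i]        #  u_i <= s*umax_i
        A_ub[2*i+1, i], A_ub[2*i+1, n] = -1.0, -u_max[i]   # -u_i <= s*umax_i
    bounds = [(None, None)] * n + [(0, None)] # u free, s >= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=d, bounds=bounds)
    return res.x[:n], res.x[n]

B = np.array([[1.0, 1.0, 0.5], [0.2, -0.4, 1.0]])
u, s = minmax_allocate(B, d=np.array([0.8, 0.3]), u_max=np.ones(3))
print(u, s)   # every |u_i| <= s: the load is balanced across actuators
```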
Methods for coherent lensless imaging and X-ray wavefront measurements
NASA Astrophysics Data System (ADS)
Guizar Sicairos, Manuel
X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) for its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, using short x-ray pulses, it enables the capability to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm is capable of estimating the object, and at the same time refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in-situ and at-wavelength and has several practical and algorithmic advantages over conventional focused beam measurement techniques. A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. 
We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
Radio sky mapping from satellites at very low frequencies
NASA Technical Reports Server (NTRS)
Storey, L. R. O.
1991-01-01
Wave Distribution Function (WDF) analysis is a procedure for making sky maps of the sources of natural electromagnetic waves in space plasmas, given local measurements of some or all of the three magnetic and three electric field components. The work that still needs to be done on this subject includes solving basic methodological problems, translating the solution into efficient algorithms, and embodying the algorithms in computer software. One important scientific use of WDF analysis is to identify the mode of origin of plasmaspheric hiss. Some of the data from the Japanese satellite Akebono (EXOS D) are likely to be suitable for this purpose.
Model-based vision using geometric hashing
NASA Astrophysics Data System (ADS)
Akerman, Alexander, III; Patton, Ronald
1991-04-01
The Geometric Hashing technique developed by the NYU Courant Institute has been applied to various automatic target recognition applications. In particular, I-MATH has extended the hashing algorithm to perform automatic target recognition of synthetic aperture radar (SAR) imagery. For this application, the hashing is performed upon the geometric locations of dominant scatterers. In addition to being a robust model-based matching algorithm -- invariant under translation, scale, and 3D rotations of the target -- hashing is of particular utility because it can still perform effective matching when the target is partially obscured. Moreover, hashing is very amenable to a SIMD parallel processing architecture, and thus is potentially implementable in real time.
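A minimal 2D sketch of geometric hashing under similarity transforms (translation, rotation, scale); the full 3D-rotation invariance described above needs a richer basis, and the quantization step q is an assumed tuning parameter:

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def _invariant_coords(points, b0, b1):
    """Express points in the frame of ordered basis pair (b0, b1);
    invariant to translation, rotation and scale (complex-plane trick)."""
    z = points[:, 0] + 1j * points[:, 1]
    z0, z1 = b0[0] + 1j * b0[1], b1[0] + 1j * b1[1]
    return (z - z0) / (z1 - z0)

def build_table(models, q=0.25):
    """Preprocessing: hash the quantized invariant coordinates of every
    model point under every ordered basis pair of that model."""
    table = defaultdict(list)
    for mid, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):
            for u in _invariant_coords(pts, pts[i], pts[j]):
                table[(round(u.real / q), round(u.imag / q))].append((mid, i, j))
    return table

def vote(table, scene, b0, b1, q=0.25):
    """Recognition: one scene basis choice casts votes for (model, basis);
    partial occlusion only removes votes, it does not break matching."""
    votes = defaultdict(int)
    for u in _invariant_coords(scene, scene[b0], scene[b1]):
        for entry in table.get((round(u.real / q), round(u.imag / q)), ()):
            votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```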
FPGA implementation of digital down converter using CORDIC algorithm
NASA Astrophysics Data System (ADS)
Agarwal, Ashok; Lakshmi, Boppana
2013-01-01
In radio receivers, Digital Down Converters (DDC) are used to translate the signal from the intermediate frequency level to baseband. A DDC also decimates the oversampled signal to a lower sample rate, eliminating the need for a high-end digital signal processor. In this paper we implement an architecture for a DDC employing the CORDIC algorithm, which down-converts a 70 MHz (3G) IF signal to a 200 kHz baseband GSM signal with an SFDR greater than 100 dB. The implemented architecture reduces the hardware resource requirements by 15 percent when compared with other architectures available in the literature, owing to the elimination of explicit multipliers and a quadrature phase shifter for mixing.
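CORDIC replaces the mixer's multiplier-based sine/cosine generation with shift-and-add rotations. A minimal floating-point sketch (hardware versions use fixed-point arithmetic and genuine bit shifts):

```python
import math

def cordic_sin_cos(theta, n_iter=16):
    """Rotation-mode CORDIC: rotate (1, 0) by theta using only the
    equivalent of shifts and adds; returns (cos, sin).
    theta must lie in [-pi/2, pi/2]."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0                                   # CORDIC gain correction
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0           # steer toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K

# the NCO output multiplies the IF samples to mix them down to baseband
c, s = cordic_sin_cos(0.7)
print(c - math.cos(0.7), s - math.sin(0.7))   # both errors ~1e-5
```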
Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions
NASA Technical Reports Server (NTRS)
DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.
2008-01-01
bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed to integrate measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models, providing a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
A Theoretical Analysis of Why Hybrid Ensembles Work
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
Chemodynamical Clustering Applied to APOGEE Data: Rediscovering Globular Clusters
NASA Astrophysics Data System (ADS)
Chen, Boquan; D’Onghia, Elena; Pardy, Stephen A.; Pasquali, Anna; Bertelli Motta, Clio; Hanlon, Bret; Grebel, Eva K.
2018-06-01
We have developed a novel technique based on a clustering algorithm that searches for kinematically and chemically clustered stars in the APOGEE DR12 Cannon data. As compared to classical chemical tagging, the kinematic information included in our methodology allows us to identify stars that are members of known globular clusters with greater confidence. We apply our algorithm to the entire APOGEE catalog of 150,615 stars whose chemical abundances are derived by the Cannon. Our methodology found anticorrelations between the elements Al and Mg, Na and O, and C and N previously identified in the optical spectra in globular clusters, even though we omit these elements in our algorithm. Our algorithm identifies globular clusters without a priori knowledge of their locations in the sky. Thus, not only does this technique promise to discover new globular clusters, but it also allows us to identify candidate streams of kinematically and chemically clustered stars in the Milky Way.
Fontibón, Luis Fernando; Ardila, Sandra Liliana; Sánchez, Ricardo
Quality of life is an important outcome in paediatric cancer patients. Its evaluation at different times during the clinical course of the disease is essential for clinical practice focused on the needs of the patients. There is no specific assessment tool for this purpose in Colombia. To perform the cultural adaptation of the quality of life scale PedsQL (Paediatric Quality of Life) Cancer Module, Version 3.0 for use in Colombia, permission for use of the scale was obtained and the algorithm steps of the Mapi Research Trust group were followed: direct and independent translations of the scale by two native Colombian Spanish speaking translators, obtaining a preliminary version from the translations; this was followed by a back translation by a native English speaking translator and a review of the process by the author of the scale, inclusion of suggestions, and implementation of the pilot study. Direct translations were similar in the instructions and response options; a consensus meeting was required in 8 of the 27 items to choose the best translation. The author made no suggestions and gave his endorsement to the implementation of the pilot, in which 2 items were modified in order to improve their understanding. There is now a Colombian Spanish version of the PedsQL questionnaire 3.0 Cancer Module, to be submitted for a validation study prior to its use in the assessment of quality of life in paediatric cancer patients. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Revised motion estimation algorithm for PROPELLER MRI.
Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R
2014-08-01
To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.
Control Allocation with Load Balancing
NASA Technical Reports Server (NTRS)
Bodson, Marc; Frost, Susan A.
2009-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l∞ norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l∞ norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
Parametric Quantum Search Algorithm as Quantum Walk: A Quantum Simulation
NASA Astrophysics Data System (ADS)
Ellinas, Demosthenes; Konstandakis, Christos
2016-02-01
Parametric quantum search algorithm (PQSA) is a form of quantum search that results from relaxing the unitarity of the original algorithm. PQSA can naturally be cast in the form of a quantum walk (QW) by means of the formalism of oracle algebra. This is due to the fact that the completely positive trace preserving search map used by PQSA admits a unitarization (unitary dilation) a la quantum walk, at the expense of introducing an auxiliary quantum coin-qubit space. The ensuing QW describes a process of spiral motion, chosen to be driven by two unitary Kraus generators, generating planar rotations of the Bloch vector around an axis. The quadratic acceleration of quantum search translates into an equivalent quadratic saving in the number of coin qubits in the QW analogue. The Hamiltonian operator associated with the QW model is obtained and is shown to represent a multi-particle, long-range interacting quantum system that simulates parametric search. Finally, the relation of the PQSA-QW simulator to the QW search algorithm is elucidated.
Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images
NASA Astrophysics Data System (ADS)
Yao, Shoukui; Qin, Xiaojuan
2018-02-01
Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
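A minimal sketch of the two steps using OpenCV's Hu moments and scikit-learn's GaussianMixture (the one-GMM-per-class design and the log-scaling of features are assumptions about the details, not taken from the paper):

```python
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def hu_features(image):
    """Log-scaled Hu moments of a single-channel image: invariant to
    scale, rotation, and translation of the target silhouette."""
    hu = cv2.HuMoments(cv2.moments(image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def train_gmms(images_by_class, n_components=3):
    """Fit one GMM per ship type on that type's Hu-moment features.
    Each class needs at least n_components training images."""
    return {cls: GaussianMixture(n_components, random_state=0).fit(
                np.array([hu_features(im) for im in imgs]))
            for cls, imgs in images_by_class.items()}

def classify(gmms, image):
    """Assign the class whose GMM gives the highest log-likelihood."""
    f = hu_features(image).reshape(1, -1)
    return max(gmms, key=lambda cls: gmms[cls].score(f))
```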
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
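A minimal sketch of a diva-style reduction (Python recursion standing in for the paper's formal notion, with a random choice modeling the nondeterministic divide): component access happens only at unit length, and the result is independent of the division points precisely because the combining operation is associative.

```python
import random

def diva_sum(v, lo=0, hi=None):
    """Divide-and-conquer reduction over the slice v[lo:hi]."""
    if hi is None:
        hi = len(v)
    if hi - lo == 1:
        return v[lo]                          # component access only at unit length
    mid = random.randint(lo + 1, hi - 1)      # nondeterministic divide
    return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

v = [3, 1, 4, 1, 5, 9]
# every nondeterministic choice of division points yields the same sum
assert all(diva_sum(v) == 23 for _ in range(100))
```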
A practical approach to implementing new CDC GBS guidelines.
Hill, Shawna M; Bridges, Margie A; Knudsen, Alexis L; Vezeau, Toni M
2013-01-01
Group B streptococcus (GBS) is a well-documented pathogen causing serious maternal and fetal morbidity and mortality. The CDC guidelines for managing clients who test positive for GBS in pregnancy were revised and published in 2010. However, the CDC and the extant literature provide limited guidance on implementation strategies for these new recommendations. Although several algorithms are included in the CDC (2010) document, none combine the maternal risk factors for practical and consistent implementation from pregnancy to newborn. In response to confusion that arose during initial education on these guidelines, we developed an algorithm for maternal intrapartum management. In addition, we clarified the CDC (2010) newborn algorithm in response to provider requests. Without altering the recommendations, both algorithms provide clarification of the CDC (2010) guidelines. The nursing process provides an organizational structure for the discussion of our efforts to translate the complex guidelines into practice. This article could provide other facilities with tools for dealing with specific aspects of the complex clinical management of perinatal GBS.
Recognition of Protein-coding Genes Based on Z-curve Algorithms
Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling
2014-01-01
Recognition of protein-coding genes, a classical bioinformatics issue, is an essential step in annotating newly sequenced genomes. The Z-curve algorithm, as one of the most effective methods on this issue, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with the other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027
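The Z-curve itself is a simple cumulative transform of a DNA sequence; a minimal sketch (gene-finding features in this family of methods are derived from quantities of this kind, e.g. phase-specific base frequencies, though the exact feature sets of ZCURVE are not reproduced here):

```python
def z_curve(seq):
    """Cumulative Z-curve of a DNA sequence. At each position:
    x = (A+G)-(C+T)  purine vs. pyrimidine
    y = (A+C)-(G+T)  amino vs. keto
    z = (A+T)-(G+C)  weak vs. strong hydrogen bonding"""
    counts = {b: 0 for b in "ACGT"}
    curve = []
    for base in seq.upper():
        if base in counts:
            counts[base] += 1
        a, c, g, t = (counts[b] for b in "ACGT")
        curve.append(((a + g) - (c + t),
                      (a + c) - (g + t),
                      (a + t) - (g + c)))
    return curve

print(z_curve("ATGGCA")[-1])   # endpoint of the curve: (2, 0, 0)
```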
Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.
Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena
2013-06-01
Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
Zmiri, Dror; Shahar, Yuval; Taieb-Maimon, Meirav
2012-04-01
To test the feasibility of classifying emergency department patients into severity grades using data mining methods, emergency department records of 402 patients were classified into five severity grades by two expert physicians. The Naïve Bayes and C4.5 algorithms were applied to produce classifiers from patient data into severity grades. The classifiers' results over several subsets of the data were compared with the physicians' assessments, with a random classifier, and with a classifier that selects the maximal-prevalence class. Outcome measures were positive predictive value, multiple-class extensions of sensitivity and specificity combinations, and entropy change. The mean accuracy of the data mining classifiers was 52.94 ± 5.89%, significantly better (P < 0.05) than the mean accuracy of a random classifier (34.60 ± 2.40%). The entropy of the input data sets was reduced through classification by a mean of 10.1%. Allowing for classification deviations of one severity grade led to a mean accuracy of 85.42 ± 1.42%. The classifiers' accuracy in that case was similar to the physicians' consensus rate. Learning from consensus records led to better performance. Reducing the number of severity grades improved results in certain cases. The performance of the Naïve Bayes and C4.5 algorithms was similar; in unbalanced data sets, Naïve Bayes performed better. It is possible to produce a computerized classification model for the severity grade of triage patients using data mining methods. Learning from patient records on which several physicians agree is preferable to learning from each physician's patients. Either Naïve Bayes or C4.5 can be used; Naïve Bayes is preferable for unbalanced data sets. An ambiguity in the intermediate severity grades seems to hamper both the physicians' agreement and the classifiers' accuracy. © 2010 Blackwell Publishing Ltd.
The Role of Na+ and K+ Transporters in Salt Stress Adaptation in Glycophytes
Assaha, Dekoum V. M.; Ueda, Akihiro; Saneoka, Hirofumi; Al-Yahyai, Rashid; Yaish, Mahmoud W.
2017-01-01
Ionic stress is one of the most important components of salinity and is brought about by excess Na+ accumulation, especially in the aerial parts of plants. Since Na+ interferes with K+ homeostasis, and especially given its involvement in numerous metabolic processes, maintaining a balanced cytosolic Na+/K+ ratio has become a key salinity tolerance mechanism. Achieving this homeostatic balance requires the activity of Na+ and K+ transporters and/or channels. The mechanism of Na+ and K+ uptake and translocation in glycophytes and halophytes is essentially the same, but glycophytes are more susceptible to ionic stress than halophytes. The transport mechanisms involve Na+ and/or K+ transporters and channels as well as non-selective cation channels. Thus, the question arises of whether the difference in salt tolerance between glycophytes and halophytes could be the result of differences in the proteins or in the expression of genes coding the transporters. The aim of this review is to seek answers to this question by examining the role of major Na+ and K+ transporters and channels in Na+ and K+ uptake, translocation and intracellular homeostasis in glycophytes. It turns out that these transporters and channels are equally important for the adaptation of glycophytes as they are for halophytes, but differential gene expression, structural differences in the proteins (single nucleotide substitutions, impacting affinity) and post-translational modifications (phosphorylation) account for the differences in their activity and hence the differences in tolerance between the two groups. Furthermore, lack of the ability to maintain stable plasma membrane (PM) potentials following Na+-induced depolarization is also crucial for salt stress tolerance. This stable membrane potential is sustained by the activity of Na+/H+ antiporters such as SOS1 at the PM. Moreover, novel regulators of Na+ and K+ transport pathways including the Nax1 and Nax2 loci regulation of SOS1 expression and activity in the stele, and haem oxygenase involvement in stabilizing membrane potential by activating H+-ATPase activity, favorable for K+ uptake through HAK/AKT1, have been shown and are discussed. PMID:28769821
Yamada, Yuko; Matsugi, Jitsuhiro; Ishikura, Hisayuki
2003-04-15
The tRNA1Ser (anticodon VGA, V = uridine-5-oxyacetic acid) is essential for translation of the UCA codon in Escherichia coli. Here, we studied the translational abilities of serine tRNA derivatives that have bases differing from the wild type at the first positions of their anticodons, using synthetic mRNAs containing the UCN (N = A, G, C, or U) codon. The tRNA1Ser(G34), having the anticodon GGA, was able to read not only the UCC and UCU codons but also the UCA and UCG codons. This means that formation of a G-A or G-G pair is allowed at the wobble position, and that these base pairs are noncanonical. The translational efficiency of the tRNA1Ser(G34) for the UCA or UCG codon depends on the 2'-O-methylation of C32 (Cm). The 2'-O-methylation of C32 may give rise to the space necessary for G-A or G-G base pair formation between the first position of the anticodon and the third position of the codon.
Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.
Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich
2017-08-01
Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
Alaska SAR Facility (ASF5) SAR Communications (SARCOM) Data Compression System
NASA Technical Reports Server (NTRS)
Mango, Stephen A.
1989-01-01
Described are the real-time operational requirements for translating SARCOM into a high-speed image data handler and processor that achieves the desired compression ratios, and the selection of a suitable image data compression technique that keeps fidelity (information) losses as low as possible and can be implemented in an algorithm placing a relatively low arithmetic load on the system.
Action-based verification of RTCP-nets with CADP
NASA Astrophysics Data System (ADS)
Biernacki, Jerzy; Biernacka, Agnieszka; Szpyrka, Marcin
2015-12-01
The paper presents an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into the Aldebaran format. The approach makes automatic verification of RTCP-nets possible using the model checking techniques provided by the CADP toolbox. An actual fire alarm control panel system has been modelled and several of its crucial properties have been verified to demonstrate the usability of the approach.
Duffy, Ellen B.; Barquera, Blanca
2006-01-01
The membrane topologies of the six subunits of Na+-translocating NADH:quinone oxidoreductase (Na+-NQR) from Vibrio cholerae were determined by a combination of topology prediction algorithms and the construction of C-terminal fusions. Fusion expression vectors contained either bacterial alkaline phosphatase (phoA) or green fluorescent protein (gfp) genes as reporters of periplasmic and cytoplasmic localization, respectively. A majority of the topology prediction algorithms did not predict any transmembrane helices for NqrA. A lack of PhoA activity when fused to the C terminus of NqrA and the observed fluorescence of the green fluorescent protein C-terminal fusion confirm that this subunit is localized to the cytoplasmic side of the membrane. Analysis of four PhoA fusions for NqrB indicates that this subunit has nine transmembrane helices and that residue T236, the binding site for flavin mononucleotide (FMN), resides in the cytoplasm. Three fusions confirm that the topology of NqrC consists of two transmembrane helices with the FMN binding site at residue T225 on the cytoplasmic side. Fusion analysis of NqrD and NqrE showed almost mirror image topologies, each consisting of six transmembrane helices; the results for NqrD and NqrE are consistent with the topologies of Escherichia coli homologs YdgQ and YdgL, respectively. The NADH, flavin adenine dinucleotide, and Fe-S center binding sites of NqrF were localized to the cytoplasm. The determination of the topologies of the subunits of Na+-NQR provides valuable insights into the location of cofactors and identifies targets for mutagenesis to characterize this enzyme in more detail. The finding that all the redox cofactors are localized to the cytoplasmic side of the membrane is discussed. PMID:17041063
Xu, Jing; Wang, Zhongbin; Tan, Chao; Liu, Xinhua
2018-01-01
Because sound signals offer non-contact measurement, compact structure, and low power consumption, they have attracted much attention in many fields. In this paper, the sound signal of a coal mining shearer is analyzed to realize accurate online cutting pattern identification and guarantee the safety quality of the working face. The original acoustic signal is first collected through an industrial microphone and decomposed by adaptive ensemble empirical mode decomposition (EEMD). A 13-dimensional set composed of the normalized energy of each level is extracted as the feature vector in the next step. Then, a swarm intelligence optimization algorithm inspired by bat foraging behavior is applied to determine key parameters of the traditional variable translation wavelet neural network (VTWNN). Moreover, a disturbance coefficient is introduced into the basic bat algorithm (BA) to overcome its tendency to fall into local extrema and its limited exploration ability. The VTWNN optimized by the modified BA (VTWNN-MBA) is used as the cutting pattern recognizer. Finally, a simulation example, with an accuracy of 95.25%, and a series of comparisons are conducted to prove the effectiveness and superiority of the proposed method. PMID:29382120
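To make the optimizer concrete, here is a minimal sketch of a bat algorithm in which a random perturbation stands in for the paper's disturbance coefficient (whose exact form the abstract does not specify); in the application, this loop would tune VTWNN parameters rather than the toy quadratic shown:

```python
import numpy as np

def modified_bat_algorithm(obj, dim, n_bats=20, iters=200,
                           f_range=(0.0, 2.0), alpha=0.9, gamma=0.9,
                           disturbance=0.1, bounds=(-5.0, 5.0)):
    """Minimise obj over [bounds]^dim with a bat algorithm plus a random
    'disturbance' kick (an assumption) to help escape local extrema."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))              # velocities
    loud = np.ones(n_bats)                   # loudness A_i
    pulse = np.zeros(n_bats)                 # pulse emission rate r_i
    fit = np.apply_along_axis(obj, 1, x)
    best = x[np.argmin(fit)].copy()
    for t in range(iters):
        freq = f_range[0] + (f_range[1] - f_range[0]) * rng.random(n_bats)
        v += (x - best) * freq[:, None]
        cand = np.clip(x + v, lo, hi)
        # local random walk around the current best for 'loud' bats
        walk = rng.random(n_bats) > pulse
        cand[walk] = np.clip(best + 0.01 * loud[walk, None] *
                             rng.standard_normal((walk.sum(), dim)), lo, hi)
        # disturbance: occasional kick against premature convergence
        kick = rng.random(n_bats) < disturbance
        cand[kick] += rng.standard_normal((kick.sum(), dim))
        cand = np.clip(cand, lo, hi)
        cand_fit = np.apply_along_axis(obj, 1, cand)
        accept = (cand_fit < fit) & (rng.random(n_bats) < loud)
        x[accept], fit[accept] = cand[accept], cand_fit[accept]
        loud[accept] *= alpha
        pulse[accept] = 1 - np.exp(-gamma * (t + 1))
        best = x[np.argmin(fit)].copy()
    return best, fit.min()

print(modified_bat_algorithm(lambda z: np.sum(z**2), dim=2))
```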
PLUS: open-source toolkit for ultrasound-guided intervention systems.
Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor
2014-10-01
A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under the BSD license and can be downloaded from http://www.plustoolkit.org.
Music 4C, a multi-voiced synthesis program with instruments defined in C
NASA Astrophysics Data System (ADS)
Beauchamp, James W.
2003-04-01
Music 4C is a program which runs under Unix (including Linux) and provides a means for the synthesis of arbitrary signals defined in C code. The program is a loose translation of an earlier program, Music 4BF [H. S. Howe, Jr., Electronic Music Synthesis (Norton, 1975)]. A set of instrument definitions is driven by a numerical score which consists of a series of "events." Each event gives an instrument name, start time and duration, and a number of parameters (e.g., pitch) which describe the event. Each instrument definition consists of event parameters, performance variables, initializations, and synthesis algorithm code. Thus, the synthetic signal, no matter how complex, is precisely defined. Moreover, the resulting sounds can be overlaid in any arbitrary pattern. The program serves as a mixer of algorithmically produced sounds or recorded sounds taken from sample files or synthesized from spectrum files. A score file can be entered by hand, generated from a program, translated from a MIDI file, or generated from an alpha-numeric score using an auxiliary program, Notepro. Output sample files are in wav, snd, or aiff format. The program is provided as C source code for download.
Non-rigid Motion Correction in 3D Using Autofocusing with Localized Linear Translations
Cheng, Joseph Y.; Alley, Marcus T.; Cunningham, Charles H.; Vasanawala, Shreyas S.; Pauly, John M.; Lustig, Michael
2012-01-01
MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from non-rigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well-approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric, more specifically the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multi-channel navigator data. The navigation strategy is based on so-called "Butterfly" navigators, which are modifications of the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, a sufficient number of motion measurements was found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, non-rigid motion was observed. PMID:22307933
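The metric itself is easy to state. Below is a minimal sketch of a localized gradient-entropy criterion and the candidate-selection step, assuming the candidate translations (from the navigator data) have already been applied to produce trial patches; the exact metric used in the paper may be normalised differently:

```python
import numpy as np

def gradient_entropy(patch):
    """Entropy of the normalised gradient-magnitude distribution; motion
    artifacts spread gradient energy and raise this value, so the best
    candidate translation is the one that minimises it."""
    gy, gx = np.gradient(patch.astype(float))
    p = np.hypot(gx, gy).ravel()
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def pick_translation(candidate_patches, candidate_shifts):
    """candidate_patches[k]: the local patch reconstructed assuming the
    k-th navigator-measured translation; returns the winning shift."""
    scores = [gradient_entropy(p) for p in candidate_patches]
    return candidate_shifts[int(np.argmin(scores))]

# sanity check: a clean edge image scores lower than a noise-corrupted one
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal((32, 32))
print(gradient_entropy(clean) < gradient_entropy(noisy))   # expected: True
```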
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
Lyapunov exponents are an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, more accurate calculations of the Lyapunov exponents can be obtained with the eigenvalue method by increasing the number of iterations, and the limits exist. However, due to the finite precision of computers and other reasons, the results can overflow numerically, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all Lyapunov exponents will approach the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also obviously inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, via QR orthogonal decomposition and SVD orthogonal decomposition approaches, to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
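A minimal sketch of the QR-based remedy (my illustration, not the authors' code): re-orthonormalising the tangent basis at every step keeps the accumulated products bounded, so the separate exponents are recovered instead of all collapsing onto the largest one:

```python
import numpy as np

def lyapunov_qr(step, x0, n_iter=20000, n_skip=100):
    """All Lyapunov exponents of a discrete-time map via repeated QR
    re-orthonormalisation of the tangent dynamics; no product of
    Jacobians is ever formed explicitly, so nothing overflows.

    step: function x -> (x_next, Jacobian at x)"""
    x = np.asarray(x0, float)
    Q = np.eye(x.size)
    sums = np.zeros(x.size)
    for i in range(n_iter):
        x, J = step(x)
        Q, R = np.linalg.qr(J @ Q)
        if i >= n_skip:                          # discard the transient
            sums += np.log(np.abs(np.diag(R)))
    return sums / (n_iter - n_skip)

def henon(x, a=1.4, b=0.3):
    """Henon map and its Jacobian; exponents are roughly +0.42 and -1.62."""
    return (np.array([1 - a * x[0]**2 + x[1], b * x[0]]),
            np.array([[-2 * a * x[0], 1.0], [b, 0.0]]))

print(lyapunov_qr(henon, [0.1, 0.1]))
```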
Spencer, Jean L; Bhatia, Vivek N; Whelan, Stephen A; Costello, Catherine E; McComb, Mark E
2013-12-01
The identification of protein post-translational modifications (PTMs) is an increasingly important component of proteomics and biomarker discovery, but very few tools exist for performing fast and easy characterization of global PTM changes and differential comparison of PTMs across groups of data obtained from liquid chromatography-tandem mass spectrometry experiments. STRAP PTM (Software Tool for Rapid Annotation of Proteins: Post-Translational Modification edition) is a program that was developed to facilitate the characterization of PTMs using spectral counting and a novel scoring algorithm to accelerate the identification of differential PTMs from complex data sets. The software facilitates multi-sample comparison by collating, scoring, and ranking PTMs and by summarizing data visually. The freely available software (beta release) installs on a PC and processes data in protXML format obtained from files parsed through the Trans-Proteomic Pipeline. The easy-to-use interface allows examination of results at protein, peptide, and PTM levels, and the overall design offers tremendous flexibility that provides proteomics insight beyond simple assignment and counting.
Intensity statistics in the presence of translational noncrystallographic symmetry.
Read, Randy J; Adams, Paul D; McCoy, Airlie J
2013-02-01
In the case of translational noncrystallographic symmetry (tNCS), two or more copies of a component in the asymmetric unit of the crystal are present in a similar orientation. This causes systematic modulations of the reflection intensities in the diffraction pattern, leading to problems with structure determination and refinement methods that assume, either implicitly or explicitly, that the distribution of intensities is a function only of resolution. To characterize the statistical effects of tNCS accurately, it is necessary to determine the translation relating the copies, any small rotational differences in their orientations, and the size of random coordinate differences caused by conformational differences. An algorithm to estimate these parameters and refine their values against a likelihood function is presented, and it is shown that by accounting for the statistical effects of tNCS it is possible to unmask the competing statistical effects of twinning and tNCS and to more robustly assess the crystal for the presence of twinning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, M.; Al-Dayeh, L.; Patel, P.
It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm which accounts for translational and rotational motion of the head under a rigid body assumption. The brain, however, is not entirely rigid and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow and inhomogeneities in the magnetic and gradient fields. Since nonrigid body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an nth-order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using polynomial transformation for 2D and 3D registration show a lower variance-to-mean ratio compared to simple rotational and translational corrections.
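A minimal sketch of the idea, under simplifying assumptions: each frame is warped with a second-order polynomial whose coefficients are fit by minimising a variance-to-mean cost. Here the cost is computed pairwise against a reference frame, whereas the study minimises the per-pixel variance-to-mean ratio across the whole image series:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def poly_warp(img, c):
    """Warp an image with a 2nd-order (quadratic) polynomial coordinate
    map; c holds 12 coefficients, 6 per output coordinate."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = c[:6], c[6:]
    xp = cx[0] + (1 + cx[1]) * x + cx[2] * y + cx[3] * x * x + cx[4] * x * y + cx[5] * y * y
    yp = cy[0] + cy[1] * x + (1 + cy[2]) * y + cy[3] * x * x + cy[4] * x * y + cy[5] * y * y
    return map_coordinates(img, [yp, xp], order=1, mode='nearest')

def register_to_reference(frame, ref):
    """Fit warp coefficients by minimising the pixelwise variance-to-mean
    ratio of the (warped frame, reference) pair."""
    def cost(c):
        pair = np.stack([poly_warp(frame, c), ref])
        return float(np.mean(pair.var(axis=0) / (pair.mean(axis=0) + 1e-6)))
    return minimize(cost, np.zeros(12), method='Nelder-Mead').x
```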
KEGGtranslator: visualizing and converting the KEGG PATHWAY database to various formats.
Wrzodek, Clemens; Dräger, Andreas; Zell, Andreas
2011-08-15
The KEGG PATHWAY database provides a widely used service for metabolic and nonmetabolic pathways. It contains manually drawn pathway maps with information about the genes, reactions and relations contained therein. To store these pathways, KEGG uses KGML, a proprietary XML-format. Parsers and translators are needed to process the pathway maps for usage in other applications and algorithms. We have developed KEGGtranslator, an easy-to-use stand-alone application that can visualize and convert KGML formatted XML-files into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g. MIRIAM annotations) beyond the scope of the KGML document, and amends missing components to fragmentary reactions within the pathway to allow simulations on those. KEGGtranslator is freely available as a Java(™) Web Start application and for download at http://www.cogsys.cs.uni-tuebingen.de/software/KEGGtranslator/. KGML files can be downloaded from within the application. clemens.wrzodek@uni-tuebingen.de Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claus, Rene A.; Wang, Yow-Gwo; Wojdyla, Antoine
Extreme Ultraviolet (EUV) Lithography mask defects were examined on the actinic mask imaging system, SHARP, at Lawrence Berkeley National Laboratory. Also, a quantitative phase retrieval algorithm based on the Weak Object Transfer Function was applied to the measured through-focus aerial images to examine the amplitude and phase of the defects. The accuracy of the algorithm was demonstrated by comparing the results of measurements using a phase contrast zone plate and a standard zone plate. Using partially coherent illumination to measure frequencies that would otherwise fall outside the numerical aperture (NA), it was shown that some defects are smaller than the conventional resolution of the microscope. We found that the programmed defects of various sizes were measured and shown to have both an amplitude and a phase component that the algorithm is able to recover.
Walrafen, George E; Douglas, Rudolph T W
2006-03-21
High-temperature, high-pressure Raman spectra were obtained from aqueous NaOH solutions up to 2NaOH·H2O, with X(NaOH)=0.667, at 480 K. The spectra corresponding to the highest compositions, X(NaOH) ≥ 0.5, are dominated by H3O2-. An IR xi-function dispersion curve for aqueous NaOH, at 473 K and 1 kbar, calculated from the data of Franck and Charuel indicates that the OH- ion forms H3O2- by preferential H bonding with nonhydrogen-bonded OH groups. Raman spectra from wet to anhydrous solid LiOH, NaOH, and KOH yield sharp, symmetric OH- stretching peaks at 3664, 3633, and 3596 cm⁻¹, respectively, plus water-related, i.e., H3O2-, peaks near 3562 cm⁻¹ (LiOH), 3596 cm⁻¹ (NaOH), and 3500 cm⁻¹ (KOH). Absence of H3O2- peaks from the solid assures that the corresponding melt is anhydrous. Raman spectra from the anhydrous melts yield OH- stretching peak frequencies of 3614±4 cm⁻¹ (LiOH, 873 K), 3610±2 cm⁻¹ (NaOH, 975 K), and 3607±2 cm⁻¹ (KOH, 773 K), but low-frequency asymmetry due to ion-pair interactions is present, centered near 3550 cm⁻¹. The ion-pair-related asymmetry corresponds to the sole IR maximum near 3550 cm⁻¹ from anhydrous molten NaOH at 623 K. Bose-Einstein correction of published low-frequency Raman data from molten LiOH revealed an acoustic phonon near 205 cm⁻¹, related to restricted translation of OH- versus Li+, and an optical phonon at 625 cm⁻¹ with τ ≈ 0.05 ps, due to protonic precession and/or pendular motion. Strong H bonding between water and the O atom of OH- forms H3O2-, but the proton of OH- does not bond with H significantly. Large Raman bandwidths (aqueous solutions) are explained in terms of inhomogeneous broadening due to proton transfer in a double well. Vibrational assignments are presented for H3O2-.
Kagome fiber based ultrafast laser microsurgery probe delivering micro-Joule pulse energies.
Subramanian, Kaushik; Gabay, Ilan; Ferhanoğlu, Onur; Shadfan, Adam; Pawlowski, Michal; Wang, Ye; Tkaczyk, Tomasz; Ben-Yakar, Adela
2016-11-01
We present the development of a 5 mm, piezo-actuated, ultrafast laser scalpel for fast tissue microsurgery. Delivery of micro-Joule-level energies to the tissue was made possible by a large, 31 μm, air-cored inhibited-coupling Kagome fiber. We overcome the fiber's low NA by using lenses made of high-refractive-index ZnS, which produced an optimal focusing condition with a 0.23 NA objective. The optical design achieved a focused laser spot size of 4.5 μm diameter covering a 75 × 75 μm² scan area in a miniaturized setting. The probe could deliver the maximum available laser power, achieving an average fluence of 7.8 J/cm² on the tissue surface at 62% transmission efficiency. Such fluences could produce uninterrupted, 40 μm deep cuts at translational speeds of up to 5 mm/s along the tissue. We predicted that the best combination of speed and coverage exists at 8 mm/s for our conditions. The onset of nonlinear absorption in ZnS, however, limited the probe's energy delivery capabilities to 1.4 μJ for linear operation at the 1.5 picosecond pulse-widths of our fiber laser. Alternatives like broadband CaF₂ crystals should mitigate such nonlinear limiting behavior. Improved opto-mechanical design and appropriate material selection should allow substantially higher fluence delivery and propel such Kagome fiber-based scalpels towards clinical translation.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Padilla, Dionis A.; Fausto, Janette C.; Vergara, Ernesto M.; Garcia, Ramon G.; Delos Angeles, Bethsedea Joy S.; Dizon, Neil John A.; Mardo, Mark Kevin N.
2017-02-01
This research translates a series of hand gestures into a word and produces the equivalent sound as it is read and said with a Filipino accent, using Support Vector Machine classification and Mel Frequency Cepstral Coefficient analysis. The concept is to detect Filipino speech input and translate the spoken words to their text form in Filipino. This study aims to help the Filipino deaf community impart their thoughts through hand gestures and communicate with people who do not know how to read hand gestures. It also helps literate deaf users simply read the spoken words relayed to them using the Filipino speech-to-text system.
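As a rough illustration of the speech-recognition side of such a pipeline, the sketch below pairs mean MFCC feature vectors with an SVM classifier using librosa and scikit-learn; the file names and word labels are hypothetical placeholders, and the study's actual feature extraction and training setup may differ:

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector of an utterance: a compact, fixed-length feature
    commonly paired with an SVM for small-vocabulary speech tasks."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# hypothetical recordings; labels are the Filipino words being spoken
train_files = ["salamat.wav", "kamusta.wav"]
train_labels = ["salamat", "kamusta"]
X = np.array([mfcc_features(f) for f in train_files])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict([mfcc_features("test.wav")]))
```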
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out efficiently. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Logical NAND and NOR Operations Using Algorithmic Self-assembly of DNA Molecules
NASA Astrophysics Data System (ADS)
Wang, Yanfeng; Cui, Guangzhao; Zhang, Xuncai; Zheng, Yan
DNA self-assembly is the most advanced and versatile system that has been experimentally demonstrated for programmable construction of patterned systems on the molecular scale. It has been demonstrated that simple binary arithmetic and logical operations can be computed by the process of self-assembly of DNA tiles. Here we report a one-dimensional algorithmic self-assembly of DNA triple-crossover molecules that can be used to execute five steps of logical NAND and NOR operations on a string of binary bits. To achieve this, abstract tiles were translated into DNA tiles based on triple-crossover motifs. Serving as input for the computation, long single-stranded DNA molecules were used to nucleate growth of tiles into algorithmic crystals. Our method shows that engineered DNA self-assembly can be treated as a bottom-up design technique and is capable of supporting DNA computer organization and architecture.
Simulation results for a finite element-based cumulative reconstructor
NASA Astrophysics Data System (ADS)
Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny
2017-10-01
Modern ground-based telescopes rely on adaptive optics (AO) systems for the compensation of image degradation caused by atmospheric turbulences. Within an AO system, measurements of incoming light from guide stars are used to adjust deformable mirror(s) in real time that correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere lead to the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm that was developed by Neubauer to reconstruct the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulation while at the same time maintaining the speed of recently introduced solvers with linear order speed.
On dealing with multiple correlation peaks in PIV
NASA Astrophysics Data System (ADS)
Masullo, A.; Theunissen, R.
2018-05-01
A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and by selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
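The multi-peak detection step can be sketched as follows; a fixed relative threshold stands in here for the paper's automatic threshold, so this is an illustrative approximation rather than the published procedure:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def correlation_peaks(corr, rel_threshold=0.5, size=3):
    """Return (row, col) of every local maximum of a correlation map
    exceeding rel_threshold * global maximum, instead of keeping only
    the single highest peak as a plain SNR filter effectively does."""
    local_max = (corr == maximum_filter(corr, size=size))
    strong = corr >= rel_threshold * corr.max()
    return np.argwhere(local_max & strong)

# toy correlation map with two displacement peaks
corr = np.zeros((16, 16))
corr[4, 5], corr[11, 9] = 1.0, 0.8
print(correlation_peaks(corr))   # -> [[ 4  5] [11  9]]
```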
NASA Astrophysics Data System (ADS)
Balasubramanian, Priya S.; Guo, Jiaqi; Yao, Xinwen; Qu, Dovina; Lu, Helen H.; Hendon, Christine P.
2017-02-01
The directionality of collagen fibers across the anterior cruciate ligament (ACL) as well as the insertion of this key ligament into bone are important for understanding the mechanical integrity and functionality of this complex tissue. Quantitative analysis of three-dimensional fiber directionality is of particular interest due to the physiological, mechanical, and biological heterogeneity inherent across the ACL-to-bone junction, the behavior of the ligament under mechanical stress, and the usefulness of this information in designing tissue engineered grafts. We have developed an algorithm to characterize Optical Coherence Tomography (OCT) image volumes of the ACL. We present an automated algorithm for measuring ligamentous fiber angles, and extracting attenuation and backscattering coefficients of ligament, interface, and bone regions within mature and immature bovine ACL insertion samples. Future directions include translating this algorithm for real time processing to allow three-dimensional volumetric analysis within dynamically moving samples.
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Luquette, Richard J.; Sanner, Robert M.
2003-01-01
Precision Formation Flying is an enabling technology for a variety of proposed space-based observatories, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), the associated MAXIM pathfinder mission, and the Stellar Imager. An essential element of the technology is the control algorithm. This paper discusses the development of a nonlinear, six-degree-of-freedom (6DOF) control algorithm for maintaining the relative position and attitude of a spacecraft within a formation. The translation dynamics are based on the equations of motion for the restricted three-body problem. The control law guarantees that the tracking error converges to zero, based on a Lyapunov analysis. The simulation, modelled after the MAXIM Pathfinder mission, maintains the relative position and attitude of a Follower spacecraft with respect to a Leader spacecraft stationed near the L2 libration point in the Sun-Earth system.
The elimination of colour blocks in remote sensing images in VR
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Li, Guohui; Su, Zhenyu
2018-02-01
Aiming at the HSI colour space characteristics of remote sensing images taken at different times in VR, a unified colour algorithm is proposed. First, the method converts the original image from RGB colour space to HSI colour space. Then, based on the invariance of the hue before and after the colour adjustment in the HSI colour space and the translational behaviour of the image brightness after the colour adjustment, a linear model satisfying these characteristics of the image is established, and the range of the parameters in the model is determined. Finally, experimental verification is carried out using the established colour adjustment model. The experimental results show that the proposed algorithm can effectively enhance image clarity, recover a clear image quickly, and solve the colour block problem well.
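For reference, a standard RGB-to-HSI conversion of the kind the method begins with can be sketched as follows (assuming RGB values normalised to [0, 1]; the paper's linear model then shifts the I channel while holding H fixed):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI; hue is returned
    in radians, saturation and intensity in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / (i + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2 * np.pi - h, h)
    return np.stack([h, s, i], axis=-1)

img = np.array([[[0.8, 0.2, 0.1]]])       # one reddish pixel
hsi = rgb_to_hsi(img)
# the paper's adjustment: shift intensity linearly, keep hue fixed
hsi[..., 2] = 0.9 * hsi[..., 2] + 0.05
print(hsi)
```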
Discovering causal signaling pathways through gene-expression patterns
Parikh, Jignesh R.; Klinger, Bertram; Xia, Yu; Marto, Jarrod A.; Blüthgen, Nils
2010-01-01
High-throughput gene-expression studies result in lists of differentially expressed genes. Most current meta-analyses of these gene lists include searching for significant membership of the translated proteins in various signaling pathways. However, such membership enrichment algorithms do not provide insight into which pathways caused the genes to be differentially expressed in the first place. Here, we present an intuitive approach for discovering upstream signaling pathways responsible for regulating these differentially expressed genes. We identify consistently regulated signature genes specific for signal transduction pathways from a panel of single-pathway perturbation experiments. An algorithm that detects overrepresentation of these signature genes in a gene group of interest is used to infer the signaling pathway responsible for regulation. We expose our novel resource and algorithm through a web server called SPEED: Signaling Pathway Enrichment using Experimental Data sets. SPEED can be freely accessed at http://speed.sys-bio.net/. PMID:20494976
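The overrepresentation step can be illustrated with a generic hypergeometric test, sketched below; this is an assumption-laden stand-in, since SPEED's actual scoring algorithm is more elaborate than a plain hypergeometric tail:

```python
from scipy.stats import hypergeom

def signature_enrichment(gene_list, signature, background_size):
    """P-value that the overlap between a differentially expressed gene
    list and a pathway's signature set is at least as large as observed,
    under a hypergeometric null (the usual overrepresentation test)."""
    overlap = len(set(gene_list) & set(signature))
    # sf(k - 1) = P(X >= k) for X ~ Hypergeom(M, n, N)
    return hypergeom.sf(overlap - 1, background_size,
                        len(signature), len(gene_list))

# toy example: 5 of 40 DE genes hit a 100-gene signature
# in a 20000-gene background
print(signature_enrichment(range(40), list(range(35, 135)), 20000))
```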
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
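To make the hybridisation concrete, below is a minimal sketch of an ACO-R loop with a periodic Nelder-Mead (simplex downhill) refinement via scipy; it is a simplified stand-in for the paper's five hybrids, which differ in how the initial population (Monte Carlo, LHS, TPLHD) and the initial simplex are chosen:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_acor(f, dim, k=10, n_ants=2, iters=300, q=0.1, xi=0.85,
                local_every=50, bounds=(-5.0, 5.0), seed=0):
    """ACO-R with a solution archive of size k; every local_every
    iterations the best archive member is refined by Nelder-Mead."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (k, dim))                 # solution archive
    F = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        order = np.argsort(F); X, F = X[order], F[order]
        ranks = np.arange(k)
        w = np.exp(-ranks**2 / (2 * (q * k)**2)); w /= w.sum()
        for _ in range(n_ants):
            g = rng.choice(k, p=w)                    # guide solution
            sigma = xi * np.abs(X - X[g]).mean(axis=0)
            x = np.clip(rng.normal(X[g], sigma + 1e-12), lo, hi)
            fx = f(x)
            if fx < F[-1]:                            # replace the worst
                X[-1], F[-1] = x, fx
                order = np.argsort(F); X, F = X[order], F[order]
        if (t + 1) % local_every == 0:                # simplex refinement
            res = minimize(f, X[0], method='Nelder-Mead')
            if res.fun < F[0]:
                X[0], F[0] = np.clip(res.x, lo, hi), res.fun
    return X[0], F[0]

print(hybrid_acor(lambda z: np.sum((z - 1.0)**2), dim=3))
```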
High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
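As an illustration of the analysis this enables, the sketch below computes time-averaged mean square displacements and a diffusion coefficient from a trajectory; it is my own toy example (a synthetic random walk), not the authors' tracking code:

```python
import numpy as np

def mean_square_displacement(traj, max_lag):
    """Time-averaged MSD at lags 1..max_lag; traj is (T,) for an angle
    or (T, d) for positions, sampled at a fixed frame rate."""
    traj = np.asarray(traj, float).reshape(len(traj), -1)
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

# toy 2D random walk: MSD = 2*d*D*t, so the slope / 4 estimates D (d = 2)
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal((10000, 2)), axis=0)
msd = mean_square_displacement(walk, 50)
print(np.polyfit(np.arange(1, 51), msd, 1)[0] / 4)   # ~0.5 for unit steps
```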
Trajectory generation for an on-road autonomous vehicle
NASA Astrophysics Data System (ADS)
Horst, John; Barbera, Anthony
2006-05-01
We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle in lane safely and efficiently within speed and acceleration maximums. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
NASA Technical Reports Server (NTRS)
Crouch, P. E.; Grossman, Robert
1992-01-01
This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x(sub 1),...,x(sub N)), where k = R or C, F denotes a differential operator with coefficients from R, and g member of R, we describe data structures and algorithms for efficiently computing the action of F on g. The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees and whose nodes are labeled with the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into cancellation of O(N(exp r)) expressions involving the coefficient functions and their derivatives.
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
NASA Astrophysics Data System (ADS)
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
Global Network Alignment in the Context of Aging.
Faisal, Fazle Elahi; Zhao, Han; Milenkovic, Tijana
2015-01-01
Analogous to sequence alignment, network alignment (NA) can be used to transfer biological knowledge across species between conserved network regions. NA faces two algorithmic challenges: 1) Which cost function to use to capture "similarities" between nodes in different networks? 2) Which alignment strategy to use to rapidly identify "high-scoring" alignments from all possible alignments? We "break down" existing state-of-the-art methods that use both different cost functions and different alignment strategies to evaluate each combination of their cost functions and alignment strategies. We find that a combination of the cost function of one method and the alignment strategy of another method beats the existing methods. Hence, we propose this combination as a novel superior NA method. Then, since human aging is hard to study experimentally due to long lifespan, we use NA to transfer aging-related knowledge from well annotated model species to poorly annotated human. By doing so, we produce novel human aging-related knowledge, which complements currently available knowledge about aging that has been obtained mainly by sequence alignment. We demonstrate significant similarity between topological and functional properties of our novel predictions and those of known aging-related genes. We are the first to use NA to learn more about aging.
High-NA metrology and sensing on Berkeley MET5
NASA Astrophysics Data System (ADS)
Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick
2017-03-01
In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comstock, Jennifer M.; Protat, Alain; McFarlane, Sally A.
2013-05-22
Ground-based radar and lidar observations obtained at the Department of Energy's Atmospheric Radiation Measurement Program's Tropical Western Pacific site located in Darwin, Australia are used to retrieve ice cloud properties in anvil and cirrus clouds. Cloud microphysical properties derived from four different retrieval algorithms (two radar-lidar and two radar-only algorithms) are compared by examining mean profiles and probability density functions of effective radius (Re), ice water content (IWC), extinction, ice number concentration, ice crystal fall speed, and vertical air velocity. Retrieval algorithm uncertainty is quantified using radiative flux closure exercises. The effect of uncertainty in retrieved quantities on the cloud radiative effect and radiative heating rates is presented. Our analysis shows that IWC compares well among algorithms, but Re shows significant discrepancies, which is attributed primarily to assumptions of particle shape. Uncertainty in Re and IWC translates into sometimes-large differences in cloud radiative effect (CRE), though the majority of cases have a CRE difference of roughly 10 W m⁻² on average. These differences, which we believe are primarily driven by the uncertainty in Re, can cause up to 2 K/day difference in the radiative heating rates between algorithms.
Yokomichi, Tomonobu; Morimoto, Kyoko; Oshima, Nana; Yamada, Yuriko; Fu, Liwei; Taketani, Shigeru; Ando, Masayoshi; Kataoka, Takao
2011-01-01
Pro-inflammatory cytokines, such as tumor necrosis factor (TNF)-α, induce the expression of a wide variety of genes, including intercellular adhesion molecule-1 (ICAM-1). Ursolic acid (3β-hydroxy-urs-12-en-28-oic acid) was identified to inhibit the cell-surface ICAM-1 expression induced by pro-inflammatory cytokines in human lung carcinoma A549 cells. Ursolic acid was found to inhibit the TNF-α-induced ICAM-1 protein expression almost completely, whereas the TNF-α-induced ICAM-1 mRNA expression and NF-κB signaling pathway were decreased only partially by ursolic acid. In line with these findings, ursolic acid prevented cellular protein synthesis as well as amino acid uptake, but did not obviously affect nucleoside uptake and the subsequent DNA/RNA syntheses. This inhibitory profile of ursolic acid was similar to that of the Na+/K+-ATPase inhibitor, ouabain, but not the translation inhibitor, cycloheximide. Consistent with this notion, ursolic acid was found to inhibit the catalytic activity of Na+/K+-ATPase. Thus, our present study reveals a novel molecular mechanism in which ursolic acid inhibits Na+/K+-ATPase activity and prevents the TNF-α-induced gene expression by blocking amino acid transport and cellular protein synthesis. PMID:24970122
Ohura, Takehiko; Sanada, Hiromi; Mino, Yoshio
2004-01-01
In recent years, the concept of cost-effectiveness, including medical delivery and health service fee systems, has become widespread in Japanese health care. In the field of pressure ulcer management, the recent introduction of penalty subtraction in the care fee system emphasizes the need for prevention and cost-effective care of pressure ulcers. Previous cost-effectiveness research on pressure ulcer management tended to focus only on "hardware" costs such as those for pharmaceuticals and medical supplies, while neglecting other cost aspects, particularly the cost of labor. Thus, cost-effectiveness in pressure ulcer care has not yet been fully established. To provide true cost-effectiveness data, a comparative prospective study was initiated in patients with stage II and III pressure ulcers. Considering the potential impact of the pressure reduction mattress on clinical outcome, the same type of pressure reduction mattress was used in all cases in the study. The cost analysis method used was Activity-Based Costing, which measures material and labor cost aspects on a daily basis. A reduction in the Pressure Sore Status Tool (PSST) score was used to measure clinical effectiveness. Patients were divided into three groups based on the treatment method and on the use of a consistent algorithm of wound care: 1. MC/A group, modern dressings with a treatment algorithm (control cohort). 2. TC/A group, traditional care (ointment and gauze) with a treatment algorithm. 3. TC/NA group, traditional care (ointment and gauze) without a treatment algorithm. The results revealed that MC/A is more cost-effective than both TC/A and TC/NA. This suggests that appropriate utilization of modern dressing materials and a pressure ulcer care algorithm would contribute to reduced health care costs, improved clinical results, and, ultimately, greater cost-effectiveness.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.
A reductionist approach to the analysis of learning in brain-computer interfaces.
Danziger, Zachary
2014-04-01
The complexity and scale of brain-computer interface (BCI) studies limit our ability to investigate how humans learn to use BCI systems. It also limits our capacity to develop adaptive algorithms needed to assist users with their control. Adaptive algorithm development is forced offline and typically uses static data sets. But this is a poor substitute for the online, dynamic environment where algorithms are ultimately deployed and interact with an adapting user. This work evaluates a paradigm that simulates the control problem faced by human subjects when controlling a BCI, but which avoids the many complications associated with full-scale BCI studies. Biological learners can be studied in a reductionist way as they solve BCI-like control problems, and machine learning algorithms can be developed and tested in closed loop with the subjects before being translated to full BCIs. The method is to map 19 joint angles of the hand (representing neural signals) to the position of a 2D cursor which must be piloted to displayed targets (a typical BCI task). An investigation is presented on how closely the joint angle method emulates BCI systems; a novel learning algorithm is evaluated, and a performance difference between genders is discussed.
NASA Astrophysics Data System (ADS)
Luo, Yanting; Zhang, Yongjun; Gu, Wanyi
2009-11-01
In large dynamic networks it is extremely difficult to maintain accurate routing information on all network nodes. Existing studies have illustrated the impact of imprecise state information on the performance of dynamic routing and wavelength assignment (RWA) algorithms. An algorithm called Bypass Based Optical Routing (BBOR), proposed by Xavier Masip-Bruin et al., can reduce the effects of inaccurate routing information in networks operating under the wavelength-continuity constraint. They later extended the BBOR mechanism (called the EBBOR mechanism below for convenience) to networks with sparse and limited wavelength conversion. However, EBBOR only considers the characteristics of wavelength conversion in the step of computing the bypass-paths, so its performance may decline as the degree of wavelength translation increases (this concept is explained again in the introduction). We demonstrate the issue through theoretical analysis and introduce a novel algorithm which modifies both the lightpath selection and the bypass-path computation in comparison to the EBBOR algorithm. Simulations show that the Modified EBBOR (MEBBOR) algorithm improves the blocking performance significantly in optical networks with conversion capability.
Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection
NASA Astrophysics Data System (ADS)
Brokish, Jeffrey; Sack, Paul; Bresler, Yoram
2010-04-01
In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone beam geometry combines speedup through algorithmic improvements provided by the hierarchical backprojection algorithm with speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated to a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology, which achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with small footprint, high reliability, and affordable cost.
NASA Astrophysics Data System (ADS)
Nguyen, D. T.; Bertholet, J.; Kim, J.-H.; O'Brien, R.; Booth, J. T.; Poulsen, P. R.; Keall, P. J.
2018-01-01
Increasing evidence suggests that intrafraction tumour motion monitoring needs to include both 3D translations and 3D rotations. Presently, methods to estimate the rotation motion require the 3D translation of the target to be known first. However, ideally, translation and rotation should be estimated concurrently. We present the first method to directly estimate six-degree-of-freedom (6DoF) motion from the target’s projection on a single rotating x-ray imager in real-time. This novel method is based on the linear correlations between the superior-inferior translations and the motion in the other five degrees-of-freedom. The accuracy of the method was evaluated in silico with 81 liver tumour motion traces from 19 patients with three implanted markers. The ground-truth motion was estimated using the current gold standard method where each marker’s 3D position was first estimated using a Gaussian probability method, and the 6DoF motion was then estimated from the 3D positions using an iterative method. The 3D position of each marker was projected onto a gantry-mounted imager with an imaging rate of 11 Hz. After an initial 110° gantry rotation (200 images), a correlation model between the superior-inferior translations and the five other DoFs was built using a least square method. The correlation model was then updated after each subsequent frame to estimate 6DoF motion in real-time. The proposed algorithm had an accuracy (±precision) of -0.03 ± 0.32 mm, -0.01 ± 0.13 mm and 0.03 ± 0.52 mm for translations in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions respectively; and, 0.07 ± 1.18°, 0.07 ± 1.00° and 0.06 ± 1.32° for rotations around the LR, SI and AP axes respectively on the dataset. The first method to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager was devised. The algorithm was evaluated using 81 motion traces from 19 liver patients and was found to have sub-mm and sub-degree accuracy.
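A minimal sketch of the correlation-model step is given below, assuming the SI translation and the other five DoFs have already been extracted for past frames (in the actual method, the SI value itself must first be estimated from the 2D marker projections, which is omitted here):

```python
import numpy as np

class SICorrelationModel:
    """Linear model tying each of 5 DoFs (LR and AP translations, three
    rotations) to the superior-inferior (SI) translation: dof = a*SI + b.
    Refit by least squares every time a new observation arrives."""

    def __init__(self):
        self.si, self.dofs = [], []     # history of SI values and 5-vectors

    def update(self, si_value, dof_vector):
        self.si.append(si_value)
        self.dofs.append(dof_vector)

    def predict(self, si_value):
        A = np.vstack([self.si, np.ones(len(self.si))]).T   # [SI, 1]
        coeff, *_ = np.linalg.lstsq(A, np.asarray(self.dofs), rcond=None)
        return coeff[0] * si_value + coeff[1]               # (5,) estimate

# after an initial arc (e.g. 200 frames), predict 5 DoFs from measured SI
model = SICorrelationModel()
rng = np.random.default_rng(0)
for t in range(200):
    si = np.sin(0.1 * t)                       # breathing-like SI motion
    model.update(si, 0.3 * si + 0.01 * rng.standard_normal(5))
print(model.predict(0.5))                      # roughly 0.15 in each DoF
```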
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
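The cache-utilization idea is easiest to see in two dimensions. The following Python sketch, a stand-in for the library's C/CUDA kernels, transposes tile by tile so both the reads and the writes stay within cache-sized blocks:

```python
import numpy as np

def blocked_transpose(a, blk=64):
    """Transpose a 2D array tile by tile; the same access-locality idea
    extends to general N-dimensional permutations in the library."""
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, blk):
        for j in range(0, m, blk):
            out[j:j + blk, i:i + blk] = a[i:i + blk, j:j + blk].T
    return out

a = np.arange(12).reshape(3, 4)
assert (blocked_transpose(a, blk=2) == a.T).all()
```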
Park, S W; Bebakar, W M W; Hernandez, P G; Macura, S; Hersløv, M L; de la Rosa, R
2017-02-01
To compare the efficacy and safety of two titration algorithms for insulin degludec/insulin aspart (IDegAsp) administered once daily with metformin in participants with insulin-naïve Type 2 diabetes mellitus. This open-label, parallel-group, 26-week, multicentre, treat-to-target trial randomly allocated participants (1:1) to two titration arms. The Simple algorithm titrated IDegAsp twice weekly based on a single pre-breakfast self-monitored plasma glucose (SMPG) measurement. The Stepwise algorithm titrated IDegAsp once weekly based on the lowest of three consecutive pre-breakfast SMPG measurements. In both groups, IDegAsp once daily was titrated to pre-breakfast plasma glucose values of 4.0-5.0 mmol/l. The primary endpoint was change from baseline in HbA1c (%) after 26 weeks. Change in HbA1c at Week 26 was -14.6 mmol/mol (-1.3%) (to 52.4 mmol/mol; 6.9%) with IDegAsp Simple and -11.9 mmol/mol (-1.1%) (to 54.7 mmol/mol; 7.2%) with IDegAsp Stepwise. The estimated between-group treatment difference was -1.97 mmol/mol [95% confidence interval (CI) -4.1, 0.2] (-0.2%, 95% CI -0.4, 0.02), confirming the non-inferiority of IDegAsp Simple to IDegAsp Stepwise (non-inferiority limit of ≤ 0.4%). Mean reductions in fasting plasma glucose and 8-point SMPG profiles were similar between groups. Rates of confirmed hypoglycaemia were lower for IDegAsp Stepwise [2.1 per patient-year of exposure (PYE)] vs. IDegAsp Simple (3.3 per PYE) (estimated rate ratio IDegAsp Simple/IDegAsp Stepwise 1.8; 95% CI 1.1, 2.9). Nocturnal hypoglycaemia rates were similar between groups. No severe hypoglycaemic events were reported. In participants with insulin-naïve Type 2 diabetes mellitus, the IDegAsp Simple titration algorithm improved HbA1c levels as effectively as the Stepwise titration algorithm. Hypoglycaemia rates were lower in the Stepwise arm. © 2016 The Authors. Diabetic Medicine published by John Wiley & Sons Ltd on behalf of Diabetes UK.
Preliminary Study of Turbulence for a Lobed Body in Hypersonic Flight
2013-02-22
physics. Modest improvements in numerical algorithms, particularly those for solving partial differential equations (PDEs), can now be fully ... dramatically.[7] In slower speed flow fields, this energy is absorbed mostly in molecular translational and rotational modes, but for hypersonic ...
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
Clinical decision-making and secondary findings in systems medicine.
Fischer, T; Brothers, K B; Erdmann, P; Langanke, M
2016-05-21
Systems medicine is the name for an assemblage of scientific strategies and practices that include bioinformatics approaches to human biology (especially systems biology); "big data" statistical analysis; and medical informatics tools. Whereas personalized and precision medicine involve similar analytical methods applied to genomic and medical record data, systems medicine draws on these as well as other sources of data. Given this distinction, the clinical translation of systems medicine poses a number of important ethical and epistemological challenges for researchers working to generate systems medicine knowledge and clinicians working to apply it. This article focuses on three key challenges: First, we will discuss the conflicts in decision-making that can arise when healthcare providers committed to principles of experimental medicine or evidence-based medicine encounter individualized recommendations derived from computer algorithms. We will explore in particular whether controlled experiments, such as comparative effectiveness trials, should mediate the translation of systems medicine, or if instead individualized findings generated through "big data" approaches can be applied directly in clinical decision-making. Second, we will examine the case of the Riyadh Intensive Care Program Mortality Prediction Algorithm, pejoratively referred to as the "death computer," to demonstrate the ethical challenges that can arise when big-data-driven scoring systems are applied in clinical contexts. We argue that the uncritical use of predictive clinical algorithms, including those envisioned for systems medicine, challenges basic understandings of the doctor-patient relationship. Third, we will build on the recent discourse on secondary findings in genomics and imaging to draw attention to the important implications of secondary findings derived from the joint analysis of data from diverse sources, including data recorded by patients in an attempt to realize their "quantified self." This paper examines possible ethical challenges that are likely to be raised as systems medicine is translated into clinical medicine. These include the epistemological challenges for clinical decision-making, the use of scoring systems optimized by big data techniques, and the risk that incidental and secondary findings will significantly increase. While some ethical implications remain hypothetical, we should use the opportunity to prospectively identify challenges to avoid making foreseeable mistakes when systems medicine inevitably arrives in routine care.
NASA Astrophysics Data System (ADS)
Ervin, Katherine; Shipman, Steven
2017-06-01
While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to address this bottleneck by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how AUTOFIT was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
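For readers unfamiliar with PSO, a minimal generic implementation is sketched below (not the AUTOFIT-integrated code); it minimizes the sphere function, a standard fitness benchmark of the kind mentioned above. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer (minimization)."""
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.apply_along_axis(fitness, 1, x)
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sphere function: global minimum 0 at the origin.
best, best_f = pso(lambda p: float(np.sum(p**2)), dim=3)
print(best, best_f)
```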
Jin, Sang-Man; Oh, Seung-Hoon; Oh, Bae Jun; Suh, Sunghwan; Bae, Ji Cheol; Lee, Jung Hee; Lee, Myung-Shik; Lee, Moon-Kyu; Kim, Kwang-Won; Kim, Jae Hyeon
2014-01-01
While a few studies have demonstrated the benefit of PEGylation in islet transplantation, most have employed renal subcapsular models and none have performed direct comparisons of islet mass in intraportal islet transplantation using islet magnetic resonance imaging (MRI). In this study, our aim was to demonstrate the benefit of PEGylation in the early post-transplant period of intraportal islet transplantation with a novel algorithm for islet MRI. Islets were PEGylated after ferucarbotran labeling in a rat syngeneic intraportal islet transplantation model followed by comparisons of post-transplant glycemic levels in recipient rats infused with PEGylated (n = 12) and non-PEGylated (n = 13) islets. The total area of hypointense spots and the number of hypointense spots larger than 1.758 mm(2) of PEGylated and non-PEGylated islets were quantitatively compared. The total area of hypointense spots (P < 0.05) and the number of hypointense spots larger than 1.758 mm(2) (P < 0.05) were higher in the PEGylated islet group 7 and 14 days post-transplantation (DPT). These results translated into better post-transplant outcomes in the PEGylated islet group 28 DPT. In validation experiments, MRI parameters obtained 1, 7, and 14 DPT predicted normoglycemia 4 wk post-transplantation. We directly demonstrated the benefit of islet PEGylation in protection against nonspecific islet destruction in the early post-transplant period of intraportal islet transplantation using a novel algorithm for islet MRI. This novel algorithm could serve as a useful tool to demonstrate such benefit in future clinical trials of islet transplantation using PEGylated islets.
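A plausible sketch of the spot quantification step described above, assuming a binary segmentation of hypointense pixels is already available; the mask and pixel area are hypothetical, and only the 1.758 mm² area threshold comes from the abstract.

```python
import numpy as np
from scipy import ndimage

def quantify_hypointense_spots(mask, pixel_area_mm2, min_spot_mm2=1.758):
    """Total spot area and count of spots above an area threshold,
    from a binary mask of hypointense pixels (segmentation not shown)."""
    labels, n = ndimage.label(mask)                        # connected components
    areas = ndimage.sum(mask, labels, range(1, n + 1)) * pixel_area_mm2
    return float(areas.sum()), int((areas > min_spot_mm2).sum())

# Hypothetical mask with two spots; pixel area chosen so one exceeds the threshold.
mask = np.zeros((16, 16), dtype=bool)
mask[2:6, 2:6] = True      # 16-pixel spot -> 4.0 mm^2
mask[10:12, 10:12] = True  # 4-pixel spot  -> 1.0 mm^2
print(quantify_hypointense_spots(mask, pixel_area_mm2=0.25))  # (5.0, 1)
```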
WE-H-202-04: Advanced Medical Image Registration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, G.
Deformable image registration has now been commercially available for several years, with solid performance in a number of sites and for several applications including contour and dose mapping. However, more complex applications have arisen, such as assessing response to radiation therapy over time, registering images pre- and post-surgery, and auto-segmentation from atlases. These applications require innovative registration algorithms to achieve accurate alignment. The goal of this session is to highlight emerging registration technology and these new applications. The state of the art in image registration will be presented from an engineering perspective. Translational clinical applications will also be discussed to tie these new registration approaches together with imaging and radiation therapy applications in specific diseases such as cervical and lung cancers. Learning Objectives: To understand developing techniques and algorithms in deformable image registration that are likely to translate into clinical tools in the near future. To understand emerging imaging and radiation therapy clinical applications that require such new registration algorithms. Research supported in part by the National Institutes of Health under award numbers P01CA059827, R01CA166119, and R01CA166703. Disclosures: Phillips Medical Systems (Hugo), Roger Koch (Christensen) support, Varian Medical Systems (Brock), licensing agreements from RaySearch (Brock) and Varian (Hugo). K. Brock, Licensing Agreement - RaySearch Laboratories. Research Funding - Varian Medical Systems; G. Hugo, Research grant from National Institutes of Health, award number R01CA166119; G. Christensen, Research support from NIH grants CA166119 and CA166703 and a gift from Roger Koch. There are no conflicts of interest.
Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces
NASA Astrophysics Data System (ADS)
Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena
2013-06-01
Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that an SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
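The decoder mapped onto the SNN is a standard Kalman filter; a minimal floating-point version of that recursion is sketched below for reference (the NEF mapping itself is not shown). The toy system matrices are hypothetical.

```python
import numpy as np

def kalman_decode(A, C, W, Q, x0, P0, observations):
    """Standard Kalman filter for x_t = A x_{t-1} + w,  y_t = C x_t + q."""
    x, P = x0, P0
    states = []
    for y in observations:
        # Predict step
        x = A @ x
        P = A @ P @ A.T + W
        # Update step
        S = C @ P @ C.T + Q
        K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x0)) - K @ C) @ P
        states.append(x.copy())
    return np.array(states)

# Toy 1D kinematic state decoded from two hypothetical 'firing rate' channels.
A = np.array([[1.0]]); C = np.array([[0.8], [1.2]])
W = np.array([[0.01]]); Q = 0.1 * np.eye(2)
ys = [np.array([0.5, 0.9]), np.array([0.7, 1.1])]
print(kalman_decode(A, C, W, Q, np.zeros(1), np.eye(1), ys))
```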
WE-H-202-03: Accounting for Large Geometric Changes During Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hugo, G.
2016-06-15
Deformable image registration has now been commercially available for several years, with solid performance in a number of sites and for several applications including contour and dose mapping. However, more complex applications have arisen, such as assessing response to radiation therapy over time, registering images pre- and post-surgery, and auto-segmentation from atlases. These applications require innovative registration algorithms to achieve accurate alignment. The goal of this session is to highlight emerging registration technology and these new applications. The state of the art in image registration will be presented from an engineering perspective. Translational clinical applications will also be discussed to tie these new registration approaches together with imaging and radiation therapy applications in specific diseases such as cervical and lung cancers. Learning Objectives: To understand developing techniques and algorithms in deformable image registration that are likely to translate into clinical tools in the near future. To understand emerging imaging and radiation therapy clinical applications that require such new registration algorithms. Research supported in part by the National Institutes of Health under award numbers P01CA059827, R01CA166119, and R01CA166703. Disclosures: Phillips Medical Systems (Hugo), Roger Koch (Christensen) support, Varian Medical Systems (Brock), licensing agreements from RaySearch (Brock) and Varian (Hugo). K. Brock, Licensing Agreement - RaySearch Laboratories. Research Funding - Varian Medical Systems; G. Hugo, Research grant from National Institutes of Health, award number R01CA166119; G. Christensen, Research support from NIH grants CA166119 and CA166703 and a gift from Roger Koch. There are no conflicts of interest.
WE-H-202-02: Biomechanical Modeling of Anatomical Response Over the Course of Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, K.
2016-06-15
Deformable image registration has now been commercially available for several years, with solid performance in a number of sites and for several applications including contour and dose mapping. However, more complex applications have arisen, such as assessing response to radiation therapy over time, registering images pre- and post-surgery, and auto-segmentation from atlases. These applications require innovative registration algorithms to achieve accurate alignment. The goal of this session is to highlight emerging registration technology and these new applications. The state of the art in image registration will be presented from an engineering perspective. Translational clinical applications will also be discussed to tie these new registration approaches together with imaging and radiation therapy applications in specific diseases such as cervical and lung cancers. Learning Objectives: To understand developing techniques and algorithms in deformable image registration that are likely to translate into clinical tools in the near future. To understand emerging imaging and radiation therapy clinical applications that require such new registration algorithms. Research supported in part by the National Institutes of Health under award numbers P01CA059827, R01CA166119, and R01CA166703. Disclosures: Phillips Medical Systems (Hugo), Roger Koch (Christensen) support, Varian Medical Systems (Brock), licensing agreements from RaySearch (Brock) and Varian (Hugo). K. Brock, Licensing Agreement - RaySearch Laboratories. Research Funding - Varian Medical Systems; G. Hugo, Research grant from National Institutes of Health, award number R01CA166119; G. Christensen, Research support from NIH grants CA166119 and CA166703 and a gift from Roger Koch. There are no conflicts of interest.
WE-H-202-01: Memorial Introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, N.
2016-06-15
Deformable image registration has now been commercially available for several years, with solid performance in a number of sites and for several applications including contour and dose mapping. However, more complex applications have arisen, such as assessing response to radiation therapy over time, registering images pre- and post-surgery, and auto-segmentation from atlases. These applications require innovative registration algorithms to achieve accurate alignment. The goal of this session is to highlight emerging registration technology and these new applications. The state of the art in image registration will be presented from an engineering perspective. Translational clinical applications will also be discussed to tie these new registration approaches together with imaging and radiation therapy applications in specific diseases such as cervical and lung cancers. Learning Objectives: To understand developing techniques and algorithms in deformable image registration that are likely to translate into clinical tools in the near future. To understand emerging imaging and radiation therapy clinical applications that require such new registration algorithms. Research supported in part by the National Institutes of Health under award numbers P01CA059827, R01CA166119, and R01CA166703. Disclosures: Phillips Medical Systems (Hugo), Roger Koch (Christensen) support, Varian Medical Systems (Brock), licensing agreements from RaySearch (Brock) and Varian (Hugo). K. Brock, Licensing Agreement - RaySearch Laboratories. Research Funding - Varian Medical Systems; G. Hugo, Research grant from National Institutes of Health, award number R01CA166119; G. Christensen, Research support from NIH grants CA166119 and CA166703 and a gift from Roger Koch. There are no conflicts of interest.
MultiMiTar: a novel multi objective optimization based miRNA-target prediction method.
Mitra, Ramkrishna; Bandyopadhyay, Sanghamitra
2011-01-01
Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to the lack of a gold standard of negative examples, miRNA-targeting site context specific relevant features and an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at the protein level. In this article, we introduce an efficient miRNA-target prediction system MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and the selection of biologically relevant miRNA-targeting site context specific features. The features are selected by using a novel feature selection technique AMOSA-SVM, that integrates the multiobjective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from -0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm.
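The two headline metrics, MCC and average class-wise accuracy, can be computed from confusion-matrix counts as sketched below; the counts in the example are hypothetical.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def average_class_accuracy(tp, tn, fp, fn):
    """Mean of per-class recalls (sensitivity and specificity)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

# Hypothetical counts, for illustration only.
print(mcc(80, 70, 30, 20), average_class_accuracy(80, 70, 30, 20))
```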
[RadLex - German version: a radiological lexicon for indexing image and report information].
Marwede, D; Daumke, P; Marko, K; Lobsien, D; Schulz, S; Kahn, T
2009-01-01
Since 2003 the Radiological Society of North America (RSNA) has been developing a lexicon of standardized radiological terms (RadLex) intended to support the structured reporting of imaging observations and the indexing of teaching cases. The aim of this study was to translate the first version of the lexicon (1-2007) into German and to implement a language-independent online term browser. RadLex version 1-2007 contains 6303 terms in nine main categories. Two radiologists independently translated the lexicon using medical dictionaries. Terms translated differently were revised and translated by consensus. For the development of an online term browser, a text processing algorithm called morphosemantic indexing was used, which splits up words into small semantic units and compares those units to language-specific subword thesauri. In total 6240 of 6303 terms (99%) were translated. Of those terms 3965 were German, 1893 were Latin, 359 were multilingual, and 23 were English terms that are also used in German and were therefore maintained. The online term browser supports a language-independent term search in RadLex (German/English) and other common medical terminology (e.g., ICD-10). The term browser displays term hierarchies and translations in different frames and the complexity of the result lists can be adapted by the user. RadLex version 1-2007 developed by the RSNA is now available in German and can be accessed online through a term browser with an efficient search function. This is an important precondition for the future comparison of national and international indexed radiological examination results and the interoperability between digital teaching resources.
A stable compound of helium and sodium at high pressure
Dong, Xiao; Oganov, Artem R.; Goncharov, Alexander F.; ...
2017-02-06
Helium is generally understood to be chemically inert and this is due to its extremely stable closed-shell electronic configuration, zero electron affinity and an unsurpassed ionization potential. It is not known to form thermodynamically stable compounds, except a few inclusion compounds. Here, using the ab initio evolutionary algorithm USPEX and subsequent high-pressure synthesis in a diamond anvil cell, we report the discovery of a thermodynamically stable compound of helium and sodium, Na2He, which has a fluorite-type structure and is stable at pressures >113 GPa. We show that the presence of He atoms causes strong electron localization and makes this material insulating. This phase is an electride, with electron pairs localized in interstices, forming eight-centre two-electron bonds within empty Na8 cubes. As a result, we also predict the existence of Na2HeO with a similar structure at pressures above 15 GPa.
A stable compound of helium and sodium at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xiao; Oganov, Artem R.; Goncharov, Alexander F.
Helium is generally understood to be chemically inert and this is due to its extremely stable closed-shell electronic configuration, zero electron affinity and an unsurpassed ionization potential. It is not known to form thermodynamically stable compounds, except a few inclusion compounds. Here, using the ab initio evolutionary algorithm USPEX and subsequent high-pressure synthesis in a diamond anvil cell, we report the discovery of a thermodynamically stable compound of helium and sodium, Na2He, which has a fluorite-type structure and is stable at pressures >113 GPa. We show that the presence of He atoms causes strong electron localization and makes this material insulating. This phase is an electride, with electron pairs localized in interstices, forming eight-centre two-electron bonds within empty Na8 cubes. We also predict the existence of Na2HeO with a similar structure at pressures above 15 GPa.
Bottagisio, Marta; Lovati, Arianna B; Lopa, Silvia; Moretti, Matteo
2015-08-01
Bone defects are a severe clinical burden, and cell therapy offers an alternative strategy that exploits the features of bone marrow stromal cells (BMSCs). Sheep are a suitable preclinical orthopedic model because of their similarities with humans. This study compares the influence of two phosphate sources combined with bone morphogenetic protein-2 (BMP-2) on the osteogenic potential of human and ovine BMSCs. β-Glycerophosphate (β-GlyP) and monosodium phosphate (NaH2PO4) were used as organic and inorganic phosphate sources. Osteogenic differentiation of the BMSCs was assessed by calcified matrix, alkaline phosphatase (ALP) activity, and gene expression analysis. A higher calcified matrix deposition was detected in BMSCs cultured with NaH2PO4. Although no significant differences were detected among media for human BMSCs, β-GlyP with or without BMP-2 determined a positive trend in ALP levels compared to NaH2PO4. In contrast, NaH2PO4 had a positive effect on ALP levels in ovine BMSCs. β-GlyP better supported the expression of COL1A1 in human BMSCs, whereas all media enhanced RUNX2 and SPARC expression. Ovine BMSCs responded poorly to any media for RUNX2, COL1A1, and SPARC expression. NaH2PO4 improved calcified matrix deposition without upregulating the transcriptional expression of osteogenic markers. A further optimization of differentiation protocols needs to be performed to translate the procedures from preclinical to clinical models.
Topology and Function of Human P-Glycoprotein in Multidrug Resistant Breast Cancer Cells.
1995-09-01
membrane orientation and insertion process co-translationally. For the C-terminal half of Pgp, little is known about the regulatory mechanisms of...solution (in mM: 250 sucrose, 10 Tris-HCl, pH 7.5, 150 NaCl) for further processing. For experiments requiring protease digestion and endoglycosidase...steps), 40 ms after the start of the voltage pulse. Bath and pipette solution compositions were as follows (in mM): NMDG-Cl pipette (280 mosmol/kg
2016-08-08
structure for RC-2117, the Liko Nā Pilina project. Table 2. List of sites where plant traits were collected. Table 3. Master list of species with... structure and ecosystem services. The Hawaiian name, Liko Nā Pilina, translates to growing/budding new relationships, and reflects the species...carbon (C) storage and minimize C turnover, provide the most benefits for native plant biodiversity, and allow for open understory structure with high
Physiological Research on the Centrifuge in Flight Medical Examinations and Selection System
1988-11-09
and veins) when the pressure is ... them ... in vascular tension regulation ... mechanisms and the renin-angiotensin syste...AND SELECTION SYSTEM by P.M. Suvorov. Approved for public release; distribution unlimited. FTD-ID(RS)T-0892-88, HUMAN TRANSLATION...SELECTION SYSTEM By: P.M. Suvorov. English pages: 39. Source: Fiziologicheskiye Issledovaniya na Tsentrifuge v Praktike Vrachebno-Letnoy Ekspertizy i Sisteme
Data fusion for a vision-aided radiological detection system: Calibration algorithm performance
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
2018-05-01
In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments were devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR Sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human errors. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Use of the vision sensor to determine the location of a sensor would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) to generate a detector pseudo-location to be used for data analysis later. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
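The reported calibration-difference is a plain Euclidean distance between predicted and measured detector locations; a minimal sketch, with hypothetical coordinates chosen to give roughly the 20 cm figure quoted above:

```python
import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    """Euclidean distance between the algorithm-predicted and the
    hand-measured detector location, used to score the calibration."""
    return float(np.linalg.norm(np.asarray(predicted_xyz) - np.asarray(measured_xyz)))

# Hypothetical coordinates in metres: about 0.20 m (20 cm) difference.
print(calibration_difference([1.00, 2.00, 0.50], [1.12, 2.15, 0.55]))
```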
Desiderata for computable representations of electronic health records-driven phenotype algorithms
Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A
2015-01-01
Background: Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). Methods: A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. Results: We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. Conclusion: A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. PMID:26342218
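As a toy illustration of desideratum (4), phenotype logic can be expressed as set operations over patient-ID sets returned by (hypothetical) EHR queries:

```python
# Hypothetical patient-ID sets returned by EHR queries.
diabetes_dx = {"p1", "p2", "p3", "p5"}   # patients with a diagnosis code
metformin_rx = {"p2", "p3", "p4", "p5"}  # patients with a medication order
t1dm_dx = {"p5"}                         # exclusion criterion

# Case definition: diagnosis AND medication AND NOT exclusion.
cases = (diabetes_dx & metformin_rx) - t1dm_dx
print(sorted(cases))   # -> ['p2', 'p3']
```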
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
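A minimal sketch of the numerical core: solving an equality-constrained linear least-squares problem through its KKT system, the same machinery used above to enforce articulation constraints (the toy matrices are hypothetical, not a motion-estimation setup):

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system
    [[2A^T A, C^T], [C, 0]] [x; lam] = [2A^T b; d]."""
    n, p = A.shape[1], C.shape[0]
    kkt = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((p, p))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # drop the Lagrange multipliers

# Toy example: fit two parameters constrained to sum to exactly 1.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.9, 0.3, 1.0])
x = constrained_lstsq(A, b, C=np.array([[1.0, 1.0]]), d=np.array([1.0]))
print(x, x.sum())
```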
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-orient design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm. we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Fed back to the user conflicts detected by our algorithm are the basis for deductive-based debugging of requirements and domain theory in very early development stages. Our approach allows to generate explanations oil why there is a conflict and which parts of the specifications are affected.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a very critical issue in all aspects of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the shape context local descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main process is to use the shape context descriptors during the random walk iterations to control the random walk probability matrix. We calculate a bias matrix from the descriptors and then use it during the iterations to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Through extensive experiments based on real images and random synthetic point sets, and comparisons with other algorithms, it is confirmed that this new method can produce excellent results in graphic matching.
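A simplified sketch of the reweighted-walk idea: the walk over candidate correspondences is biased at each iteration by a shape-context similarity vector. The 3-match affinity matrix and bias below are hypothetical, and the full method's affinity construction and discretization step are omitted.

```python
import numpy as np

def reweighted_random_walk(W, bias, alpha=0.2, iters=100):
    """Iterate x <- (1-alpha) * P^T x + alpha * bias, where P is the
    row-normalized affinity matrix over candidate correspondences and
    'bias' encodes shape-context similarity (the reweighting jump)."""
    P = W / W.sum(axis=1, keepdims=True)
    x = np.full(len(W), 1.0 / len(W))     # uniform start
    bias = bias / bias.sum()
    for _ in range(iters):
        x = (1 - alpha) * P.T @ x + alpha * bias
    return x / x.sum()

# Tiny affinity matrix over 3 candidate matches, one strongly biased.
W = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.4], [0.2, 0.4, 1.0]])
print(reweighted_random_walk(W, bias=np.array([0.1, 0.1, 0.8])))
```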
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points as one goes along, preserving the topology of the shape. On the other hand, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, with this method the medial axis obtained is disconnected, so its points must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization with a single scan, thus allowing the processing of unbounded images. In our study, this method is applied to microscanner images of trabecular bone. We wish to characterize the bone microarchitecture in order to quantify bone integrity.
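The distance-transform route described above can be sketched as follows: compute the distance map of the shape and take its local maxima as medial-axis candidates. As the abstract notes, these candidates are disconnected and still need a connection (homotopic) step, which is what the authors' single-scan algorithm provides; this sketch shows only the naive multi-scan baseline.

```python
import numpy as np
from scipy import ndimage

# A simple rectangular shape as a stand-in for a bone cross-section.
shape = np.zeros((11, 25), dtype=bool)
shape[2:9, 3:22] = True

# Distance transform: each foreground pixel's distance to the background.
dist = ndimage.distance_transform_edt(shape)

# Medial-axis candidates: local maxima of the distance map. These are
# generally disconnected and still require a connection step to become
# a one-point-wide skeleton.
local_max = (dist == ndimage.maximum_filter(dist, size=3)) & shape
print(np.argwhere(local_max)[:5])
```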
A novel artificial immune clonal selection classification and rule mining with swarm learning model
NASA Astrophysics Data System (ADS)
Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.
2013-06-01
Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the artificial immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of clonal selection and the speed and self-organisation merits of particle swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm required less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy and comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with five commonly used CSAs, namely: AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA using eight benchmark datasets. We also compared the classification accuracy of CS2 with five other methods, namely: Naïve Bayes, SVM, MLP, CART, and RBF. The results show that the proposed algorithm is comparable to the 10 studied algorithms. As a result, the hybridisation of CSA and PSO can develop their respective merits, compensate for each other's defects, and improve both search quality and speed.
In vivo self-gated ²³Na MRI at 7 T using an oval-shaped body resonator.
Platt, Tanja; Umathum, Reiner; Fiedler, Thomas M; Nagel, Armin M; Bitz, Andreas K; Maier, Florian; Bachert, Peter; Ladd, Mark E; Wielpütz, Mark O; Kauczor, Hans-Ulrich; Behl, Nicolas G R
2018-02-09
This work faces three challenges of sodium (²³Na) torso MRI on the way to quantitative ²³Na MRI: Development of a ²³Na radiofrequency transmit and receive coil covering a large part of the human body in width and length for ²³Na MRI at 7 T; reduction of blurring due to respiration in free-breathing ²³Na MRI using a self-gating approach; and reduction of image noise using a compressed-sensing reconstruction. An oval-shaped birdcage resonator with a large field of view of (400 mm)³ and a homogeneous transmit and receive field distribution was designed, simulated, and implemented on a 7T MR system. In free-breathing 3-dimensional radial ²³Na MRI (acquisition time ≈ 30 minutes), retrospective respiratory self-gating was applied, which sorts the acquired projections into two respiratory states based on the intrinsic respiration-dependent signal changes. Furthermore, a 3-dimensional dictionary-learning compressed-sensing reconstruction was applied. The developed body coil provided homogeneous radiofrequency excitation (flip angle error of 4.9% in a central region of interest of 23 × 13 × 10 cm³) and homogeneous signal reception. The self-gating approach allowed for separation of the full data set into two subsets associated with different respiratory states (inhaled and exhaled), and thereby reduced blurring due to respiration in the separated images. Image noise was markedly reduced by the compressed-sensing algorithm. The presented body coil enables full body width ²³Na MRI with long z-axis coverage at 7 T for the first time. Additionally, the retrospective respiratory self-gating performance is demonstrated for free-breathing lung and abdominal ²³Na MRI in 3 subjects. © 2018 International Society for Magnetic Resonance in Medicine.
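A minimal sketch of the retrospective self-gating step, under the assumption that a respiration-dependent navigator signal (for instance, the magnitude of each projection's k-space centre) is available per projection; the median split into two states is an illustrative choice, not necessarily the authors' exact criterion.

```python
import numpy as np

def self_gate(projections, nav_signal):
    """Sort radial projections into two respiratory states by thresholding
    a respiration-dependent navigator signal at its median."""
    nav_signal = np.asarray(nav_signal)
    inhale = nav_signal >= np.median(nav_signal)
    return projections[inhale], projections[~inhale]

# Hypothetical data: 8 projections, sinusoidal 'respiration' navigator.
proj = np.arange(8)[:, None] * np.ones((8, 4))   # stand-in projection data
nav = np.sin(np.linspace(0, 2 * np.pi, 8))
inhaled, exhaled = self_gate(proj, nav)
print(len(inhaled), len(exhaled))   # -> 4 4
```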
NASA Astrophysics Data System (ADS)
Langowski, M. P.; von Savigny, C.; Burrows, J. P.; Rozanov, V. V.; Dunker, T.; Hoppe, U.-P.; Sinnhuber, M.; Aikin, A. C.
2015-07-01
An algorithm has been developed for the retrieval of sodium atom (Na) number density on a latitude and altitude grid from SCIAMACHY limb measurements of the Na resonance fluorescence. The results are obtained between 50 and 150 km altitude and the resulting global seasonal variations of Na are analysed. The retrieval approach is adapted from that used for the retrieval of magnesium atom (Mg) and magnesium ion (Mg+) number density profiles recently reported by Langowski et al. (2014). Monthly mean values of Na are presented as a function of altitude and latitude. This data set was retrieved from the 4 years of spectroscopic limb data of the SCIAMACHY mesosphere and lower thermosphere (MLT) measurement mode. The Na layer has a nearly constant altitude of 90-93 km for all latitudes and seasons, and has a full width at half maximum of 5-15 km. Small but substantial seasonal variations in Na are identified for latitudes less than 40°, where the maximum Na number densities are 3000-4000 atoms cm⁻³. At mid to high latitudes a clear seasonal variation with a winter maximum of up to 6000 atoms cm⁻³ is observed. The high latitudes, which are only measured in the summer hemisphere, have lower number densities with peak densities being approximately 1000 Na atoms cm⁻³. The full width at half maximum of the peak varies strongly at high latitudes and is 5 km near the polar summer mesopause, while it exceeds 10 km at lower latitudes. In summer the Na atom concentration at high latitudes and at altitudes below 88 km is significantly smaller than that at mid latitudes. The results are compared with other observations and models and there is overall a good agreement with these.
Phase measurements of EUV mask defects
Claus, Rene A.; Wang, Yow-Gwo; Wojdyla, Antoine; ...
2015-02-22
Extreme Ultraviolet (EUV) Lithography mask defects were examined on the actinic mask imaging system, SHARP, at Lawrence Berkeley National Laboratory. Also, a quantitative phase retrieval algorithm based on the Weak Object Transfer Function was applied to the measured through-focus aerial images to examine the amplitude and phase of the defects. The accuracy of the algorithm was demonstrated by comparing the results of measurements using a phase contrast zone plate and a standard zone plate. Using partially coherent illumination to measure frequencies that would otherwise fall outside the numerical aperture (NA), it was shown that some defects are smaller than the conventional resolution of the microscope. Programmed defects of various sizes were measured and shown to have both an amplitude and a phase component, which the algorithm is able to recover.
Computing aggregate properties of preimages for 2D cellular automata.
Beer, Randall D
2017-11-01
Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm, incremental aggregation, that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
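The aggregation idea can be illustrated in one dimension, where it is easy to verify: instead of enumerating all 2^n candidate preimages of a ring configuration, the sketch below scans the ring once, carrying counts aggregated over the small set of boundary states. This is a 1D analogue only; the paper's algorithm handles the much harder 2D case.

```python
from itertools import product

def rule_fn(rule_number):
    """Elementary CA rule table: neighbourhood (a, b, c) -> bit of the rule."""
    return lambda a, b, c: (rule_number >> (a * 4 + b * 2 + c)) & 1

def count_preimages(y, rule_number):
    """Count preimages of ring configuration y (length >= 3) under an
    elementary CA, aggregating counts over (x_{i-1}, x_i) boundary states."""
    n, rule = len(y), rule_fn(rule_number)
    total = 0
    for x0, x1 in product((0, 1), repeat=2):   # fix the first two cells
        states = {(x0, x1): 1}                 # (x_{i-1}, x_i) -> count
        for i in range(1, n - 1):              # extend by x_{i+1}, check cell i
            nxt = {}
            for (a, b), cnt in states.items():
                for c in (0, 1):
                    if rule(a, b, c) == y[i]:
                        nxt[(b, c)] = nxt.get((b, c), 0) + cnt
            states = nxt
        # Close the ring: cells n-1 and 0 have wrapping neighbourhoods.
        for (a, b), cnt in states.items():     # (a, b) = (x_{n-2}, x_{n-1})
            if rule(a, b, x0) == y[n - 1] and rule(b, x0, x1) == y[0]:
                total += cnt
    return total

# Rule 90 (XOR of the two neighbours): the all-zeros ring of length 4
# has exactly 4 preimages (x0 = x2 and x1 = x3 free).
print(count_preimages([0, 0, 0, 0], 90))   # -> 4
```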
Under-reported data analysis with INAR-hidden Markov chains.
Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David
2016-11-20
In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most-probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
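The forward algorithm underlying the likelihood computation can be sketched as the usual scaled recursion below; the two-state Poisson emission model is a hypothetical stand-in, much simpler than the INAR(1)-hidden Markov structure actually used.

```python
import numpy as np
from scipy.stats import poisson

def forward_loglik(pi, A, emis_prob, observations):
    """Log-likelihood of an observation sequence in a hidden Markov chain
    via the scaled forward algorithm.

    pi: initial state distribution; A: transition matrix;
    emis_prob(state, y): emission probability of observation y."""
    n_states = len(pi)
    alpha = pi * np.array([emis_prob(s, observations[0]) for s in range(n_states)])
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                      # scale to avoid underflow
    for y in observations[1:]:
        alpha = (alpha @ A) * np.array([emis_prob(s, y) for s in range(n_states)])
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hidden states (fully reported vs under-reported) with toy Poisson means.
lam = [10.0, 4.0]                             # under-reporting shrinks the mean
emis = lambda s, y: poisson.pmf(y, lam[s])
print(forward_loglik(np.array([0.5, 0.5]),
                     np.array([[0.9, 0.1], [0.2, 0.8]]), emis, [9, 11, 3, 4, 10]))
```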
Rago, Adam P; Marini, John; Duggan, Michael J; Beagle, John; Runyan, Gem; Sharma, Upma; Peev, Miroslav; King, David R
2015-03-01
We have previously described the hemostatic efficacy of a self-expanding polyurethane foam in lethal venous and arterial hemorrhage models. A number of critical translational questions remain, including prehospital diagnosis of hemorrhage, use with diaphragmatic injury, effects on spontaneous respiration, the role of omentum, and the presence of a laparotomy on foam properties. In Experiment 1, diagnostic blood aspiration was attempted through a Veress needle before foam deployment during exsanguination (n = 53). In Experiment 2, a lethal hepatoportal injury/diaphragmatic laceration was created followed by foam (n = 6) or resuscitation (n = 10). In Experiment 3, the foam was deployed in naïve, spontaneously breathing animals (n = 7), and respiration was monitored. In Experiments 4 and 5, the foam was deployed above (n = 6) and below the omentum (n = 6) and in naïve animals (n = 6). Intra-abdominal pressure and organ contact were assessed. In Experiment 1, blood was successfully aspirated from a Veress needle in 70% of lethal iliac artery injuries and 100% of lethal hepatoportal injuries. In Experiment 2, in the presence of a diaphragm injury, between 0 cc and 110 cc of foam was found within the pleural space. Foam treatment resulted in a survival benefit relative to the control group at 1 hour (p = 0.03). In Experiment 3, hypercarbia was observed: mean (SD) PCO2 was 48 (9.4) mm Hg at baseline and 65 (14) mm Hg at 60 minutes. In Experiment 4, abdominal omentum seemed to influence organ contact and transport in two foam deployments. In Experiment 5, there was no difference in intra-abdominal pressure following foam deployment in the absence of a midline laparotomy. In a series of large animal studies, we addressed key translational issues surrounding safe use of foam treatment. These additional data, from diagnosis to deployment, will guide human experiences with foam treatment for massive abdominal exsanguination where no other treatments are available.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tend to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
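The grid-and-unit translation step described above reduces, in its simplest 1D form, to a unit conversion followed by interpolation onto the destination grid; the sketch below uses hypothetical ocean/atmosphere grids.

```python
import numpy as np

def regrid(src_grid, src_values, dst_grid):
    """Interpolate a 1D field from one model's grid onto another's."""
    return np.interp(dst_grid, src_grid, src_values)

# Hypothetical coupling step: ocean temperature in Celsius on a coarse grid,
# converted to Kelvin and interpolated onto a finer atmospheric grid.
ocean_grid = np.linspace(0.0, 1.0, 5)
ocean_temp_c = np.array([10.0, 12.0, 15.0, 14.0, 11.0])
atmos_grid = np.linspace(0.0, 1.0, 9)
print(regrid(ocean_grid, ocean_temp_c + 273.15, atmos_grid))
```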
Eickenberg, Michael; Rowekamp, Ryan J.; Kouh, Minjoon; Sharpee, Tatyana O.
2012-01-01
Our visual system is capable of recognizing complex objects even when their appearances change drastically under various viewing conditions. Especially in the higher cortical areas, the sensory neurons reflect such functional capacity in their selectivity for complex visual features and invariance to certain object transformations, such as image translation. Due to the strong nonlinearities necessary to achieve both the selectivity and invariance, characterizing and predicting the response patterns of these neurons represents a formidable computational challenge. A related problem is that such neurons are poorly driven by randomized inputs, such as white noise, and respond strongly only to stimuli with complex high-order correlations, such as natural stimuli. Here we describe a novel two-step optimization technique that can characterize both the shape selectivity and the range and coarseness of position invariance from neural responses to natural stimuli. One step in the optimization involves finding the template as the maximally informative dimension given the estimated spatial location where the response could have been triggered within each image. The estimates of the locations that triggered the response are subsequently updated in the next step. Under the assumption of a monotonic relationship between the firing rate and stimulus projections on the template at a given position, the most likely location is the one that has the largest projection on the estimate of the template. The algorithm shows quick convergence during optimization, and the estimation results are reliable even in the regime of small signal-to-noise ratios. When we apply the algorithm to responses of complex cells in the primary visual cortex (V1) to natural movies, we find that responses of the majority of cells were significantly better described by translation invariant models based on one template compared with position-specific models with several relevant features. PMID:22734487
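A one-dimensional caricature of the two-step optimization: alternate between picking, for each stimulus, the position with the largest projection on the current template, and updating the template from the spike-weighted aligned patches. The toy bump stimuli and all parameters are hypothetical, and the actual method estimates a maximally informative dimension rather than this spike-triggered average.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_invariant_template(stimuli, spikes, width, iters=10):
    """Alternate between (a) estimating where in each stimulus the response
    was triggered (largest projection on the current template) and
    (b) updating the template from the aligned, spike-weighted patches."""
    template = rng.standard_normal(width)
    positions = np.zeros(len(stimuli), dtype=int)
    for _ in range(iters):
        for k, s in enumerate(stimuli):                    # step 1: locations
            proj = [s[p:p + width] @ template for p in range(len(s) - width + 1)]
            positions[k] = int(np.argmax(proj))
        patches = np.array([s[p:p + width] for s, p in zip(stimuli, positions)])
        template = spikes @ patches / spikes.sum()         # step 2: template
        template /= np.linalg.norm(template)
    return template, positions

# Toy 1D 'movies': a bump at a random position drives the response.
bump = np.array([0.5, 1.0, 0.5])
stimuli, spikes = [], []
for _ in range(200):
    s, p = 0.1 * rng.standard_normal(20), rng.integers(0, 18)
    s[p:p + 3] += bump
    stimuli.append(s); spikes.append(1.0)
t, _ = fit_invariant_template(stimuli, np.array(spikes), width=3)
print(np.round(t, 2))   # should resemble the normalized bump
```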
NASA Astrophysics Data System (ADS)
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. There are measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are big enough. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: 1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; 2) a compensation panel (a rigid panel with several markers located at known positions) is mounted to the specimen to track the specimen's motion so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; 3) three-dimensional world coordinates of measuring points on the specimen can be reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method leads to good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has been applied in tensile experiments to obtain high-accuracy results as well.
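One way to implement the panel-tracking step (part 2) is a rigid least-squares fit between the marker coordinates in two frames, e.g., via the Kabsch algorithm; whether the authors use exactly this solver is not stated, and the marker coordinates below are hypothetical.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t
    (Kabsch algorithm), one way to track the compensation panel's pose
    from its marker coordinates."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

# Hypothetical panel markers before and after a 5-degree out-of-plane rotation.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]], float)
ang = np.deg2rad(5.0)
R_true = np.array([[np.cos(ang), 0, np.sin(ang)],
                   [0, 1, 0],
                   [-np.sin(ang), 0, np.cos(ang)]])
dst = src @ R_true.T + np.array([0.0, 0.0, 3.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))   # -> True [0. 0. 3.]
```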
Validation to Portuguese of the Scale of Student Satisfaction and Self-Confidence in Learning.
Almeida, Rodrigo Guimarães dos Santos; Mazzo, Alessandra; Martins, José Carlos Amado; Baptista, Rui Carlos Negrão; Girão, Fernanda Berchelli; Mendes, Isabel Amélia Costa
2015-01-01
To translate and validate into Portuguese the Scale of Student Satisfaction and Self-Confidence in Learning. Methodological translation and validation study of a research tool. After following all steps of the translation process, for the validation process the event III Workshop Brazil - Portugal: Care Delivery to Critical Patients was held, promoted by a Brazilian and a Portuguese teaching institution; 103 nurses participated. As to the validity and reliability of the scale, the correlation pattern between the variables, the sampling adequacy test (Kaiser-Meyer-Olkin) and the sphericity test (Bartlett) showed good results. In the exploratory factorial analysis (Varimax), item 9 behaved better in factor 1 (Satisfaction) than in factor 2 (Self-confidence in learning). The internal consistency (Cronbach's alpha) showed coefficients of 0.86 in factor 1 with six items and 0.77 for factor 2 with seven items. In Portuguese this tool was called: Escala de Satisfação de Estudantes e Autoconfiança na Aprendizagem. The results showed good psychometric properties and good potential for use. The sample size and specificity are limitations of this study, but future studies will contribute to consolidate the validity of the scale and strengthen its potential use.
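The reported internal-consistency statistic is Cronbach's alpha, computed from an item-score matrix as sketched below with hypothetical Likert responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses for a 6-item factor.
scores = np.array([[4, 5, 4, 4, 5, 4],
                   [3, 3, 4, 3, 3, 3],
                   [5, 5, 5, 4, 5, 5],
                   [2, 3, 2, 3, 2, 2],
                   [4, 4, 5, 4, 4, 4]])
print(round(cronbach_alpha(scores), 2))
```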
Software for MR image overlay guided needle insertions: the clinical translation process
NASA Astrophysics Data System (ADS)
Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor
2013-03-01
PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open-source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that the software requirements were successfully met after a limited number of operating room tests.
Automation of the targeting and reflective alignment concept
NASA Technical Reports Server (NTRS)
Redfield, Robin C.
1992-01-01
The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera's and the target's relative positions based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to see only the target. These capabilities are realized with relatively minimal complexity and expense.
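The abstract does not spell out the subtraction algorithm; one plausible reading, sketched below under that assumption, is a difference between a frame with the target illuminated and a reference frame, so that exogenous lighting common to both cancels and only the reflective target remains. All names are illustrative.

```python
import numpy as np

def isolate_target(frame_lit, frame_ref):
    # Ambient light common to both frames cancels in the difference,
    # leaving essentially only the reflective target.
    d = frame_lit.astype(np.int16) - frame_ref.astype(np.int16)
    return np.clip(d, 0, 255).astype(np.uint8)

def target_centroid(target_img, thresh=32):
    # Centroid of the bright target pixels; its offset from the image
    # centre gives a translational alignment error in the TRAC sense.
    ys, xs = np.nonzero(target_img > thresh)
    return xs.mean(), ys.mean()
```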
The Katydid system for compiling KEE applications to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.
Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product
NASA Astrophysics Data System (ADS)
Weyrauch, Michael; Scholz, Daniel
2009-09-01
The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
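For orientation, the leading terms these methods compute are, in commutator representation (standard results, not the paper's new representation):

$$\log\!\left(e^{X}e^{Y}\right) = X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}\big[X,[X,Y]\big] - \tfrac{1}{12}\big[Y,[X,Y]\big] + \cdots$$

and, for the Zassenhaus product,

$$e^{t(X+Y)} = e^{tX}\,e^{tY}\,e^{-\tfrac{t^{2}}{2}[X,Y]}\,e^{t^{3}C_{3}}\cdots, \qquad C_{3} = \tfrac{1}{3}\big[Y,[X,Y]\big] + \tfrac{1}{6}\big[X,[X,Y]\big].$$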
Compiling global name-space programs for distributed execution
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush
1990-01-01
Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described, and the performance of the generated code on the Intel iPSC/2 is presented.
About crystal lattices and quasilattices in Euclidean space
NASA Astrophysics Data System (ADS)
Prokhoda, A. S.
2017-07-01
Definitions are given, based on which algorithms have been developed for constructing computer models of two-dimensional quasilattices and the corresponding quasiperiodic tilings in the plane, whose point symmetry groups are the dihedral groups Dm (m = 5, 7, 8, 9, 10, 12, 14, 18) and whose translation subgroups are free Abelian groups of rank four or six. The angles at the tile vertices in the constructed tilings are calculated.
Improved mapping of the travelling salesman problem for quantum annealing
NASA Astrophysics Data System (ADS)
Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David
2015-03-01
We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of TSP to an Ising spin glass Hamiltonian and compare it to previously known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of TSP on quantum annealers.
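For context, the conventional quadratic mapping that such work improves upon (a standard formulation, not the paper's new one) encodes a tour with binary variables $x_{v,j}$, equal to 1 if city $v$ is visited at step $j$:

$$H = A\sum_{v}\Big(1-\sum_{j}x_{v,j}\Big)^{2} + A\sum_{j}\Big(1-\sum_{v}x_{v,j}\Big)^{2} + B\sum_{u\neq v} d_{uv}\sum_{j}x_{u,j}\,x_{v,j+1},$$

where $d_{uv}$ are intercity distances and $A$ is chosen large relative to $B\max d_{uv}$ so that constraint violations never pay; the substitution $x=(1+s)/2$ with spins $s\in\{-1,+1\}$ yields the Ising form.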
A Formal Model of Ambiguity and its Applications in Machine Translation
2010-01-01
structure indicates linguistically implausible segmentation that might be generated using dictionary-driven approaches...derivation. As was done in the monolingual case, the functions LHS, RHSi, RHSo and υ can be extended to a derivation δ. D(q) where q ∈ V denotes the... monolingual parses. My algorithm runs more efficiently than O(n^6) with many grammars (including those that required using heuristic search with other parsers
3D/2D image registration using weighted histogram of gradient directions
NASA Astrophysics Data System (ADS)
Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang
2015-03-01
Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications, such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to +/-90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
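A minimal sketch of the feature itself (illustrative, not the authors' code): each pixel votes for its gradient direction with a weight equal to its gradient magnitude. Because an in-plane rotation circularly shifts this histogram, the rotation parameters can be searched largely independently of translation, which is what makes the sequential search cheap.

```python
import numpy as np

def weighted_gradient_histogram(img, bins=36):
    # Each pixel votes for its gradient direction, weighted by its
    # gradient magnitude; the result is normalized to unit mass.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                 # directions in (-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=mag)
    return hist / (hist.sum() + 1e-12)
```

Comparing the histograms of a DRR and the fluoroscopic image (e.g. by correlation over circular shifts) then scores candidate rotations without generating a new DRR for every candidate translation.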
Optical coherence tomography based angiography [Invited
Chen, Chieh-Li; Wang, Ruikang K.
2017-01-01
Optical coherence tomography (OCT)-based angiography (OCTA) provides in vivo, three-dimensional vascular information by the use of flowing red blood cells as intrinsic contrast agents, enabling the visualization of functional vessel networks within microcirculatory tissue beds non-invasively, without the need for dye injection. Because of these attributes, OCTA has been rapidly translated to clinical ophthalmology within a short period of development. Various OCTA algorithms have been developed to detect the functional micro-vasculatures in vivo by utilizing different components of OCT signals, including phase-signal-based OCTA, intensity-signal-based OCTA and complex-signal-based OCTA. All these algorithms have shown, in one way or another, their clinical value in revealing micro-vasculatures in biological tissues in vivo, identifying abnormal vascular networks or vessel impairment zones in retinal and skin pathologies, detecting vessel patterns and angiogenesis in eyes with age-related macular degeneration and in skin and brain with tumors, and monitoring responses to hypoxia in brain tissue. The purpose of this paper is to provide a technically oriented overview of OCTA developments and their potential pre-clinical and clinical applications, and to shed some light on future perspectives. Because of its clinical translation to ophthalmology, this review intentionally places slightly more weight on ophthalmic OCT angiography. PMID:28271003
Target identification using Zernike moments and neural networks
NASA Astrophysics Data System (ADS)
Azimi-Sadjadi, Mahmood R.; Jamshidi, Arta A.; Nevis, Andrew J.
2001-10-01
The development of an underwater target identification algorithm capable of identifying various types of underwater targets, such as mines, under different environmental conditions poses many technical problems. Some of the contributing factors are: targets have diverse sizes, shapes and reflectivity properties; the target emplacement environment is variable, so targets may be proud or partially buried; and environmental properties vary significantly from one location to another. Bottom features such as sand, rocks, corals, and vegetation can conceal a target whether it is partially buried or proud. Competing clutter with responses that closely resemble those of the targets may lead to false positives. All of these problems contribute to difficult and challenging conditions that can lead to unreliable algorithm performance with existing methods. In this paper, we developed and tested a shape-dependent feature extraction scheme that provides features invariant to rotation, size scaling and translation; properties that are extremely useful for any target classification problem. The developed schemes were tested on an electro-optical imagery data set collected under different environmental conditions with variable background, range and target types. The electro-optic data set was collected using a Laser Line Scan (LLS) sensor by the Coastal Systems Station (CSS), located in Panama City, Florida. The performance of the developed scheme and its robustness to distortion, rotation, scaling and translation was also studied.
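As a concrete illustration of such invariant shape features, the magnitudes of Zernike moments are rotation invariant, and normalizing the segmented object's position and scale beforehand adds translation and scale invariance. A sketch using the mahotas library (the radius and degree values are illustrative assumptions):

```python
import numpy as np
import mahotas

def zernike_features(binary_img, radius=64, degree=8):
    # Magnitudes of Zernike moments up to `degree`, computed within a
    # disc of `radius` pixels about the object's centre of mass.
    return mahotas.features.zernike_moments(binary_img, radius, degree=degree)

# Rotating the object leaves these magnitudes (nearly) unchanged:
img = np.zeros((128, 128), dtype=bool)
img[40:90, 50:80] = True
print(zernike_features(img)[:5])
```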
Exact solution of corner-modified banded block-Toeplitz eigensystems
NASA Astrophysics Data System (ADS)
Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza
2017-05-01
Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.
Targeting Cytosolic Nucleic Acid-Sensing Pathways for Cancer Immunotherapies.
Iurescia, Sandra; Fioretti, Daniela; Rinaldi, Monica
2018-01-01
The innate immune system provides the first line of defense against pathogen infection, but also influences pathways involved in cancer immunosurveillance. The innate immune system relies on a limited set of germ line-encoded sensors termed pattern recognition receptors (PRRs), signaling proteins and immune response factors. Cytosolic receptors mediate the recognition of danger signals known as damage-associated molecular patterns (DAMPs). Once activated, these sensors trigger multiple signaling cascades, converging on the production of type I interferons and proinflammatory cytokines. Recent studies revealed that PRRs respond to nucleic acids (NA) released by dying, damaged, and cancerous cells as danger signals, and the presence of these signaling proteins across cancer types suggests that these signaling mechanisms may be involved in cancer biology. DAMPs play important roles in shaping adaptive immune responses through the activation of innate immune cells, and the immunological response to DAMPs is crucial for the host response to cancer and tumor rejection. Furthermore, PRRs mediate the response to NA in several vaccination strategies, including DNA immunization. As a route of double-stranded DNA intracellular entry, DNA immunization leads to expression of key components of cytosolic NA-sensing pathways. The involvement of NA-sensing mechanisms in the antitumor response makes these pathways attractive drug targets. Natural and synthetic agonists of NA-sensing pathways can trigger cell death in malignant cells, recruit immune cells, such as DCs, CD8+ T cells, and NK cells, into the tumor microenvironment, and are being explored as promising adjuvants in cancer immunotherapies. In this minireview, we discuss how the cGAS-STING and RIG-I-MAVS pathways have been targeted for cancer treatment in preclinical translational research. In addition, we present a targeted selection of recent clinical trials employing agonists of cytosolic NA-sensing pathways, showing how these pathways are currently being targeted for clinical application in oncology.
Nedd4-2 Modulates Renal Na+-Cl− Cotransporter via the Aldosterone-SGK1-Nedd4-2 Pathway
Arroyo, Juan Pablo; Lagnaz, Dagmara; Ronzaud, Caroline; Vázquez, Norma; Ko, Benjamin S.; Moddes, Lauren; Ruffieux-Daidié, Dorothée; Hausel, Pierrette; Koesters, Robert; Yang, Baoli; Stokes, John B.; Hoover, Robert S.
2011-01-01
Regulation of renal Na+ transport is essential for controlling blood pressure, as well as Na+ and K+ homeostasis. Aldosterone stimulates Na+ reabsorption by the Na+-Cl− cotransporter (NCC) in the distal convoluted tubule (DCT) and by the epithelial Na+ channel (ENaC) in the late DCT, connecting tubule, and collecting duct. Aldosterone increases ENaC expression by inhibiting the channel's ubiquitylation and degradation; aldosterone promotes serum-glucocorticoid-regulated kinase SGK1-mediated phosphorylation of the ubiquitin-protein ligase Nedd4-2 on serine 328, which prevents the Nedd4-2/ENaC interaction. It is important to note that aldosterone increases NCC protein expression by an unknown post-translational mechanism. Here, we present evidence that Nedd4-2 coimmunoprecipitated with NCC and stimulated NCC ubiquitylation at the surface of transfected HEK293 cells. In Xenopus laevis oocytes, coexpression of NCC with wild-type Nedd4-2, but not its catalytically inactive mutant, strongly decreased NCC activity and surface expression. SGK1 prevented this inhibition in a kinase-dependent manner. Furthermore, deficiency of Nedd4-2 in the renal tubules of mice and in cultured mDCT15 cells upregulated NCC. In contrast to ENaC, Nedd4-2-mediated inhibition of NCC did not require the PY-like motif of NCC. Moreover, the mutation of Nedd4-2 at either serine 328 or 222 did not affect SGK1 action, and mutation at both sites enhanced Nedd4-2 activity and abolished SGK1-dependent inhibition. Taken together, these results suggest that aldosterone modulates NCC protein expression via a pathway involving SGK1 and Nedd4-2 and provides an explanation for the well-known aldosterone-induced increase in NCC protein expression. PMID:21852580
Nedelescu, Hermina; Chowdhury, Tara G; Wable, Gauri S; Arbuthnott, Gordon; Aoki, Chiye
2017-01-01
The vermis or "spinocerebellum" receives input from the spinal cord and motor cortex for controlling balance and locomotion, while the longitudinal hemisphere region or "cerebro-cerebellum" is interconnected with non-motor cortical regions, including the prefrontal cortex that underlies decision-making. Noradrenaline release in the cerebellum is known to be important for motor plasticity but less is known about plasticity of the cerebellar noradrenergic (NA) system, itself. We characterized plasticity of dopamine β-hydroxylase-immunoreactive NA fibers in the cerebellum of adolescent female rats that are evoked by voluntary wheel running, food restriction (FR) or by both, in combination. When 8 days of wheel access was combined with FR during the last 4 days, some responded with excessive exercise, choosing to run even during the hours of food access: this exacerbated weight loss beyond that due to FR alone. In the vermis, exercise, with or without FR, shortened the inter-varicosity intervals and increased varicosity density along NA fibers, while excessive exercise, due to FR, also shortened NA fibers. In contrast, the hemisphere required the FR-evoked excessive exercise to evoke shortened inter-varicosity intervals along NA fibers and this change was exhibited more strongly by rats that suppressed the FR-evoked excessive exercise, a behavior that minimized weight loss. Presuming that shortened inter-varicosity intervals translate to enhanced NA release and synthesis of norepinephrine, this enhancement in the cerebellar hemisphere may contribute towards protection of individuals from the life-threatening activity-based anorexia via relays with higher-order cortical areas that mediate the animal's decision to suppress the innate FR-evoked hyperactivity.
Naïve Bayes Approach for Expert System Design of Children Skin Identification Based on Android
NASA Astrophysics Data System (ADS)
Hartatik; Purnomo, A.; Hartono, R.; Munawaroh, H.
2018-03-01
The development of technology benefits everyone when it is used properly and correctly. Technology has helped humans in many ways, for example by relieving part of an expert's workload in providing information or answers to a problem. One problem that often occurs is skin disease affecting children, because children's skin is still vulnerable to the environment. The application was developed using the naïve Bayes algorithm. Through this application, users can consult the system as they would an expert, to identify the symptoms occurring in the child and find the correct treatment for the problem.
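A toy sketch of the classifier at the core of such an expert system (the symptom vectors and disease labels below are hypothetical, not the paper's knowledge base):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Rows: presence/absence of four symptoms; labels: skin conditions.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0]])
y = np.array(["diaper rash", "eczema", "scabies", "eczema"])

clf = BernoulliNB().fit(X, y)
query = [[1, 1, 0, 1]]                    # symptoms reported by the user
print(clf.predict(query))                 # most probable condition
print(clf.predict_proba(query))           # posterior over all conditions
```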
1990-05-07
Comparison of CF and CF2 LIF and Actinometry in a CF4 Discharge, L. D. Baston, J.-P. Nicolai, and H. H. Sawin, MIT.
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Application of Micro-segmentation Algorithms to the Healthcare Market:A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Aline, Frank
We draw inspiration from the recent success of loyalty programs and targeted personalized market campaigns of retail companies such as Kroger, Netflix, etc. to understand beneficiary behaviors in the healthcare system. Our posit is that we can emulate the financial success the companies have achieved by better understanding and predicting customer behaviors and translating such success to healthcare operations. Towards that goal, we survey current practices in market micro-segmentation research and analyze health insurance claims data using those algorithms. We present results and insights from micro-segmentation of the beneficiaries using different techniques and discuss how the interpretation can assist with matching the cost-effective insurance payment models to the beneficiary micro-segments.
Collective translational and rotational Monte Carlo cluster move for general pairwise interaction
NASA Astrophysics Data System (ADS)
Růžička, Štěpán; Allen, Michael P.
2014-09-01
Virtual move Monte Carlo is a cluster algorithm which was originally developed for strongly attractive colloidal, molecular, or atomistic systems in order to both approximate the collective dynamics and avoid sampling of unphysical kinetic traps. In this paper, we present the algorithm in a form which selects the moving cluster through a wider class of virtual states and which is applicable to general pairwise interactions, including hard-core repulsion. The newly proposed way of selecting the cluster increases the acceptance probability by up to several orders of magnitude, especially for rotational moves. The results have applications in simulations of systems interacting via anisotropic potentials, both to enhance the sampling of the phase space and to approximate the dynamics.
Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong
2012-12-01
In this paper, a new method combining translation-invariant (TI) and wavelet-threshold (WT) algorithms to distinguish weak and overlapping signals of proton magnetic resonance spectroscopy (1H-MRS) is presented. First, the 1H-MRS spectrum signal is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI method and the WT method are applied to detect the weak signals overlapped by the strong ones. Through the analysis of the simulation data, we can see that both the frequency and amplitude information of weak signals can be obtained accurately by the algorithm, and, combined with signal fitting, quantitative calculation of the area under weak signal peaks can be realized.
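One common way to realize translation-invariant thresholding is cycle spinning: denoise circularly shifted copies of the signal and average the unshifted reconstructions, which suppresses the shift-dependent artifacts of ordinary wavelet thresholding. A minimal 1-D sketch with PyWavelets (the wavelet, level and threshold choices are illustrative assumptions, not the paper's settings):

```python
import numpy as np
import pywt

def ti_wavelet_denoise(signal, wavelet="sym8", level=4, thr=None, shifts=8):
    n = len(signal)
    if thr is None:                       # universal threshold as default
        thr = np.std(signal) * np.sqrt(2.0 * np.log(n))
    out = np.zeros(n)
    for s in range(shifts):
        shifted = np.roll(signal, s)
        coeffs = pywt.wavedec(shifted, wavelet, level=level)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[:n]
        out += np.roll(rec, -s)           # undo the shift, then average
    return out / shifts
```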
Programming languages and compiler design for realistic quantum hardware
NASA Astrophysics Data System (ADS)
Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret
2017-09-01
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Snyder, P M; Cheng, C; Prince, L S; Rogers, J C; Welsh, M J
1998-01-09
Members of the DEG/ENaC protein family form ion channels with diverse functions. DEG/ENaC subunits associate as hetero- and homomultimers to generate channels; however, the stoichiometry of these complexes is unknown. To determine the subunit stoichiometry of the human epithelial Na+ channel (hENaC), we expressed the three wild-type hENaC subunits (alpha, beta, and gamma) with subunits containing mutations that alter channel inhibition by methanethiosulfonates. The data indicate that hENaC contains three alpha, three beta, and three gamma subunits. Sucrose gradient sedimentation of alphahENaC translated in vitro, as well as alpha-, beta-, and gammahENaC coexpressed in cells, was consistent with complexes containing nine subunits. FaNaCh and BNC1, two related DEG/ENaC channels, produced complexes of similar mass. Our results suggest a novel nine-subunit stoichiometry for the DEG/ENaC family of ion channels.
Digital health technology and trauma: development of an app to standardize care.
Hsu, Jeremy M
2015-04-01
Standardized practice results in less variation, thereby reducing errors and improving outcomes. Optimal trauma care is achieved through standardization, as is evidenced by the widespread adoption of the Advanced Trauma Life Support approach. The challenge for an individual institution is how to educate and promulgate these standardized processes widely and efficiently. In today's world, digital health technology must be considered in the process. The aim of this study was to describe the process of developing an app which includes standardized trauma algorithms. The objective of the app was to allow easy, real-time access to trauma algorithms and therefore reduce omissions and errors. A set of trauma algorithms, relevant to the local setting, was derived from the best available evidence. After grant funding was obtained, a collaborative endeavour was undertaken with an external specialist app-development company. The process required 6 months to translate the existing trauma algorithms into an app. The app contains 32 separate trauma algorithms, each formatted as a single-page flow diagram. It utilizes specific smartphone features such as 'pinch to zoom', jump-words and pop-ups to allow rapid access to the desired information. Improvements in trauma care outcomes result from reducing variation. By incorporating digital health technology, a trauma app has been developed, allowing easy and intuitive access to evidence-based algorithms. © 2015 Royal Australasian College of Surgeons.
DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.
Kalsi, Shruti; Kaur, Harleen; Chang, Victor
2017-12-05
Cryptography is the science not only of applying complex mathematics and logic to design strong methods for hiding data, called encryption, but also of retrieving the original data, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but also a strong key and a sound design for the encryption and decryption process. We have introduced a concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of DNA sequences and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a genetic algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for implementing encryption and decryption based on DNA computing using the biological operations of transcription, translation and DNA sequencing together with deep learning.
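For reference, the Needleman-Wunsch score computation at the heart of the key-generation step can be sketched as follows (scoring values are illustrative; the wiring of these scores into the genetic algorithm's fitness and selection is omitted):

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    # Global alignment score matrix; the optimal score is F[-1, -1].
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,
                          F[i - 1, j] + gap,
                          F[i, j - 1] + gap)
    return F

print(needleman_wunsch("GATTACA", "GCATGCU")[-1, -1])
```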
RNA design using simulated SHAPE data.
Lotfi, Mohadeseh; Zare-Mirakabad, Fatemeh; Montaseri, Soheila
2018-05-03
It has long been established that in addition to being involved in protein translation, RNA plays essential roles in numerous other cellular processes, including gene regulation and DNA replication. Such roles are known to be dictated by higher-order structures of RNA molecules. It is therefore of prime importance to find an RNA sequence that can fold to acquire a particular function that is desirable for use in pharmaceuticals and basic research. The challenge of finding an RNA sequence for a given structure is known as the RNA design problem. Although there are several algorithms to solve this problem, they mainly consider hard constraints, such as minimum free energy, to evaluate the predicted sequences. Recently, SHAPE data has emerged as a new soft constraint for RNA secondary structure prediction. To take advantage of this new experimental constraint, we report here a new method for accurate design of RNA sequences based on their secondary structures using SHAPE data as pseudo-free energy. We then compare our algorithm with four others: INFO-RNA, ERD, MODENA and RNAifold 2.0. Our algorithm precisely predicts 26 out of 29 new sequences for the structures extracted from the Rfam dataset, while the other four algorithms predict no more than 22 out of 29. The proposed algorithm is comparable to the above algorithms on RNA-SSD datasets, where they can predict up to 33 appropriate sequences for RNA secondary structures out of 34.
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
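The flavor of the underlying recursion is easiest to see in the dichotomous, single-quadrature-point case: items are added one at a time and the summed-score likelihoods are updated by a small convolution. A minimal sketch (this deliberately omits the paper's dimension-reduction machinery):

```python
import numpy as np

def lord_wingersky(p):
    """Likelihoods of summed scores 0..n for dichotomous items with
    correct-response probabilities p, evaluated at one quadrature point."""
    L = np.array([1.0])
    for pi in p:
        nxt = np.zeros(len(L) + 1)
        nxt[:-1] += L * (1.0 - pi)   # item answered incorrectly
        nxt[1:]  += L * pi           # item answered correctly
        L = nxt
    return L

print(lord_wingersky([0.7, 0.5, 0.9]))  # likelihoods of scores 0-3; sum to 1
```

Marginalizing these conditional score likelihoods over the quadrature grid yields the summed-score distribution; in a bifactor model the grid, and hence this cost, stays two-dimensional regardless of the number of group factors.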
Kagome fiber based ultrafast laser microsurgery probe delivering micro-Joule pulse energies
Subramanian, Kaushik; Gabay, Ilan; Ferhanoğlu, Onur; Shadfan, Adam; Pawlowski, Michal; Wang, Ye; Tkaczyk, Tomasz; Ben-Yakar, Adela
2016-01-01
We present the development of a 5 mm, piezo-actuated, ultrafast laser scalpel for fast tissue microsurgery. Delivery of micro-Joule-level energies to the tissue was made possible by a large, 31 μm, air-cored inhibited-coupling Kagome fiber. We overcome the fiber's low NA by using lenses made of high-refractive-index ZnS, which produced an optimal focusing condition with a 0.23 NA objective. The optical design achieved a focused laser spot size of 4.5 μm diameter covering a 75 × 75 μm2 scan area in a miniaturized setting. The probe could deliver the maximum available laser power, achieving an average fluence of 7.8 J/cm2 on the tissue surface at 62% transmission efficiency. Such fluences could produce uninterrupted, 40 μm deep cuts at translational speeds of up to 5 mm/s along the tissue. We predicted that the best combination of speed and coverage exists at 8 mm/s for our conditions. The onset of nonlinear absorption in ZnS, however, limited the probe's energy delivery capabilities to 1.4 μJ for linear operation at the 1.5 ps pulse width of our fiber laser. Alternatives like broadband CaF2 crystals should mitigate such nonlinear limiting behavior. Improved opto-mechanical design and appropriate material selection should allow substantially higher fluence delivery and propel such Kagome fiber-based scalpels towards clinical translation. PMID:27896003
Bose, Jayakumar; Rodrigo-Moreno, Ana; Lai, Diwen; Xie, Yanjie; Shen, Wenbiao; Shabala, Sergey
2015-01-01
Background and Aims The activity of H+-ATPase is essential for energizing the plasma membrane. It provides the driving force for potassium retention and uptake through voltage-gated channels and for Na+ exclusion via Na+/H+ exchangers. Both of these traits are central to plant salinity tolerance; however, whether the increased activity of H+-ATPase is a constitutive trait in halophyte species and whether this activity is upregulated at either the transcriptional or post-translation level remain disputed. Methods The kinetics of salt-induced net H+, Na+ and K+ fluxes, membrane potential and AHA1/2/3 expression changes in the roots of two halophyte species, Atriplex lentiformis (saltbush) and Chenopodium quinoa (quinoa), were compared with data obtained from Arabidopsis thaliana roots. Key Results Intrinsic (steady-state) membrane potential values were more negative in A. lentiformis and C. quinoa compared with arabidopsis (−144 ± 3·3, −138 ± 5·4 and −128 ± 3·3 mV, respectively). Treatment with 100 mm NaCl depolarized the root plasma membrane, an effect that was much stronger in arabidopsis. The extent of plasma membrane depolarization positively correlated with NaCl-induced stimulation of vanadate-sensitive H+ efflux, Na+ efflux and K+ retention in roots (quinoa > saltbush > arabidopsis). NaCl-induced stimulation of H+ efflux was most pronounced in the root elongation zone. In contrast, H+-ATPase AHA transcript levels were much higher in arabidopsis compared with quinoa plants, and 100 mm NaCl treatment led to a further 3-fold increase in AHA1 and AHA2 transcripts in arabidopsis but not in quinoa. Conclusions Enhanced salinity tolerance in the halophyte species studied here is not related to the constitutively higher AHA transcript levels in the root epidermis, but to the plant’s ability to rapidly upregulate plasma membrane H+-ATPase upon salinity treatment. This is necessary for assisting plants to maintain highly negative membrane potential values and to exclude Na+, or enable better K+ retention in the cytosol under saline conditions. PMID:25471095
NASA Astrophysics Data System (ADS)
Dutcher, Bryce
Strong evidence exists suggesting that anthropogenic emissions of CO2, primarily from the combustion of fossil fuels, have been contributing to global climate change, including warming of the atmosphere and acidification of the oceans. These, in turn, lead to other effects such as melting of ice and snow cover, rising sea levels, severe weather patterns, and extinction of life forms. With these detrimental shifts in ecosystems already being observed, it becomes imperative to mitigate anthropogenic CO2. CO2 capture is typically a costly operation, usually due to the energy required for regeneration of the capture medium. Na2CO3 is one capture medium with the potential to decrease this energy requirement. Extensively researched as a potential sorbent for CO2, Na2CO3 is well known for its theoretically low energy requirement, due largely to its relatively low heat of reaction compared to other capture technologies. Its primary pitfalls, however, are its extremely low reaction rate during sorption and the slow regeneration of Na2CO3. Before Na2CO3 can be used as a CO2 sorbent, then, it is critical to increase its reaction rate. In order to do so, this project studied nanoporous FeOOH as a potential supporting material for Na2CO3. Because regeneration of the sorbent is the most energy-intensive step when using Na2CO3 for CO2 sorption, this project focused on the decomposition of NaHCO3, which is equivalent to CO2 desorption. Using BET, FTIR, XRD, XPS, SEM, TEM, magnetic susceptibility tests, and Mössbauer spectroscopy, we show FeOOH to be thermally stable both with and without the presence of NaHCO3 at temperatures necessary for sorption and regeneration, up to about 200°C. More significantly, we observe that FeOOH not only increases the surface area of NaHCO3, but also has a catalytic effect on the decomposition of NaHCO3, reducing the activation energy from 80 kJ/mol to 44 kJ/mol. This reduction in activation energy leads to a significant increase in the reaction rate, by a factor of nearly 50, which could translate into a substantial decrease in the cost of using Na2CO3 for CO2 capture.
NASA Technical Reports Server (NTRS)
Jani, Yashvant
1992-01-01
The reinforcement learning techniques developed at Ames Research Center are being applied to proximity and docking operations using the Shuttle and Solar Maximum Mission (SMM) satellite simulation. In utilizing these fuzzy learning techniques, we also use the Approximate Reasoning based Intelligent Control (ARIC) architecture, and so we use the two terms interchangeably. This activity is carried out in the Software Technology Laboratory utilizing the Orbital Operations Simulator (OOS). This report is deliverable D3 in our project activity and provides the test results of the fuzzy learning translational controller. This report is organized in six sections. Based on our experience and analysis with the attitude controller, we have modified the basic configuration of the reinforcement learning algorithm in ARIC, as described in section 2. The shuttle translational controller and its implementation in the fuzzy learning architecture are described in section 3. Two test cases that we have performed are described in section 4. Our results and conclusions are discussed in section 5, and section 6 provides future plans and a summary of the project.
Interior reconstruction method based on rotation-translation scanning model.
Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian
2014-01-01
In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which covers only the region of interest (ROI) for the sake of reducing radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in the CT reconstruction field due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be obtained from the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.
Managing the innovation supply chain to maximize personalized medicine.
Waldman, S A; Terzic, A
2014-02-01
Personalized medicine epitomizes an evolving model of care tailored to the individual patient. This emerging paradigm harnesses radical technological advances to define each patient's molecular characteristics and decipher his or her unique pathophysiological processes. Translated into individualized algorithms, personalized medicine aims to predict, prevent, and cure disease without producing therapeutic adverse events. Although the transformative power of personalized medicine is generally recognized by physicians, patients, and payers, the complexity of translating discoveries into new modalities that transform health care is less appreciated. We often consider the flow of innovation and technology along a continuum of discovery, development, regulation, and application bridging the bench with the bedside. However, this process also can be viewed through a complementary prism, as a necessary supply chain of services and providers, each making essential contributions to the development of the final product to maximize value to consumers. Considering personalized medicine in this context of supply chain management highlights essential points of vulnerability and/or scalability that can ultimately constrain translation of the biological revolution or potentiate it into individualized diagnostics and therapeutics for optimized value creation and delivery.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by the measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R2, while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
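A minimal sketch of the overlapping idea (linear interpolation along the preceding segment's measurement points stands in for the published trigonometric vertex placement; segments are assumed to be (time, level) arrays with declining levels):

```python
import numpy as np

def build_mrc(segments):
    # Horizontally translate each recession segment so its vertex (its
    # first, highest value) lands on the connection line defined by the
    # preceding segment's measurement points.
    placed = [(np.asarray(segments[0][0], float),
               np.asarray(segments[0][1], float))]
    for t, h in segments[1:]:
        tp, hp = placed[-1]
        # time on the preceding segment at which the level equals this
        # vertex (hp declines, so reverse both arrays for interpolation)
        t_hit = np.interp(h[0], hp[::-1], tp[::-1])
        placed.append((np.asarray(t, float) + (t_hit - t[0]),
                       np.asarray(h, float)))
    return placed
```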
Distribution and diversity of ribosome binding sites in prokaryotic genomes.
Omotajo, Damilola; Tate, Travis; Cho, Hyuk; Choudhary, Madhusudan
2015-08-14
Prokaryotic translation initiation involves the proper docking, anchoring, and accommodation of mRNA to the 30S ribosomal subunit. Three initiation factors (IF1, IF2, and IF3) and some ribosomal proteins mediate the assembly and activation of the translation initiation complex. Although the interaction between the Shine-Dalgarno (SD) sequence and its complementary sequence in the 16S rRNA is important in initiation, some genes lacking an SD ribosome binding site (RBS) are still well expressed. The objective of this study is to examine the pattern of distribution and diversity of RBSs in fully sequenced bacterial genomes. The following three hypotheses were tested: SD motifs are prevalent in bacterial genomes; all previously identified SD motifs are uniformly distributed across prokaryotes; and genes with specific cluster of orthologous genes (COG) functions differ in their use of SD motifs. Data for 2,458 bacterial genomes, previously generated by Prodigal (PROkaryotic DYnamic programming Gene-finding ALgorithm) and currently available at the National Center for Biotechnology Information (NCBI), were analyzed. Of the total genes examined, ~77.0% use an SD RBS, while ~23.0% have no RBS. The majority of the genes with the most common SD motifs are distributed in a manner that is representative of their abundance for each COG functional category, while motifs 13 (5'-GGA-3'/5'-GAG-3'/5'-AGG-3') and 27 (5'-AGGAGG-3') appear to be predominantly used by genes for information storage and processing, and translation and ribosome biogenesis, respectively. These findings suggest that an SD sequence is not obligatory for translation initiation; instead, other signals, such as the RBS spacer, may have an overarching influence on translation of mRNAs. Subsequent analyses of the 5' secondary structure of these mRNAs may provide further insight into the translation initiation mechanism.
Trajectory NG: portable, compressed, general molecular dynamics trajectories.
Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David
2011-10-01
We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
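The essence of the approach fits in a few lines: quantize the coordinates to the precision actually needed, take differences in time, and entropy-code the small residuals. In the sketch below, zlib stands in for the paper's block-sorting plus Huffman stage, purely for illustration:

```python
import numpy as np
import zlib

def compress_frame(coords, prev_q=None, precision=1e-3):
    # Quantize to fixed precision, difference against the previous
    # frame when available, and entropy-code the residuals.
    q = np.round(coords / precision).astype(np.int32)
    resid = q if prev_q is None else q - prev_q
    return zlib.compress(resid.tobytes()), q

def decompress_frame(blob, prev_q=None, precision=1e-3):
    resid = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(-1, 3)
    q = resid if prev_q is None else resid + prev_q
    return q * precision, q
```

Temporal residuals shrink as frames are stored more frequently, which is why the best choice among space, time, or combined differences depends on the storage interval.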
ERIC Educational Resources Information Center
Lombardi, Allison; Seburn, Mary; Conley, David
2011-01-01
In this study, Don't Know/Not Applicable (DK/NA) responses on a measure of academic behaviors associated with college readiness for high school students were treated with: (a) casewise deletion, (b) scale inclusion at the lowest level, and (c) imputation using the E/M algorithm. Significant differences in mean responses according to treatment…
Thai Automatic Speech Recognition
2005-01-01
used in an external DARPA evaluation involving medical scenarios between an American doctor and a naïve monolingual Thai patient. 2. Thai Language... dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language... requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching
When Machines Think: Radiology's Next Frontier.
Dreyer, Keith J; Geis, J Raymond
2017-12-01
Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models enables machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. Showing that these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.
NASA Astrophysics Data System (ADS)
Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio
2012-12-01
We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge-preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie on the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a Python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon
2017-01-01
This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025
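A minimal sketch of constructing such a spectral energy map from a raw AE signal (the band count and STFT settings are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.signal import stft

def spectral_energy_map(x, fs, bands=32, nperseg=1024):
    # Energy of the AE signal in `bands` frequency bands over time;
    # the resulting 2-D map is the CNN input image.
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    P = np.abs(Z) ** 2
    edges = np.linspace(0, len(f), bands + 1).astype(int)
    sem = np.array([P[a:b].sum(axis=0) for a, b in zip(edges[:-1], edges[1:])])
    return sem / (sem.sum() + 1e-12)
```

Under the scale-and-shift hypothesis above, a change of shaft speed warps this map smoothly, which is exactly the kind of variation convolutional features tolerate.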
Clinical algorithms to aid osteoarthritis guideline dissemination.
Meneses, S R F; Goode, A P; Nelson, A E; Lin, J; Jordan, J M; Allen, K D; Bennell, K L; Lohmander, L S; Fernandes, L; Hochberg, M C; Underwood, M; Conaghan, P G; Liu, S; McAlindon, T E; Golightly, Y M; Hunter, D J
2016-09-01
Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment algorithm to facilitate translation of evidence into practice. We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation for quality and the standards for developing trustworthy CPGs as established by the National Academy of Medicine (NAM). Four case scenarios and algorithms were developed by consensus of a multidisciplinary panel. Sixteen guidelines were included in the systematic review. Most recommendations were directed toward physicians and allied health professionals, and most had multi-disciplinary input. Analysis for trustworthiness suggests that many guidelines still present a lack of transparency. A treatment algorithm was developed for each case scenario advised by recommendations from guidelines and based on panel consensus. Strategies to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and timing of different treatment modalities. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Graphics Processors in HEP Low-Level Trigger Systems
NASA Astrophysics Data System (ADS)
Ammendola, Roberto; Biagioni, Andrea; Chiozzi, Stefano; Cotta Ramusino, Angelo; Cretaro, Paolo; Di Lorenzo, Stefano; Fantechi, Riccardo; Fiorini, Massimiliano; Frezza, Ottorino; Lamanna, Gianluca; Lo Cicero, Francesca; Lonardo, Alessandro; Martinelli, Michele; Neri, Ilaria; Paolucci, Pier Stanislao; Pastorelli, Elena; Piandani, Roberto; Pontisso, Luca; Rossetti, Davide; Simula, Francesco; Sozzi, Marco; Vicini, Piero
2016-11-01
Usage of Graphics Processing Units (GPUs) in so-called general-purpose computing is emerging as an effective approach in several fields of science, although so far applications have been employing GPUs typically for offline computations. Taking into account the steady performance increase of GPU architectures in terms of computing power and I/O capacity, the real-time applications of these devices can thrive in high-energy physics data acquisition and trigger systems. We will examine the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on tests performed on the trigger system of the CERN NA62 experiment. To successfully integrate GPUs in such an online environment, the latencies of all components need analysing, networking being the most critical. To keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling GPUDirect connection. Furthermore, it is assessed how specific trigger algorithms can be parallelized and thus benefit from a GPU implementation, in terms of increased execution speed. Such improvements are particularly relevant for the foreseen Large Hadron Collider (LHC) luminosity upgrade where highly selective algorithms will be essential to maintain sustainable trigger rates with very high pileup.
Desiderata for computable representations of electronic health records-driven phenotype algorithms.
Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A
2015-11-01
Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
NASA Astrophysics Data System (ADS)
Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.
2015-12-01
The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface (API) for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.
Is Self-organization a Rational Expectation?
NASA Astrophysics Data System (ADS)
Luediger, Heinz
Over decades and under varying names the study of biology-inspired algorithms applied to non-living systems has been the subject of a small and somewhat exotic research community. Only the recent coincidence of a growing inability to master the design, development and operation of increasingly intertwined systems and processes, and an accelerated trend towards a naïve if not romanticizing view of nature in the sciences, has led to the adoption of biology-inspired algorithmic research by a wider range of sciences. Adaptive systems, as we apparently observe in nature, are meanwhile viewed as a promising way out of the complexity trap and, propelled by a long list of ‘self’ catchwords, complexity research has become an influential stream in the science community. This paper presents four provocative theses that cast doubt on the strategic potential of complexity research and the viability of large scale deployment of biology-inspired algorithms in an expectation driven world.
Piccinelli, Marina; Faber, Tracy L; Arepalli, Chesnal D; Appia, Vikram; Vinten-Johansen, Jakob; Schmarkey, Susan L; Folks, Russell D; Garcia, Ernest V; Yezzi, Anthony
2014-02-01
Accurate alignment between cardiac CT angiographic studies (CTA) and nuclear perfusion images is crucial for improved diagnosis of coronary artery disease. This study evaluated in an animal model the accuracy of a CTA fully automated biventricular segmentation algorithm, a necessary step for automatic and thus efficient PET/CT alignment. Twelve pigs with acute infarcts were imaged using Rb-82 PET and 64-slice CTA. Post-mortem myocardium mass measurements were obtained. Endocardial and epicardial myocardial boundaries were manually and automatically detected on the CTA and both segmentations used to perform PET/CT alignment. To assess the segmentation performance, image-based myocardial masses were compared to experimental data; the hand-traced profiles were used as a reference standard to assess the global and slice-by-slice robustness of the automated algorithm in extracting myocardium, LV, and RV. Mean distances between the automated and the manual 3D segmented surfaces were computed. Finally, differences in rotations and translations between the manual and automatic surfaces were estimated post-PET/CT alignment. The largest, smallest, and median distances between interactive and automatic surfaces averaged 1.2 ± 2.1, 0.2 ± 1.6, and 0.7 ± 1.9 mm. The average angular and translational differences in CT/PET alignments were 0.4°, -0.6°, and -2.3° about x, y, and z axes, and 1.8, -2.1, and 2.0 mm in x, y, and z directions. Our automatic myocardial boundary detection algorithm creates surfaces from CTA that are similar in accuracy and provide similar alignments with PET as those obtained from interactive tracing. Specific difficulties in a reliable segmentation of the apex and base regions will require further improvements in the automated technique.
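For readers wanting to reproduce the distance metrics, a minimal sketch of symmetric surface-distance computation between two point-sampled surfaces might look as follows (the point clouds here are synthetic stand-ins for the segmented CTA surfaces):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(auto_pts, manual_pts):
    """Symmetric nearest-point distances between two (N, 3) surface samplings."""
    d_am = cKDTree(manual_pts).query(auto_pts)[0]    # automatic -> manual
    d_ma = cKDTree(auto_pts).query(manual_pts)[0]    # manual -> automatic
    d = np.concatenate([d_am, d_ma])
    return d.max(), d.min(), np.median(d)            # largest, smallest, median

manual = np.random.rand(5000, 3) * 100               # hand-traced surface (mm)
auto = manual + np.random.normal(0, 1.0, (5000, 3))  # jittered stand-in
print(surface_distances(auto, manual))
```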
On the Use of CAD-Native Predicates and Geometry in Surface Meshing
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.
1999-01-01
Several paradigms for accessing CAD geometry during surface meshing for CFD are discussed. File translation, inconsistent geometry engines and non-native point construction are all identified as sources of non-robustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing. The native approach is demonstrated through an algorithm for the generation of closed manifold surface triangulations from CAD geometry. CAD parts and assemblies are used in their native format, and a part's native geometry engine is accessed through a modeler-independent application programming interface (API). In seeking a robust and fully automated procedure, the algorithm is based on a new physical space manifold triangulation technique specially developed to avoid robustness issues associated with poorly conditioned mappings. In addition, this approach avoids the usual ambiguities associated with floating-point predicate evaluation on constructed coordinate geometry in a mapped space. The technique is incremental, so that each new site improves the triangulation by some well defined quality measure. The algorithm terminates after achieving a prespecified measure of mesh quality and produces a triangulation such that no angle is less than a given angle bound α or greater than π − 2α. This result also sets bounds on the maximum vertex degree, triangle aspect ratio and maximum stretching rate for the triangulation. In addition to the output triangulations for a variety of CAD parts, the discussion presents related theoretical results which assert the existence of such an angle bound, and demonstrate that maximum bounds of between 25° and 30° may be achieved in practice.
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
Lyakh, Dmitry I.
2015-01-05
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
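The gap between a naïve scatter transpose and a memory-aware permutation is easy to demonstrate; the sketch below (NumPy standing in for the paper's C/Fortran TAL-SH routines) contrasts element-by-element scattering with a cache-blocked library permutation.

```python
import time
import numpy as np

def naive_transpose(a, perm):
    """Element-by-element scatter with no memory-access optimization."""
    out = np.empty([a.shape[p] for p in perm], dtype=a.dtype)
    for idx in np.ndindex(*a.shape):
        out[tuple(idx[p] for p in perm)] = a[idx]
    return out

a = np.random.rand(24, 24, 24, 24)     # a rank-4 tensor, as in coupled-cluster
perm = (3, 1, 0, 2)
t0 = time.perf_counter(); b1 = naive_transpose(a, perm)
t1 = time.perf_counter(); b2 = np.ascontiguousarray(np.transpose(a, perm))
t2 = time.perf_counter()
assert np.array_equal(b1, b2)
print(f"naive: {t1 - t0:.2f}s, optimized copy: {t2 - t1:.4f}s")
```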
Pediatric negative appendectomy rate: trend, predictors, and differentials.
Oyetunji, Tolulope A; Ong'uti, Sharon K; Bolorunduro, Oluwaseyi B; Cornwell, Edward E; Nwomeh, Benedict C
2012-03-01
Appendectomy is one of the most commonly performed emergency operations in children. The diagnosis of appendicitis can be quite challenging, particularly in children. We set out to determine the accuracy of diagnosis of appendicitis by analyzing the trends in the negative appendectomy rate (NAR) using a national database. Analysis of the Kids Inpatient Database (KID) was performed for the years 2000, 2003, and 2006 on children with appendectomy, excluding incidental appendectomies. Children (<18 y) without appendicitis but who underwent appendectomies were classified as negative appendectomies (NA), and those with appendicitis as positive appendectomies (PA). Comparisons were made between those with PA versus NA by demographic characteristics. The subset of patients with NA was then further analyzed. An estimated 250,783 appendectomies met the inclusion criteria. The NAR was 6.7%. Length of stay (LOS) was longer in NA versus PA (7 versus 3 d, P < 0.05). The NAR was increased in children under 5 y (21.1%, versus 5.4% among the 5-10 y and 5.9% among the >10 y, P < 0.0001) and in females (9.3% versus 5.1%, P < 0.001). On multivariate analysis, increasing age was associated with lower odds of NA (OR = 0.92, P < 0.001). Females, rural hospitals, and Blacks were significantly more likely to experience NA. Younger age, female gender, Black ethnicity and rural hospitals are independent predictors of NA. These factors can be incorporated into diagnostic algorithms to improve the accuracy of diagnosis of appendicitis in children. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Schweizer, Kenneth S.
2012-04-01
We generalize the microscopic naïve mode coupling and nonlinear Langevin equation theories of the coupled translation-rotation dynamics of dense suspensions of uniaxial colloids to treat the effect of applied stress on shear elasticity, cooperative cage escape, structural relaxation, and dynamic and static yielding. The key concept is a stress-dependent dynamic free energy surface that quantifies the center-of-mass force and torque on a moving colloid. The consequences of variable particle aspect ratio and volume fraction, and the role of plastic versus double glasses, are established in the context of dense, glass-forming suspensions of hard-core dicolloids. For low aspect ratios, the theory provides a microscopic basis for the recently observed phenomenon of double yielding as a consequence of stress-driven sequential unlocking of caging constraints via reduction of the distinct entropic barriers associated with the rotational and translational degrees of freedom. The existence, and breadth in volume fraction, of the double yielding phenomena is predicted to generally depend on both the degree of particle anisotropy and experimental probing frequency, and as a consequence typically occurs only over a window of (high) volume fractions where there is strong decoupling of rotational and translational activated relaxation. At high enough concentrations, a return to single yielding is predicted. For large aspect ratio dicolloids, rotation and translation are always strongly coupled in the activated barrier hopping event, and hence for all stresses only a single yielding process is predicted.
NASA Astrophysics Data System (ADS)
Mi, Yuhe; Huang, Yifan; Li, Lin
2015-08-01
Based on the location technique of beacon photogrammetry, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters in landing on a ship. In this paper, ZEMAX was used to simulate the two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
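A hedged sketch of the core pose-recovery step for one camera is given below: the beacon layout, camera intrinsics, and true pose are invented, and OpenCV's solvePnP stands in for the linear-equation solution described in the paper.

```python
import numpy as np
import cv2

# Beacon positions (metres) in the helicopter (target) frame -- assumed layout.
beacons = np.array([[-2.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                    [-2.0, 1.5, 0.0], [2.0, 1.5, 0.0]])
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])            # assumed pinhole intrinsics

# Simulate the CCD image: project the beacons from a known pose (a stand-in for
# the ZEMAX-rendered image that the paper processes in MATLAB).
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.3, -0.5, 12.0])    # helicopter ~12 m from the camera
pixels, _ = cv2.projectPoints(beacons, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(beacons, pixels, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix, target -> camera
print(ok, tvec.ravel())                    # recovers ~(0.3, -0.5, 12.0)
```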
NASA Astrophysics Data System (ADS)
Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.
2016-12-01
With millions of receivers leading to terabyte-scale data cubes, the story of the giant SKA telescope is also one of collaborative efforts from radioastronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.
Algorithmic psychometrics and the scalable subject.
Stark, Luke
2018-04-01
Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper(TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the CrayT3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
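A simplified sketch of the idea follows: extract high-frequency wavelet-coefficient maxima at a coarse decomposition level and estimate the translation by FFT-based correlation of the feature maps. The circular shift, the periodized wavelet mode, and the single-level matching are simplifications for the demo; the paper works coarse-to-fine through the decomposition.

```python
import numpy as np
import pywt

def wavelet_feature(img, level=3):
    coeffs = pywt.wavedec2(img, 'db4', mode='periodization', level=level)
    cH, cV, _ = coeffs[1]                 # detail subbands at the coarsest level
    feat = np.hypot(cH, cV)
    return feat * (feat > np.percentile(feat, 95))   # keep coefficient maxima

def coarse_shift(ref_feat, moved_feat):
    xc = np.fft.ifft2(np.fft.fft2(moved_feat) * np.conj(np.fft.fft2(ref_feat)))
    dy, dx = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    h, w = xc.shape
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

ref = np.random.rand(512, 512)
moved = np.roll(ref, (40, -24), axis=(0, 1))
dy, dx = coarse_shift(wavelet_feature(ref), wavelet_feature(moved))
print(dy * 2**3, dx * 2**3)               # recovers the (40, -24) translation
```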
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for the 3D handheld profiling system based on the multiple shot structured light technique is proposed. The multi-view registration approach is categorized into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of proposed algorithm was performed with other variants of ICP. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
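As an illustration of the refinement stage only, here is a minimal point-to-point ICP sketch (Kabsch/SVD alignment with nearest-neighbour correspondences); the paper's homography-based coarse registration is assumed to have been applied already, and the point clouds below are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Refine R, t so that src @ R.T + t aligns with dst (both (N, 3))."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        nn = dst[tree.query(moved)[1]]              # closest-point matches
        mu_m, mu_n = moved.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (nn - mu_n))
        Rk = Vt.T @ np.diag([1, 1, np.linalg.det(Vt.T @ U.T)]) @ U.T
        R, t = Rk @ R, Rk @ t + mu_n - Rk @ mu_m    # compose the increment
    moved = src @ R.T + t
    rmse = np.sqrt(((moved - dst[tree.query(moved)[1]]) ** 2).sum(1).mean())
    return R, t, rmse

rng = np.random.default_rng(0)
src = rng.random((2000, 3))
ang = np.radians(8)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0,            0,           1]])
dst = src @ R_true.T + np.array([0.05, -0.02, 0.10])
R, t, rmse = icp(src, dst)
print(np.round(t, 3), f"rmse = {rmse:.1e}")   # ~(0.05, -0.02, 0.10), rmse ~ 0
```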
Fuzzy-Based Hybrid Control Algorithm for the Stabilization of a Tri-Rotor UAV.
Ali, Zain Anwar; Wang, Daobo; Aamir, Muhammad
2016-05-09
In this paper, a novel fuzzy hybrid control scheme is proposed for the stabilization of a tri-rotor unmanned aerial vehicle (UAV). The fuzzy hybrid scheme consists of a fuzzy logic controller and a regulation pole-placement tracking (RST) controller with model reference adaptive control (MRAC), in which the adaptive gains of the RST controller are fine-tuned by the fuzzy logic controller. Brushless direct current (BLDC) motors are installed in the triangular frame of the tri-rotor UAV, which helps maintain control over its motion and over altitude and attitude changes, similar to rotorcraft. An MRAC-based MIT rule is proposed for system stability. Moreover, the proposed hybrid controller is demonstrated on the nonlinear flight dynamics in the presence of translational and rotational velocity components. The performance of the proposed algorithm is demonstrated via MATLAB simulations, in which the proposed fuzzy hybrid controller is compared with the existing adaptive RST controller. It shows that our proposed algorithm has better transient performance with zero steady-state error and fast convergence towards stability.
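A toy scalar illustration of the MIT-rule mechanism (not the authors' tri-rotor controller) is sketched below: a feedforward gain is adapted so a first-order plant tracks a reference model, with a stub standing in for the fuzzy tuning of the adaptation rate.

```python
import numpy as np

dt, T = 0.01, 20.0
a, b = 1.0, 0.5            # "unknown" plant: dy/dt = -a*y + b*u
am, bm = 2.0, 2.0          # reference model: dym/dt = -am*ym + bm*r
gamma = 0.8                # base adaptation rate

def fuzzy_gamma(err):      # stub for the fuzzy tuner: larger error, faster adaptation
    return gamma * (1.0 + min(abs(err), 1.0))

y = ym = theta = 0.0
for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 10 < 5 else -1.0      # square-wave reference
    u = theta * r                               # adaptive feedforward control
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)
    e = y - ym
    theta += dt * (-fuzzy_gamma(e) * e * ym)    # MIT rule: dtheta/dt = -gamma*e*ym
print(f"adapted gain theta = {theta:.2f} (ideal steady-state value: 2.0)")
```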
Spatially variant morphological restoration and skeleton representation.
Bouaynaya, Nidhal; Charif-Chefchaouni, Mohammed; Schonfeld, Dan
2006-11-01
The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
Molecular motions that shape the cardiac action potential: Insights from voltage clamp fluorometry.
Zhu, Wandi; Varga, Zoltan; Silva, Jonathan R
2016-01-01
Very recently, voltage-clamp fluorometry (VCF) protocols have been developed to observe the membrane proteins responsible for carrying the ventricular ionic currents that form the action potential (AP), including those carried by the cardiac Na(+) channel, NaV1.5, the L-type Ca(2+) channel, CaV1.2, the Na(+)/K(+) ATPase, and the rapid and slow components of the delayed rectifier, KV11.1 and KV7.1. This development is significant, because VCF enables simultaneous observation of ionic current kinetics with conformational changes occurring within specific channel domains. The ability gained from VCF, to connect nanoscale molecular movement to ion channel function has revealed how the voltage-sensing domains (VSDs) control ion flux through channel pores, mechanisms of post-translational regulation and the molecular pathology of inherited mutations. In the future, we expect that this data will be of great use for the creation of multi-scale computational AP models that explicitly represent ion channel conformations, connecting molecular, cell and tissue electrophysiology. Here, we review the VCF protocol, recent results, and discuss potential future developments, including potential use of these experimental findings to create novel computational models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Passini, Elisa; Britton, Oliver J; Lu, Hua Rong; Rohrbacher, Jutta; Hermans, An N; Gallacher, David J; Greig, Robert J H; Bueno-Orovio, Alfonso; Rodriguez, Blanca
2017-01-01
Early prediction of cardiotoxicity is critical for drug development. Current animal models raise ethical and translational questions, and have limited accuracy in clinical risk prediction. Human-based computer models constitute a fast, cheap and potentially effective alternative to experimental assays, also facilitating translation to human. Key challenges include consideration of inter-cellular variability in drug responses and integration of computational and experimental methods in safety pharmacology. Our aim is to evaluate the ability of in silico drug trials in populations of human action potential (AP) models to predict clinical risk of drug-induced arrhythmias based on ion channel information, and to compare simulation results against experimental assays commonly used for drug testing. A control population of 1,213 human ventricular AP models in agreement with experimental recordings was constructed. In silico drug trials were performed for 62 reference compounds at multiple concentrations, using pore-block drug models (IC50/Hill coefficient). Drug-induced changes in AP biomarkers were quantified, together with occurrence of repolarization/depolarization abnormalities. Simulation results were used to predict clinical risk based on reports of Torsade de Pointes arrhythmias, and further evaluated in a subset of compounds through comparison with electrocardiograms from rabbit wedge preparations and Ca2+-transient recordings in human induced pluripotent stem cell-derived cardiomyocytes (hiPS-CMs). Drug-induced changes in silico vary in magnitude depending on the specific ionic profile of each model in the population, thus allowing to identify cell sub-populations at higher risk of developing abnormal AP phenotypes. Models with low repolarization reserve (increased Ca2+/late Na+ currents and Na+/Ca2+-exchanger, reduced Na+/K+-pump) are highly vulnerable to drug-induced repolarization abnormalities, while those with reduced inward current density (fast/late Na+ and Ca2+ currents) exhibit high susceptibility to depolarization abnormalities. Repolarization abnormalities in silico predict clinical risk for all compounds with 89% accuracy. Drug-induced changes in biomarkers are in overall agreement across different assays: in silico AP duration changes reflect the ones observed in rabbit QT interval and hiPS-CMs Ca2+-transient, and simulated upstroke velocity captures variations in rabbit QRS complex. Our results demonstrate that human in silico drug trials constitute a powerful methodology for prediction of clinical pro-arrhythmic cardiotoxicity, ready for integration in the existing drug safety assessment pipelines.
Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora
2001-01-01
...monolingual dictionary-derived list of canonical roots would resolve ambiguity regarding which is the appropriate target. ... Many of the errors are ... system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity ... corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing ...
Axial field shaping under high-numerical-aperture focusing
NASA Astrophysics Data System (ADS)
Jabbour, Toufic G.; Kuebler, Stephen M.
2007-03-01
Kant reported [J. Mod. Opt. 47, 905 (2000)] a formulation for solving the inverse problem of vector diffraction, which accurately models high-NA focusing. Here, Kant's formulation is adapted to the method of generalized projections to obtain an algorithm for designing diffractive optical elements (DOEs) that reshape the axial point-spread function (PSF). The algorithm is applied to design a binary phase-only DOE that superresolves the axial PSF with controlled increase in axial sidelobes. An 11-zone DOE is identified that axially narrows the PSF central lobe by 29% while maintaining the sidelobe intensity at or below 52% of the peak intensity. This DOE could improve the resolution achievable in several applications without significantly complicating the optical system.
Using SPARK as a Solver for Modelica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Wetter, Michael; Haves, Philip
Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.
Smartphone-based grading of apple quality
NASA Astrophysics Data System (ADS)
Li, Xianglin; Li, Ting
2018-02-01
Apple quality grading is a critical issue in the apple industry, which is an economic pillar of many countries. Manual grading is inefficient and inaccurate. Here we propose a portable, convenient, real-time, and low-cost method for grading apples. Color images of the apples were collected with a smartphone, and the grade of each sampled apple was assessed by a customized smartphone app, which translates the RGB color values of the apple into a color grade and the edge of the apple image into a weight grade. The algorithms are based on models built from a large number of apple images at different grades. The apple grades evaluated by the smartphone are in accordance with the actual data. This study demonstrated the potential of smartphones in apple quality grading and online monitoring at the gathering and transportation stages of the apple industry.
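A hedged sketch of the two grading rules follows; the thresholds and the area-to-weight regression are invented here, whereas the paper fits its models to a large set of graded apple images.

```python
import numpy as np

def color_grade(rgb_pixels):                  # (N, 3) pixels inside the fruit mask
    r, g, _ = rgb_pixels.mean(axis=0)
    red_ratio = r / (r + g + 1e-9)
    return "A" if red_ratio > 0.60 else ("B" if red_ratio > 0.52 else "C")

def weight_grade(mask, mm_per_px):            # boolean segmentation mask
    area_mm2 = mask.sum() * mm_per_px ** 2    # projected area from the fruit edge
    est_weight_g = 0.025 * area_mm2           # invented area-to-weight regression
    return "L" if est_weight_g > 220 else ("M" if est_weight_g > 150 else "S")

pixels = np.random.randint(0, 256, (10_000, 3)).astype(float)
mask = np.ones((300, 320), dtype=bool)        # stand-in for a segmented apple
print(color_grade(pixels), weight_grade(mask, mm_per_px=0.3))
```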
Leveraging health social networking communities in translational research.
Webster, Yue W; Dow, Ernst R; Koehler, Jacob; Gudivada, Ranga C; Palakal, Mathew J
2011-08-01
Health social networking communities are emerging resources for translational research. We have designed and implemented a framework called HyGen, which combines Semantic Web technologies, graph algorithms and user profiling to discover and prioritize novel associations across disciplines. This manuscript focuses on the key strategies developed to overcome the challenges in handling patient-generated content in Health social networking communities. Heuristic and quantitative evaluations were carried out in colorectal cancer. The results demonstrate the potential of our approach to bridge silos and to identify hidden links among clinical observations, drugs, genes and diseases. In Amyotrophic Lateral Sclerosis case studies, HyGen has identified 15 of the 20 published disease genes. Additionally, HyGen has highlighted new candidates for future investigations, as well as a scientifically meaningful connection between riluzole and alcohol abuse. Copyright © 2011 Elsevier Inc. All rights reserved.
Automatic Dictionary Expansion Using Non-parallel Corpora
NASA Astrophysics Data System (ADS)
Rapp, Reinhard; Zock, Michael
Automatically generating bilingual dictionaries from parallel, manually translated texts is a well established technique that works well in practice. However, parallel texts are a scarce resource. Therefore, it is desirable also to be able to generate dictionaries from pairs of comparable monolingual corpora. For most languages, such corpora are much easier to acquire, and often in considerably larger quantities. In this paper we present the implementation of an algorithm which exploits such corpora with good success. Based on the assumption that the co-occurrence patterns between different languages are related, it expands a small base lexicon. For improved performance, it also realizes a novel interlingua approach. That is, if corpora of more than two languages are available, the translations from one language to another can be determined not only directly, but also indirectly via a pivot language.
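The co-occurrence assumption can be shown in a few lines. In the sketch below (toy corpora and a three-word seed lexicon, not the paper's data), a source word's co-occurrence vector is mapped into the target space through the seed dictionary and candidate translations are ranked by cosine similarity:

```python
import numpy as np

base_lexicon = {"dog": "Hund", "eats": "frisst", "meat": "Fleisch"}
src_vocab = ["dog", "eats", "meat"]           # dimensions of the source space
tgt_vocab = ["Hund", "frisst", "Fleisch", "Knochen"]

def cooc_vector(word, corpus, vocab, win=2):
    v = np.zeros(len(vocab))
    for sent in corpus:
        toks = sent.split()
        for i, w in enumerate(toks):
            if w != word:
                continue
            for j in range(max(0, i - win), min(len(toks), i + win + 1)):
                if j != i and toks[j] in vocab:
                    v[vocab.index(toks[j])] += 1
    return v

src_corpus = ["the dog eats meat", "a dog eats"]
tgt_corpus = ["der Hund frisst Fleisch", "der Hund frisst Knochen"]

# Map source dimensions into the target space via the seed dictionary.
proj = [tgt_vocab.index(base_lexicon[w]) for w in src_vocab]
v_src = cooc_vector("dog", src_corpus, src_vocab)
v_map = np.zeros(len(tgt_vocab)); v_map[proj] = v_src

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

scores = {w: cos(v_map, cooc_vector(w, tgt_corpus, tgt_vocab)) for w in tgt_vocab}
print(max(scores, key=scores.get))            # expected: "Hund"
```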
Drug design and discovery: translational biomedical science varies among countries.
Weaver, Ian N; Weaver, Donald F
2013-10-01
Drug design and discovery is an innovation process that translates the outcomes of fundamental biomedical research into therapeutics that are ultimately made available to people with medical disorders in many countries throughout the world. To identify which nations succeed, exceed, or fail at the drug design/discovery endeavor--more specifically, which countries, within the context of their national size and wealth, are "pulling their weight" when it comes to developing medications targeting the myriad of diseases that afflict humankind--we compiled and analyzed a comprehensive survey of all new drugs (small molecular entities and biologics) approved annually throughout the world over the 20-year period from 1991 to 2010. Based upon this analysis, we have devised prediction algorithms to ascertain which countries are successful (or not) in contributing to the worldwide need for effective new therapeutics. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Tornga, Shawn R.
The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize, radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities; coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position sensitive NaI bars each 24x2.5x3 in3 called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as a Global Positioning System (GPS) and Inertial Navigation System (INS) must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as localization capability. Utilizing imaging information will show signal-to-noise gains over spectroscopic algorithms alone.
Ottaviani, Ana Carolina; Orlandi, Fabiana de Souza
2016-01-01
Losses can be conceptualized as cognitive and affective responses to individual sorrows, characterized by brooding, yearning, disbelief and stunned feelings, and are clinically significant in chronic diseases. The aim of the study was to translate, culturally adapt and validate the Kidney Disease Loss Scale into Portuguese. This validation study involved the steps recommended in the international literature for healthcare instruments: initial translation, synthesis of translations, back translation, review by a committee of judges, pretest, and evaluation of psychometric properties. The scale was translated and adapted to the Portuguese language, and is quick and easy to apply. The reliability and reproducibility showed satisfactory values. Factor analysis indicated a single factor that explains 59.7% of the losses construct. The Kidney Disease Loss Scale was translated, adapted and validated for the Brazilian context, enabling future studies of losses and providing a tool for professionals working in dialysis centers in the care of people with chronic kidney disease.
Semantics of directly manipulating spatializations.
Hu, Xinran; Bradel, Lauren; Maiti, Dipayan; House, Leanna; North, Chris; Leman, Scotland
2013-12-01
When high-dimensional data is visualized in a 2D plane by using parametric projection algorithms, users may wish to manipulate the layout of the data points to better reflect their domain knowledge or to explore alternative structures. However, few users are well-versed in the algorithms behind the visualizations, making parameter tweaking more of a guessing game than a series of decisive interactions. Translating user interactions into algorithmic input is a key component of Visual to Parametric Interaction (V2PI) [13]. Instead of adjusting parameters, users directly move data points on the screen, which then updates the underlying statistical model. However, we have found that some data points that are not moved by the user are just as important in the interactions as the data points that are moved. Users frequently move some data points with respect to some other 'unmoved' data points that they consider as spatially contextual. However, in current V2PI interactions, these points are not explicitly identified when directly manipulating the moved points. We design a richer set of interactions that makes this context more explicit, and a new algorithm and sophisticated weighting scheme that incorporates the importance of these unmoved data points into V2PI.
NASA Astrophysics Data System (ADS)
Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael
2011-02-01
Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
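One standard way to quantify "correlation of phase dynamics" between two channels is the phase-locking value computed from the analytic signal; the following is a minimal sketch (the paper's event-signature training and channel-pair ranking stages are omitted, and the EEG traces are synthetic):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())    # 1 = locked phases, ~0 = unrelated

fs = 256
t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)                        # shared 10 Hz rhythm
ch1 = alpha + 0.5 * np.random.randn(t.size)
ch2 = np.roll(alpha, 5) + 0.5 * np.random.randn(t.size)   # lagged copy
ch3 = np.random.randn(t.size)                             # unrelated channel
print(phase_locking_value(ch1, ch2), phase_locking_value(ch1, ch3))
```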
Guan, Hao; Liu, Tao; Jiang, Jiyang; Tao, Dacheng; Zhang, Jicong; Niu, Haijun; Zhu, Wanlin; Wang, Yilong; Cheng, Jian; Kochan, Nicole A.; Brodaty, Henry; Sachdev, Perminder; Wen, Wei
2017-01-01
Amnestic MCI (aMCI) and non-amnestic MCI (naMCI) are considered to differ in etiology and outcome. Accurately classifying MCI into meaningful subtypes would enable early intervention with targeted treatment. In this study, we employed structural magnetic resonance imaging (MRI) for MCI subtype classification. This was carried out in a sample of 184 community-dwelling individuals (aged 73–85 years). Cortical surface based measurements were computed from longitudinal and cross-sectional scans. By introducing a feature selection algorithm, we identified a set of discriminative features, and further investigated the temporal patterns of these features. A voting classifier was trained and evaluated via 10 iterations of cross-validation. The best classification accuracies achieved were: 77% (naMCI vs. aMCI), 81% (aMCI vs. cognitively normal (CN)) and 70% (naMCI vs. CN). The best results for differentiating aMCI from naMCI were achieved with baseline features. Hippocampus, amygdala and frontal pole were found to be most discriminative for classifying MCI subtypes. Additionally, we observed the dynamics of classification of several MRI biomarkers. Learning the dynamics of atrophy may aid in the development of better biomarkers, as it may track the progression of cognitive impairment. PMID:29085292
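The pipeline shape (feature selection feeding a voting classifier, evaluated by repeated cross-validation) can be sketched as below; the features and labels are random placeholders, and scikit-learn's SelectKBest stands in for the study's custom feature-selection algorithm.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(184, 120))        # 184 subjects x 120 surface features
y = rng.integers(0, 2, 184)            # aMCI vs naMCI labels (placeholder)

clf = make_pipeline(
    SelectKBest(f_classif, k=20),      # keep the 20 most discriminative features
    VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                      ("svm", SVC(probability=True))], voting="soft"),
)
print(cross_val_score(clf, X, y, cv=10).mean())   # ~0.5 on random labels
```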
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
2017-03-02
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
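A hedged sketch of the classifier's core follows: per-isotope kernel-density likelihoods over extracted peak features, combined with priors into a posterior. The training features below are synthetic placeholders for the MCNP6-simulated spectra, and the feature choice and bandwidth are assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
train = {   # per-isotope simulated peak features: (energy in keV, log peak area)
    "Cs-137": rng.normal([662.0, 5.0], [3.0, 0.4], size=(500, 2)),
    "Co-60":  rng.normal([1332.0, 4.5], [5.0, 0.4], size=(500, 2)),
}
kdes = {iso: KernelDensity(bandwidth=2.0).fit(X) for iso, X in train.items()}
prior = {iso: 1.0 / len(train) for iso in train}

def posterior(features):
    logp = {iso: kde.score(features) + np.log(prior[iso])
            for iso, kde in kdes.items()}
    m = max(logp.values())                      # normalize in log space
    w = {iso: np.exp(v - m) for iso, v in logp.items()}
    z = sum(w.values())
    return {iso: v / z for iso, v in w.items()}

print(posterior(np.array([[660.5, 5.2]])))     # should strongly favour Cs-137
```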
Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-12-01
The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
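The per-event math kernel is small, which is what makes it a good GPU candidate; as an illustration, an algebraic least-squares (Kasa-style) single-ring fit can be written as follows. This is a sketch of the general technique, not the NA62 trigger code, which works on raw RICH hits and handles multi-ring events.

```python
import numpy as np

def fit_ring(x, y):
    """Kasa circle fit: x^2 + y^2 = 2a*x + 2b*y + (r^2 - a^2 - b^2)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    c, d, e = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = c / 2, d / 2
    return cx, cy, np.sqrt(e + cx**2 + cy**2)

theta = np.random.rand(20) * 2 * np.pi         # 20 Cherenkov photon hits
x = 3.0 + 11.0 * np.cos(theta) + np.random.normal(0, 0.1, 20)
y = -1.0 + 11.0 * np.sin(theta) + np.random.normal(0, 0.1, 20)
print(fit_ring(x, y))                          # ~ (3, -1, 11)
```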
Mora, Juan David Sandino; Hurtado, Darío Amaya; Sandoval, Olga Lucía Ramos
2016-01-01
Background: Reported cases of uncontrolled use of pesticides and the effects produced by direct or indirect exposure represent a high risk for human health. Therefore, this paper presents the results of the development and execution of an algorithm that predicts the possible effects on the endocrine system of Fisher 344 (F344) rats caused by ingestion of malathion. Methods: We used the ToxRefDB database, in which different case studies of F344 rats exposed to malathion are collected. The experimental data were processed using a Naïve Bayes (NB) machine learning classifier, which was subsequently optimized using genetic algorithms (GAs). The model was executed in an application with a graphical user interface programmed in C#. Results: There was a tendency to suffer larger alterations, with increasing levels in the parathyroid gland at dosages between 4 and 5 mg/kg/day, in contrast to the thyroid gland at doses between 739 and 868 mg/kg/day. Females showed a greater resistance to effects on the endocrine system from the ingestion of malathion, but were more susceptible to alterations in the pituitary gland at exposure times between 3 and 6 months. Conclusions: The prediction model based on NB classifiers allowed analysis of all possible combinations of the studied variables, and its accuracy was improved using GAs. Excepting the pituitary gland, females demonstrated better resistance to effects of increasing levels on the rest of the endocrine system glands. PMID:27833725
Transfer Learning in Integrated Cognitive Systems
2010-09-01
Development of robots and application to industrial processes
NASA Technical Reports Server (NTRS)
Palm, W. J.; Liscano, R.
1984-01-01
An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.
Low Frequency Flats for Imaging Cameras on the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Kossakowski, Diana; Avila, Roberto J.; Borncamp, David; Grogin, Norman A.
2017-01-01
We created a revamped Low Frequency Flat (L-Flat) algorithm for the Hubble Space Telescope (HST) and all of its imaging cameras. The current program that makes these calibration files does not compile on modern computer systems and it requires translation to Python. We took the opportunity to explore various methods that reduce the scatter of photometric observations using chi-squared optimizers along with Markov Chain Monte Carlo (MCMC). We created simulations to validate the algorithms and then worked with the UV photometry of the globular cluster NGC6681 to update the calibration files for the Advanced Camera for Surveys (ACS) and Solar Blind Channel (SBC). The new software was made for general usage and therefore can be applied to any of the current imaging cameras on HST.
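The underlying fit can be sketched as a small optimization problem: one correction per detector zone chosen to minimize the scatter of each star's repeated photometry. The sketch below uses synthetic data and a plain least-squares solver; it is an illustration of the idea under stated assumptions, where the new software would use chi-squared optimizers or MCMC for posteriors.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
n_zones, n_stars, n_obs = 8, 60, 6
true_flat = rng.normal(0, 0.05, n_zones); true_flat -= true_flat.mean()
star_mag = rng.uniform(18, 22, n_stars)
zone = rng.integers(0, n_zones, (n_stars, n_obs))   # zone hit by each exposure
obs = star_mag[:, None] + true_flat[zone] + rng.normal(0, 0.01, zone.shape)

def residuals(corr):
    corrected = obs - corr[zone]
    # Deviation of each observation from that star's mean corrected magnitude.
    return (corrected - corrected.mean(axis=1, keepdims=True)).ravel()

fit = least_squares(residuals, np.zeros(n_zones))
est = fit.x - fit.x.mean()                  # the zeropoint is degenerate; pin mean
print(np.round(est - true_flat, 3))         # residual errors ~ the 0.01 mag noise
```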
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic to explore ground state configurations of glassy spin-systems.[3]
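For readers unfamiliar with EO, a minimal τ-EO sketch on a toy ±J Ising model is shown below (the study's actual model has constrained bimodal three-spin couplings on a planar lattice): the k-th worst spin is flipped unconditionally, with rank k drawn from a power law.

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau = 64, 1.4
J = np.triu(rng.choice([-1.0, 1.0], (n, n)), 1); J += J.T   # toy +/-J couplings
s = rng.choice([-1.0, 1.0], n)

def energy(s):
    return -0.5 * s @ J @ s

best_e, best_s = energy(s), s.copy()
for _ in range(20_000):
    lam = s * (J @ s)                       # local fitness of each spin
    order = np.argsort(lam)                 # index 0 = worst ("unhappiest") spin
    u = 1.0 - rng.random()                  # u in (0, 1]
    k = min(n, int(u ** (1 / (1 - tau))))   # P(rank k) ~ k^(-tau)
    s[order[k - 1]] *= -1                   # flip unconditionally (no Metropolis)
    e = energy(s)
    if e < best_e:
        best_e, best_s = e, s.copy()
print(best_e / n)                           # best energy per spin found
```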
Particle Filtering with Region-based Matching for Tracking of Partially Occluded and Scaled Targets*
Nakhmani, Arie; Tannenbaum, Allen
2012-01-01
Visual tracking of arbitrary targets in clutter is important for a wide range of military and civilian applications. We propose a general framework for the tracking of scaled and partially occluded targets, which do not necessarily have prominent features. The algorithm proposed in the present paper utilizes a modified normalized cross-correlation as the likelihood for a particle filter. The algorithm divides the template, selected by the user in the first video frame, into numerous patches. The matching process of these patches by particle filtering allows one to handle the target’s occlusions and scaling. Experimental results with fixed rectangular templates show that the method is reliable for videos with nonstationary, noisy, and cluttered background, and provides accurate trajectories in cases of target translation, scaling, and occlusion. PMID:22506088
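Its essentials can be sketched in a few lines: particles carry candidate positions, and each particle is weighted by a normalized cross-correlation score of the template at that position. The patch subdivision and scale handling of the paper are omitted, and the frame below is a smooth synthetic image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(1)
frame = gaussian_filter(rng.random((240, 320)), 3)   # smooth synthetic frame
tmpl = frame[100:120, 150:170].copy()                # template from "frame 1"

N = 300
particles = np.array([100.0, 150.0]) + rng.normal(0, 8, (N, 2))  # prior: last pose
for _ in range(5):
    particles = np.clip(particles + rng.normal(0, 2, (N, 2)), 0, [219, 299])
    ij = particles.astype(int)
    # NCC likelihood, sharpened so good matches dominate the resampling.
    w = np.array([max(ncc(frame[y:y+20, x:x+20], tmpl), 1e-6) ** 20 for y, x in ij])
    particles = particles[rng.choice(N, N, p=w / w.sum())]       # resample
print(particles.mean(axis=0))   # close to (100, 150), the target's top-left
```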
Making extreme computations possible with virtual machines
NASA Astrophysics Data System (ADS)
Reuter, J.; Chokoufe Nejad, B.; Ohl, T.
2016-10-01
State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to bytecode instructions, which even reduce the size by one order of magnitude. The bytecode is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
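A toy stack-machine sketch conveys the mechanism (the real O'Mega bytecode encodes wavefunction fusions, not plain arithmetic): instructions are interpreted at run time, trading a small dispatch overhead for a large reduction in code size.

```python
# Minimal stack-based bytecode interpreter: each instruction is (opcode, arg).
LOAD, ADD, MUL = range(3)

def run(bytecode, consts):
    stack = []
    for op, arg in bytecode:
        if op == LOAD:
            stack.append(consts[arg])          # push a constant by index
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == MUL:
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Encode (c0 + c1) * c2 -- e.g. two partial amplitudes summed, times a coupling.
prog = [(LOAD, 0), (LOAD, 1), (ADD, None), (LOAD, 2), (MUL, None)]
print(run(prog, consts=[1.5 + 0.5j, 0.25 - 1.0j, 2.0]))   # complex, like amplitudes
```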
Bernstam, Elmer V.; Hersh, William R.; Johnson, Stephen B.; Chute, Christopher G.; Nguyen, Hien; Sim, Ida; Nahm, Meredith; Weiner, Mark; Miller, Perry; DiLaura, Robert P.; Overcash, Marc; Lehmann, Harold P.; Eichmann, David; Athey, Brian D.; Scheuermann, Richard H.; Anderson, Nick; Starren, Justin B.; Harris, Paul A.; Smith, Jack W.; Barbour, Ed; Silverstein, Jonathan C.; Krusch, David A.; Nagarajan, Rakesh; Becich, Michael J.
2010-01-01
Clinical and translational research increasingly requires computation. Projects may involve multiple computationally-oriented groups including information technology (IT) professionals, computer scientists and biomedical informaticians. However, many biomedical researchers are not aware of the distinctions among these complementary groups, leading to confusion, delays and sub-optimal results. Although written from the perspective of clinical and translational science award (CTSA) programs within academic medical centers, the paper addresses issues that extend beyond clinical and translational research. The authors describe the complementary but distinct roles of operational IT, research IT, computer science and biomedical informatics using a clinical data warehouse as a running example. In general, IT professionals focus on technology. The authors distinguish between two types of IT groups within academic medical centers: central or administrative IT (supporting the administrative computing needs of large organizations) and research IT (supporting the computing needs of researchers). Computer scientists focus on general issues of computation such as designing faster computers or more efficient algorithms, rather than specific applications. In contrast, informaticians are concerned with data, information and knowledge. Biomedical informaticians draw on a variety of tools, including but not limited to computers, to solve information problems in health care and biomedicine. The paper concludes with recommendations regarding administrative structures that can help to maximize the benefit of computation to biomedical research within academic health centers. PMID:19550198
Mortimer, Duncan; Segal, Leonie
2008-01-01
Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.
Optical Coherence Tomography (OCT) Device Independent Intraretinal Layer Segmentation
Ehnes, Alexander; Wenner, Yaroslava; Friedburg, Christoph; Preising, Markus N.; Bowl, Wadim; Sekundo, Walter; zu Bexten, Erdmuthe Meyer; Stieger, Knut; Lorenz, Birgit
2014-01-01
Purpose: To develop and test an algorithm to segment intraretinal layers irrespective of the actual Optical Coherence Tomography (OCT) device used. Methods: The developed algorithm is based on graph-theory optimization. The algorithm's performance was evaluated against that of three expert graders for unsigned boundary-position difference and thickness measurement of a retinal layer group in 50 and 41 B-scans, respectively. Reproducibility of the algorithm was tested in 30 C-scans of 10 healthy subjects each with the Spectralis and the Stratus OCT. Comparability between different devices was evaluated in 84 C-scans (volume or radial scans) obtained from 21 healthy subjects, two scans per subject with the Spectralis OCT, and one scan per subject each with the Stratus OCT and the RTVue-100 OCT. Each C-scan was segmented and the mean thickness for each retinal layer in sections of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid was measured. Results: The algorithm was able to segment up to 11 intraretinal layers. Measurements with the algorithm were within the 95% confidence interval of a single grader, and the difference was smaller than the interindividual difference between the expert graders themselves. The cross-device examination of ETDRS-grid-related layer thicknesses agreed highly between the three OCT devices. The algorithm correctly segmented a C-scan of a patient with X-linked retinitis pigmentosa. Conclusions: The segmentation software provides device-independent, reliable, and reproducible analysis of intraretinal layers, similar to what is obtained from expert graders. Translational Relevance: Potential applications of the software include routine clinical practice and multicenter clinical trials. PMID:24820053
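The flavor of graph-based boundary detection can be conveyed with a small dynamic-programming stand-in for the paper's full graph optimization: find the minimum-cost left-to-right path through a B-scan, where cost is low on strong vertical gradients and the path may shift at most one row per column. The synthetic B-scan and all constants are illustrative.

```python
# DP stand-in for graph-based intraretinal layer segmentation.
import numpy as np

rng = np.random.default_rng(0)
H, W = 80, 120
bscan = rng.normal(0, 0.05, (H, W))
bscan[45:, :] += 1.0                      # one synthetic layer boundary at row 45

grad = np.abs(np.diff(bscan, axis=0))     # vertical gradient, shape (H-1, W)
cost = 1.0 / (grad + 1e-3)                # cheap where the gradient is strong

# accumulate column by column; path may move at most one row per column
acc = cost.copy()
for j in range(1, W):
    for i in range(H - 1):
        lo, hi = max(0, i - 1), min(H - 1, i + 2)
        acc[i, j] += acc[lo:hi, j - 1].min()

# backtrack the cheapest path from the last column
boundary = [int(np.argmin(acc[:, -1]))]
for j in range(W - 2, -1, -1):
    i = boundary[-1]
    lo = max(0, i - 1)
    boundary.append(lo + int(np.argmin(acc[lo:min(H - 1, i + 2), j])))
boundary.reverse()
print("mean detected boundary row:", np.mean(boundary))   # ~ 44-45
```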
2014-01-01
Background: mRNA translation involves the simultaneous movement of multiple ribosomes on the mRNA and is also subject to regulatory mechanisms at different stages. Translation can be described by various codon-based models, including ODE, TASEP, and Petri net models. Although such models have been used extensively, the overlap and differences between them, and the implications of each model's assumptions, have not been systematically elucidated. The selection of the most appropriate modelling framework, and the most appropriate way to develop coarse-grained/fine-grained models in different contexts, is not clear. Results: We systematically analyze and compare how different modelling methodologies can be used to describe translation. We define various statistically equivalent codon-based simulation algorithms and analyze the importance of the update rule in determining the steady state, an aspect often neglected. Then a novel probabilistic Boolean network (PBN) model is proposed for modelling translation, which enjoys an exact numerical solution. This solution matches those of numerical simulation from other methods and acts as a complementary tool to analytical approximations and simulations. The advantages and limitations of various codon-based models are compared and illustrated by examples with real biological complexities such as slow codons, premature termination, and feedback regulation. Our studies reveal that while different models give broadly similar trends in many cases, important differences also arise and can be clearly seen in the dependence of the translation rate on different parameters. Furthermore, the update rule affects the steady-state solution. Conclusions: The codon-based models are based on different levels of abstraction. Our analysis suggests that a multiple-model approach to understanding translation allows one to ascertain which aspects of the conclusions are robust with respect to the choice of modelling methodology, and when (and why) important differences may arise. This approach also allows for an optimal use of analysis tools, which is especially important when additional complexities or regulatory mechanisms are included. This approach can provide a robust platform for dissecting translation, and results in an improved predictive framework for applications in systems and synthetic biology. PMID:24576337
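As a concrete example of one of the codon-based frameworks discussed, here is a minimal TASEP-style simulation with a random (Gillespie-type) update: ribosomes, here occupying a single codon for simplicity, hop forward when the next codon is free, and a single slow codon throttles the protein flux. All rates and the lattice length are illustrative.

```python
# Minimal TASEP sketch of ribosome traffic on an mRNA (footprint = 1 codon).
import numpy as np

rng = np.random.default_rng(0)
L = 60                                  # codons
rates = np.ones(L); rates[30] = 0.1     # one slow codon in the middle
init_rate, completed, t, t_end = 0.5, 0, 0.0, 2000.0
occ = np.zeros(L, dtype=bool)           # codon occupancy

while t < t_end:
    # enumerate enabled moves: initiation, elongation hops, termination
    moves, props = [], []
    if not occ[0]:
        moves.append(("init", 0)); props.append(init_rate)
    for i in np.flatnonzero(occ):
        if i == L - 1:
            moves.append(("term", i)); props.append(rates[i])
        elif not occ[i + 1]:
            moves.append(("hop", i)); props.append(rates[i])
    total = sum(props)
    t += rng.exponential(1.0 / total)                       # Gillespie time step
    kind, i = moves[rng.choice(len(moves), p=np.array(props) / total)]
    if kind == "init":
        occ[0] = True
    elif kind == "hop":
        occ[i], occ[i + 1] = False, True
    else:
        occ[i] = False; completed += 1                      # one protein made

print("protein production rate ~", completed / t_end)
```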
Model-based Bayesian signal extraction algorithm for peripheral nerves
NASA Astrophysics Data System (ADS)
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source-localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.
Kariuki, Jacob K; Gona, Philimon; Leveille, Suzanne G; Stuart-Shor, Eileen M; Hayman, Laura L; Cromwell, Jerry
2018-06-01
The non-lab Framingham algorithm, which substitutes body mass index for lipids in the laboratory-based (lab-based) Framingham algorithm, has been validated among African Americans (AAs). However, its cost-effectiveness and economic tradeoffs have not been evaluated. This study examines the incremental cost-effectiveness ratio (ICER) of two cardiovascular disease (CVD) prevention programs guided by the non-lab versus the lab-based Framingham algorithm. We simulated the World Health Organization CVD prevention guidelines on a cohort of 2690 AA participants in the Atherosclerosis Risk in Communities (ARIC) cohort. Costs were estimated using Medicare fee schedules (diagnostic tests, drugs, and visits), Bureau of Labor Statistics data (RN wages), and estimates for managing incident CVD events. Outcomes were assumed to be true-positive cases detected at a data-driven treatment threshold. Both algorithms had the best balance of sensitivity/specificity at the moderate-risk threshold (>10% risk). Over 12 years, 82% and 77% of 401 incident CVD events were accurately predicted via the non-lab and lab-based Framingham algorithms, respectively. There were 20 fewer false-negative cases in the non-lab approach, translating into over $900,000 in savings over 12 years. The ICER was -$57,153 for every extra CVD event prevented when using the non-lab algorithm. The approach guided by the non-lab Framingham strategy dominated the lab-based approach with respect to both costs and predictive ability. Consequently, the non-lab Framingham algorithm could potentially provide a highly effective screening tool at lower cost to address the high burden of CVD, especially among AAs and in resource-constrained settings where lab tests are unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.
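The ICER arithmetic underlying the headline figure is simple enough to state directly; the sketch below uses placeholder numbers, not the ARIC-derived values. A negative ICER with higher effectiveness, as reported above, indicates that the non-lab strategy dominates (cheaper and more effective).

```python
# ICER = incremental cost / incremental effectiveness.
# Figures below are placeholders, not values from the study.
def icer(cost_new, cost_old, events_prevented_new, events_prevented_old):
    return (cost_new - cost_old) / (events_prevented_new - events_prevented_old)

# hypothetical: the new strategy is cheaper and prevents more events,
# so the ICER comes out negative (dominance)
print(icer(cost_new=1_000_000, cost_old=1_900_000,
           events_prevented_new=330, events_prevented_old=310))  # -45000.0
```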
NASA Astrophysics Data System (ADS)
Langowski, M. P.; von Savigny, C.; Burrows, J. P.; Rozanov, V. V.; Dunker, T.; Hoppe, U.-P.; Sinnhuber, M.; Aikin, A. C.
2016-01-01
An algorithm has been developed for the retrieval of sodium atom (Na) number density on a latitude and altitude grid from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) limb measurements of the Na resonance fluorescence. The results are obtained between 50 and 150 km altitude and the resulting global seasonal variations of Na are analyzed. The retrieval approach is adapted from that used for the retrieval of magnesium atom (Mg) and magnesium ion (Mg+) number density profiles recently reported by Langowski et al. (2014). Monthly mean values of Na are presented as a function of altitude and latitude. This data set was retrieved from the 4 years of spectroscopic limb data of the SCIAMACHY mesosphere and lower thermosphere (MLT) measurement mode (mid-2008 to early 2012). The Na layer has a nearly constant peak altitude of 90-93 km for all latitudes and seasons, and has a full width at half maximum of 5-15 km. Small but significant seasonal variations in Na are identified for latitudes less than 40°, where the maximum Na number densities are 3000-4000 atoms cm-3. At middle to high latitudes a clear seasonal variation with a winter maximum of up to 6000 atoms cm-3 is observed. The high latitudes, which are only measured in the summer hemisphere, have lower number densities, with peak densities being approximately 1000 Na atoms cm-3. The full width at half maximum of the peak varies strongly at high latitudes and is 5 km near the polar summer mesopause, while it exceeds 10 km at lower latitudes. In summer the Na atom concentration at high latitudes and at altitudes below 88 km is significantly smaller than that at middle latitudes. The results are compared with other observations and models, and overall there is good agreement.
Validation to Portuguese of the Scale of Student Satisfaction and Self-Confidence in Learning
Almeida, Rodrigo Guimarães dos Santos; Mazzo, Alessandra; Martins, José Carlos Amado; Baptista, Rui Carlos Negrão; Girão, Fernanda Berchelli; Mendes, Isabel Amélia Costa
2015-01-01
Objective: to translate and validate into Portuguese the Scale of Student Satisfaction and Self-Confidence in Learning. Material and Methods: methodological translation and validation study of a research tool. After following all steps of the translation process, for the validation process the event III Workshop Brazil - Portugal: Care Delivery to Critical Patients was created, promoted by a Brazilian and a Portuguese teaching institution. Results: 103 nurses participated. As to the validity and reliability of the scale, the correlation pattern between the variables, the sampling adequacy test (Kaiser-Meyer-Olkin) and the sphericity test (Bartlett) showed good results. In the exploratory factorial analysis (Varimax), item 9 behaved better in factor 1 (Satisfaction) than in factor 2 (Self-confidence in learning). The internal consistency (Cronbach's alpha) showed coefficients of 0.86 for factor 1, with six items, and 0.77 for factor 2, with seven items. Conclusion: in Portuguese this tool was called Escala de Satisfação de Estudantes e Autoconfiança na Aprendizagem. The results showed good psychometric properties and good potential for use. The sample size and specificity are limitations of this study, but future studies will contribute to consolidating the validity of the scale and strengthening its potential use. PMID:26625990
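Cronbach's alpha, the internal-consistency coefficient reported above (0.86 and 0.77), can be computed from a respondents-by-items score matrix as follows; the random stand-in data here gives an alpha near zero, whereas correlated real item scores yield values like those reported.

```python
# Cronbach's alpha from a (n_respondents, k_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# stand-in data: 103 respondents, 6 items rated 1-5 (random, so alpha ~ 0)
scores = np.random.default_rng(0).integers(1, 6, size=(103, 6)).astype(float)
print(round(cronbach_alpha(scores), 2))
```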
Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.
2016-01-01
Background Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613
Bone, Daniel; Bishop, Somer L; Black, Matthew P; Goodwin, Matthew S; Lord, Catherine; Narayanan, Shrikanth S
2016-08-01
Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. © 2016 Association for Child and Adolescent Mental Health.
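The modelling recipe described, an SVM tuned in multiple levels of cross-validation against best-estimate diagnosis, can be sketched as follows with synthetic stand-in data (the ADI-R/SRS item codes are not reproduced here, and the grid of C values is illustrative).

```python
# Nested cross-validation around an SVM, as in the tuning setup described.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))        # stand-in for 12 instrument item codes
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 300) > 0).astype(int)

# inner loop tunes C; outer loop gives an unbiased performance estimate
inner = GridSearchCV(SVC(kernel="linear", class_weight="balanced"),
                     {"C": [0.01, 0.1, 1, 10]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print("nested-CV AUC: %.2f +/- %.2f" % (outer_scores.mean(), outer_scores.std()))
```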
Nelson, Tammie; Fernandez-Alberti, Sebastian; Roitberg, Adrian E; Tretiak, Sergei
2014-04-15
To design functional photoactive materials for a variety of technological applications, researchers need to understand their electronic properties in detail and have ways to control their photoinduced pathways. When excited by photons of light, organic conjugated materials (OCMs) show dynamics that are often characterized by large nonadiabatic (NA) couplings between multiple excited states through a breakdown of the Born-Oppenheimer (BO) approximation. Following photoexcitation, various nonradiative intraband relaxation pathways can lead to a number of complex processes. Therefore, computational simulation of nonadiabatic molecular dynamics is an indispensable tool for understanding complex photoinduced processes such as internal conversion, energy transfer, charge separation, and spatial localization of excitons. Over the years, we have developed a nonadiabatic excited-state molecular dynamics (NA-ESMD) framework that efficiently and accurately describes photoinduced phenomena in extended conjugated molecular systems. We use the fewest-switches surface hopping (FSSH) algorithm to treat quantum transitions among multiple adiabatic excited state potential energy surfaces (PESs). Extended molecular systems often contain hundreds of atoms and involve large densities of excited states that participate in the photoinduced dynamics. We can achieve an accurate description of the multiple excited states using the configuration interaction single (CIS) formalism with a semiempirical model Hamiltonian. Analytical techniques allow the trajectory to be propagated "on the fly" using the complete set of NA coupling terms and remove computational bottlenecks in the evaluation of excited-state gradients and NA couplings. Furthermore, the use of state-specific gradients for propagation of nuclei on the native excited-state PES eliminates the need for simplifications such as the classical path approximation (CPA), which only uses ground-state gradients. Thus, the NA-ESMD methodology offers a computationally tractable route for simulating hundreds of atoms on ~10 ps time scales where multiple coupled excited states are involved. In this Account, we review recent developments in the NA-ESMD modeling of photoinduced dynamics in extended conjugated molecules involving multiple coupled electronic states. We have successfully applied the outlined NA-ESMD framework to study ultrafast conformational planarization in polyfluorenes where the rate of torsional relaxation can be controlled based on the initial excitation. With the addition of the state reassignment algorithm to identify instances of unavoided crossings between noninteracting PESs, NA-ESMD can now be used to study systems in which these so-called trivial unavoided crossings are expected to predominate. We employ this technique to analyze the energy transfer between poly(phenylene vinylene) (PPV) segments where conformational fluctuations give rise to numerous instances of unavoided crossings leading to multiple pathways and complex energy transfer dynamics that cannot be described using a simple Förster model. In addition, we have investigated the mechanism of ultrafast unidirectional energy transfer in dendrimers composed of poly(phenylene ethynylene) (PPE) chromophores and have demonstrated that differential nuclear motion favors downhill energy transfer in dendrimers. The use of native excited-state gradients allows us to observe this feature.
NASA Astrophysics Data System (ADS)
Bakker, Ronald J.
2018-06-01
The program AqSo_NaCl has been developed to calculate pressure - molar volume - temperature - composition (p-V-T-x) properties, enthalpy, and heat capacity of the binary H2O-NaCl system. The algorithms are designed in BASIC within the Xojo programming environment and can be operated as a stand-alone project on Macintosh-, Windows-, and Unix-based operating systems. A series of ten self-instructive interfaces (modules) has been developed to calculate fluid-inclusion properties and pore-fluid properties. The modules may be used to calculate properties of pure NaCl, the halite liquidus, the halite vapourus, dew-point and bubble-point curves (liquid-vapour), the critical point, and SLV (solid-liquid-vapour) curves at temperatures above 0.1 °C (with halite) and below 0.1 °C (with ice or hydrohalite). Isochores of homogeneous fluids and unmixed fluids in a closed system can be calculated and exported to a .txt file. Isochores calculated for fluid inclusions can be corrected according to the volumetric properties of quartz. Microthermometric data, i.e. dissolution temperatures and homogenization temperatures, can be used to calculate bulk fluid properties of fluid inclusions. Alternatively, in the absence of a total homogenization temperature, the volume fraction of the liquid phase in fluid inclusions can be used to obtain bulk properties.
Oscillatory regulation of Hes1: Discrete stochastic delay modelling and simulation.
Barrio, Manuel; Burrage, Kevin; Leier, André; Tian, Tianhai
2006-09-08
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in the expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models.
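A minimal delayed-SSA sketch in the spirit of this approach is shown below: Hes1 protein represses its own transcription, and each transcription event completes only after a fixed delay tau, implemented by a queue of pending completion times. All parameter values are illustrative; with sufficient delay the trajectories of mRNA and protein can oscillate rather than settle to a fixed point.

```python
# Delayed stochastic simulation sketch for a Hes1-like negative feedback loop.
import heapq
import numpy as np

rng = np.random.default_rng(1)
M, P = 10, 100                            # mRNA and protein copy numbers
a, b, dm, dp = 1.0, 1.0, 0.03, 0.03       # illustrative rates (per min)
P0, h, tau = 100.0, 4, 20.0               # repression threshold, Hill coeff, delay
t, t_end, pending = 0.0, 1000.0, []       # pending: delayed mRNA completions

while t < t_end:
    props = [a / (1 + (P / P0) ** h),     # delayed transcription initiation
             b * M,                       # translation
             dm * M, dp * P]              # degradations
    dt = rng.exponential(1 / sum(props))
    if pending and pending[0] <= t + dt:  # a delayed product appears first
        t = heapq.heappop(pending); M += 1
        continue
    t += dt
    r = rng.choice(4, p=np.array(props) / sum(props))
    if r == 0: heapq.heappush(pending, t + tau)   # mRNA finishes at t + tau
    elif r == 1: P += 1
    elif r == 2: M -= 1
    else: P -= 1

print("final copy numbers M, P:", M, P)
```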
Modeling Syntax for Parsing and Translation
2003-12-15
[Figure 2.1: Part of a dictionary.] ...along with their training algorithms: a monolingual generative model of sentence structure, and a model of the relationship between the structure of a... tasks of monolingual parsing and word-level bilingual corpus alignment, they are demonstrated in two additional applications. First, a new statistical...
2007-02-01
determined by its neighbors' correspondence. Thus, the algorithm consists of four main steps: ICP registration of the base and nipple regions of the... the nipple and the base of the breast, as a location for accurately determining initial correspondence. However, due to the compression, the nipple of... cloud) is translated and lies at a different angle than the nipple of the pendant breast (the source point cloud). By minimizing the average distance...
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Niemeyer, Frank; Simon, Ulrich; Wehner, Tim
2013-09-06
Numerical models of secondary fracture healing are based on mechanoregulatory algorithms that use distortional strain alone or in combination with either dilatational strain or fluid velocity as determining stimuli for tissue differentiation and development. Comparison of these algorithms has previously suggested that healing processes under torsional rotational loading can only be properly simulated by considering fluid velocity and deviatoric strain as the regulatory stimuli. We hypothesize that sufficient calibration on uncertain input parameters will enhance our existing model, which uses distortional and dilatational strains as determining stimuli, to properly simulate fracture healing under various loading conditions including also torsional rotation. Therefore, we minimized the difference between numerically simulated and experimentally measured courses of interfragmentary movements of two axial compressive cases and two shear load cases (torsional and translational) by varying several input parameter values within their predefined bounds. The calibrated model was then qualitatively evaluated on the ability to predict physiological changes of spatial and temporal tissue distributions, based on respective in vivo data. Finally, we corroborated the model on five additional axial compressive and one asymmetrical bending load case. We conclude that our model, using distortional and dilatational strains as determining stimuli, is able to simulate fracture-healing processes not only under axial compression and torsional rotation but also under translational shear and asymmetrical bending loading conditions.
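The calibration step, minimizing the difference between simulated and measured interfragmentary movements by varying parameters within predefined bounds, can be sketched with a bounded least-squares fit; the two-parameter "simulator" below is a placeholder for the actual finite-element healing model.

```python
# Bounded least-squares calibration of model parameters against measured
# interfragmentary movement (IFM) courses. The simulator is a stand-in.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 8, 30)                         # weeks post-fracture
measured = 1.0 * np.exp(-0.6 * t) + np.random.default_rng(0).normal(0, 0.02, 30)

def simulate_ifm(params, t):
    amp, rate = params                            # placeholder healing model
    return amp * np.exp(-rate * t)

def residuals(params):                            # simulated minus measured course
    return simulate_ifm(params, t) - measured

fit = least_squares(residuals, x0=[0.5, 0.2],
                    bounds=([0.1, 0.05], [2.0, 2.0]))   # predefined bounds
print("calibrated parameters:", fit.x)            # ~ [1.0, 0.6]
```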
LDA boost classification: boosting by topics
NASA Astrophysics Data System (ADS)
Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li
2012-12-01
AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature-sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features. In this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm while achieving higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving accuracy by integrating weak classifiers into the strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost, categorizing in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
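The pipeline can be approximated with off-the-shelf components as below: documents are mapped to LDA topic proportions and a Naive Bayes weak learner is boosted with standard AdaBoost. scikit-learn's GaussianNB and default AdaBoost weighting stand in for the paper's improved NB and Cute Integration; the corpus is downloaded on first run.

```python
# LDA topic features + boosted Naive Bayes, a stand-in for the LDABoost idea.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
clf = make_pipeline(
    CountVectorizer(max_features=5000, stop_words="english"),
    LatentDirichletAllocation(n_components=30, random_state=0),  # topic features
    AdaBoostClassifier(GaussianNB(), n_estimators=50, random_state=0),
)
print("CV accuracy:", cross_val_score(clf, data.data, data.target, cv=3).mean())
```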
Divalent cation shrinks DNA but inhibits its compaction with trivalent cation.
Tongu, Chika; Kenmotsu, Takahiro; Yoshikawa, Yuko; Zinchenko, Anatoly; Chen, Ning; Yoshikawa, Kenichi
2016-05-28
Our observation reveals the effects of divalent and trivalent cations on the higher-order structure of giant DNA (T4 DNA, 166 kbp) by fluorescence microscopy. It was found that divalent cations, Mg(2+) and Ca(2+), inhibit DNA compaction induced by a trivalent cation, spermidine (SPD(3+)). On the other hand, in the absence of SPD(3+), divalent cations cause the shrinkage of DNA. As a control experiment, we confirmed that the monovalent cation Na(+) has a minimal effect on the DNA higher-order structure. We interpret the competition between 2+ and 3+ cations in terms of the change in the translational entropy of the counterions. For the compaction with SPD(3+), we consider the increase in translational entropy due to the ion exchange of the intrinsic monovalent cations condensing on a highly charged polyelectrolyte, double-stranded DNA, by the 3+ cations. In contrast, the presence of 2+ cations decreases the entropy gain from the ion exchange between monovalent and 3+ ions.
Estimating the chance of success in IVF treatment using a ranking algorithm.
Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna
2015-09-01
In medicine, estimating the chance of success of a treatment is important in deciding whether to begin it. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of a treatment is crucial in the decision to proceed, for both the clinicians and the infertile couples. IVF treatment is a stressful and costly process for couples who want to have a baby. If an initial evaluation indicates a low pregnancy rate, the couple may decide not to start IVF treatment. The aim of this study is twofold: first, to develop a technique that can be used to estimate the chance of success for a couple who wants to have a baby, and second, to determine the attributes, and their particular values, that affect the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment; the particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances: the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy, and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success in medical treatment.
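The evaluation protocol, AUC under tenfold stratified cross-validation, can be sketched as follows. RIMARC/SERA is not available in scikit-learn, so only the two baseline comparators are shown, on synthetic stand-in features.

```python
# Tenfold stratified CV comparison of the two baseline classifiers by AUC.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))               # stand-in for IVF cycle features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0.8).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for model in (GaussianNB(),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(type(model).__name__, "AUC: %.3f" % auc.mean())
```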
Mao, Yanhui; Fornara, Ferdinando; Manca, Sara; Bonnes, Mirilia; Bonaiuto, Marino
2015-09-01
This paper concerns people's assessment of their neighborhood of residence in a Chinese urban context. The aim of the study was to verify the factorial structure and the reliability of two instruments originally developed and validated in Italy (the full versions of the Perceived Residential Environment Quality Indicators [PREQIs] and of the Neighborhood Attachment Scale [NAS]) in a different cultural and linguistic context. The instruments consist of 11 scales measuring the PREQIs and one scale measuring neighborhood attachment (NA). The PREQIs scales include items covering four macroevaluative domains of residential environment quality: architectural and urban planning aspects (three scales: Architectural and Town-planning Space, Organization of Accessibility and Roads, Green Areas), sociorelational aspects (one scale: People and Social Relations), functional aspects (four scales: Welfare Services, Recreational Services, Commercial Services, and Transport Services), and contextual aspects (three scales: Pace of Life, Environmental Health, Upkeep and Care). The PREQIs and NAS were included in a self-report questionnaire, which had been translated and back-translated from English to Chinese, and was then administered to 340 residents in six districts (differing along various features) of a highly urbanized context in China, the city of Chongqing. Results confirmed the factorial structure of the scales and demonstrated good internal consistency of the indicators, thus reaffirming the results of previous studies carried out in Western urban contexts. The indicators tapping the neighborhood's contextual aspects (i.e., pace of life, environmental health, and upkeep) emerged as most correlated to NA. © 2015 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
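For reference, a brute-force serial census over all node triples looks like the sketch below; to keep it short, triads are binned by their number of directed edges rather than the full 16-class typology. The O(n^3) enumeration cost is exactly what motivates the optimized parallel versions discussed above.

```python
# Brute-force triad census sketch: classify every 3-node subgraph by its
# count of directed edges (simplified; not the full 16-type M-A-N census).
from itertools import combinations

edges = {(0, 1), (1, 2), (2, 0), (0, 3), (3, 0)}       # toy directed graph
nodes = {u for e in edges for u in e}

census = {}
for a, b, c in combinations(sorted(nodes), 3):
    # count how many of the six possible directed edges are present
    k = sum((u, v) in edges for u, v in
            [(a, b), (b, a), (a, c), (c, a), (b, c), (c, b)])
    census[k] = census.get(k, 0) + 1
print(census)   # {edge_count: number_of_triads}
```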
The design and hardware implementation of a low-power real-time seizure detection algorithm
NASA Astrophysics Data System (ADS)
Raghunathan, Shriram; Gupta, Sumeet K.; Ward, Matthew P.; Worth, Robert M.; Roy, Kaushik; Irazoqui, Pedro P.
2009-10-01
Epilepsy affects more than 1% of the world's population. Responsive neurostimulation is emerging as an alternative therapy for the 30% of the epileptic patient population that does not benefit from pharmacological treatment. Efficient seizure detection algorithms will enable closed-loop epilepsy prostheses by stimulating the epileptogenic focus within an early onset window. Critically, this is expected to reduce neuronal desensitization over time and lead to longer-term device efficacy. This work presents a novel event-based seizure detection algorithm along with a low-power digital circuit implementation. Hippocampal depth-electrode recordings from six kainate-treated rats are used to validate the algorithm and hardware performance in this preliminary study. The design process illustrates crucial trade-offs in translating mathematical models into hardware implementations and validates statistical optimizations made with empirical data analyses on results obtained using a real-time functioning hardware prototype. Using quantitatively predicted thresholds from the depth-electrode recordings, the auto-updating algorithm performs with an average sensitivity and selectivity of 95.3 ± 0.02% and 88.9 ± 0.01% (mean ± SE, α = 0.05), respectively, on untrained data with a detection delay of 8.5 s [5.97, 11.04] from electrographic onset. The hardware implementation is shown feasible using CMOS circuits consuming under 350 nW of power from a 250 mV supply voltage from simulations on the MIT 180 nm SOI process.
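A generic sketch of such an event-based detection loop is shown below: a short-window line-length feature is compared against a threshold derived from baseline data. This is a common seizure-detection feature used here for illustration only, not the paper's exact algorithm or its hardware mapping.

```python
# Generic event-based detector: flag windows whose line length exceeds a
# baseline-derived threshold. Signal, rates, and constants are synthetic.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                            # sampling rate (Hz)
x = rng.normal(0, 1, 60 * fs)                       # 60 s baseline EEG stand-in
tt = np.arange(10 * fs) / fs                        # 10 s "ictal" segment
x[30*fs:40*fs] = 3 * np.sin(2*np.pi*8*tt) + rng.normal(0, 3, 10 * fs)

win = fs                                            # 1 s analysis windows
feat = np.array([np.abs(np.diff(x[i:i + win])).sum()
                 for i in range(0, len(x) - win, win)])   # line-length feature
thr = feat[:20].mean() + 5 * feat[:20].std()        # threshold from baseline
onsets = np.flatnonzero((feat[1:] > thr) & (feat[:-1] <= thr)) + 1
print("detections at t =", onsets, "s")             # expect one near t = 30 s
```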
Text extraction from images in the wild using the Viola-Jones algorithm
NASA Astrophysics Data System (ADS)
Saabna, Raid M.; Zingboim, Eran
2018-04-01
Text localization and extraction is an important issue in modern applications of computer vision. Applications such as reading and translating text in the wild or from videos are among the many that can benefit from results in this field. In this work, we adopt the well-known Viola-Jones algorithm to enable text extraction and localization from images in the wild. Viola-Jones is an efficient and fast image-processing algorithm originally used for face detection. Based on some resemblance between text detection and face detection in the wild, we modified the Viola-Jones algorithm to detect regions of interest where text may be localized. In the proposed approach, modifications to the Haar-like features and a semi-automatic process for generating and manipulating the data set are presented to train the algorithm. Sliding windows of different sizes are used to scan the image for the presence of individual letters and letter clusters. A post-processing step then combines the detected letters into words and removes false positives. The novelty of the presented approach is in using the strengths of a modified Viola-Jones algorithm to identify many different objects representing different letters and clusters of similar letters, and later combining them into words of varying lengths. Impressive results were obtained on the ICDAR contest data sets.
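The detection stage can be sketched with OpenCV's cascade machinery, which implements the Viola-Jones framework. OpenCV ships trained face cascades; a letter/cluster cascade like the paper's would need to be trained separately, so the model and image file names below are hypothetical.

```python
# Viola-Jones-style multi-scale detection with OpenCV's cascade classifier.
import cv2

cascade = cv2.CascadeClassifier("letter_clusters_cascade.xml")  # hypothetical model
img = cv2.imread("scene.jpg")                                   # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# multi-scale sliding-window scan, as described above
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                 minSize=(12, 12))
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
# a post-processing step would then merge nearby boxes into words
```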
Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena
2016-01-01
Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491