Assessment of the cPAS-based BGISEQ-500 platform for metagenomic sequencing.
Fang, Chao; Zhong, Huanzi; Lin, Yuxiang; Chen, Bing; Han, Mo; Ren, Huahui; Lu, Haorong; Luber, Jacob M; Xia, Min; Li, Wangsheng; Stein, Shayna; Xu, Xun; Zhang, Wenwei; Drmanac, Radoje; Wang, Jian; Yang, Huanming; Hammarström, Lennart; Kostic, Aleksandar D; Kristiansen, Karsten; Li, Junhua
2018-03-01
More extensive use of metagenomic shotgun sequencing in microbiome research relies on the development of high-throughput, cost-effective sequencing. Here we present a comprehensive evaluation of the performance of the new high-throughput sequencing platform BGISEQ-500 for metagenomic shotgun sequencing and compare its performance with that of two Illumina platforms. Using fecal samples from 20 healthy individuals, we evaluated the intra-platform reproducibility of metagenomic sequencing on the BGISEQ-500 platform in a setup comprising 8 library replicates and 8 sequencing replicates. Cross-platform consistency was evaluated by comparing 20 pairwise replicates on the BGISEQ-500 platform vs. the Illumina HiSeq 2000 platform and the Illumina HiSeq 4000 platform. In addition, we compared the performance of the two Illumina platforms against each other. Using a newly developed overall-accuracy quality-control method, an average of 82.45 million high-quality reads (96.06% of raw reads) per sample, with 90.56% of bases scoring Q30 and above, was obtained on the BGISEQ-500 platform. Quantitative analyses revealed extremely high reproducibility between BGISEQ-500 intra-platform replicates. Cross-platform replicates differed slightly more than intra-platform replicates, yet a high consistency was observed. Only a low percentage (2.02%-3.25%) of genes exhibited significant differences in relative abundance between the BGISEQ-500 and HiSeq platforms, with genes of higher GC content biased toward enrichment on the HiSeq platforms. Our study provides the first set of performance metrics for human gut metagenomic sequencing data using BGISEQ-500. The high accuracy and technical reproducibility confirm the applicability of the new platform for metagenomic studies, though caution is still warranted when combining metagenomic data from different platforms.
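As context for the Q30 and GC figures quoted above, a minimal sketch (not the authors' QC pipeline) of how per-base Q30 and GC fractions are computed from a FASTQ file; Phred+33 encoding is assumed and the file name is a hypothetical placeholder.

```python
# Minimal sketch: fraction of bases with Phred quality >= 30, and GC fraction.
def q30_and_gc(fastq_path):
    q30 = total_q = gc = total_b = 0
    with open(fastq_path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break                              # end of file
            seq = fh.readline().strip()            # bases
            fh.readline()                          # '+' separator line
            qual = fh.readline().strip()           # Phred+33 quality string
            gc += sum(seq.upper().count(b) for b in "GC")
            total_b += len(seq)
            q30 += sum(1 for c in qual if ord(c) - 33 >= 30)
            total_q += len(qual)
    return q30 / total_q, gc / total_b

q30_frac, gc_frac = q30_and_gc("sample.fastq")     # hypothetical input file
print(f"Q30: {q30_frac:.2%}  GC: {gc_frac:.2%}")
```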
Micromagnetics on high-performance workstation and mobile computational platforms
NASA Astrophysics Data System (ADS)
Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.
2015-05-01
The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
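For reference, the scalability analysis mentioned above rests on Amdahl's law for a fixed problem size. A minimal statement, with f the parallelizable fraction of the work and n the number of cores:

```latex
S(n) = \frac{1}{(1 - f) + f/n}
```

The reported 12-fold speedup on 12 cores implies f close to 1; as n grows, S(n) saturates at 1/(1 - f), which is the limit the cited multicore analysis probes.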
High-performance silicon photonics technology for telecommunications applications.
Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi
2014-04-01
By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.
Microfabricated Nickel Based Sensors for Hostile and High Pressure Environments
NASA Astrophysics Data System (ADS)
Holt, Christopher Michael Bjustrom
This thesis outlines the development of two platforms for integrating microfabricated sensors with high-pressure feedthroughs for application in hostile, high-temperature, high-pressure environments. An application in oil well production logging is explored, and two sensors were implemented with these platforms for application in an oil well. The first platform involved microfabrication directly onto a cut and polished high-pressure feedthrough. This technique enables a system that is more robust than the wire-bonded silicon die technique used for MEMS integration in pressure sensors. Removing wire bonds from the traditional MEMS package allows direct interface of a microfabricated sensor with a hostile high-pressure fluid environment, which is not currently possible. During the development of this platform, key performance metrics included pressure testing to 70 MPa and temperature cycling from 20°C to 200°C. This platform enables electronics integration with a variety of microfabricated electrical and thermal sensors which can be immersed in the oil well environment. The second platform enabled free-space fabrication of nickel microfabricated devices onto an array of pins using a thick tin sacrificial layer. This technique allowed microfabrication of metal MEMS that are released by distances of 1 cm from their substrate. The method is quite flexible and allows fabrication on any pin array substrate regardless of surface quality. Being able to place released MEMS sensors directly onto traditional circuit boards, ceramic circuit boards, electrical connectors, ribbon cables, pin headers, or high-pressure feedthroughs greatly broadens the range of possible applications and reduces fabrication costs. These two platforms were then used to fabricate thermal conductivity sensors that showed excellent performance in distinguishing between oil, water, and gas phases. Testing was conducted at various flow rates, and the released platform was shown to perform better than the anchored sensors, while both platforms were significantly better than a simply fabricated wrapped-wire sensor. The anchored platform was also used to demonstrate a traditional capacitance-based fluid dielectric sensor, which was found to work similarly to conventional commercial capacitance probes while being significantly smaller in size.
Code of Federal Regulations, 2012 CFR
2012-10-01
... plates, ramps or other appropriate devices; (4) Mini-high platforms, with multiple mini-high platforms or... chooses a means of meeting the performance standard other than using car-borne lifts, it must perform a comparison of the costs (capital, operating, and life-cycle costs) of car-borne lifts and the means chosen by...
Code of Federal Regulations, 2014 CFR
2014-10-01
... plates, ramps or other appropriate devices; (4) Mini-high platforms, with multiple mini-high platforms or... chooses a means of meeting the performance standard other than using car-borne lifts, it must perform a comparison of the costs (capital, operating, and life-cycle costs) of car-borne lifts and the means chosen by...
Code of Federal Regulations, 2011 CFR
2011-10-01
... plates, ramps or other appropriate devices; (4) Mini-high platforms, with multiple mini-high platforms or... chooses a means of meeting the performance standard other than using car-borne lifts, it must perform a comparison of the costs (capital, operating, and life-cycle costs) of car-borne lifts and the means chosen by...
Code of Federal Regulations, 2013 CFR
2013-10-01
... plates, ramps or other appropriate devices; (4) Mini-high platforms, with multiple mini-high platforms or... chooses a means of meeting the performance standard other than using car-borne lifts, it must perform a comparison of the costs (capital, operating, and life-cycle costs) of car-borne lifts and the means chosen by...
Workload Characterization of CFD Applications Using Partial Differential Equation Solvers
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Workload characterization is used for modeling and evaluation of computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of the interactions between PDE solver workloads and high performance computing platforms, and is useful for tuning these applications.
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
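The classical sequence alignment problem used as the case study above is usually formulated as dynamic programming. A minimal sketch of Needleman-Wunsch global alignment scoring, with illustrative (not benchmark) scoring parameters:

```python
# Minimal sketch: Needleman-Wunsch global alignment score via dynamic programming.
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap                      # gaps along sequence a
    for j in range(1, cols):
        dp[0][j] = j * gap                      # gaps along sequence b
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```

The quadratic growth of this table with sequence length is exactly what motivates mapping the problem onto HPC platforms.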
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability into a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Domanski, Dominik; Borchers, Christoph H
2012-09-01
The analytical performances of a standard-flow ultra-high-performance liquid chromatography (UHPLC) system and a nano-flow high-performance liquid chromatography (HPLC) system, interfaced to the same state-of-the-art triple-quadrupole mass spectrometer, were compared for the multiple reaction monitoring (MRM)-mass spectrometry (MS)-based quantitation of a panel of 48 high-to-moderate-abundance cardiovascular disease-related plasma proteins. After optimization of the MRM transitions for sensitivity and testing for chemical interference, the optimum sensitivity, loading capacity, gradient, and retention-time reproducibilities were determined. We previously demonstrated the increased robustness of the standard-flow platform, but we expected that the standard-flow platform would have an overall lower sensitivity. This study was designed to determine whether this decreased sensitivity could be compensated for by increased sample loading. Significantly fewer interferences with the MRM transitions were found for the standard-flow platform than for the nano-flow platform (2 out of 103 transitions compared with 42 out of 103, respectively), which demonstrates the importance of interference-testing when nano-flow systems are used. Using only interference-free transitions, 36 replicate LC/MRM-MS analyses resulted in equal signal reproducibilities between the two platforms (9.3% coefficient of variation (CV) for 88 peptide targets), with superior retention-time precision for the standard-flow platform (0.13% vs. 6.1% CV). Surprisingly, for 41 of the 81 proteotypic peptides in the final assay the standard-flow platform was more sensitive, while for 9 of 81 the nano-flow platform was more sensitive. For these 81 peptides, there was a good correlation between the two sets of results (R² = 0.98, slope = 0.97). Overall, the standard-flow platform had superior performance metrics for most peptides, and is a good choice if sufficient sample is available.
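The reproducibility figures quoted above are percent coefficients of variation. A minimal sketch of the computation, with made-up replicate peak areas:

```python
# Minimal sketch: percent coefficient of variation (%CV) across replicates.
import statistics

def percent_cv(values):
    return 100 * statistics.stdev(values) / statistics.mean(values)

replicate_areas = [1.02e6, 0.97e6, 1.05e6, 0.99e6]  # hypothetical LC/MRM-MS peak areas
print(f"{percent_cv(replicate_areas):.1f}% CV")
```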
NASA Astrophysics Data System (ADS)
Faerber, Christian
2017-10-01
The LHCb experiment at the LHC will upgrade its detector by 2018/2019 to a 'triggerless' readout scheme, in which all the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the Event Filter farm to 40 TBit/s, which also has to be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered for use inside the new Event Filter farm. In the high performance computing sector, more and more FPGA compute accelerators are used to improve compute performance and reduce power consumption (e.g. in the Microsoft Catapult project and the Bing search engine). For the LHCb upgrade, the use of an experimental FPGA-accelerated computing platform in the Event Building or in the Event Filter farm is likewise being considered and therefore tested. This platform from Intel hosts a general CPU and a high-performance FPGA linked via a high-speed link, which for this platform is a QPI link. An accelerator is implemented on the FPGA. The system used is a two-socket Intel platform with a Xeon CPU and an FPGA. The FPGA has cache-coherent access to the main memory of the server and can collaborate with the CPU. As a first step, a computationally intensive algorithm to reconstruct Cherenkov angles for the LHCb RICH particle identification was successfully ported in Verilog to the Intel Xeon/FPGA platform and accelerated by a factor of 35. The same algorithm was then ported to the Intel Xeon/FPGA platform with OpenCL; the implementation work and the performance are compared. Another FPGA accelerator, the Nallatech 385 PCIe accelerator with the same Stratix V FPGA, was also tested for performance. The results show that Intel Xeon/FPGA platforms, which are built in general for high performance computing, are also very interesting for the High Energy Physics community.
GPU-based High-Performance Computing for Radiation Therapy
Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.
2014-01-01
Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms is also presented. PMID:24486639
Predicted Performance of a Thrust-Enhanced SR-71 Aircraft with an External Payload
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1997-01-01
NASA Dryden Flight Research Center has completed a preliminary performance analysis of the SR-71 aircraft for use as a launch platform for high-speed research vehicles and for carrying captive experimental packages to high altitude and Mach number conditions. Externally mounted research platforms can significantly increase drag, limiting test time and, in extreme cases, prohibiting penetration through the high-drag, transonic flight regime. To provide supplemental SR-71 acceleration, methods have been developed that could increase the thrust of the J58 turbojet engines. These methods include temperature and speed increases and augmentor nitrous oxide injection. The thrust-enhanced engines would allow the SR-71 aircraft to carry higher drag research platforms than it could without enhancement. This paper presents predicted SR-71 performance with and without enhanced engines. A modified climb-dive technique is shown to reduce fuel consumption when flying through the transonic flight regime with a large external payload. Estimates are included of the maximum platform drag profiles with which the aircraft could still complete a high-speed research mission. In this case, enhancement was found to increase the SR-71 payload drag capability by 25 percent. The thrust enhancement techniques and performance prediction methodology are described.
NASA Astrophysics Data System (ADS)
Beck, Jeffrey; Bos, Jeremy P.
2017-05-01
We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting in the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully featured workstation.
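As an illustration of the kind of substitution described above (not the WavePy benchmark itself), the following times NumPy's FFT against OpenCV's on the same array; it assumes the opencv-python package is installed, and the array size is arbitrary:

```python
# Minimal sketch: compare NumPy and OpenCV 2-D FFT timings on one array.
import time
import numpy as np
import cv2  # assumes opencv-python is installed

field = np.random.rand(2048, 2048).astype(np.float32)

t0 = time.perf_counter()
ref = np.fft.fft2(field)                            # NumPy reference FFT
t_np = time.perf_counter() - t0

t0 = time.perf_counter()
alt = cv2.dft(field, flags=cv2.DFT_COMPLEX_OUTPUT)  # OpenCV FFT of the same field
t_cv = time.perf_counter() - t0

print(f"numpy: {t_np:.3f} s  opencv: {t_cv:.3f} s  ratio: {t_np / t_cv:.1f}x")
```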
Embedded-Based Graphics Processing Unit Cluster Platform for Multiple Sequence Alignments
Wei, Jyh-Da; Cheng, Hui-Jun; Lin, Chun-Yuan; Ye, Jin; Yeh, Kuan-Yu
2017-01-01
High-end graphics processing units (GPUs), such as NVIDIA Tesla/Fermi/Kepler series cards with thousands of cores per chip, have been widely applied to high-performance computing fields over the past decade. These desktop GPU cards must be installed in personal computers/servers with desktop CPUs, and the cost and power consumption of constructing a GPU cluster platform are very high. In recent years, NVIDIA released an embedded board, called the Jetson Tegra K1 (TK1), which contains 4 ARM Cortex-A15 CPUs and 192 Compute Unified Device Architecture cores (belonging to the Kepler GPU family). The Jetson Tegra K1 has several advantages, such as low cost, low power consumption, and high applicability, and it has been applied in several specific applications. In our previous work, a bioinformatics platform with a single TK1 (STK platform) was constructed; that work also demonstrated that Web and mobile services can be implemented on the STK platform with a good cost-performance ratio by comparing the STK platform with desktop CPUs and GPUs. In this work, an embedded-based GPU cluster platform is constructed with multiple TK1s (MTK platform). Complex system installation and setup are necessary procedures at first. Then, 2 job assignment modes are designed for the MTK platform to provide services for users. Finally, ClustalW v2.0.11 and ClustalWtk are ported to the MTK platform. The experimental results showed that the speedup ratios achieved 5.5 and 4.8 times for ClustalW v2.0.11 and ClustalWtk, respectively, comparing 6 TK1s with a single TK1. The MTK platform is proven to be useful for multiple sequence alignments. PMID:28835734
The pitfalls of platform comparison: DNA copy number array technologies assessed
2009-01-01
Background: The accurate and high resolution mapping of DNA copy number aberrations has become an important tool by which to gain insight into the mechanisms of tumourigenesis. There are various commercially available platforms for such studies, but there remains no general consensus as to the optimal platform. There have been several previous platform comparison studies, but they have either described older technologies, used less-complex samples, or have not addressed the issue of the inherent biases in such comparisons. Here we describe a systematic comparison of data from four leading microarray technologies (the Affymetrix Genome-wide SNP 5.0 array, Agilent High-Density CGH Human 244A array, Illumina HumanCNV370-Duo DNA Analysis BeadChip, and the Nimblegen 385 K oligonucleotide array). We compare samples derived from primary breast tumours and their corresponding matched normals, well-established cancer cell lines, and HapMap individuals. By careful consideration and avoidance of potential sources of bias, we aim to provide a fair assessment of platform performance. Results: By performing a theoretical assessment of the reproducibility, noise, and sensitivity of each platform, notable differences were revealed. Nimblegen exhibited between-replicate array variances an order of magnitude greater than the other three platforms, with Agilent slightly outperforming the others, and a comparison of self-self hybridizations revealed similar patterns. An assessment of the single probe power revealed that Agilent exhibits the highest sensitivity. Additionally, we performed an in-depth visual assessment of the ability of each platform to detect aberrations of varying sizes. As expected, all platforms were able to identify large aberrations in a robust manner. However, some focal amplifications and deletions were only detected in a subset of the platforms. Conclusion: Although there are substantial differences in the design, density, and number of replicate probes, the comparison indicates a generally high level of concordance between platforms, despite differences in the reproducibility, noise, and sensitivity. In general, Agilent tended to be the best aCGH platform and Affymetrix, the superior SNP-CGH platform, but for specific decisions the results described herein provide a guide for platform selection and study design, and the dataset a resource for more tailored comparisons. PMID:19995423
NASA Astrophysics Data System (ADS)
Zheng, Yisheng; Li, Qingpin; Yan, Bo; Luo, Yajun; Zhang, Xinong
2018-05-01
In order to improve the isolation performance of passive Stewart platforms, a negative stiffness magnetic spring (NSMS) is employed to construct high-static-low-dynamic-stiffness (HSLDS) struts. With the NSMS, the resonance frequencies of the platform can be reduced effectively without deteriorating its load bearing capacity. The model of the Stewart isolation platform with HSLDS struts is presented and the stiffness characteristic of its struts is studied first. Then the nonlinear dynamic model of the platform, including both geometric and stiffness nonlinearities, is established, and its simplified dynamic model is derived under the condition of small vibration. The effect of nonlinearity on the isolation performance is also evaluated. Finally, a prototype is built and the isolation performance is tested. Both simulated and experimental results demonstrate that, by using the NSMS, the resonance frequencies of the Stewart isolator are reduced and the isolation performance in all six directions is improved: the isolation frequency band is widened and extended to a lower-frequency level.
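As background for the HSLDS idea, a minimal single-axis sketch (my notation, not the paper's full six-axis model): a positive spring of stiffness k_p carries the static load, while a parallel negative-stiffness magnetic spring of magnitude k_n acts only on dynamic displacement about the loaded equilibrium, so

```latex
k_{\mathrm{static}} = k_p, \qquad
k_{\mathrm{dynamic}} = k_p - k_n, \qquad
\omega_n = \sqrt{\frac{k_p - k_n}{m}} < \sqrt{\frac{k_p}{m}}
```

Lowering the natural frequency without lowering the static stiffness is precisely how the struts widen the isolation band while preserving load bearing capacity.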
Particle Identification on an FPGA Accelerated Compute Platform for the LHCb Upgrade
NASA Astrophysics Data System (ADS)
Faerber, Christian; Schwemmer, Rainer; Machen, Jonathan; Neufeld, Niko
2017-07-01
The current LHCb readout system will be upgraded in 2018 to a "triggerless" readout of the entire detector at the Large Hadron Collider collision rate of 40 MHz. The corresponding bandwidth from the detector down to the foreseen dedicated computing farm (event filter farm), which acts as the trigger, has to be increased by a factor of almost 100, from currently 500 Gb/s up to 40 Tb/s. The event filter farm will preanalyze the data and will select the events on an event-by-event basis. This will reduce the bandwidth down to a manageable size to write the interesting physics data to tape. The design of such a system is a challenging task, which is why different new technologies are being considered and investigated for the different parts of the system. For use in the event building farm or in the event filter farm (trigger), an experimental field programmable gate array (FPGA) accelerated computing platform is considered and therefore tested. FPGA compute accelerators are used more and more in standard servers, such as for Microsoft Bing search or Baidu search. The platform we use hosts a general Intel CPU and a high-performance FPGA linked via the high-speed Intel QuickPath Interconnect. An accelerator is implemented on the FPGA. It is very likely that these platforms, which are built, in general, for high-performance computing, are also very interesting for the high-energy physics community. First, the performance results of smaller test cases performed at the beginning are presented. Afterward, part of the existing LHCb RICH particle identification code is ported to the experimental FPGA-accelerated platform and tested. We compare the performance of the LHCb RICH particle identification running on a normal CPU with the performance of the same algorithm running on the Xeon-FPGA compute accelerator platform.
Forreryd, Andy; Johansson, Henrik; Albrekt, Ann-Sofie; Lindstedt, Malin
2014-05-16
Allergic contact dermatitis (ACD) develops upon exposure to certain chemical compounds termed skin sensitizers. To reduce the occurrence of skin sensitizers, chemicals are regularly screened for their capacity to induce sensitization. The recently developed Genomic Allergen Rapid Detection (GARD) assay is an in vitro alternative to animal testing for identification of skin sensitizers, classifying chemicals by evaluating transcriptional levels of a genomic biomarker signature. During assay development and biomarker identification, genome-wide expression analysis was applied using microarrays covering approximately 30,000 transcripts. However, the microarray platform suffers from drawbacks in terms of low sample throughput, high cost per sample, and time-consuming protocols, and is a limiting factor for adaptation of GARD into a routine assay for screening of potential sensitizers. With the aim of simplifying assay procedures, improving technical parameters, and increasing sample throughput, we assessed the performance of three high-throughput gene expression platforms (nCounter®, BioMark HD™, and OpenArray®) and correlated their performance metrics against our previously generated microarray data. We measured the levels of 30 transcripts from the GARD biomarker signature across 48 samples. Detection sensitivity, reproducibility, correlations, and the overall structure of gene expression measurements were compared across platforms. Gene expression data from all of the evaluated platforms could be used to classify most of the sensitizers from non-sensitizers in the GARD assay. Results also showed high data quality and acceptable reproducibility for all platforms, but only medium to poor correlations of expression measurements across platforms. In addition, the evaluated platforms were superior to the microarray platform in terms of cost efficiency, simplicity of protocols, and sample throughput. We evaluated the performance of three non-array-based platforms using a limited set of transcripts from the GARD biomarker signature. We demonstrated that it was possible to achieve acceptable discriminatory power, in terms of separation between sensitizers and non-sensitizers in the GARD assay, while reducing assay costs, simplifying assay procedures, and increasing sample throughput by using an alternative platform, providing a first step towards the goal of preparing GARD for formal validation and adapting the assay for industrial screening of potential sensitizers.
A Sub-Orbital Platform for Flight Tests of Small Space Capsules
NASA Astrophysics Data System (ADS)
Pereira, P. Moraes A. L., Jr.; Silva, C. R.; Villas Bôas, D. J.; Corrêa, F., Jr.; Miyoshi, J. H.; Loures da Costa, L. E.
2002-01-01
In the development of a small recoverable space capsule, flight tests using sub-orbital rockets are considered. For this test series, a platform for aerodynamic and thermal measurements, as well as for qualification tests of onboard sub-systems and equipment, was specified and is currently under development. This platform, known as SARA Suborbital, is specified to withstand a sub-orbital flight on the high performance sounding rocket VS40 and to be recovered at sea. To perform the testing program, a flight trajectory with adequate aeroballistic parameters, for instance high velocities in the dense atmosphere and an average re-entry velocity, is considered. The testing program includes measurements of aerodynamic pressures and thermal characteristics, three-axis acceleration, the acoustic pressure level inside the platform, and the vibration environment. Besides this, tests to characterise the performance of the data acquisition and transmission system and the micro-gravity environment, and to qualify the recovery system, will be carried out. During the return flight, the dynamics of parachute deployment and platform water impact, as well as rescue procedures, will also be observed. The present article shows the concept of the platform, describes the experiments in detail, and concludes with a discussion of the flight trajectory and recovery procedure.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
C-SPECT - a Clinical Cardiac SPECT/TCT Platform: Design Concepts and Performance Potential
Chang, Wei; Ordonez, Caesar E.; Liang, Haoning; Li, Yusheng; Liu, Jingai
2013-01-01
Because of the scarcity of photons emitted from the heart, clinical cardiac SPECT imaging is mainly limited by photon statistics. The sub-optimal detection efficiency of current SPECT systems not only limits the quality of clinical cardiac SPECT imaging but also makes more advanced potential applications difficult to realize. We propose a high-performance system platform, C-SPECT, whose sampling geometry is optimized for detection of emitted photons in quality and quantity. The C-SPECT has a stationary C-shaped gantry that surrounds the left-front side of a patient's thorax. The stationary C-shaped collimator and detector systems in the gantry provide effective and efficient detection and sampling of photon emission. For cardiac imaging, the C-SPECT platform could achieve 2 to 4 times the system geometric efficiency of conventional SPECT systems at the same sampling resolution. This platform also includes an integrated transmission CT for attenuation correction. The ability of C-SPECT systems to perform sequential high-quality emission and transmission imaging could bring cost-effective high performance to clinical imaging. In addition, a C-SPECT system could provide high detection efficiency to accommodate the fast acquisition rates needed for gated and dynamic cardiac imaging. This paper describes the design concepts and performance potential of C-SPECT, and illustrates how these concepts can be implemented in a basic system. PMID:23885129
Measurement of baseline and orientation between distributed aerospace platforms.
Wang, Wen-Qin
2013-01-01
Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting platform and the receiving platform and the orientation of the platforms towards each other during data recording must be measured in real time. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms using the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors due to the separate oscillators used in the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
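To illustrate why a duplex (two-way) scheme can cancel clock offset, consider the standard two-way time-transfer bookkeeping (notation mine, not taken from the paper): platform A transmits at t1, platform B receives at t2 and replies at t3, and A receives the reply at t4. With one-way propagation delay τ and clock offset δ between the platforms, t2 = t1 + τ + δ and t4 = t3 + τ - δ, so

```latex
\tau = \frac{(t_4 - t_1) - (t_3 - t_2)}{2}, \qquad R = c\,\tau
```

and δ cancels because t2 and t3 enter only through their difference on B's clock.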
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both a geospatial raster database management system and a raster data processing platform, from a domain-specific perspective as well as from a computing point of view. It also discusses the need for tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. As a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
Thermal and Power Challenges in High Performance Computing Systems
NASA Astrophysics Data System (ADS)
Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun
2009-05-01
This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, entertainment, etc. is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices which provide the required performance characteristics with the associated form factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges that are articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots etc.
Bhambure, R; Rathore, A S
2013-01-01
This article describes the development of a high-throughput process development (HTPD) platform for developing chromatography steps. An assessment of the platform as a tool for establishing the "characterization space" for an ion exchange chromatography step has been performed by using design of experiments. Case studies involving a biotech therapeutic, granulocyte colony-stimulating factor, have been used to demonstrate the performance of the platform. We discuss the various challenges that arise when working at such small volumes, along with the solutions that we propose to alleviate these challenges and make the HTPD data suitable for empirical modeling. Further, we have validated the scalability of this platform by comparing the results from the HTPD platform (2 and 6 μL resin volumes) against those obtained at the traditional laboratory scale (resin volume, 0.5 mL). We find that, after integration of the proposed correction factors, the HTPD platform is capable of performing process optimization studies at 170-fold higher productivity. The platform is capable of providing a semi-quantitative assessment of the effects of the various input parameters under consideration. We think that a platform such as the one presented here is an excellent tool for examining the "characterization space" and reducing the extensive experimentation at the traditional lab scale that is otherwise required for establishing the "design space." Thus, this platform will specifically aid in the successful implementation of quality by design in biotech process development. This is especially significant in view of the constraints with respect to time and resources that the biopharma industry faces today. Copyright © 2013 American Institute of Chemical Engineers.
Modeling and analysis of a flywheel microvibration isolation system for spacecrafts
NASA Astrophysics Data System (ADS)
Wei, Zhanji; Li, Dongxu; Luo, Qing; Jiang, Jianping
2015-01-01
The microvibrations generated by flywheels running at full speed onboard high-precision spacecraft affect the stability of the spacecraft bus and further degrade the pointing accuracy of the payload. A passive vibration isolation platform comprised of multi-segment zig-zag beams is proposed to isolate disturbances of the flywheel. By considering the flywheel and the platform as an integral system with gyroscopic effects, an equivalent dynamic model is developed and verified through eigenvalue and frequency response analysis. The critical speeds of the system are deduced and expressed as functions of system parameters. The vibration isolation performance of the platform under synchronal and high-order harmonic disturbances caused by the flywheel is investigated. It is found that the speed range within which the passive platform is effective and the disturbance decay rate of the system are greatly influenced by the locations of the critical speeds. Structural optimization of the platform is carried out to enhance its performance. Simulation results show that a properly designed vibration isolation platform can effectively reduce disturbances emitted by a flywheel operating above the critical speeds of the system.
High-Throughput Platform for Synthesis of Melamine-Formaldehyde Microcapsules.
Çakir, Seda; Bauters, Erwin; Rivero, Guadalupe; Parasote, Tom; Paul, Johan; Du Prez, Filip E
2017-07-10
The synthesis of microcapsules via in situ polymerization is a labor-intensive and time-consuming process, in which many composition and process factors affect microcapsule formation and morphology. Herein, we report a novel combinatorial technique for the preparation of melamine-formaldehyde microcapsules, using a custom-made and automated high-throughput platform (HTP). After performing validation experiments to ensure the accuracy and reproducibility of the novel platform, a design of experiments study was performed. The influence of different encapsulation parameters was investigated, such as the effect of the surfactant, surfactant type, surfactant concentration, and core/shell ratio. As a result, this HTP platform is suitable for the synthesis of different types of microcapsules in an automated and controlled way, allowing the screening of different reaction parameters in a shorter time compared to manual synthetic techniques.
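The design-of-experiments screening described above amounts to enumerating factor-level combinations for the automated platform to execute. A minimal sketch with hypothetical factor levels (not the levels used in the study):

```python
# Minimal sketch: full-factorial run list over the encapsulation factors named above.
from itertools import product

surfactant_type = ["none", "anionic", "nonionic"]  # hypothetical levels
surfactant_conc = [0.5, 1.0, 2.0]                  # wt%, illustrative
core_shell_ratio = [1.0, 2.0, 3.0]

runs = list(product(surfactant_type, surfactant_conc, core_shell_ratio))
for i, (stype, conc, ratio) in enumerate(runs, 1):
    print(f"run {i:02d}: surfactant={stype}, conc={conc} wt%, core/shell={ratio}")
```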
An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform
NASA Technical Reports Server (NTRS)
Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak
2012-01-01
The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing, as it promises great potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.
ERIC Educational Resources Information Center
Chen, Yixing
2013-01-01
The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. The VDS is intended to assist collaborating architects,…
Mennenga, Sarah E; Gerson, Julia E; Dunckley, Travis; Bimonte-Nelson, Heather A
2015-01-01
Harmine is a naturally occurring monoamine oxidase inhibitor that has recently been shown to selectively inhibit the dual-specificity tyrosine-(Y)-phosphorylation-regulated kinase 1A (DYRK1A). We investigated the cognitive effects of 1 mg (low) and 5 mg (high) Harmine using the delayed-match-to-sample (DMS) asymmetrical 3-choice water maze task to evaluate spatial working and recent memory, and the Morris water maze (MM) task to test spatial reference memory. Animals were also tested on the visible platform task, a water-escape task with the same motor, motivational, and reinforcement components as the other tasks used to evaluate cognition, but simpler and with the platform visible above the surface of the water. A subset of the Harmine-high treated animals showed clear motor impairments on all behavioral tasks, and the visible platform task confirmed a lack of competence to perform the procedural components of water maze testing. After excluding animals from the high-dose group that could not perform the procedural components of a swim task, it was revealed that both high- and low-dose treatment with Harmine enhanced performance on the latter portion of DMS testing, but had no effect on MM performance. Thus, this study demonstrates the importance of confirming motor and visual competence when studying animal cognition, and verifies the one-day visible platform task as a reliable measure of the ability to perform the procedural components necessary for completion of a swim task. Copyright © 2014. Published by Elsevier Inc.
30 CFR 250.906 - What must I do to obtain approval for the proposed site of my platform?
Code of Federal Regulations, 2010 CFR
2010-07-01
... seafloor sediments. (b) Geologic surveys. You must perform a geological survey relevant to the design and... seafloor subsidence. (c) Subsurface surveys. Depending upon the design and location of your proposed... proposed site of my platform? (a) Shallow hazards surveys. You must perform a high-resolution or acoustic...
30 CFR 250.906 - What must I do to obtain approval for the proposed site of my platform?
Code of Federal Regulations, 2012 CFR
2012-07-01
... design and siting of your platform. Your geological survey must assess: (1) Seismic activity at your... seafloor subsidence. (c) Subsurface surveys. Depending upon the design and location of your proposed... the proposed site of my platform? (a) Shallow hazards surveys. You must perform a high-resolution or...
30 CFR 250.906 - What must I do to obtain approval for the proposed site of my platform?
Code of Federal Regulations, 2014 CFR
2014-07-01
... design and siting of your platform. Your geological survey must assess: (1) Seismic activity at your... seafloor subsidence. (c) Subsurface surveys. Depending upon the design and location of your proposed... the proposed site of my platform? (a) Shallow hazards surveys. You must perform a high-resolution or...
30 CFR 250.906 - What must I do to obtain approval for the proposed site of my platform?
Code of Federal Regulations, 2013 CFR
2013-07-01
... design and siting of your platform. Your geological survey must assess: (1) Seismic activity at your... seafloor subsidence. (c) Subsurface surveys. Depending upon the design and location of your proposed... the proposed site of my platform? (a) Shallow hazards surveys. You must perform a high-resolution or...
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.
2014-12-01
The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within a HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections, in particular the climate, observational data, and geoscientific domains. This paper examines the infrastructure, innovation, and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment: a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system, and several highly connected large-scale, high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.
SOI layout decomposition for double patterning lithography on high-performance computer platforms
NASA Astrophysics Data System (ADS)
Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir
2014-12-01
In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose layouts successfully and that the minimal distance between polygons in the layout is increased.
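Decomposing a layout for double patterning amounts to 2-coloring the conflict ("contradiction") graph whose edges connect polygons closer than the same-mask spacing limit. A minimal sequential sketch of that idea follows; the paper's modified concurrent BFS is parallel, and the names here are our own:

```python
from collections import deque

def two_color_layout(conflict_graph):
    """Assign each polygon to one of two masks by BFS 2-coloring.

    conflict_graph: dict mapping a polygon id to the ids of polygons
    that conflict with it (i.e., lie closer than the minimum same-mask
    spacing). Returns {polygon id: 0 or 1}, or None when an odd cycle
    makes the layout undecomposable without extra measures (stitching).
    """
    color = {}
    for start in conflict_graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflict_graph[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None  # odd cycle: conflict cannot be resolved
    return color

# Example: four polygons in a chain of conflicts -> masks 0, 1, 0, 1
print(two_color_layout({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
```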
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
Validation of the iPhone app using the force platform to estimate vertical jump height.
Carlos-Vivas, Jorge; Martin-Martinez, Juan P; Hernandez-Mocholi, Miguel A; Perez-Gomez, Jorge
2018-03-01
Vertical jump performance has been evaluated with several devices: force platforms, contact mats, Vertec, accelerometers, infrared cameras and high-velocity cameras; however, the force platform is considered the gold standard for measuring vertical jump height. The purpose of this study was to validate an iPhone app called My Jump, which measures vertical jump height, by comparing it with two force-platform estimates: vertical velocity at take-off and time in the air. A total of 40 sport sciences students (age 21.4±1.9 years) completed five countermovement jumps (CMJs) over a force platform. Thus, 200 CMJ heights were evaluated from the vertical velocity at take-off and the time in the air using the force platform, and from the time in the air with the My Jump mobile application. The heights obtained were compared using the intraclass correlation coefficient (ICC). Correlation between the app and the force platform using the time in the air was perfect (ICC=1.000, P<0.001). Correlation between the app and the force platform using the vertical velocity at take-off was also very high (ICC=0.996, P<0.001), with an error margin of 0.78%. These results show that the My Jump application is an appropriate method to evaluate vertical jump performance; however, vertical jump height is slightly overestimated compared with that of the force platform.
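Both force-platform estimates reduce to textbook projectile kinematics, so the two height formulas are easy to state; a small sketch (symbols are standard, not taken from the paper):

```python
G = 9.81  # gravitational acceleration, m/s^2

def height_from_flight_time(t_air):
    """Jump height (m) from flight time (s): h = g * t^2 / 8."""
    return G * t_air ** 2 / 8.0

def height_from_takeoff_velocity(v0):
    """Jump height (m) from vertical take-off velocity (m/s): h = v^2 / (2g)."""
    return v0 ** 2 / (2.0 * G)

# A 0.50 s flight time corresponds to roughly a 0.31 m jump
print(round(height_from_flight_time(0.5), 3))
```

The flight-time route is the one shared by the app and the force platform, which is why their agreement is essentially exact.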
Platform for Automated Real-Time High Performance Analytics on Medical Image Data.
Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A
2018-03-01
Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. Simulation of adaptive optics systems is therefore necessary to characterize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control of the simulation as it runs. Results from the simulation of a ground-layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
Parallel k-means++ for Multiple Shared-Memory Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, Patrick S.; Lewis, Robert R.
2016-09-22
In recent years k-means++ has become a popular initialization technique for improved k-means clustering. To date, most of the work done to improve its performance has involved parallelizing algorithms that are only approximations of k-means++. In this paper we present a parallelization of the exact k-means++ algorithm, with a proof of its correctness. We develop implementations for three distinct shared-memory architectures: multicore CPU, high performance GPU, and the massively multithreaded Cray XMT platform. We demonstrate the scalability of the algorithm on each platform. In addition we present a visual approach for showing which platform performed k-means++ the fastest for varying data sizes.
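For reference, exact k-means++ seeding is shown below in a plain sequential form; the distance-update pass over all points is the part the paper distributes across threads (a sketch with our own names, not the paper's code):

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """Exact k-means++ seeding: each new center is drawn with probability
    proportional to the squared distance to the nearest center so far."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [rng.choice(points)]
    d2 = [sq_dist(p, centers[0]) for p in points]  # parallelizable pass
    for _ in range(k - 1):
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for i, w in enumerate(d2):                 # weighted draw
            acc += w
            if acc >= r:
                centers.append(points[i])
                break
        d2 = [min(old, sq_dist(p, centers[-1]))    # parallelizable update
              for old, p in zip(d2, points)]
    return centers

print(kmeans_pp_init([(0, 0), (0, 1), (10, 10), (10, 11)], 2))
```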
2003-09-02
KENNEDY SPACE CENTER, FLA. - This bird's-eye view of a high bay in the Orbiter Processing Facility (OPF) shows the open payload bay of Space Shuttle Discovery surrounded by the standard platforms and equipment required to process a Space Shuttle orbiter. The high bay is 197 feet (60 meters) long, 150 feet (46 meters) wide, 95 feet (29 meters) high, and encompasses a 29,000-square-foot (2,694-square-meter) area. The 30-ton (27-metric-ton) bridge crane (yellow device, right) has a hook height of approximately 66 feet (20 meters). Platforms, a main access bridge, and two rolling bridges with trucks provide access to various parts of the orbiter. In addition to routine servicing and checkout, the inspections and modifications made to enhance Discovery's performance and upgrade its systems were performed in the OPF during its recently completed Orbiter Major Modification (OMM) period.
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology for realising an HPS platform. The research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping, and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
A large array of high-performance artificial stars using airship-supported small mirrors
NASA Astrophysics Data System (ADS)
Content, Robert; Foxwell, Mark; Murray, Graham J.
2004-10-01
We propose a practical system that can provide a large number of high performance artificial stars, of the order of a few hundred, using an array of small mirrors on an airship-supported platform illuminated from the ground by a laser. Our concept offers several advantages over other guide star schemes: airborne mirror arrays can furnish tip-tilt information; they permit a considerable reduction in the total ground-laser power required; high-intensity guide stars with very small angular image size are possible; and they produce very little parasitic scattered laser light. More basic and simpler launch-laser and AO technologies can therefore be employed, with potentially huge cost savings and a potentially significant improvement in the quality of the AO correction. The general platform scheme and suitable lift technologies are also discussed. A novel concept for achieving precise positioning is presented whereby the platform and the lifting vehicle are linked by a tether, the platform having a degree of independent control. Our proposal would employ as the lift vehicle an autonomous high-altitude airship of the type currently under widespread development in the commercial sector for use as hubs for telecommunication networks, mobile telephone relay stations, etc.
Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.
2010-01-01
Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499
Iron Opacity Platform Performance Characterization at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Opachich, Y. P.; Ross, P. W.; Heeter, R. F.; Barrios, M. A.; Liedahl, D. A.; May, M. J.; Schneider, M. B.; Craxton, R. S.; Garcia, E. M.; McKenty, P. W.; Zhang, R.; Weaver, J. L.; Flippo, K. A.; Kline, J. L.; Perry, T. S.; Los Alamos National Laboratory Collaboration; Naval Research Laboratory Collaboration; University of Rochester Laboratory for Laser Energetics Collaboration; Lawrence Livermore National Lab Collaboration; National Security Technologies, LLC Collaboration
2016-10-01
A high temperature opacity platform has been fielded at the National Ignition Facility (NIF). The platform will be used to study opacity in iron at a temperature of 160 eV. The platform uses a 6 mm diameter hohlraum driven by 128 laser beams with 530 kJ of energy in a 3 ns pulse to heat an iron sample. Absorption spectra of the heated sample are generated with a broadband pulsed X-ray backlighter produced by imploding a vacuum-filled CH shell. The shell is 2 mm in diameter and 20 microns thick, driven by 64 beams with 250 kJ in a 2.5 ns pulse. The hohlraum and backlighter performance have both been investigated recently and will be discussed in this presentation. This work was performed by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-2892.
Daskivich, Timothy J; Houman, Justin; Fuller, Garth; Black, Jeanne T; Kim, Hyung L; Spiegel, Brennan
2018-04-01
Patients use online consumer ratings to identify high-performing physicians, but it is unclear if ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance. We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores. Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04, 0.04), primary care physician scores (β-coefficient range, -0.01, 0.3), and administrator scores (β-coefficient range, -0.2, 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons. Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance. Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.
Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.
2007-01-01
The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.
HTS techniques for patch clamp-based ion channel screening - advances and economy.
Farre, Cecilia; Fertig, Niels
2012-06-01
Ten years ago, the first publication appeared showing patch clamp recordings performed on a planar glass chip instead of using a conventional patch clamp pipette. "Going planar" proved to revolutionize ion channel drug screening as we know it, by allowing high quality measurements of ion channels and their effectors at a higher throughput and at the same time de-skilling the highly laborious technique. Over the years, platforms evolved in response to user requirements regarding experimental features, data handling plus storage, and suitable target diversity. This article gives a snapshot image of patch clamp-based ion channel screening with focus on platforms developed to meet the requirements of high-throughput screening environments. The commercially available platforms are described, along with their benefits and drawbacks in ion channel drug screening. Automated patch clamp (APC) platforms allow faster investigation of a larger number of ion channel active compounds or cell clones than previously possible. Since patch clamp is the only method allowing direct, real-time measurements of ion channel activity, APC holds the promise of picking up high quality leads where they otherwise would have been overlooked using indirect methods. In addition, drug candidate safety profiling can be performed earlier in the drug discovery process, avoiding late-phase compound withdrawal due to safety liability issues, which is highly costly and inefficient.
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
PR-PR: cross-platform laboratory automation system.
Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J
2014-08-15
To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.
Fang, Xiang; Li, Ning-qiu; Fu, Xiao-zhe; Li, Kai-bin; Lin, Qiang; Liu, Li-hui; Shi, Cun-bin; Wu, Shu-qin
2015-07-01
As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consists of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analyses on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via BLAST searches and GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes in system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were observed. These results show that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.
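Of the platform's three modules, the molecular dynamics monitoring is the simplest to illustrate: the root mean square deviation tracked during equilibration is a short NumPy computation (a sketch; array shapes and names are our own):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """RMSD between two conformations given as (n_atoms, 3) arrays of
    matched atom coordinates, assumed already superimposed."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return np.sqrt((diff ** 2).sum(axis=1).mean())

a = np.zeros((3, 3))
b = np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
print(rmsd(a, b))  # 0.1
```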
Avionics and Power Management for Low-Cost High-Altitude Balloon Science Platforms
NASA Technical Reports Server (NTRS)
Chin, Jeffrey; Roberts, Anthony; McNatt, Jeremiah
2016-01-01
High-altitude balloons (HABs) have become popular as educational and scientific platforms for planetary research. This document outlines key components for missions where low cost and rapid development are desired. As an alternative to ground-based vacuum and thermal testing, these systems can be flight tested at comparable costs. Communication, solar, space, and atmospheric sensing experiments often require environments where ground-level testing can be challenging or impossible in certain cases. When performing HAB research, the ability to monitor the status of the platform and gather data is key to both the scientific and recoverability aspects of the mission. A few turnkey platform solutions are outlined that leverage rapidly evolving open-source engineering ecosystems. Rather than building custom components from scratch, these recommendations attempt to maximize the simplicity and minimize the cost of HAB platforms to make launches more accessible to everyone.
Undergraduate Laboratory Module for Implementing ELISA on the High Performance Microfluidic Platform
ERIC Educational Resources Information Center
Giri, Basant; Peesara, Ravichander R.; Yanagisawa, Naoki; Dutta, Debashis
2015-01-01
Implementing enzyme-linked immunosorbent assays (ELISA) in microchannels offers several advantages over its traditional microtiter plate-based format, including a reduced sample volume requirement, shorter incubation period, and greater sensitivity. Moreover, microfluidic ELISA platforms are inexpensive to fabricate and allow integration of…
Modular HPC I/O characterization with Darshan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Shane; Carns, Philip; Harms, Kevin
2016-11-13
Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, which provides a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.
Intra-Platform Repeatability and Inter-Platform Comparability of MicroRNA Microarray Technology
Sato, Fumiaki; Tsuchiya, Soken; Terasawa, Kazuya; Tsujimoto, Gozoh
2009-01-01
Over the last decade, DNA microarray technology has provided a great contribution to the life sciences. The MicroArray Quality Control (MAQC) project demonstrated the way to analyze the expression microarray. Recently, microarray technology has been utilized to analyze comprehensive microRNA expression profiles. Currently, several platforms of microRNA microarray chips are commercially available. Thus, we compared the repeatability and comparability of five different microRNA microarray platforms (Agilent, Ambion, Exiqon, Invitrogen and Toray) using 309 microRNA probes, and the TaqMan microRNA system using 142 microRNA probes. This study demonstrated that microRNA microarrays have high intra-platform repeatability and comparability to quantitative RT-PCR of microRNA. Among the five platforms, the Agilent and Toray arrays showed relatively better performance than the others. However, the current lineup of commercially available microRNA microarray systems fails to show good inter-platform concordance, probably because of the lack of an adequate normalization method and severe divergence in the stringency of detection call criteria between different platforms. This study provides basic information about the performance and the problems specific to current microRNA microarray systems. PMID:19436744
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
A Web Tool for Research in Nonlinear Optics
NASA Astrophysics Data System (ADS)
Prikhod'ko, Nikolay V.; Abramovsky, Viktor A.; Abramovskaya, Natalia V.; Demichev, Andrey P.; Kryukov, Alexandr P.; Polyakov, Stanislav P.
2016-02-01
This paper presents a project to develop a web platform, called WebNLO, for computer modeling of nonlinear optics phenomena. We discuss the general scheme of the platform and a model for interaction between the platform modules. The platform is built as a set of interacting RESTful web services (a SaaS approach). Users can interact with the platform through a web browser or a command-line interface. No comparable resource exists in the field of nonlinear optics; being created for the first time, it will give researchers access to high-performance computing resources and thereby significantly reduce the cost of the research and development process.
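A sketch of how a client might drive such a set of RESTful services; the base URL, endpoints, and job fields below are hypothetical, since the abstract does not specify the WebNLO API:

```python
import requests  # standard HTTP client

BASE_URL = "https://webnlo.example.org/api/v1"  # placeholder, not the real URL

def submit_job(params):
    """POST a nonlinear-optics simulation job; return the assigned job id."""
    resp = requests.post(f"{BASE_URL}/jobs", json=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["job_id"]

def get_job(job_id):
    """GET the job resource to check status and retrieve results."""
    resp = requests.get(f"{BASE_URL}/jobs/{job_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

job_id = submit_job({"model": "nlse", "steps": 1000})  # hypothetical fields
print(get_job(job_id)["status"])
```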
An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung
2011-01-01
In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed on a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.
Coverage Bias and Sensitivity of Variant Calling for Four Whole-genome Sequencing Technologies
Lasitschka, Bärbel; Jones, David; Northcott, Paul; Hutter, Barbara; Jäger, Natalie; Kool, Marcel; Taylor, Michael; Lichter, Peter; Pfister, Stefan; Wolf, Stephan; Brors, Benedikt; Eils, Roland
2013-01-01
The emergence of high-throughput, next-generation sequencing technologies has dramatically altered the way we assess genomes in population genetics and in cancer genomics. Currently, there are four commonly used whole-genome sequencing platforms on the market: Illumina's HiSeq2000, Life Technologies' SOLiD 4 and its completely redesigned 5500xl SOLiD, and Complete Genomics' technology. A number of earlier studies have compared a subset of those sequencing platforms or compared those platforms with Sanger sequencing, which is prohibitively expensive for whole genome studies. Here we present a detailed comparison of the performance of all currently available whole genome sequencing platforms, especially regarding their ability to call SNVs and to evenly cover the genome and specific genomic regions. Unlike earlier studies, we base our comparison on four different samples, allowing us to assess the between-sample variation of the platforms. We find a pronounced GC bias in GC-rich regions for Life Technologies' platforms, with Complete Genomics performing best here, while we see the least bias in GC-poor regions for HiSeq2000 and 5500xl. HiSeq2000 gives the most uniform coverage and displays the least sample-to-sample variation. In contrast, Complete Genomics exhibits by far the smallest fraction of bases not covered, while the SOLiD platforms reveal remarkable shortcomings, especially in covering CpG islands. When comparing the performance of the four platforms for calling SNPs, HiSeq2000 and Complete Genomics achieve the highest sensitivity, while the SOLiD platforms show the lowest false positive rate. Finally, we find that integrating sequencing data from different platforms offers the potential to combine the strengths of different technologies. In summary, our results detail the strengths and weaknesses of all four whole-genome sequencing platforms and indicate application areas that call for a specific sequencing platform while ruling out others. This helps to identify the proper sequencing platform for whole genome studies with different application scopes. PMID:23776689
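The GC-bias comparisons above rest on a simple covariate, the per-window GC content of the reference; a minimal sketch (window size and function names are our own):

```python
def gc_fraction(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def gc_by_window(seq, window=100):
    """Per-window GC content, the usual covariate when relating
    sequencing coverage to base composition along a genome."""
    return [gc_fraction(seq[i:i + window]) for i in range(0, len(seq), window)]

print(gc_fraction("ACGTGGCC"))  # 0.75
```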
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
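The pairwise distance stage that the paper maps onto a linear systolic array is ordinary dynamic programming over a score matrix; a plain sequential sketch follows (the FPGA fills the matrix one anti-diagonal per clock cycle, which this software version does not attempt):

```python
def edit_distance(a, b):
    """Pairwise sequence distance by dynamic programming (Levenshtein),
    keeping only one matrix row in memory at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # match/mismatch
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```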
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform exhibits several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage and high-speed data transmission, which make it able to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation across the cores of the real-time signal processing hardware platform. Experimental results demonstrate that the processing capability of the signal processing platform meets the constraints of real-time DOA estimation.
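For orientation, a standard complex-valued MUSIC pseudospectrum is sketched below for a uniform linear array; the paper's real-valued estimator reduces exactly this arithmetic, and the array geometry here is an assumption:

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum from a sensor covariance matrix R for a
    uniform linear array with element spacing d (in wavelengths)."""
    m = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, : m - n_sources]    # noise subspace
    out = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))
        out.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(out)

# Quick check: one source at 20 degrees seen by an 8-element array
m, theta0 = 8, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(theta0))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(m)
grid = np.arange(-90, 91)
print(grid[np.argmax(music_spectrum(R, 1, grid))])  # ~20
```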
The mid-IR silicon photonics sensor platform (Conference Presentation)
NASA Astrophysics Data System (ADS)
Kimerling, Lionel; Hu, Juejun; Agarwal, Anuradha M.
2017-02-01
Advances in integrated silicon photonics are enabling highly connected sensor networks that offer sensitivity, selectivity and pattern recognition. Cost, performance and the evolution path of the so-called `Internet of Things' will gate the proliferation of these networks. The wavelength spectral range of 3-8 um, commonly known as the mid-IR, is critical to specificity for sensors that identify materials by detection of local vibrational modes, reflectivity and thermal emission. For ubiquitous sensing applications in this regime, sensors must move from premium to commodity-level manufacturing volumes and cost. Scaling performance/cost is critically dependent on establishing a minimum set of platform attributes for point, wearable, and physical sensing. Optical sensors are ideal for non-invasive applications. Optical sensor device physics involves evanescent or intra-cavity structures applied to concentration, interrogation and photo-catalysis functions. The ultimate utility of a platform depends on sample delivery/presentation modalities; system reset, recalibration and maintenance capabilities; and sensitivity and selectivity performance. The attributes and performance of a unified Glass-on-Silicon platform have shown good prospects for heterogeneous integration of materials and devices using a low-cost process flow. Integrated, single-mode, silicon photonic platforms offer significant performance and cost advantages, but they require discovery and qualification of new materials and process integration schemes for the mid-IR. Waveguide-integrated light sources based on rare earth dopants and Ge-pumped frequency combs show promise. Optical resonators and waveguide spirals can enhance sensitivity. PbTe materials are among the best choices for a standard, waveguide-integrated photodetector. Chalcogenide glasses are capable of transmitting mid-IR signals with high transparency. Integrated sensor case studies of i) high-sensitivity analyte detection in solution, ii) gas sensing in air and iii) on-chip spectrometry provide good insight into the tradeoffs being made en route to ubiquitous sensor deployment in an Internet of Things.
NASA Astrophysics Data System (ADS)
Tang, Li; Liu, Jing-Ning; Feng, Dan; Tong, Wei
2008-12-01
Existing security solutions in network storage environments perform poorly because cryptographic operations (encryption and decryption) implemented in software can dramatically reduce system performance. In this paper we propose a cryptographic hardware accelerator on a dynamically reconfigurable platform for the security of high performance network storage systems. We employ a dynamically reconfigurable platform based on an FPGA to implement a PowerPC-based embedded system, which executes cryptographic algorithms. To reduce the reconfiguration latency, we apply prefetch scheduling. Moreover, the processing elements can be dynamically configured to support different cryptographic algorithms according to the requests received by the accelerator. In the experiment, we have implemented the AES (Rijndael) and 3DES cryptographic algorithms in the reconfigurable accelerator. Our proposed reconfigurable cryptographic accelerator can dramatically increase performance compared with traditional software-based network storage systems.
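As a point of reference for the work being offloaded, a software AES-CBC baseline of the kind such accelerators replace, assuming the Python `cryptography` package is available (key size, block size, and data are our choices):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_cbc_encrypt(key, iv, plaintext):
    """Software AES-CBC encryption, the CPU-bound step the FPGA offloads.
    Plaintext length must be a multiple of the 16-byte block size
    (no padding is applied in this sketch)."""
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key, iv = os.urandom(32), os.urandom(16)
block = os.urandom(4096)  # one storage block
print(len(aes_cbc_encrypt(key, iv, block)))  # 4096
```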
ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.
Chen, Xinfeng; Li, Haohong
2017-01-01
Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) that was an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allowed the behavioral schedule to be entirely stored in, and operated on, the Arduino chip. This made ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that ArControl is an adaptive and reliable system suitable for behavioral research.
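A minimal sketch of the host side of such a host/board split, assuming the pyserial package; the port name and the command protocol below are invented for illustration and are not ArControl's actual protocol:

```python
import serial  # pyserial

def run_trial(port="/dev/ttyACM0", baud=115200):
    """Start one trial on an Arduino-based controller and collect the
    event log it streams back over the serial link."""
    with serial.Serial(port, baud, timeout=2) as ser:
        ser.write(b"START\n")                   # hypothetical start command
        events = []
        while True:
            line = ser.readline().decode().strip()
            if not line or line == "DONE":      # hypothetical terminator
                break
            events.append(line)                 # e.g. timestamped events
    return events
```

Because the schedule itself runs on the Arduino, the host only configures and logs; millisecond timing does not depend on the PC side.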
Implementation and performance test of cloud platform based on Hadoop
NASA Astrophysics Data System (ADS)
Xu, Jingxian; Guo, Jianhong; Ren, Chunlan
2018-01-01
Hadoop, an open-source project of the Apache Foundation, is a distributed computing framework that deals with large amounts of data and has been widely used in the Internet industry. It is therefore worthwhile to study how to build a Hadoop platform and how to test its performance. This paper presents a method for implementing a Hadoop platform and a method for testing platform performance. Experimental results show that the proposed performance-testing method is effective and can characterize the performance of a Hadoop platform.
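A conventional smoke test for a freshly built Hadoop cluster is a Hadoop Streaming word count; the script below is a generic sketch of that test (the file layout and names are our own, not from the paper):

```python
#!/usr/bin/env python3
"""Word-count mapper/reducer for Hadoop Streaming.
Run as `wordcount.py map` or `wordcount.py reduce`; Hadoop sorts the
mapper output by key before the reduce phase, which groupby relies on."""
import sys
from itertools import groupby

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

Locally this can be exercised as `cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce` before submitting the same script through the hadoop-streaming jar.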
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
An ultra-tunable platform for molecular engineering of high-performance crystalline porous materials
Zhai, Quan-Guo; Bu, Xianhui; Mao, Chengyu; ...
2016-12-07
Metal-organic frameworks are a class of crystalline porous materials with potential applications in catalysis, gas separation and storage, and so on. Of great importance is the development of innovative synthetic strategies to optimize porosity, composition and functionality to target specific applications. Here we show a platform for the development of metal-organic materials and control of their gas sorption properties. This platform can accommodate a large variety of organic ligands and homo- or hetero-metallic clusters, which allows for extraordinary tunability in gas sorption properties. Even without any strong binding sites, most members of this platform exhibit high gas uptake capacity. As a result, the high capacity is accomplished with an isosteric heat of adsorption as low as 20 kJ mol⁻¹ for carbon dioxide, which could bring a distinct economic advantage because of the significantly reduced energy consumption for activation and regeneration of adsorbents.
NASA Astrophysics Data System (ADS)
Feng, Steve; Woo, Min-jae; Kim, Hannah; Kim, Eunso; Ki, Sojung; Shao, Lei; Ozcan, Aydogan
2016-03-01
We developed an easy-to-use and widely accessible crowd-sourcing tool for rapidly training humans to perform biomedical image diagnostic tasks, and demonstrated the platform by training middle and high school students in South Korea to diagnose malaria-infected red blood cells (RBCs) using Giemsa-stained thin blood smears imaged under light microscopes. We previously used the same platform (i.e., BioGames) to crowd-source diagnoses of individual RBC images, marking them as malaria positive (infected), negative (uninfected), or questionable (insufficient information for a reliable diagnosis). Using a custom-developed statistical framework, we combined the diagnoses from both expert diagnosticians and the minimally trained human crowd to generate a gold standard library of malaria-infection labels for RBCs. Using this library of labels, we developed a web-based training and educational toolset that provides a quantified score for diagnosticians/users to compare their performance against their peers and view misdiagnosed cells. We have since demonstrated the ability of this platform to quickly train humans without prior experience to reach high diagnostic accuracy compared to expert diagnosticians. Our initial trial group of 55 middle and high school students collectively played more than 170 hours, demonstrating significant improvements after only 3 hours of training games, with diagnostic scores that match expert diagnosticians'. Next, through a national-scale educational outreach program in South Korea, we recruited >1660 students who demonstrated a similar performance level after 5 hours of training. We plan to further demonstrate this tool's effectiveness for other diagnostic tasks involving image labeling and aim to provide an easily accessible and quickly adaptable framework for online training of new diagnosticians.
Wind study for high altitude platform design
NASA Technical Reports Server (NTRS)
Strganac, T. W.
1979-01-01
An analysis of upper air winds was performed to define the wind environment at potential operating altitudes for high-altitude powered platform concepts. Expected wind conditions of the contiguous United States, Pacific area (Alaska to Sea of Japan), and European area (Norwegian and Mediterranean Seas) were obtained using a representative network of sites selected based upon adequate high-altitude sampling, geographic dispersion, and observed upper wind patterns. A data base of twenty plus years of rawinsonde gathered wind information was used in the analysis. Annual variations from surface to 10 mb (approximately 31 km) pressure altitude were investigated to encompass the practical operating range for the platform concepts. Parametric analysis for the United States and foreign areas was performed to provide a basis for vehicle system design tradeoffs. This analysis of wind magnitudes indicates the feasibility of annual operation at a majority of sites and more selective seasonal operation for the extreme conditions between the pressure altitudes of 100 to 25 mb based upon the assumed design speeds.
Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1999-01-01
Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.
Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozacik, Stephen
Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.
Assessing the Potential of Stratospheric Balloons for Planetary Science
NASA Technical Reports Server (NTRS)
Kremic, Tibor; Hibbitts, Karl; Young, Eliot; Landis, Robert; Noll, Keith; Baines, Kevin
2013-01-01
Recent developments in high altitude balloon platform capabilities, specifically long duration flights in excess of 50 days at over 100,000 ft and precision pointing with performance at the arc sec level or better have raised the question whether this platform can be utilized for high-value planetary science observations. In January of 2012 a workshop was held at NASA Glenn Research Center in Cleveland, Ohio to explore what planetary science can be achieved utilizing such a platform. Over 40 science concepts were identified by the scientists and engineers attending the workshop. Those ideas were captured and then posted to a public website for all interested planetary scientists to review and give their comments. The results of the workshop, and subsequent community review, have demonstrated that this platform appears to have potential for high-value science at very competitive costs. Given these positive results, the assessment process was extended to include 1) examining, in more detail, the requirements for the gondola platform and the mission scenarios 2) identifying technical challenges and 3) developing one or more platform concepts in enough fidelity to enable accurate estimating of development and mission costs. This paper provides a review of the assessment, a summary of the achievable science and the challenges to make that science a reality with this platform.
CROPPER: a metagene creator resource for cross-platform and cross-species compendium studies.
Paananen, Jussi; Storvik, Markus; Wong, Garry
2006-09-22
Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from different heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate the combining of data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies. We have developed a web-based software resource, called CROPPER, that uses the latest genomic information concerning different data identifiers and orthologous genes from the Ensembl database. CROPPER can be used to combine genomic data from different heterogeneous sources, allowing researchers to perform cross-platform/cross-species compendium studies without the need for complex computational tools or for setting up one's own in-house database. We also present an example of a simple cross-platform/cross-species compendium study based on publicly available Parkinson's disease data derived from different sources. CROPPER is a user-friendly and freely available web-based software resource that can be successfully used for cross-species/cross-platform compendium studies.
Zhang, Wei; Zong, Peisong; Zheng, Xiuwen; Wang, Libin
2013-04-15
We demonstrate a novel high-performance DNA hybridization biosensor with a carbon nanotubes (CNTs)-based nanocomposite membrane as the enhanced sensing platform. The platform was constructed by homogeneously distributing ordered FePt nanoparticles (NPs) onto the CNTs matrix. The surface structure and electrochemical performance of the FePt/CNTs nanocomposite membrane were systematically investigated. Such a nanostructured composite membrane platform combines the advantages of FePt NPs and CNTs, greatly facilitating the electron-transfer process and the sensing behavior for DNA detection, leading to excellent sensitivity and selectivity. The complementary target genes from acute promyelocytic leukemia could be quantified in a wide range of 1.0×10⁻¹² mol/L to 1.0×10⁻⁶ mol/L using electrochemical impedance spectroscopy, and the detection limit was 2.1×10⁻¹³ mol/L under optimal conditions. In addition, the DNA electrochemical biosensor was highly selective, discriminating single-base or double-base mismatched sequences.
Developing one-dimensional implosions for inertial confinement fusion science
Kline, John L.; Yi, Sunghwan A.; Simakov, Andrei Nikolaevich; ...
2016-12-12
Experiments on the National Ignition Facility show that multi-dimensional effects currently dominate the implosion performance. Low mode implosion symmetry and hydrodynamic instabilities seeded by capsule mounting features appear to be two key limiting factors for implosion performance. One reason these factors have a large impact on the performance of inertial confinement fusion implosions is the high convergence required to achieve high fusion gains. To tackle these problems, a predictable implosion platform is needed, meaning experiments must trade off high gain for predictable performance. LANL has adopted three main approaches to develop a one-dimensional (1D) implosion platform, where 1D means measured yield over the 1D clean calculation. A high adiabat, low convergence platform is being developed using beryllium capsules, enabling larger case-to-capsule ratios to improve symmetry. The second approach is liquid fuel layers using wetted foam targets. With liquid fuel layers, the implosion convergence can be controlled via the initial vapor pressure set by the target fielding temperature. The last method is double shell targets. For double shells, the smaller inner shell houses the DT fuel, and the convergence of this cavity is relatively small compared to hot spot ignition. However, double shell targets have a different set of trade-offs and advantages. Finally, details for each of these approaches are described.
RootJS: Node.js Bindings for ROOT 6
NASA Astrophysics Data System (ADS)
Beffart, Theo; Früh, Maximilian; Haas, Christoph; Rajgopal, Sachin; Schwabe, Jonas; Wolff, Christoph; Szuba, Marek
2017-10-01
We present rootJS, an interface making it possible to seamlessly integrate ROOT 6 into applications written for Node.js, the JavaScript runtime platform increasingly commonly used to create high-performance Web applications. ROOT features can be called both directly from Node.js code and by JIT-compiling C++ macros. All rootJS methods are invoked asynchronously and support callback functions, allowing non-blocking operation of Node.js applications using them. Last but not least, our bindings have been designed to be platform-independent and should therefore work on all systems supporting both ROOT 6 and Node.js. Thanks to rootJS it is now possible to create ROOT-aware Web applications taking full advantage of the high performance and extensive capabilities of Node.js. Examples include platforms for the quality assurance of acquired, reconstructed or simulated data, book-keeping and e-log systems, and even Web browser-based data visualisation and analysis.
Integrating Reconfigurable Hardware-Based Grid for High Performance Computing
Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos
2015-01-01
FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the easiness and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties that need to be addressed when using reconfigurable platforms as accelerators: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications while simplifying the development process. PMID:25874241
Murty, Vishnu P; LaBar, Kevin S; Hamilton, Derek A; Adcock, R Alison
2011-01-01
The present study investigated the effects of approach versus avoidance motivation on declarative learning. Human participants navigated a virtual reality version of the Morris water task, a classic spatial memory paradigm, adapted to permit the experimental manipulation of motivation during learning. During this task, participants were instructed to navigate to correct platforms while avoiding incorrect platforms. To manipulate motivational states participants were either rewarded for navigating to correct locations (approach) or punished for navigating to incorrect platforms (avoidance). Participants' skin conductance levels (SCLs) were recorded during navigation to investigate the role of physiological arousal in motivated learning. Behavioral results revealed that, overall, approach motivation enhanced and avoidance motivation impaired memory performance compared to nonmotivated spatial learning. This advantage was evident across several performance indices, including accuracy, learning rate, path length, and proximity to platform locations during probe trials. SCL analysis revealed three key findings. First, within subjects, arousal interacted with approach motivation, such that high arousal on a given trial was associated with performance deficits. In addition, across subjects, high arousal negated or reversed the benefits of approach motivation. Finally, low-performing, highly aroused participants showed SCL responses similar to those of avoidance-motivation participants, suggesting that for these individuals, opportunities for reward may evoke states of learning similar to those typically evoked by threats of punishment. These results provide a novel characterization of how approach and avoidance motivation influence declarative memory and indicate a critical and selective role for arousal in determining how reinforcement influences goal-oriented learning.
Teachable, high-content analytics for live-cell, phase contrast movies.
Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J
2010-09-01
CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.
ERIC Educational Resources Information Center
Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu
2013-01-01
With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…
Spiking neural networks on high performance computer clusters
NASA Astrophysics Data System (ADS)
Chen, Chong; Taha, Tarek M.
2011-09-01
In this paper we examine the acceleration of two spiking neural network models on three clusters of multicore processors representing three categories of processors: x86, STI Cell, and NVIDIA GPGPUs. The x86 cluster utilized consists of 352 dual-core AMD Opterons, the Cell cluster consists of 320 Sony PlayStation 3s, while the GPGPU cluster contains 32 NVIDIA Tesla S1070 systems. The results indicate that the GPGPU platform dominates in performance compared to the Cell and x86 platforms examined. From a cost perspective, however, the GPGPU is more expensive in terms of neuron/s throughput. If the cost of GPGPUs goes down in the future, this platform will become very cost-effective for these models.
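For readers unfamiliar with the computation being accelerated, the sketch below shows the kind of per-neuron update at the heart of such simulations: a minimal leaky integrate-and-fire loop in C. It is an illustration only, not either of the specific models benchmarked here, and all parameters (network size, time constant, thresholds, input drive) are assumed values.

    #include <stdio.h>

    #define N 1024     /* neurons (assumed size, for illustration) */
    #define STEPS 1000 /* simulation steps */

    int main(void) {
        const double dt = 0.1;    /* time step, ms (assumed) */
        const double tau = 10.0;  /* membrane time constant, ms (assumed) */
        const double v_rest = -65.0, v_thresh = -50.0, v_reset = -70.0; /* mV */
        const double i_in = 20.0; /* constant input drive (assumed) */
        double v[N];
        long spikes = 0;

        for (int i = 0; i < N; i++) v[i] = v_rest;

        for (int t = 0; t < STEPS; t++) {
            /* Each neuron's update within a step is independent, which is
               what makes the model map well onto multicore, Cell, and GPU
               platforms. */
            for (int i = 0; i < N; i++) {
                v[i] += (dt / tau) * ((v_rest - v[i]) + i_in);
                if (v[i] >= v_thresh) { /* threshold crossing: emit a spike */
                    v[i] = v_reset;
                    spikes++;
                }
            }
        }
        printf("total spikes: %ld\n", spikes);
        return 0;
    }

The point of the sketch is the inner loop: because each neuron evolves independently within a time step, the work divides cleanly across cores or GPU threads.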
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
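As background, the C sketch below shows the core of a weighted photon random walk in an infinite homogeneous medium. It is a minimal illustration rather than the paper's OpenCL kernel: the optical properties and photon count are invented, and scattering is isotropic where tissue codes typically use the Henyey-Greenstein phase function. Because every photon history is independent, the outer loop is what spreads across GPU threads in a real implementation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void) {
        const double TWO_PI = 6.283185307179586;
        const double mua = 0.1, mus = 10.0; /* absorption/scattering, 1/mm (assumed) */
        const double mut = mua + mus;
        const int nphoton = 10000;
        double absorbed = 0.0;

        srand(12345);
        for (int p = 0; p < nphoton; p++) { /* photons are independent: one thread each */
            double x = 0, y = 0, z = 0, w = 1.0; /* position and photon weight */
            double ux = 0, uy = 0, uz = 1;       /* launch direction: +z */
            while (w > 1e-4) {
                /* sample a free path from the Beer-Lambert distribution */
                double s = -log((rand() + 1.0) / (RAND_MAX + 1.0)) / mut;
                x += s * ux; y += s * uy; z += s * uz;
                absorbed += w * mua / mut; /* deposit the absorbed fraction */
                w *= mus / mut;            /* remaining (scattered) weight */
                /* isotropic rescattering; tissue codes use Henyey-Greenstein */
                double ct = 2.0 * rand() / RAND_MAX - 1.0;
                double st = sqrt(1.0 - ct * ct);
                double phi = TWO_PI * rand() / RAND_MAX;
                ux = st * cos(phi); uy = st * sin(phi); uz = ct;
            }
        }
        /* in an unbounded medium nearly all launched weight is absorbed */
        printf("mean absorbed weight per photon: %f\n", absorbed / nphoton);
        return 0;
    }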
NASA Astrophysics Data System (ADS)
Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.
2017-12-01
The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, including NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting-edge infrastructure, including an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis, thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, including the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic-Boreal Vulnerability Experiment (ABoVE).
Lattice QCD simulations using the OpenACC platform
NASA Astrophysics Data System (ADS)
Majumdar, Pushan
2016-10-01
In this article we will explore the OpenACC platform for programming Graphics Processing Units (GPUs). The OpenACC platform offers a directive-based programming model for GPUs which avoids the detailed data flow control and memory management necessary in a CUDA programming environment. In the OpenACC model, programs can be written in high level languages with OpenMP-like directives. We present some examples of QCD simulation codes using OpenACC and discuss their performance on the Fermi and Kepler GPUs.
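To make the contrast with CUDA concrete, the following C fragment applies the directive style described above to a simple saxpy loop; it is a generic illustration, not one of the QCD kernels from the article. A single pragma asks an OpenACC compiler to parallelize the loop and manage host-device data movement; a compiler without OpenACC support simply ignores the pragma and runs the loop serially.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* one directive offloads the loop; the compiler handles data movement */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
        return 0;
    }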
Universal SaaS platform of internet of things for real-time monitoring
NASA Astrophysics Data System (ADS)
Liu, Tongke; Wu, Gang
2018-04-01
Real-time monitoring, as one of the IoT (Internet of Things) services, has a wide range of application scenarios. To support rapid construction and deployment of applications and to avoid repetitive development work in these processes, this paper designs and develops a universal SaaS platform of IoT for real-time monitoring. Evaluation shows that this platform can provide SaaS service to multiple tenants and achieve high real-time performance when large numbers of devices are connected.
Castell, Nuria; Dauge, Franck R; Schneider, Philipp; Vogt, Matthias; Lerner, Uri; Fishbain, Barak; Broday, David; Bartonova, Alena
2017-02-01
The emergence of low-cost, user-friendly and very compact air pollution platforms enables observations at high spatial resolution in near-real-time and provides new opportunities to simultaneously enhance existing monitoring systems, as well as engage citizens in active environmental monitoring. This provides a whole new set of capabilities in the assessment of human exposure to air pollution. However, the data generated by these platforms are often of questionable quality. We have conducted an exhaustive evaluation of 24 identical units of a commercial low-cost sensor platform against CEN (European Standardization Organization) reference analyzers, evaluating their measurement capability over time and a range of environmental conditions. Our results show that their performance varies spatially and temporally, as it depends on the atmospheric composition and the meteorological conditions. Performance also varies from unit to unit, which makes it necessary to examine the data quality of each node before its use. In general, guidance is lacking on how to test such sensor nodes and ensure adequate performance prior to marketing these platforms. We have implemented and tested diverse metrics in order to assess whether the sensors can be employed for applications that require high accuracy (i.e., to meet the Data Quality Objectives defined in air quality legislation, epidemiological studies) or lower accuracy (i.e., to represent the pollution level on a coarse scale, for purposes such as awareness raising). Data quality is a pertinent concern, especially in citizen science applications, where citizens are collecting and interpreting the data. In general, while low-cost platforms present low accuracy for regulatory or health purposes, they can provide relative and aggregated information about the observed air quality.
High performance GPU processing for inversion using uniform grid searches
NASA Astrophysics Data System (ADS)
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by systems of redundant, highly non-linear ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, using Monte Carlo sampling or exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the Rⁿ space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k, to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10¹² gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10¹² gridpoints require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, as a GPU, in cases where nearly real time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
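A minimal serial C sketch of the grid scan just described is given below. The forward model, observations, and the single value of k are hypothetical stand-ins; a real TOPINV run repeats such scans for decreasing k over grids of up to 10¹² points. Each gridpoint test is independent of all others, which is precisely the property the GPU/CUDA implementation exploits.

    #include <stdio.h>
    #include <math.h>

    #define NOBS 3   /* observation equations (toy problem) */

    /* hypothetical linear forward model f_i(m0, m1); a real problem would
       use its own (generally non-linear) observation equations */
    static double forward(int i, double m0, double m1) {
        static const double c0[NOBS] = { 1.0, 2.0, -1.0 };
        static const double c1[NOBS] = { 0.5, -1.0, 2.0 };
        return c0[i] * m0 + c1[i] * m1;
    }

    int main(void) {
        const double d[NOBS]     = { 2.0, 0.0, 3.0 }; /* observations (assumed) */
        const double sigma[NOBS] = { 0.1, 0.1, 0.1 }; /* standard errors */
        const double k = 2.0;  /* optimization parameter; TOPINV lowers it stepwise */
        double sum0 = 0.0, sum1 = 0.0;
        long count = 0;

        for (int i0 = 0; i0 <= 1000; i0++) {          /* uniform 2-D search grid */
            double m0 = -5.0 + 0.01 * i0;
            for (int i1 = 0; i1 <= 1000; i1++) {
                double m1 = -5.0 + 0.01 * i1;
                int ok = 1;                           /* test all inequalities */
                for (int i = 0; i < NOBS; i++)
                    if (fabs(forward(i, m0, m1) - d[i]) > k * sigma[i]) { ok = 0; break; }
                if (ok) { sum0 += m0; sum1 += m1; count++; } /* cluster member */
            }
        }
        if (count) /* first moment of the accepted cluster = stochastic solution */
            printf("solution ~ (%.3f, %.3f) from %ld gridpoints\n",
                   sum0 / count, sum1 / count, count);
        return 0;
    }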
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
D-Amino acid oxidase bio-functionalized platforms: Toward an enhanced enzymatic bio-activity
NASA Astrophysics Data System (ADS)
Herrera, Elisa; Valdez Taubas, Javier; Giacomelli, Carla E.
2015-11-01
The purpose of this work is to study the adsorption process and surface bio-activity of His-tagged D-amino acid oxidase (DAAO) from Rhodotorula gracilis (His6-RgDAAO) as the first step for the development of an electrochemical bio-functionalized platform. With such a purpose this work comprises: (a) the His6-RgDAAO bio-activity in solution determined by amperometry, (b) the adsorption mechanism of His6-RgDAAO on bare gold and carboxylate-modified substrates in the absence (substrate/COO-) and presence of Ni(II) (substrate/COO- + Ni(II)) determined by reflectometry, and (c) the bio-activity of the His6-RgDAAO bio-functionalized platforms determined by amperometry. Comparing the adsorption behavior and bio-activity of His6-RgDAAO on these different solid substrates allows understanding the contribution of the diverse interactions responsible for the platform performance. His6-RgDAAO enzymatic performance in solution is greatly improved compared to the previously used pig kidney (pk) DAAO. His6-RgDAAO exhibits an amperometrically detectable bio-activity at concentrations as low as those expected on a bio-functional platform; hence, it is a viable bio-recognition element of D-amino acids to be coupled to electrochemical platforms. Moreover, His6-RgDAAO bio-functionalized platforms exhibit a higher surface activity than pkDAAO physically adsorbed on gold. The platforms built on Ni(II)-modified substrates present enhanced bio-activity because the surface histidine-Ni(II) complexes provide site-oriented, native-like enzymes. The adsorption mechanism responsible for the excellent performance of the bio-functionalized platform takes place in two steps involving electrostatic and bio-affinity interactions, whose prevalence depends on the degree of surface coverage.
SiGe BiCMOS manufacturing platform for mmWave applications
NASA Astrophysics Data System (ADS)
Kar-Roy, Arjun; Howard, David; Preisler, Edward; Racanelli, Marco; Chaudhry, Samir; Blaschke, Volker
2010-10-01
TowerJazz offers high volume manufacturable commercial SiGe BiCMOS technology platforms to address the mmWave market. In this paper, first, the SiGe BiCMOS process technology platforms such as SBC18 and SBC13 are described. These manufacturing platforms integrate a 200 GHz fT/fMAX SiGe NPN with deep trench isolation into 0.18μm and 0.13μm node CMOS processes, along with high density 5.6 fF/μm² stacked MIM capacitors, high value polysilicon resistors, high-Q metal resistors, lateral PNP transistors, and triple well isolation using deep n-well for mixed-signal integration, and multiple varactors and compact high-Q inductors for RF needs. Second, design enablement tools that maximize performance and lower costs and time to market, such as scalable PSP and HICUM models, statistical and Xsigma models, reliability modeling tools, process control model tools, an inductor toolbox and transmission line models, are described. Finally, demonstrations in silicon for mmWave applications in the areas of optical networking, mobile broadband, phased array radar, collision avoidance radar and W-band imaging are listed.
Software Tools for Development on the Peregrine System | High-Performance Computing | NREL
Tools used to develop and manage software at the source code level on the Peregrine system include "Cross-Platform Make" (CMake), a package from Kitware, and SCons, a modern software build tool based on Python.
Canon, Shane
2018-01-24
DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
NASA Astrophysics Data System (ADS)
Boakye-Boateng, Nasir Abdulai
The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimPowerSystems™ DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS™ components. The platform model implemented here consists of a high voltage transmission system with four integrated wind farm models comprising in total 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGAsim™ Real-Time Digital Simulator.
Damm, Markus; Kappe, C Oliver
2011-11-30
A high-throughput platform for performing parallel solvent extractions in sealed HPLC/GC vials inside a microwave reactor is described. The system consists of a strongly microwave-absorbing silicon carbide plate with 20 cylindrical wells of appropriate dimensions to be fitted with standard HPLC/GC autosampler vials serving as extraction vessels. Due to the possibility of heating up to four heating platforms simultaneously (80 vials), efficient parallel analytical-scale solvent extractions can be performed using volumes of 0.5-1.5 mL at a maximum temperature/pressure limit of 200°C/20 bar. Since the extraction and subsequent analysis by either gas chromatography or liquid chromatography coupled with mass detection (GC-MS or LC-MS) is performed directly from the autosampler vial, errors caused by sample transfer can be minimized. The platform was evaluated for the extraction and quantification of caffeine from commercial coffee powders, assessing different solvent types, extraction temperatures and times. For example, 141±11 μg caffeine (5 mg coffee powder) were extracted during a single extraction cycle using methanol as extraction solvent, whereas only 90±11 μg were obtained when performing the extraction in methylene chloride under the same reaction conditions (90°C, 10 min). In multiple extraction experiments a total of ~150 μg caffeine was extracted from 5 mg commercial coffee powder. In addition to the quantitative caffeine determination, a comparative qualitative analysis of the liquid phase coffee extracts and the headspace volatiles was performed, placing special emphasis on headspace analysis using solid-phase microextraction (SPME) techniques. The miniaturized parallel extraction technique introduced herein allows solvent extractions to be performed at significantly expanded temperature/pressure limits and shortened extraction times, using standard HPLC autosampler vials as reaction vessels. Remarkable differences regarding peak pattern and main peaks were observed when low-temperature extraction (60°C) and high-temperature extraction (160°C) were compared prior to headspace-SPME-GC-MS performed in the same HPLC/GC vials.
Experimental research on a vibration isolation platform for momentum wheel assembly
NASA Astrophysics Data System (ADS)
Zhou, Weiyong; Li, Dongxu
2013-03-01
This paper focuses on experimental research on a vibration isolation platform for momentum wheel assembly (MWA). A vibration isolation platform, consisting of four folded beams, was designed to isolate the microvibrations produced by the MWA during operation. The performance of the platform was investigated with an impact test to verify the natural frequencies and damping coefficients of the system when the MWA was at rest, and with a measurement system consisting of a Kistler table and an optical tabletop to monitor the microvibrations produced when the MWA operated at stable speed. The results show that although the sixth natural frequency of the system is 26.29 Hz (1577 rev/min) when the MWA is at rest, the critical speed occurs at 2600 rev/min due to the gyroscopic effect of the flywheel, and that the platform can effectively isolate the high frequency disturbances in the 100-300 Hz range in all six degrees of freedom. Thus, the gyroscopic effect deserves more attention in the design and analysis of vibration isolation platforms for rotating wheel assemblies, and the platform in this paper is particularly effective for MWAs, which generally operate at high rotating speeds.
Spatial learning and memory deficits induced by exposure to iron-56-particle radiation
NASA Technical Reports Server (NTRS)
Shukitt-Hale, B.; Casadesus, G.; McEwen, J. J.; Rabin, B. M.; Joseph, J. A.
2000-01-01
It has previously been shown that exposing rats to particles of high energy and charge (HZE) disrupts the functioning of the dopaminergic system and behaviors mediated by this system, such as motor performance and an amphetamine-induced conditioned taste aversion; these adverse behavioral and neuronal effects are similar to those seen in aged animals. Because cognition declines with age, spatial learning and memory were assessed in the Morris water maze 1 month after whole-body irradiation with 1.5 Gy of 1 GeV/nucleon high-energy (56)Fe particles, to test the cognitive behavioral consequences of radiation exposure. Irradiated rats demonstrated cognitive impairment compared to the control group as seen in their increased latencies to find the hidden platform, particularly on the reversal day when the platform was moved to the opposite quadrant. Also, the irradiated group used nonspatial strategies during the probe trials (swim with no platform), i.e. less time spent in the platform quadrant, fewer crossings of and less time spent in the previous platform location, and longer latencies to the previous platform location. These findings are similar to those seen in aged rats, suggesting that an increased release of reactive oxygen species may be responsible for the induction of radiation- and age-related cognitive deficits. If these decrements in behavior also occur in humans, they may impair the ability of astronauts to perform critical tasks during long-term space travel beyond the magnetosphere.
NASA Astrophysics Data System (ADS)
Xu, Xingyuan; Wu, Jiayang; Shoeiby, Mehrdad; Nguyen, Thach G.; Chu, Sai T.; Little, Brent E.; Morandotti, Roberto; Mitchell, Arnan; Moss, David J.
2018-01-01
An arbitrary-order intensity differentiator for high-order microwave signal differentiation is proposed and experimentally demonstrated on a versatile transversal microwave photonic signal processing platform based on integrated Kerr combs. With a CMOS-compatible nonlinear micro-ring resonator, high quality Kerr combs with broad bandwidth and large frequency spacings are generated, enabling a larger number of taps and an increased Nyquist zone. By programming and shaping the power of individual comb lines, calculated tap weights are realized, thus achieving a versatile microwave photonic signal processing platform. Arbitrary-order intensity differentiation is demonstrated on the platform. The RF responses are experimentally characterized, and system demonstrations with Gaussian input signals are also performed.
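As a concrete illustration of transversal differentiator taps, the C snippet below computes the classic N-fold finite-difference weights h[k] = (-1)^k C(N,k), whose frequency response approximates (jω)^N at low frequencies. These textbook weights are a stand-in for, not a reproduction of, the calculated comb-line tap weights realized in the experiment.

    #include <stdio.h>

    int main(void) {
        const int N = 3;   /* differentiation order (assumed for the example) */
        long h[N + 1];

        h[0] = 1;
        for (int k = 1; k <= N; k++)   /* binomial recurrence with sign flip */
            h[k] = -h[k - 1] * (N - k + 1) / k;

        for (int k = 0; k <= N; k++)   /* N = 3 gives +1 -3 +3 -1 */
            printf("tap %d: %+ld\n", k, h[k]);
        return 0;
    }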
CISP: Simulation Platform for Collective Instabilities in the BRing of HIAF project
NASA Astrophysics Data System (ADS)
Liu, J.; Yang, J. C.; Xia, J. W.; Yin, D. Y.; Shen, G. D.; Li, P.; Zhao, H.; Ruan, S.; Wu, B.
2018-02-01
To simulate collective instabilities during the complicated beam manipulation in the BRing (Booster Ring) of HIAF (High Intensity heavy-ion Accelerator Facility) or other high intensity accelerators, a code named CISP (Simulation Platform for Collective Instabilities) has been designed and constructed at China's Institute of Modern Physics (IMP). The CISP is a scalable multi-macroparticle simulation platform that can perform longitudinal and transverse tracking when chromaticity, space charge effects, nonlinear magnets and wakes are included. Thanks to its object-oriented design, the CISP is also a basic platform on which many other applications (like feedback) can be developed. Several simulations completed with the CISP in this paper agree very well with analytical results, showing that the CISP is now fully functional and is a powerful platform for further collective instability research in the BRing or other accelerators. In the future, the CISP can also be extended easily into a physics control system for HIAF or other facilities.
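To indicate the kind of tracking loop such a platform executes, here is a minimal longitudinal macroparticle map in C using the standard synchrotron-motion update. All machine parameters are invented for illustration, and the collective kicks (wakes, space charge) that a code like CISP adds each turn are only marked by a comment.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define NP 10000   /* macroparticles (assumed) */

    int main(void) {
        const double TWO_PI = 6.283185307179586;
        const double h = 2.0;      /* harmonic number (assumed)               */
        const double eta = -0.05;  /* slip factor (assumed, below transition) */
        const double kick = 1e-5;  /* qV/(beta^2 E) per turn (assumed)        */
        const double phi_s = 0.0;  /* synchronous phase                       */
        static double phi[NP], delta[NP];

        srand(1);
        for (int i = 0; i < NP; i++) {   /* small initial spread about phi_s */
            phi[i]   = phi_s + 0.1 * (2.0 * rand() / RAND_MAX - 1.0);
            delta[i] = 1e-4 * (2.0 * rand() / RAND_MAX - 1.0);
        }
        for (int turn = 0; turn < 1000; turn++) {
            for (int i = 0; i < NP; i++) {
                delta[i] += kick * (sin(phi[i]) - sin(phi_s)); /* RF kick    */
                phi[i]   += TWO_PI * h * eta * delta[i];       /* ring drift */
                /* a collective code would bin the bunch here and apply
                   wake-field / space-charge kicks to each particle */
            }
        }
        printf("particle 0 after 1000 turns: phi=%g, delta=%g\n", phi[0], delta[0]);
        return 0;
    }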
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.
Performance simulation in high altitude platforms (HAPs) communications systems
NASA Astrophysics Data System (ADS)
Ulloa-Vásquez, Fernando; Delgado-Penin, J. A.
2002-07-01
This paper considers the analysis by simulation of a digital narrowband communication system for a scenario consisting of a High-Altitude aeronautical Platform (HAP) and fixed/mobile terrestrial transceivers. The aeronautical channel is modelled using geometrical (angle of elevation vs. horizontal distance of the terrestrial reflectors) and statistical arguments, and under these circumstances a serially concatenated coded digital transmission is analysed for several hypotheses related to radio-electric coverage areas. The results indicate good feasibility for the proposed communication system.
Im, Hyeon-Gyun; Jung, Soo-Ho; Jin, Jungho; Lee, Dasom; Lee, Jaemin; Lee, Daewon; Lee, Jung-Yong; Kim, Il-Doo; Bae, Byeong-Soo
2014-10-28
We report a flexible high-performance conducting film using an embedded copper nanowire transparent conducting electrode; this material can be used as a transparent electrode platform for typical flexible optoelectronic devices. The monolithic composite structure of our transparent conducting film simultaneously enables outstanding oxidation stability of the copper nanowire network (14 d at 80 °C), an exceptionally smooth surface topography (R_rms < 2 nm), and excellent opto-electrical performance (R_sh = 25 Ω sq⁻¹ and T = 82%). A flexible organic light emitting diode device is fabricated on the transparent conducting film to demonstrate its potential as a flexible copper nanowire electrode platform.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
270GHz SiGe BiCMOS manufacturing process platform for mmWave applications
NASA Astrophysics Data System (ADS)
Kar-Roy, Arjun; Preisler, Edward J.; Talor, George; Yan, Zhixin; Booth, Roger; Zheng, Jie; Chaudhry, Samir; Howard, David; Racanelli, Marco
2011-11-01
TowerJazz has been offering the high volume commercial SiGe BiCMOS process technology platform, SBC18, for more than a decade. In this paper, we describe the TowerJazz SBC18H3 SiGe BiCMOS process, which integrates a production-ready 240 GHz fT / 270 GHz fMAX SiGe HBT on a 1.8V/3.3V dual gate oxide CMOS process in the SBC18 technology platform. The high-speed NPNs in the SBC18H3 process have demonstrated an NF_min of ~2 dB at 40 GHz, a BVceo of 1.6V and a dc current gain of 1200. This state-of-the-art process also comes with P-I-N diodes with high isolation and low insertion losses, Schottky diodes capable of exceeding cut-off frequencies of 1 THz, high density stacked MIM capacitors, MOS and high performance junction varactors characterized up to 50 GHz, thick upper metal layers for inductors, and various resistors such as low-value and high-value unsilicided poly resistors, metal and nwell resistors. Applications of the SBC18H3 platform to millimeter-wave products for automotive radars, phased array radars and W-band imaging are presented.
Rad-Hard Structured ASIC Body of Knowledge
NASA Technical Reports Server (NTRS)
Heidecker, Jason
2013-01-01
Structured Application-Specific Integrated Circuit (ASIC) technology is a platform between traditional ASICs and Field-Programmable Gate Arrays (FPGAs). The motivation behind structured ASICs is to combine the low nonrecurring engineering (NRE) costs of FPGAs with the high performance of ASICs. This report provides an overview of the structured ASIC platforms that are radiation-hardened and intended for space applications.
Wang, Zhongshun; Feng, Lei; Xiao, Dongyang; Li, Ning; Li, Yao; Cao, Danfeng; Shi, Zuosen; Cui, Zhanchen; Lu, Nan
2017-11-09
The performance of surface-enhanced Raman scattering (SERS) for detecting trace amounts of analytes depends highly on enriching the diluted analytes into a small region that can be detected. A super-hydrophobic delivery (SHD) process is an excellent way to enrich even femtomolar analytes for SERS detection. However, it is still challenging to easily fabricate a SHD-SERS substrate with a low detection limit, high sensitivity and good reproducibility. In this article, we present a cost-effective method with fewer steps to fabricate a SHD-SERS substrate, named the "silver nanoislands on silica spheres" (SNOSS) platform. It is easily prepared via the thermal evaporation of silver onto a layer of super-hydrophobic paint, which contains single-scale surface-fluorinated silica spheres. The SNOSS platform yields reproducible detection, bringing the relative standard deviation down to 8.85% and 5.63% for detecting 10⁻⁸ M R6G in one-spot and spot-to-spot set-ups, respectively. The coefficient of determination (R²) is 0.9773 for R6G. The SNOSS platform can be applied to the quantitative detection of analytes whose concentrations range from sub-micromolar to femtomolar levels.
An integrated biotechnology platform for developing sustainable chemical processes.
Barton, Nelson R; Burgard, Anthony P; Burk, Mark J; Crater, Jason S; Osterhout, Robin E; Pharkya, Priti; Steer, Brian A; Sun, Jun; Trawick, John D; Van Dien, Stephen J; Yang, Tae Hoon; Yim, Harry
2015-03-01
Genomatica has established an integrated computational/experimental metabolic engineering platform to design, create, and optimize novel high performance organisms and bioprocesses. Here we present our platform and its use to develop E. coli strains for production of the industrial chemical 1,4-butanediol (BDO) from sugars. A series of examples are given to demonstrate how a rational approach to strain engineering, including carefully designed diagnostic experiments, provided critical insights about pathway bottlenecks, byproducts, expression balancing, and commercial robustness, leading to a superior BDO production strain and process.
Development of vibration isolation platform for low amplitude vibration
NASA Astrophysics Data System (ADS)
Lee, Dae-Oen; Park, Geeyong; Han, Jae-Hung
2014-03-01
The performance of high precision payloads on board a satellite is extremely sensitive to vibration. Although the vibration environment of a satellite on orbit is very gentle compared to the launch environment, even low amplitude vibration disturbances generated by reaction wheel assemblies, cryocoolers, etc. may cause serious problems in performing tasks such as capturing high resolution images. The most commonly taken approach to protect sensitive payloads from performance-degrading vibration is the application of a vibration isolator. In this paper, the development of a vibration isolation platform for low amplitude vibration is discussed. Firstly, a single axis vibration isolator is developed by adapting a three-parameter model using bellows and viscous fluid. The isolation performance of the developed single axis isolator is evaluated by measuring force transmissibility. The measured transmissibility shows that both a low Q-factor (about 2) and a high roll-off rate (about -40 dB/dec) are achieved with the developed isolator. Then, six single axis isolators are combined to form a Stewart platform in cubic configuration to provide multi-axis vibration isolation. The isolation performance of the developed multi-axis isolator is evaluated using a simple prototype reaction wheel model in which wheel imbalance is the major source of vibration. The transmitted force without the vibration isolator is measured and compared with the transmitted force with the vibration isolator. More than 20 dB reduction of the X and Y direction (radial direction of the flywheel) disturbance is observed for rotating wheel speeds of 100 Hz and higher.
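The three-parameter model mentioned above (a main spring k1 in parallel with a secondary spring k2 in series with a damper c) can be explored with a few lines of C: its complex dynamic stiffness is K(ω) = k1 + jωc·k2/(k2 + jωc), and the transmissibility of a supported mass m is K/(K − mω²), which levels the resonance peak and rolls off at -40 dB/dec at high frequency, consistent with the measurements reported. All parameter values below are illustrative assumptions, not the bellows isolator's actual properties.

    #include <stdio.h>
    #include <math.h>
    #include <complex.h>

    int main(void) {
        const double m  = 10.0;    /* supported mass, kg (assumed)    */
        const double k1 = 4.0e4;   /* main spring, N/m (assumed)      */
        const double k2 = 8.0e4;   /* series spring, N/m (assumed)    */
        const double c  = 400.0;   /* viscous damper, N.s/m (assumed) */

        for (double f = 1.0; f <= 200.0; f *= 1.5) {
            double w = 2.0 * 3.141592653589793 * f;
            /* complex stiffness: k1 in parallel with (k2 in series with c) */
            double complex K = k1 + (I * w * c * k2) / (k2 + I * w * c);
            double complex T = K / (K - m * w * w);   /* transmissibility */
            printf("f = %6.1f Hz   |T| = %8.2f dB\n", f, 20.0 * log10(cabs(T)));
        }
        return 0;
    }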
NASA Astrophysics Data System (ADS)
Tsiokos, Dimitris M.; Dabos, George; Ketzaki, Dimitra; Weeber, Jean-Claude; Markey, Laurent; Dereux, Alain; Giesecke, Anna Lena; Porschatis, Caroline; Chmielak, Bartos; Wahlbrink, Thorsten; Rochracher, Karl; Pleros, Nikos
2017-05-01
Silicon photonics meets most fabrication requirements of standard CMOS process lines, encompassing the photonics-electronics consolidation vision. Despite this remarkable progress, further miniaturization of PICs for common integration with electronics and for increasing PIC functional density is bounded by the inherent diffraction limit of light imposed by optical waveguides. Instead, Surface Plasmon Polariton (SPP) waveguides can guide light at sub-wavelength scales at the metal surface, providing unique light-matter interaction properties while exploiting their metallic nature to naturally integrate with electronics in high-performance ASPICs. In this article, we demonstrate the main goals of the recently introduced H2020 project PlasmoFab towards addressing the ever increasing needs for low energy, small size and high performance mass manufactured PICs by developing a revolutionary yet CMOS-compatible fabrication platform for seamless co-integration of plasmonics with photonics and supporting electronics. We present recent advances on the hosting SiN photonic platform, reporting on low-loss passive SiN waveguide and grating coupler circuits for both the TM and TE polarization states. We also present experimental results of plasmonic gold thin-film and hybrid slot waveguide configurations that can allow for high-sensitivity sensing, along with the ongoing activities towards replacing gold with Cu, Al or TiN in order to yield the same functionality over a CMOS metallic structure. Finally, the first experimental results on the co-integrated SiN+plasmonic platform are demonstrated, concluding with an initial theoretical performance analysis of the CMOS plasmo-photonic biosensor, which has the potential to allow for sensitivities beyond 150,000 nm/RIU.
Luo, X; Huang, M; He, D; Wang, M; Zhang, Y; Jiang, P
2018-05-29
High electrical conductivity and the exposure of more active sites are crucial to boost the performance of a glucose sensor. A porous binary metal oxide nanoarray integrated on a binder-free 3D electrode is expected to offer a highly sensitive sensing platform. As a model, porous NiCo2O4 nanowire arrays supported on carbon cloth (NiCo2O4 NWA/CC) have been prepared and used for enzyme-free glucose sensing. NiCo2O4 NWA/CC shows a larger effective surface area, superior electronic conductivity, and higher catalytic activity towards enzyme-free glucose sensing, with a linear range from 1 μM to 0.63 mM, a sensitivity of 4.12 mA mM⁻¹ cm⁻², and a low detection limit of 0.5 μM. Moreover, NiCo2O4 NWA/CC also displays good selectivity and stability and thus can be reliably used for glucose detection in human serum samples. These findings inspire the fabrication of a high-performance electrochemical sensing platform by preparing porous binary metal oxide nanoarrays supported on a 3D conductive substrate.
Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.
Zhang, C; Wijnen, B; Pearce, J M
2016-08-01
The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further.
Thiha, Aung; Ibrahim, Fatimah
2015-05-18
The enzyme-linked immunosorbent assay (ELISA) is the gold standard clinical diagnostic tool for the detection and quantification of protein biomarkers. However, conventional ELISA tests have drawbacks in their requirements of time, expensive equipment and expertise for operation. Hence, for the purpose of rapid, high throughput screening and point-of-care diagnosis, researchers are miniaturizing sandwich ELISA procedures on Lab-on-a-Chip and Lab-on-Compact-Disc (LOCD) platforms. This paper presents a novel integrated device to detect and interpret ELISA test results on a LOCD platform. The system applies absorption spectrophotometry to measure the absorbance (optical density) of the sample using a monochromatic light source and an optical sensor. The device performs automated analysis of the results and presents absorbance values and diagnostic test results via a graphical display or via Bluetooth to a smartphone platform, which also acts as the controller of the device. The efficacy of the device was evaluated by performing dengue antibody IgG ELISA on 64 hospitalized patients suspected of dengue. The results demonstrate the high accuracy of the device, with 95% sensitivity and 100% specificity in detection when compared with gold standard commercial ELISA microplate readers. This sensor platform represents a significant step towards establishing ELISA as a rapid, inexpensive and automatic testing method for point-of-care testing (POCT) in resource-limited settings.
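The absorbance computation at the heart of such a readout is plain Beer-Lambert arithmetic, sketched below in C. The intensity readings and the positive/negative cutoff are invented for illustration; the device's actual firmware and thresholds are not specified in the abstract.

    #include <stdio.h>
    #include <math.h>

    /* absorbance (optical density) from the sample reading I and blank I0 */
    static double absorbance(double I, double I0) {
        return -log10(I / I0);
    }

    int main(void) {
        double I0 = 1000.0;   /* blank intensity, ADC counts (assumed)     */
        double I  = 316.0;    /* sample intensity (assumed)                */
        double cutoff = 0.30; /* reactive/non-reactive threshold (assumed) */

        double A = absorbance(I, I0);   /* here: -log10(0.316) ~ 0.50 */
        printf("OD = %.3f -> %s\n", A, A >= cutoff ? "REACTIVE" : "NON-REACTIVE");
        return 0;
    }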
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover, a custom-developed thermal imaging system composed of a VIS-PAN camera and an LWIR camera is used for aerial recordings in the thermal infrared range. Furthermore, another custom-developed, highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities, and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched into mosaics.
Xi-cam: a versatile interface for data visualization and analysis
Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke; ...
2018-05-31
Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
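To illustrate the graph-based workflow idea behind the 'advanced' mode, the generic sketch below executes processing steps in dependency order; it is not the Xi-cam API, and all step names are invented for illustration.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A hypothetical processing graph: node -> set of prerequisite nodes.
def load(data):      return data
def subtract_bg(d):  return [x - 1 for x in d]
def integrate(d):    return sum(d)

graph = {"load": set(), "subtract_bg": {"load"}, "integrate": {"subtract_bg"}}
ops = {"load": load, "subtract_bg": subtract_bg, "integrate": integrate}

result = [1, 2, 3]
for node in TopologicalSorter(graph).static_order():
    result = ops[node](result)  # run each step once its inputs are ready
print(result)  # 3
```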
NASA Astrophysics Data System (ADS)
Crockett, Derick
Detailed observations of geosynchronous satellites from Earth are very limited. To better inspect these high-altitude satellites, the use of small, refuelable satellites is proposed. The small satellites are stationed on a carrier platform in an orbit near the population of geosynchronous satellites. A carrier platform equipped with deployable, refuelable SmallSats is a viable option for inspecting geosynchronous satellites. The propellant requirement to transfer to a targeted geosynchronous satellite, perform a proximity inspection mission, and transfer back to the carrier platform in a nearby orbit is determined. Convex optimization and traditional optimization techniques are explored to determine minimum-propellant trajectories. Propellant is measured by the total required change in velocity, delta-v. The trajectories were modeled in a relative reference frame using the Clohessy-Wiltshire equations. Mass estimates for the carrier platform and the SmallSat were obtained using the rocket equation. The mass estimates were compared to the mass of a single, non-refuelable satellite performing the same geosynchronous satellite inspection missions. From the minimum delta-v trajectories and the mass analysis, it is determined that using refuelable SmallSats and a carrier platform in a nearby orbit can be more efficient than using a single non-refuelable satellite to perform multiple geosynchronous satellite inspections.
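For reference, the Clohessy-Wiltshire equations used to model the relative trajectories, and the rocket equation behind the mass estimates, take the standard forms below (x radial, y along-track, z cross-track; n is the mean motion of the reference orbit, Isp the specific impulse, g0 standard gravity):

```latex
% Clohessy-Wiltshire (Hill) equations in the rotating frame of the reference orbit:
\ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
\ddot{y} + 2n\dot{x} = 0, \qquad
\ddot{z} + n^{2}z = 0
% Rocket equation relating delta-v to the propellant mass fraction:
\frac{m_0}{m_f} = \exp\!\left(\frac{\Delta v}{I_{\mathrm{sp}}\, g_0}\right)
```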
Mapping of MPEG-4 decoding on a flexible architecture platform
NASA Astrophysics Data System (ADS)
van der Tol, Erik B.; Jaspers, Egbert G.
2001-12-01
In the field of consumer electronics, the advent of new features such as the Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks, such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, among others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.
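To make the partitioning criteria concrete, a toy sketch of such a characteristic-driven assignment follows; the task names, scores, and thresholds are invented for illustration and are not taken from the paper.

```python
# Each task scored 0-1 on the characteristics the paper analyzes.
tasks = {
    # name: (parallelism, throughput_need, control_complexity)
    "idct":            (0.9, 0.9, 0.1),
    "motion_comp":     (0.8, 0.9, 0.2),
    "bitstream_parse": (0.1, 0.3, 0.9),
}

def assign(parallelism, throughput, control):
    """Map regular, high-throughput tasks to dedicated HW; control-heavy ones to SW."""
    if control > 0.5:
        return "RISC processor (SW)"
    if parallelism > 0.5 and throughput > 0.5:
        return "dedicated processor (HW)"
    return "VLIW media processor"

for name, scores in tasks.items():
    print(f"{name:16s} -> {assign(*scores)}")
```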
High-throughput process development: I. Process chromatography.
Rathore, Anurag S; Bhambure, Rahul
2014-01-01
Chromatographic separation serves as "a workhorse" for downstream process development and plays a key role in the removal of product-related, host cell-related, and process-related impurities. Complex and poorly characterized raw materials and feed material, low feed concentration, product instability, and poor mechanistic understanding of the processes are some of the critical challenges faced during development of a chromatographic step. Traditional process development is performed as trial-and-error-based evaluation and often leads to a suboptimal process. A high-throughput process development (HTPD) platform involves an integration of miniaturization, automation, and parallelization, and provides a systematic approach for time- and resource-efficient chromatography process development. Creation of such platforms requires integration of mechanistic knowledge of the process with various statistical tools for data analysis. The relevance of such a platform is high in view of the constraints with respect to time and resources that the biopharma industry faces today. This protocol describes the steps involved in performing HTPD of a process chromatography step. It describes the operation of a commercially available device (PreDictor™ plates from GE Healthcare), which is available in 96-well format with 2 or 6 μL well size. We also discuss the challenges that one faces when performing such experiments, as well as possible solutions to alleviate them. Besides describing the operation of the device, the protocol also presents an approach for statistical analysis of the data gathered from such a platform. A case study involving use of the protocol for examining ion-exchange chromatography of granulocyte colony-stimulating factor (GCSF), a therapeutic product, is briefly discussed. This is intended to demonstrate the usefulness of this protocol in generating data that is representative of the data obtained at the traditional lab scale. The agreement in the data is indeed very significant (regression coefficient of 0.93). We think that this protocol will be of significant value to those involved in performing high-throughput process development of process chromatography.
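For illustration, the scale-agreement statistic quoted above can be reproduced on paired data as sketched below; the measurements are invented placeholders, not data from the protocol.

```python
import numpy as np

# Hypothetical paired results: binding capacity from PreDictor plates vs lab scale.
ht_plate  = np.array([10.2, 12.5, 15.1, 18.0, 21.3])
lab_scale = np.array([10.0, 12.9, 14.8, 18.6, 20.9])

slope, intercept = np.polyfit(ht_plate, lab_scale, 1)
r = np.corrcoef(ht_plate, lab_scale)[0, 1]
# r close to 1 indicates the plate data is representative of lab scale.
print(f"lab = {slope:.2f} * HT + {intercept:.2f}, r = {r:.3f}")
```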
Towards Autonomous Modular UAV Missions: The Detection, Geo-Location and Landing Paradigm
Kyristsis, Sarantis; Antonopoulos, Angelos; Chanialakis, Theofilos; Stefanakis, Emmanouel; Linardos, Christos; Tripolitsiotis, Achilles; Partsinevelos, Panagiotis
2016-01-01
Nowadays, various unmanned aerial vehicle (UAV) applications are becoming increasingly demanding since they require real-time, autonomous and intelligent functions. Towards this end, in the present study, a fully autonomous UAV scenario is implemented, including the tasks of area scanning, target recognition, geo-location, monitoring, following and finally landing on a high-speed moving platform. The underlying methodology includes AprilTag target identification through Graphics Processing Unit (GPU) parallelized processing, image processing and several optimized location and approach algorithms employing gimbal movement, Global Navigation Satellite System (GNSS) readings and UAV navigation. For the experimentation, a commercial and a custom-made quad-copter prototype were used, representing high- and low-computational embedded platform alternatives. Among the successful targeting and follow procedures, it is shown that the landing approach can be successfully performed even under high platform speeds. PMID:27827883
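For illustration, a minimal sketch of AprilTag detection of the kind used for platform tracking is shown below; it assumes the third-party pupil-apriltags and OpenCV packages, and the image, camera parameters and tag size are placeholders that must come from calibration.

```python
import cv2
from pupil_apriltags import Detector  # pip install pupil-apriltags

detector = Detector(families="tag36h11")
frame = cv2.imread("landing_pad.png")           # placeholder image of the moving platform
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# fx, fy, cx, cy and tag_size (metres) must come from camera calibration.
detections = detector.detect(gray, estimate_tag_pose=True,
                             camera_params=(600.0, 600.0, 320.0, 240.0),
                             tag_size=0.30)
for det in detections:
    print(det.tag_id, det.pose_t.ravel())  # tag translation in the camera frame
```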
Automated packaging platform for low-cost high-performance optical components manufacturing
NASA Astrophysics Data System (ADS)
Ku, Robert T.
2004-05-01
Delivering high-performance integrated optical components at low cost is critical to the continuing recovery and growth of the optical communications industry. In today's market, network equipment vendors need to provide their customers with new solutions that reduce operating expenses and enable new revenue-generating IP services. They must depend on the availability of highly integrated optical modules exhibiting high performance, small package size, low power consumption, and most importantly, low cost. The cost of typical optical system hardware is dominated by linecards that are in turn cost-dominated by transmitters and receivers or transceivers and transponders. Cost-effective packaging of optical components in these small-size modules is becoming the biggest challenge to be addressed. For many traditional component suppliers in our industry, the combination of small size, high performance, and low cost appears to be in conflict and not feasible with conventional product design concepts and labor-intensive manual assembly and test. With the advent of photonic integration, there are a variety of materials, optics, substrates, active/passive devices, and mechanical/RF piece parts to manage in manufacturing to achieve high performance at low cost. The use of automation has been demonstrated to surpass manual operation in cost (even with very low labor cost) as well as product uniformity and quality. In this paper, we discuss the value of using an automated packaging platform for the assembly and test of high-performance active components, such as 2.5 Gb/s and 10 Gb/s sources and receivers. Low-cost, high-performance manufacturing can best be achieved by leveraging a flexible packaging platform to address a multitude of laser and detector devices, integration of electronics, and various package bodies and fiber configurations. This paper describes the operation and results of working robotic assemblers in the manufacture of a Laser Optical Subassembly (LOS), its subsequent automated testing and burn-in process, and the placement of the LOS into a package body and hermetic sealing of the package. The LOS and package automated assembler robots have achieved better than 1 μm accuracy and 0.1 μm resolution. The paper also discusses a method for the critical alignment of a single-mode fiber as the last step of the manufacturing process. This approach is in contrast to the conventional manual assembly where sub-micron fiber alignment and fixation steps are performed much earlier during the assembly process. Finally, the paper discusses the value of this automated platform manufacturing approach as a key enabler for low-cost, small-form-factor optical components for the new XFP MSA class of transceiver modules.
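For illustration, the final active fiber-alignment step can be sketched as a coordinate-wise hill climb that dithers the fiber position to maximize coupled power; the power-meter model below is a hypothetical stand-in for the robot's actual motion controller and instrumentation.

```python
import math

def coupled_power(x, y):
    """Hypothetical power-meter reading: Gaussian coupling peak at (2.0, -1.5) um."""
    return math.exp(-((x - 2.0) ** 2 + (y + 1.5) ** 2) / 2.0)

def align(step_um=1.0, min_step=0.05):
    """Coordinate-wise hill climb; halve the step whenever no move improves power."""
    x = y = 0.0
    best = coupled_power(x, y)
    while step_um >= min_step:
        improved = False
        for dx, dy in ((step_um, 0), (-step_um, 0), (0, step_um), (0, -step_um)):
            p = coupled_power(x + dx, y + dy)
            if p > best:
                x, y, best, improved = x + dx, y + dy, p, True
        if not improved:
            step_um /= 2.0
    return x, y, best

print(align())  # converges near the (2.0, -1.5) um coupling peak
```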
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
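For illustration, such Jenkins-CI pipelines can be triggered programmatically over Jenkins' standard REST interface; in the sketch below the server URL, job name, credentials and parameters are placeholders.

```python
import requests

JENKINS = "https://jenkins.example.org"          # placeholder server
job, auth = "cellprofiler-hcs-pipeline", ("user", "api_token")

# Queue a parameterized build of the image-processing pipeline.
# Authenticating with an API token normally exempts the request from CSRF crumbs.
r = requests.post(f"{JENKINS}/job/{job}/buildWithParameters",
                  auth=auth,
                  params={"PLATE_ID": "PLATE_0042", "PIPELINE": "nuclei_count.cppipe"})
r.raise_for_status()
print("queued:", r.headers.get("Location"))      # Jenkins returns the queue item URL
```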
Hardware-Assisted Large-Scale Neuroevolution for Multiagent Learning
2014-12-30
This DURIP equipment award was used to purchase, install, and bring on-line two Berkeley Emulation Engines (BEEs) and two mini-BEE machines to establish an FPGA-based high-performance multiagent training platform and its associated software. This acquisition of BEE4-W... Keywords: Platform; Probabilistic Domain Transformation; Hardware-Assisted; FPGA; BEE; Hive Brain; Multiagent.
Multilevel microvibration test for performance predictions of a space optical load platform
NASA Astrophysics Data System (ADS)
Li, Shiqi; Zhang, Heng; Liu, Shiping; Wang, Yue
2018-05-01
This paper presents a framework for the multilevel microvibration analysis and test of a space optical load platform. The test framework is conducted on three levels: instrument, subsystem, and system. Disturbance-source experimental investigations are performed to evaluate the vibration amplitude and study the vibration mechanism. Transfer characteristics of the space camera are validated by a subsystem test, which allows the calculation of transfer functions from the various disturbance sources to the optical performance outputs. In order to identify the influence of the source on spacecraft performance, a system-level microvibration measurement test has been performed on the ground. From the time-domain and spectrum analyses of the multilevel microvibration tests, we conclude that the disturbance source has a significant effect near its installation position; after transmission through mechanical links, the residual vibration is reduced to a background noise level. In addition, the angular microvibration of the platform jitter is mainly concentrated in rotation about the y-axis. This work is applied to a practical application involving a high-resolution satellite camera system.
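For illustration, source-to-output transfer characteristics of the kind validated in the subsystem test can be estimated from time series with the standard H1 estimator (cross-spectrum over input auto-spectrum); the signals below are synthetic placeholders.

```python
import numpy as np
from scipy import signal

fs = 2048.0                                   # Hz, placeholder sampling rate
t = np.arange(0, 60, 1 / fs)
x = np.random.randn(t.size)                   # disturbance-source acceleration (input)
y = 0.5 * x + 0.1 * np.random.randn(t.size)   # response at the optical payload (output)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)   # cross power spectral density
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)    # input auto power spectral density
H1 = Pxy / Pxx                                   # H1 transfer-function estimate
print(abs(H1[:3]))                               # magnitude ~0.5 across frequency
```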
Geostationary platform systems concepts definition study. Volume 2: Technical, book 1
NASA Technical Reports Server (NTRS)
1980-01-01
The initial selection and definition of operational geostationary platform concepts is discussed. Candidate geostationary platform missions and payloads were identified from COMSAT, Aerospace, and NASA studies. These missions and payloads were cataloged; classified with respect to communications, military, or scientific uses; screened for application and compatibility with geostationary platforms; and analyzed to identify platform requirements. Two platform locations were then selected (Western Hemisphere - 110 deg W, and Atlantic - 15 deg W), and payloads allocated based on nominal and high traffic models. Trade studies were performed leading to recommendation of selected concepts. Of 30 Orbit Transfer Vehicle (OTV) configuration and operating mode options identified, 18 viable candidates compatible with the operational geostationary platform missions were selected for analysis. Each was considered using four platform operational modes - 8 or 16 year life, and serviced or nonserviced - providing a total of 72 OTV/platform-mode options. For final trade study concept selection, a cost program was developed considering payload and platform costs and weight; transportation unit and total costs for the shuttle and OTV; and operational costs such as assembly or construction time, mating time, and loiter time. Servicing costs were added for the final analysis and recommended selection.
3D printing functional materials and devices (Conference Presentation)
NASA Astrophysics Data System (ADS)
McAlpine, Michael C.
2017-05-01
The development of methods for interfacing high performance functional devices with biology could impact regenerative medicine, smart prosthetics, and human-machine interfaces. Indeed, the ability to three-dimensionally interweave biological and functional materials could enable the creation of devices possessing unique geometries, properties, and functionalities. Yet, most high quality functional materials are two dimensional, hard and brittle, and require high crystallization temperatures for maximal performance. These properties render the corresponding devices incompatible with biology, which is three-dimensional, soft, stretchable, and temperature sensitive. We overcome these dichotomies by: 1) using 3D printing and scanning for customized, interwoven, anatomically accurate device architectures; 2) employing nanotechnology as an enabling route for overcoming mechanical discrepancies while retaining high performance; and 3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This three-dimensional blending of functional materials and 'living' platforms may enable next-generation 3D printed devices.
Clarkson, Matthew J; Zombori, Gergely; Thompson, Steve; Totz, Johannes; Song, Yi; Espak, Miklos; Johnsen, Stian; Hawkes, David; Ourselin, Sébastien
2015-03-01
To perform research in image-guided interventions, researchers need a wide variety of software components, and assembling these components into a flexible and reliable system can be a challenging task. In this paper, the NifTK software platform is presented. A key focus has been high-performance streaming of stereo laparoscopic video data, ultrasound data and tracking data simultaneously. A new messaging library called NiftyLink is introduced that uses the OpenIGTLink protocol and provides the user with easy-to-use asynchronous two-way messaging, high reliability and comprehensive error reporting. A small suite of applications called NiftyGuide has been developed, containing lightweight applications for grabbing data, currently from position trackers and ultrasound scanners. These applications use NiftyLink to stream data into NiftyIGI, which is a workstation-based application, built on top of MITK, for visualisation and user interaction. Design decisions, performance characteristics and initial applications are described in detail. NiftyLink was tested for latency when transmitting images, tracking data, and interleaved imaging and tracking data. NiftyLink can transmit tracking data at 1,024 frames per second (fps) with a latency of 0.31 milliseconds, and 512 KB images with a latency of 6.06 milliseconds at 32 fps. NiftyIGI was tested receiving stereo high-definition laparoscopic video at 30 fps, tracking data from 4 rigid bodies at 20-30 fps and ultrasound data at 20 fps, with rendering refresh rates between 2 and 20 Hz and no loss of user interaction. These packages form part of the NifTK platform and have proven to be successful in a variety of image-guided surgery projects. Code and documentation for the NifTK platform are available from http://www.niftk.org. NiftyLink is provided open-source under a BSD license and available from http://github.com/NifTK/NiftyLink. The code for this paper is tagged IJCARS-2014.
Latent feature decompositions for integrative analysis of multi-platform genomic data
Gregory, Karl B.; Momin, Amin A.; Coombes, Kevin R.; Baladandayuthapani, Veerabhadran
2015-01-01
Increased availability of multi-platform genomics data on matched samples has sparked research efforts to discover how diverse molecular features interact both within and between platforms. In addition, simultaneous measurements of genetic and epigenetic characteristics illuminate the roles their complex relationships play in disease progression and outcomes. However, integrative methods for diverse genomics data are faced with the challenges of ultra-high dimensionality and the existence of complex interactions both within and between platforms. We propose a novel modeling framework for integrative analysis based on decompositions of the large number of platform-specific features into a smaller number of latent features. Subsequently, we build a predictive model for clinical outcomes accounting for both within- and between-platform interactions based on Bayesian model averaging procedures. Principal components, partial least squares and non-negative matrix factorization, as well as sparse counterparts of each, are used to define the latent features, and the performance of these decompositions is compared both on real and simulated data. The latent feature interactions are shown to preserve interactions between the original features and not only aid prediction but also allow explicit selection of outcome-related features. The methods are motivated by, and applied to, a glioblastoma multiforme dataset from The Cancer Genome Atlas to predict patient survival times integrating gene expression, microRNA, copy number and methylation data. For the glioblastoma data, we find a high concordance between our selected prognostic genes and genes with known associations with glioblastoma. In addition, our model discovers several relevant cross-platform interactions such as copy number variation associated gene dosing and epigenetic regulation through promoter methylation. On simulated data, we show that our proposed method successfully incorporates interactions within and between genomic platforms to aid accurate prediction and variable selection. Our methods perform best when principal components are used to define the latent features. PMID:26146492
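For illustration, a schematic sketch of the latent-feature construction with principal components is given below: each platform's feature matrix is decomposed, and pairwise products of latent features form the between-platform interaction terms. The data shapes are placeholders, and plain linear regression stands in for the paper's Bayesian model averaging.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120
expr = rng.normal(size=(n, 500))     # gene expression features (placeholder)
meth = rng.normal(size=(n, 300))     # methylation features (placeholder)
y = rng.normal(size=n)               # clinical outcome (placeholder)

# Platform-specific latent features via PCA.
z1 = PCA(n_components=5).fit_transform(expr)
z2 = PCA(n_components=5).fit_transform(meth)

# Between-platform interactions as pairwise products of latent features.
inter = np.hstack([(z1[:, i] * z2[:, j])[:, None]
                   for i in range(5) for j in range(5)])
X = np.hstack([z1, z2, inter])
print(LinearRegression().fit(X, y).score(X, y))  # in-sample R^2 of the stand-in model
```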
A review of digital microfluidics as portable platforms for lab-on a-chip applications.
Samiei, Ehsan; Tabrizian, Maryam; Hoorfar, Mina
2016-07-07
Following the development of microfluidic systems, there has been a high tendency towards developing lab-on-a-chip devices for biochemical applications. A great deal of effort has been devoted to improve and advance these devices with the goal of performing complete sets of biochemical assays on the device and possibly developing portable platforms for point of care applications. Among the different microfluidic systems used for such a purpose, digital microfluidics (DMF) shows high flexibility and capability of performing multiplex and parallel biochemical operations, and hence, has been considered as a suitable candidate for lab-on-a-chip applications. In this review, we discuss the most recent advances in the DMF platforms, and evaluate the feasibility of developing multifunctional packages for performing complete sets of processes of biochemical assays, particularly for point-of-care applications. The progress in the development of DMF systems is reviewed from eight different aspects, including device fabrication, basic fluidic operations, automation, manipulation of biological samples, advanced operations, detection, biological applications, and finally, packaging and portability of the DMF devices. Success in developing the lab-on-a-chip DMF devices will be concluded based on the advances achieved in each of these aspects.
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
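Since NeuroFlow is configured through PyNN, a network description is ordinary PyNN code; the sketch below uses the NEST backend module as a stand-in because the abstract does not name NeuroFlow's own PyNN module (a NEST installation is assumed).

```python
import pyNN.nest as sim   # stand-in backend; NeuroFlow would supply its own PyNN module

sim.setup(timestep=0.1)   # ms

# Two populations of integrate-and-fire neurons with sparse random connectivity.
exc = sim.Population(800, sim.IF_cond_exp())
inh = sim.Population(200, sim.IF_cond_exp())
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

exc.record("spikes")
sim.run(1000.0)           # simulate 1 second
sim.end()
```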
Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)
USDA-ARS's Scientific Manuscript database
Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...
Highly Sensitive Bulk Silicon Chemical Sensors with Sub-5 nm Thin Charge Inversion Layers.
Fahad, Hossain M; Gupta, Niharika; Han, Rui; Desai, Sujay B; Javey, Ali
2018-03-27
There is an increasing demand for mass-producible, low-power gas sensors in a wide variety of industrial and consumer applications. Here, we report chemical-sensitive field-effect transistors (CS-FETs) based on bulk silicon wafers, wherein an electrostatically confined sub-5 nm thin charge inversion layer is modulated by chemical exposure to achieve a high-sensitivity gas-sensing platform. Using hydrogen sensing as a "litmus" test, we demonstrate large sensor responses (>1000%) to 0.5% H2 gas, with fast response (<60 s) and recovery times (<120 s) at room temperature and low power (<50 μW). On the basis of these performance metrics as well as standardized benchmarking, we show that bulk silicon CS-FETs offer similar or better sensing performance compared to emerging nanostructured semiconductors while providing a highly scalable and manufacturable platform.
NASA Astrophysics Data System (ADS)
Tsujii, Toshiaki; Harigae, Masatoshi
Recently, some feasibility studies on a regional positioning system using quasi-zenith satellites and geostationary satellites have been conducted in Japan. However, the geometry of this system appears unsatisfactory in terms of positioning accuracy in the north-south direction. In this paper, augmentation of the satellite positioning system by High Altitude Platform Systems (HAPS) is proposed, since the flexibility of HAPS locations is effective in improving the geometry of the satellite positioning system. The improved positioning performance of the augmented system is also demonstrated.
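For illustration, the geometry argument can be made concrete with a dilution-of-precision calculation; a minimal sketch follows, in which the satellite and HAPS positions are invented placeholders rather than values from the paper.

```python
import numpy as np

def gdop(user, transmitters):
    """GDOP = sqrt(trace((H^T H)^-1)) with H rows = [-unit line-of-sight, 1]."""
    rows = []
    for p in transmitters:
        los = (p - user) / np.linalg.norm(p - user)
        rows.append(np.hstack([-los, 1.0]))
    H = np.array(rows)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

user = np.array([0.0, 0.0, 0.0])
sats = [np.array(p, float) * 1e7 for p in
        [(1, 1, 2), (-1, 1, 3), (1, -1, 4), (-1, -1, 2.5)]]  # all high-elevation: weak geometry
haps = np.array([2e4, -1.0e5, 2e4])   # nearby low-altitude platform off to one side

# Adding a measurement can only shrink (H^T H)^-1, so the HAPS lowers (improves) GDOP.
print(gdop(user, sats), gdop(user, sats + [haps]))
```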
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1990-01-01
A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
the APL Balloonborne High Altitude Research Platform (HARP)
NASA Astrophysics Data System (ADS)
Adams, D.; Arnold, S.; Bernasconi, P.
2015-09-01
The Johns Hopkins University Applied Physics Laboratory (APL) has developed and demonstrated a multi-purpose stratospheric balloonborne gondola known as the High Altitude Research Platform (HARP). HARP provides the power, mechanical supports, thermal control, and data transmission for multiple forms of high-altitude scientific research equipment. The platform has been used for astronomy, cosmology and heliophysics experiments but can also be applied to atmospheric studies, space weather and other forms of high altitude research. HARP has executed five missions. The first was Flare Genesis from Antarctica in 1993 and the most recent was the Balloon Observation Platform for Planetary Science (BOPPS) from New Mexico in 2014. HARP will next be used to perform again the Stratospheric Terahertz Observatory mission, a mission that it first performed in 2009. The structure, composed of an aluminum framework, is designed for easy transport and field assembly while providing ready access to the payload and supporting avionics. A light-weighted structure, capable of supporting Ultra-Long Duration Balloon (ULDB) flights that can last more than 100 days, is available. Scientific research payloads as heavy as 600 kg (1322 pounds) and requiring up to 800 Watts electrical power can be supported. The platform comprises all subsystems required to support and operate the science payload, including both line-of-sight (LOS) and over-the-horizon (OTH) telecommunications, the latter provided by Iridium Pilot. Electrical power is produced by solar panels for multi-day missions and batteries for single-day missions. The avionics design is primarily single-string; however, use of ruggedized industrial components provides high reliability. The avionics features a Command and Control (C&C) computer and a Pointing Control System (PCS) computer housed within a common unpressurized unit. The avionics operates from ground pressure to 2 Torr and over a temperature range from -30 °C to +85 °C. Science data is stored on-board and also flows through the C&C computer where it is packetized for real-time downlink. The telecommunications system is capable of LOS downlink up to 3000 kbps and OTH downlink up to 120 kbps. The pointing control system (PCS) provides three-axis attitude stability to 1 arcsec and can be used to aim at a fixed point for science observations, to perform science scans, and to track an object ephemeris. This paper provides a description of HARP, summarizes its performance on prior flights, describes its use on upcoming missions and outlines the characteristics that can be customized to meet the needs of the high altitude research community to support future missions.
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application
Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code is hosted on the GitHub platform, with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
Optimal design and experimental analyses of a new micro-vibration control payload-platform
NASA Astrophysics Data System (ADS)
Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen
2016-07-01
This paper presents a new payload-platform, for precision devices, which possesses the capability of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload-platform, equipped with smart material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established using the Lagrange method, and the performance of the designed payload-platform is further discussed through the combination of a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has impressive potential for micro-vibration isolation.
High performance 3D adaptive filtering for DSP based portable medical imaging systems
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark
2015-03-01
Portable medical imaging devices have proven valuable for emergency medical services, both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high-quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512×256×128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are assessed between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
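As a quick check of the stated workload, the probe rate fixes the processing budget per volume; the arithmetic below uses only the numbers quoted in the abstract.

```python
voxels = 512 * 256 * 128           # voxels per volume
rate = 10e6                        # probe delivers 10 MVoxels/s
print(voxels / 1e6, "Mvoxels per volume")   # ~16.8 Mvoxels
print(voxels / rate, "s between volumes")   # ~1.68 s budget to filter each volume
```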
On the Impact of Execution Models: A Case Study in Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram
2015-05-25
Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
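For illustration, a toy sketch of the work-stealing policy credited with the improvement is shown below: each worker pops from its own deque and, when idle, steals from a randomly chosen victim. This illustrates the scheduling idea only; the task granularity, locking and runtime are invented stand-ins, not the paper's HPC implementation.

```python
import collections, random, threading

NWORKERS = 4
deques = [collections.deque(range(i * 25, (i + 1) * 25)) for i in range(NWORKERS)]
lock = threading.Lock()
done = []

def worker(wid):
    while True:
        with lock:                       # one coarse lock keeps the sketch simple
            if deques[wid]:
                task = deques[wid].pop()                  # LIFO from own deque
            else:
                victims = [d for d in deques if d]
                if not victims:
                    return                                # no work anywhere: quit
                task = random.choice(victims).popleft()   # steal from the FIFO end
        done.append(task)                # stand-in for real computation

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NWORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(len(done), "tasks completed")      # 100
```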
Siretskiy, Alexey; Sundqvist, Tore; Voznesenskiy, Mikhail; Spjuth, Ola
2015-01-01
New high-throughput technologies, such as massively parallel sequencing, have transformed the life sciences into a data-intensive field. The most common e-infrastructure for analyzing this data consists of batch systems that are based on high-performance computing resources; however, the bioinformatics software that is built on this platform does not scale well in the general case. Recently, the Hadoop platform has emerged as an interesting option to address the challenges of increasingly large datasets with distributed storage, distributed processing, built-in data locality, fault tolerance, and an appealing programming methodology. In this work we introduce metrics and report on a quantitative comparison between Hadoop and a single node of conventional high-performance computing resources for the tasks of short read mapping and variant calling. We calculate efficiency as a function of data size and observe that the Hadoop platform is more efficient for biologically relevant data sizes in terms of computing hours for both split and un-split data files. We also quantify the advantages of the data locality provided by Hadoop for NGS problems, and show that a classical architecture with network-attached storage will not scale when computing resources increase in numbers. Measurements were performed using ten datasets of different sizes, up to 100 gigabases, using the pipeline implemented in Crossbow. To make a fair comparison, we implemented an improved preprocessor for Hadoop with better performance for splittable data files. For improved usability, we implemented a graphical user interface for Crossbow in a private cloud environment using the CloudGene platform. All of the code and data in this study are freely available as open source in public repositories. From our experiments we can conclude that the improved Hadoop pipeline scales better than the same pipeline on high-performance computing resources; we also conclude that Hadoop is an economically viable option for the common data sizes that are currently used in massively parallel sequencing. Given that datasets are expected to increase over time, Hadoop is a framework that we envision will have an increasingly important role in future biological data analysis.
Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S
2015-08-01
We developed a novel procedure-specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of the simulation (face validity) and its usefulness for training (content validity). Concurrent validity evaluated the correlation between the virtual reality renorrhaphy task and live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p<0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure-specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure-specific virtual reality tasks and their global assessment.
Acoustic Sensing of Ocean Turbulence
1991-12-01
quantities and of fast-varying quantities, requiring high spatial resolution, fast-response sensors and stable observation platforms. A classical approach to... with this type of sensor. Moum et al. [Ref. 10] performed upper ocean observations with this instrument, where they were able to characterize the fine... platform orientation using the 3-axis accelerometer as tiltmeters. The non-acoustic channels on the CDV package are: 3 component...
Doubek, Gustavo; Sekol, Ryan C.; Li, Jinyang; ...
2015-12-22
Precise control over catalyst surface composition and structure is necessary to improve the function of electrochemical systems. To that end, bulk metallic glass (BMG) alloys with atomically dispersed elements provide a highly processable, nanoscale platform for electrocatalysis and surface modification. Here we report on nanostructures of Pt-based BMGs that are modified with various subtractive and additive processes to improve their electrochemical performance.
Hasani-Sadrabadi, Mohammad Mahdi; Majedi, Fatemeh Sadat; VanDersarl, Jules John; Dashtimoghadam, Erfan; Ghaffarian, S Reza; Bertsch, Arnaud; Moaddel, Homayoun; Renaud, Philippe
2012-11-21
At nanoscale length scales, the properties of particles change rapidly with the slightest change in dimension. The use of a microfluidic platform enables precise control of sub-100 nm organic nanoparticles (NPs) based on polybenzimidazole. Using hydrodynamic flow focusing, we can control the size and shape of the NPs, which in turn controls a number of particle material properties. The anhydrous proton-conducting nature of the prepared NPs allowed us to make a high-performance ion exchange membrane for fuel cell applications, and microfluidic tuning of the NPs allowed us subsequently to tune the fuel cell performance.
Garcia-Hermoso, Antonio; Escalante, Yolanda; Arellano, Raul; Navarro, Fernando; Domínguez, Ana M.; Saavedra, Jose M.
2013-01-01
The purpose of this study was to investigate the association between block time and final performance for each sex in the 50-m and 100-m individual freestyle, distinguishing between classification (1st to 3rd, 4th to 8th, 9th to 16th) and type of starting platform (old and new) in international competitions. Twenty-six international competitions covering a 13-year period (2000-2012) were analysed retrospectively. The data corresponded to a total of 1657 swimmers' competition histories. A two-way ANOVA (sex × classification) was performed for each event and starting platform with the Bonferroni post-hoc test, and another two-way ANOVA for sex and starting platform (sex × starting platform). Pearson's simple correlation coefficient was used to determine correlations between the block time and the final performance. Finally, a simple linear regression analysis was done between the final time and the block time for each sex and platform. The men had shorter starting block times than the women in both events and from both platforms. For the 50-m event, medalists had shorter block times than semi-finalists with the old starting platforms. Block times were directly related to performance with the old starting platforms. With the new starting platforms, however, the relationship was inverse, notably in the women's 50-m event. The block time was related to final performance in the men's 50-m event with the old starting platform, but with the new platform it was critical only for the women's 50-m event. Key Points: The men had shorter block times than the women in both events and with both platforms. For both distances, the swimmers had shorter block times in their starts from the new starting platform with a back plate than with the old platform. For the 50-m event with the old starting platform, the medalists had shorter block times than the semi-finalists. The new starting platform block time was only determinant in the women's 50-m event. In order to improve performance, specific training with the new platform with a back plate should be considered. PMID:24421729
A Microfluidic Platform for High-Throughput Multiplexed Protein Quantitation
Volpetti, Francesca; Garcia-Cordero, Jose; Maerkl, Sebastian J.
2015-01-01
We present a high-throughput microfluidic platform capable of quantitating up to 384 biomarkers in 4 distinct samples by immunoassay. The microfluidic device contains 384 unit cells, which can be individually programmed with pairs of capture and detection antibody. Samples are quantitated in each unit cell by four independent MITOMI detection areas, allowing four samples to be analyzed in parallel for a total of 1,536 assays per device. We show that the device can be pre-assembled and stored for weeks at elevated temperature and we performed proof-of-concept experiments simultaneously quantitating IL-6, IL-1β, TNF-α, PSA, and GFP. Finally, we show that the platform can be used to identify functional antibody combinations by screening 64 antibody combinations requiring up to 384 unique assays per device. PMID:25680117
NASA Astrophysics Data System (ADS)
Pickworth, Louisa
2017-10-01
Hydrodynamic instabilities and asymmetries are a major obstacle in the quest to achieve ignition, as they cause pre-existing capsule perturbations to grow and ultimately quench the fusion burn in experiments at the National Ignition Facility (NIF). This talk will review recent developments of the experimental platforms and techniques to measure high-mode instabilities and low-mode asymmetries in the deceleration phase of implosions. These new platforms provide a natural link between the acceleration-phase experiments and the neutron performance of layered deuterium-tritium implosions. In one innovative technique, self-emission from the hot spot was enhanced with argon dopant to "self-backlight" the shell in flight around peak compression. Experiments with pre-imposed 2-D perturbations measured instability growth factors, while experiments with 3-D, "native-roughness" perturbations measured shell integrity in the deceleration phase of implosions. In a complementary technique, the inner surface of the shell, along with its low-mode asymmetries and high-mode perturbations, was visualized in implosions using x-ray emission of a high-Z dopant added to the inner surface of the capsule. These new measurements were instrumental in revealing unexpected surprises and providing improved understanding of the role of instabilities and asymmetries in implosion performance. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Lignin and silicate based hydrogels for biosensor applications
NASA Astrophysics Data System (ADS)
Burrs, S. L.; Jairam, S.; Vanegas, D. C.; Tong, Z.; McLamore, E. S.
2013-05-01
Advances in biocompatible materials and electrocatalytic nanomaterials have extended and enhanced the field of biosensors. Immobilization of biorecognition elements on nanomaterial platforms is an efficient technique for developing high fidelity biosensors. Single layer (i.e., Langmuir-Blodgett) protein films are efficient, but disadvantages of this approach include high cost, mass transfer limitations, and Vroman competition for surface binding sites. There is a need for simple, user friendly protein-nanomaterial sensing membranes that can be developed in laboratories or classrooms (i.e., outside of the clean room). In this research, we develop high fidelity nanomaterial platforms for developing electrochemical biosensors using sustainable biomaterials and user-friendly deposition techniques. Catalytic nanomaterial platforms are developed using a combination of self assembled monolayer chemistry and electrodeposition. High performance biomaterials (e.g., nanolignin) are recovered from paper pulp waste and combined with proteins and nanomaterials to form active sensor membranes. These methods are being used to develop electrochemical biosensors for studying physiological transport in biomedical, agricultural, and environmental applications.
NCI's Transdisciplinary High Performance Scientific Data Platform
NASA Astrophysics Data System (ADS)
Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley
2016-04-01
The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data: through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and additional community practices, and a foundation for new exploratory developments. To that end, NCI is already participating in numerous current and emerging collaborations internationally including the Earth System Grid Federation (ESGF); Climate and Weather Data from international agencies such as NASA, NOAA, and UK Met Office; Remotely Sensed Satellite Earth Imaging through collaborations with GEOS and CEOS; EU-led Ocean Data Interoperability Platform (ODIP) and Horizon2020 Earth Server2 project; as well as broader data infrastructure community activities such as Research Data Alliance (RDA). Each research community is heavily engaged in international standards such as ISO, OGC and W3C, adopting community-led conventions for data, supporting improved data organisation such as controlled vocabularies, and creating workflows that use mature APIs and data services. NCI is engaging with these communities on NERDIP to ensure that such standards are applied uniformly and tested in practice by working with the variety of data and technologies. This includes benchmarking exemplar cases from individual communities, documenting their use of standards, and evaluating their practical use of the different technologies. Such a process fully establishes functionality and performance, and is needed to transition safely when improvements or rationalisation are required.
Work is now underway to extend the NERDIP platform for better utilisation in the subsurface geophysical community, including maximising national uptake, as well as better integration with international science platforms.
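To make the notion of standardised programmatic access concrete, the sketch below reads a remote dataset over OPeNDAP with xarray; the endpoint URL and variable name are hypothetical illustrations, not actual NCI services.

# Hedged illustration of standards-based data access with xarray over OPeNDAP.
# The URL and variable name below are hypothetical placeholders.
import xarray as xr

url = "https://example.org/thredds/dodsC/climate/tasmax.nc"  # hypothetical endpoint
ds = xr.open_dataset(url)          # lazy access; subsetting happens server-side
subset = ds["tasmax"].sel(time="2015-01", lat=slice(-45, -10), lon=slice(110, 155))
print(subset.mean().values)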
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit for a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, a NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
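As context for the PyNN integration mentioned above, the sketch below shows the style of simulator-agnostic model description PyNN accepts. The standard HH_cond_exp cell is a stand-in for the paper's extended Hodgkin-Huxley inferior-olive model, pyNN.nest is only one of PyNN's interchangeable back-ends, and the population size, connection probability and weight are arbitrary assumptions.

# Minimal PyNN sketch; cell type, back-end and parameters are stand-ins,
# not the BrainFrame model itself.
import pyNN.nest as sim

sim.setup(timestep=0.025)  # ms
cells = sim.Population(96, sim.HH_cond_exp())               # network dimension
conns = sim.Projection(cells, cells,
                       sim.FixedProbabilityConnector(0.1),  # connectivity density
                       sim.StaticSynapse(weight=0.002))
cells.record("v")
sim.run(100.0)  # ms
data = cells.get_data()
sim.end()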
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Teppati Losè, L.
2017-08-01
The use of UAV platforms is now standard for image and video acquisition from an aerial point of view. In response to the enormous growth in demand, the production of COTS (Commercial Off-the-Shelf) platforms and systems has increased to meet market requirements. In recent years, different platforms have been developed and sold at low to medium cost, and the offer of interesting systems is now very large. One of the most important companies producing UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms realized by the company range from low-cost systems up to professional equipment tailored for high-resolution acquisitions useful for film-making purposes. Given the characteristics of the latest low-cost DJI platforms, their onboard sensors, and the performance of modern photogrammetric software based on Structure from Motion (SfM) algorithms, these systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test three COTS platforms realized by DJI, the Mavic Pro, the Phantom 4 and the Phantom 4 PRO, in terms of image quality, flight operations, flight planning and accuracy of the final products. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.
A DVE Time Management Simulation and Verification Platform Based on Causality Consistency Middleware
NASA Astrophysics Data System (ADS)
Zhou, Hangjun; Zhang, Wei; Peng, Yuxing; Li, Sikun
During the course of designing a time management algorithm for DVEs, researchers are often distracted by having to implement the trivial but fundamental details of simulation and verification. A platform that already realizes these details is therefore desirable; however, to our knowledge this has not been achieved in any published work. In this paper, we are the first to design and realize a DVE time management simulation and verification platform providing exactly the same interfaces as those defined by the HLA Interface Specification. Moreover, our platform is based on a newly designed causality consistency middleware and can offer the comparison of three kinds of time management services: CO, RO and TSO. The experimental results show that the implementation of the platform costs only a small overhead, and that its efficiency makes it highly effective for researchers who wish to focus solely on improving their algorithm designs.
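Since the paper's middleware is not public, the following Python sketch only illustrates the core idea behind time stamp order (TSO) delivery as defined by HLA: events are buffered in a priority queue keyed by time stamp and released only up to the current lower bound on incoming time stamps (LBTS). The class and variable names are illustrative; receive order (RO) would deliver on arrival, and causal order (CO) would respect happened-before instead.

# Illustrative TSO buffering, not the paper's middleware.
import heapq

class TsoQueue:
    def __init__(self):
        self._heap = []  # (timestamp, event) pairs

    def insert(self, timestamp, event):
        heapq.heappush(self._heap, (timestamp, event))

    def deliverable(self, lbts):
        """Pop every buffered event whose time stamp does not exceed LBTS."""
        out = []
        while self._heap and self._heap[0][0] <= lbts:
            out.append(heapq.heappop(self._heap))
        return out

q = TsoQueue()
q.insert(12.0, "fire")
q.insert(7.5, "move")
print(q.deliverable(lbts=10.0))  # -> [(7.5, 'move')]; 'fire' stays buffered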
Design and Evaluation of a Personal Digital Assistant-based Research Platform for Cochlear Implants
Ali, Hussnain; Lobo, Arthur P.; Loizou, Philipos C.
2014-01-01
This paper discusses the design, development, features, and clinical evaluation of a personal digital assistant (PDA)-based platform for cochlear implant research. This highly versatile and portable research platform allows researchers to design and perform complex experiments with cochlear implants manufactured by Cochlear Corporation with great ease and flexibility. The research platform includes a portable processor for implementing and evaluating novel speech processing algorithms, a stimulator unit which can be used for electrical stimulation and neurophysiologic studies with animals, and a recording unit for collecting electroencephalogram/evoked potentials from human subjects. The design of the platform for real time and offline stimulation modes is discussed for electric-only and electric plus acoustic stimulation followed by results from an acute study with implant users for speech intelligibility in quiet and noisy conditions. The results are comparable with users’ clinical processor and very promising for undertaking chronic studies. PMID:23674422
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
Zhang, Xing; Romm, Michelle; Zheng, Xueyun; Zink, Erika M.; Kim, Young-Mo; Burnum-Johnson, Kristin E.; Orton, Daniel J.; Apffel, Alex; Ibrahim, Yehia M.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Jordan N.; Ma, Jian; Renslow, Ryan S.; Thomas, Dennis G.; Blackwell, Anne E.; Swinford, Glenn; Sausen, John; Kurulugama, Ruwan T.; Eno, Nathan; Darland, Ed; Stafford, George; Fjeldsted, John; Metz, Thomas O.; Teeguarden, Justin G.; Smith, Richard D.; Baker, Erin S.
2017-01-01
Characterization of endogenous metabolites and xenobiotics is essential to deconvoluting the genetic and environmental causes of disease. However, surveillance of chemical exposure and disease-related changes in large cohorts requires an analytical platform that offers rapid measurement, high sensitivity, efficient separation, broad dynamic range, and application to an expansive chemical space. Here, we present a novel platform for small molecule analyses that addresses these requirements by combining solid-phase extraction with ion mobility spectrometry and mass spectrometry (SPE-IMS-MS). This platform is capable of performing both targeted and global measurements of endogenous metabolites and xenobiotics in human biofluids with high reproducibility (CV ≤ 3%), sensitivity (LODs in the pM range in biofluids) and throughput (10-s sample-to-sample duty cycle). We report application of this platform to the analysis of human urine from patients with and without type 1 diabetes, where we observed statistically significant variations in the concentration of disaccharides and previously unreported chemical isomers. This SPE-IMS-MS platform overcomes many of the current challenges of large-scale metabolomic and exposomic analyses and offers a viable option for population and patient cohort screening in an effort to gain insights into disease processes and human environmental chemical exposure. PMID:29276770
Lee, Gwenyth O.; Kosek, Peter; Lima, Aldo A.M.; Singh, Ravinder; Yori, Pablo P.; Olortegui, Maribel P.; Lamsam, Jesse L.; Oliveira, Domingos B.; Guerrant, Richard L.; Kosek, Margaret
2014-01-01
Objectives: The lactulose:mannitol (L:M) diagnostic test is frequently used in field studies of environmental enteropathy (EE); however, heterogeneity in test administration and disaccharide measurement has limited the comparison of results between studies and populations. We aim to assess the agreement between L:M measurement between high-performance liquid chromatography with pulsed amperometric detection (HPLC-PAD) and liquid chromatography-tandem mass spectrometry (LC-MSMS) platforms. Methods: The L:M test was administered in a cohort of Peruvian infants considered at risk for EE. A total of 100 samples were tested for lactulose and mannitol at 3 independent laboratories: 1 running an HPLC-PAD platform and 2 running LC-MSMS platforms. Agreement between the platforms was estimated. Results: The Spearman correlation between the 2 LC-MSMS platforms was high (ρ ≥ 0.89) for mannitol, lactulose, and the L:M ratio. The correlation between the HPLC-PAD platform and LC-MSMS platform was ρ = 0.95 for mannitol, ρ = 0.70 for lactulose, and ρ = 0.43 for the L:M ratio. In addition, the HPLC-PAD platform overestimated the lowest disaccharide concentrations to the greatest degree. Conclusions: Given the large analyte concentration range, the improved accuracy of LC-MSMS has important consequences for the assessment of lactulose and mannitol following oral administration in populations at risk for EE. We recommend that researchers wishing to implement a dual-sugar test as part of a study of EE use an LC-MSMS platform to optimize the accuracy of results and increase comparability between studies. PMID:24941958
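The agreement statistics reported here reduce to rank correlations on paired measurements; a minimal Python sketch with hypothetical values follows.

# Cross-platform agreement as a Spearman rank correlation; the paired
# L:M ratios below are hypothetical, not study data.
import numpy as np
from scipy import stats

hplc_pad = np.array([0.05, 0.12, 0.30, 0.08, 0.21])
lc_msms  = np.array([0.04, 0.15, 0.28, 0.10, 0.19])

rho, p = stats.spearmanr(hplc_pad, lc_msms)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")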
Bláha, Benjamin A F; Morris, Stephen A; Ogonah, Olotu W; Maucourant, Sophie; Crescente, Vincenzo; Rosenberg, William; Mukhopadhyay, Tarit K
2018-01-01
The time and cost benefits of miniaturized fermentation platforms can only be gained by employing complementary techniques facilitating high-throughput at small sample volumes. Microbial cell disruption is a major bottleneck in experimental throughput and is often restricted to large processing volumes. Moreover, for rigid yeast species, such as Pichia pastoris, no effective high-throughput disruption methods exist. The development of an automated, miniaturized, high-throughput, noncontact, scalable platform based on adaptive focused acoustics (AFA) to disrupt P. pastoris and recover intracellular heterologous protein is described. Augmented modes of AFA were established by investigating vessel designs and a novel enzymatic pretreatment step. Three different modes of AFA were studied and compared to the performance of high-pressure homogenization. For each of these modes of cell disruption, response models were developed to account for five different performance criteria. Using multiple responses not only demonstrated that different operating parameters are required for different response optima, with highest product purity requiring suboptimal values for other criteria, but also allowed for AFA-based methods to mimic large-scale homogenization processes. These results demonstrate that AFA-mediated cell disruption can be used for a wide range of applications including buffer development, strain selection, fermentation process development, and whole bioprocess integration. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:130-140, 2018.
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
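As a concrete illustration of the JIT-plus-preprocessor technique described at the end of this abstract, the sketch below builds a single kernel source with a hardware-specific define; pyopencl is used as the host binding for brevity, and the VEC_WIDTH macro and scaling kernel are hypothetical stand-ins for real tuning parameters.

# One portable kernel source, specialized per device via a -D build option.
import numpy as np
import pyopencl as cl

src = """
__kernel void scale(__global float *x, const float a) {
    int i = get_global_id(0);
    #if VEC_WIDTH > 1
    /* a vectorized path would go here for hardware with wide SIMD units */
    #endif
    x[i] = a * x[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
# Choose the define per target device while keeping a single source tree.
prg = cl.Program(ctx, src).build(options=["-DVEC_WIDTH=4"])

x = np.arange(8, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=x)
prg.scale(queue, x.shape, None, buf, np.float32(2.0))
cl.enqueue_copy(queue, x, buf)
print(x)  # [0. 2. 4. ...]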
Validation of the three web quality dimensions of a minimally invasive surgery e-learning platform.
Ortega-Morán, Juan Francisco; Pagador, J Blas; Sánchez-Peralta, Luisa Fernanda; Sánchez-González, Patricia; Noguera, José; Burgos, Daniel; Gómez, Enrique J; Sánchez-Margallo, Francisco M
2017-11-01
E-learning web environments, including the new TELMA platform, are increasingly being used to provide cognitive training in minimally invasive surgery (MIS) to surgeons. A complete validation of this MIS e-learning platform has been performed to determine whether it complies with the three web quality dimensions: usability, content and functionality. 21 Surgeons participated in the validation trials. They performed a set of tasks in the TELMA platform, where an e-MIS validity approach was followed. Subjective (questionnaires and checklists) and objective (web analytics) metrics were analysed to achieve the complete validation of usability, content and functionality. The TELMA platform allowed access to didactic content with easy and intuitive navigation. Surgeons performed all tasks with a close-to-ideal number of clicks and amount of time. They considered the design of the website to be consistent (95.24%), organised (90.48%) and attractive (85.71%). Moreover, they gave the content a high score (4.06 out of 5) and considered it adequate for teaching purposes. The surgeons scored the professional language and content (4.35), logo (4.24) and recommendations (4.20) the highest. Regarding functionality, the TELMA platform received an acceptance of 95.24% for navigation and 90.48% for interactivity. According to the study, it seems that TELMA had an attractive design, innovative content and interactive navigation, which are three key features of an e-learning platform. TELMA successfully met the three criteria necessary for consideration as a website of quality by achieving more than 70% of agreements regarding all usability, content and functionality items validated; this constitutes a preliminary requirement for an effective e-learning platform. However, the content completeness, authoring tool and registration process required improvement. Finally, the e-MIS validity methodology used to measure the three dimensions of web quality in this work can be applied to other clinical areas or training fields. Copyright © 2017 Elsevier B.V. All rights reserved.
Microfluidic platform for optimization of crystallization conditions
NASA Astrophysics Data System (ADS)
Zhang, Shuheng; Gerard, Charline J. J.; Ikni, Aziza; Ferry, Gilles; Vuillard, Laurent M.; Boutin, Jean A.; Ferte, Nathalie; Grossier, Romain; Candoni, Nadine; Veesler, Stéphane
2017-08-01
We describe a universal, high-throughput droplet-based microfluidic platform for crystallization. It is suitable for a multitude of applications, due to its flexibility, ease of use, compatibility with all solvents and low cost. The platform offers four modular functions: droplet formation, on-line characterization, incubation and observation. We use it to generate droplet arrays with a concentration gradient in continuous long tubing, without using surfactant. We control droplet properties (size, frequency and spacing) in long tubing by using hydrodynamic empirical relations. We measure droplet chemical composition using both an off-line and a real-time on-line method. Applying this platform to a complicated chemical environment, membrane proteins, we successfully handle crystallization, suggesting that the platform is likely to perform well in other circumstances. We validate the platform for fine-gradient screening and optimization of crystallization conditions. Additional on-line detection methods may well be integrated into this platform in the future, for instance, an on-line diffraction technique. We believe this method could find applications in fields such as fluid interaction engineering, live cell study and enzyme kinetics.
P2P Technology for High-Performance Computing: An Overview
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Berry, Jason
2003-01-01
The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility of using the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As a part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and I have compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and I have summarized my findings in a poster paper.
Development of the SSTL-300-S1 Composite Imager Barrel Structure
NASA Astrophysics Data System (ADS)
Hamar, Chris; Wood, Trevor; Alsami, Sami; Hallett, Ben
2014-06-01
The SSTL-300-S1 is the latest in the family of highly capable SSTL-300 platforms, providing high resolution imagery with all the existing mission performance of the heritage platform. In developing the product, SSTL has had to undertake the development of a composite imager barrel assembly, which forms the payload instrument's primary structure. Working to a nominal schedule of 24 months from requirements definition to structural qualification, the barrel's development philosophy has had to carefully balance the interdependent optical, structural and programmatic requirements. This paper provides a brief summary description of that development.
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Different from GPU implementations of other research groups, which are predominantly running on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves significant performance boost as compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paved the way for running large-size micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
NASA Technical Reports Server (NTRS)
Bates, Lisa B.; Young, David T.
2012-01-01
This paper describes recent developmental testing to verify the integration of a developmental electromechanical actuator (EMA) with high rate lithium ion batteries and a cross platform extensible controller. Testing was performed at the Thrust Vector Control Research, Development and Qualification Laboratory at the NASA George C. Marshall Space Flight Center. Electric Thrust Vector Control (ETVC) systems like the EMA may significantly reduce recurring launch costs and complexity compared to heritage systems. Electric actuator mechanisms and control requirements across dissimilar platforms are also discussed with a focus on the similarities leveraged and differences overcome by the cross platform extensible common controller architecture.
Towards a magnetoresistive platform for neural signal recording
NASA Astrophysics Data System (ADS)
Sharma, P. P.; Gervasoni, G.; Albisetti, E.; D'Ercoli, F.; Monticelli, M.; Moretti, D.; Forte, N.; Rocchi, A.; Ferrari, G.; Baldelli, P.; Sampietro, M.; Benfenati, F.; Bertacco, R.; Petti, D.
2017-05-01
A promising strategy to get deeper insight into brain functionalities relies on the investigation of neural activities at the cellular and sub-cellular level. In this framework, methods for recording neuron electrical activity have gained interest over the years. The main technological challenges are associated with finding highly sensitive detection schemes providing considerable spatial and temporal resolution. Moreover, the possibility to perform non-invasive assays would constitute a noteworthy benefit. In this work, we present a magnetoresistive platform for the detection of the action potential propagation in neural cells. Such a platform allows, in perspective, the in vitro recording of neural signals arising from single neurons, neural networks and brain slices.
Earth observing system instrument pointing control modeling for polar orbiting platforms
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.
1987-01-01
An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.
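A hedged sketch of the balanced-realization model-reduction step, using the python-control package (which relies on its optional slycot dependency for this routine) rather than the study's tooling; the random 20th-order system is a stand-in for a NASTRAN-derived flexible-platform model.

# Balanced truncation of a stand-in flexible-structure model.
import control

full = control.rss(states=20, outputs=2, inputs=2)   # random stable stand-in
reduced = control.balred(full, orders=6)             # balanced truncation
print(reduced.nstates)                               # -> 6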
D.R.O.P. The Durable Reconnaissance and Observation Platform
NASA Technical Reports Server (NTRS)
McKenzie, Clifford; Parness, Aaron
2012-01-01
The Durable Reconnaissance and Observation Platform (DROP) is a prototype robotic platform with the ability to climb concrete surfaces up to 85° at a rate of 25 cm/s, make rapid horizontal to vertical transitions, carry an audio/visual reconnaissance payload, and survive impacts from 3 meters. DROP is manufactured using a combination of selective laser sintering (SLS) and shape deposition manufacturing (SDM) techniques. The platform uses a two-wheel, two-motor design that delivers high mobility with low complexity. DROP extends microspine climbing technology from linear to rotary applications, providing improved transition ability, increased speeds, and simpler body mechanics while maintaining the microspines' ability to opportunistically grip rough surfaces. Various aspects of prototype design and performance are discussed, including the climbing mechanism, body design, and impact survival.
Geostationary platform systems concepts definition study. Volume 2: Technical, book 2
NASA Technical Reports Server (NTRS)
1980-01-01
A selected concept for a geostationary platform is defined in sufficient detail to identify requirements for supporting research and technology, space demonstrations, GFE interfaces, costs, and schedules. This system consists of six platforms in geostationary orbit (GEO) over the Western Hemisphere and six over the Atlantic, to satisfy the total payload set associated with the nominal traffic model. Each platform is delivered to low Earth orbit (LEO) in a single shuttle flight, already mated to its LEO to GEO transfer vehicle and ready for deployment and transfer to GEO. An alternative concept is looked at briefly for comparison of configuration and technology requirements. This alternative consists of two large platforms, one over the Western Hemisphere consisting of three docked modules, and one over the Atlantic (two docked modules), to satisfy a high traffic model. The modules are full length orbiter cargo bay payloads, mated at LEO to orbital transfer vehicles (OTVs) delivered in other shuttle flights, for transfer to GEO, rendezvous, and docking. A preliminary feasibility study of an experimental platform is also performed to demonstrate communications and platform technologies required for the operational platforms of the 1990s.
Integrated digital printing of flexible circuits for wireless sensing (Conference Presentation)
NASA Astrophysics Data System (ADS)
Mei, Ping; Whiting, Gregory L.; Schwartz, David E.; Ng, Tse Nga; Krusor, Brent S.; Ready, Steve E.; Daniel, George; Veres, Janos; Street, Bob
2016-09-01
Wireless sensing has broad applications in a wide variety of fields such as infrastructure monitoring, chemistry, environmental engineering and cold supply chain management. Further development of sensing systems will focus on achieving light weight, flexibility, low power consumption and low cost. Fully printed electronics provide excellent flexibility and customizability, as well as the potential for low cost and large area applications, but lack solutions for high-density, high-performance circuitry. Conventional electronics mounted on flexible printed circuit boards provide high performance but are not digitally fabricated or readily customizable. Incorporation of small silicon dies or packaged chips into a printed platform enables high performance without compromising flexibility or cost. At PARC, we combine high functionality c-Si CMOS and digitally printed components and interconnects to create an integrated platform that can read and process multiple discrete sensors. Our approach facilitates customization to a wide variety of sensors and user interfaces suitable for a broad range of applications including remote monitoring of health, structures and environment. This talk will describe several examples of printed wireless sensing systems. The technologies required for these sensor systems are a mix of novel sensors, printing processes, conventional microchips, flexible substrates and energy harvesting power solutions.
Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing
NASA Astrophysics Data System (ADS)
Amooie, M. A.; Moortgat, J.
2017-12-01
We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with fast Quad Core 1.2GHz ARMv8 64bit processor, 1GB of RAM, and 32GB microSD card for local storage. Therefore, the cluster has a total RAM of 128GB that is distributed on the individual nodes and a flash capacity of 4TB with 512 processors, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between each node. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance-computing (HPC) and handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively-parallelized scalable code. We present benchmarking results for the computational performance across various number of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and a feasible learning platform for challenging engineering and scientific problems.
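A minimal sketch of the message-passing pattern such a cluster runs, using mpi4py; the reduction below is a toy workload standing in for the in-house subsurface-flow code, launched with something like `mpiexec -n 512 python demo.py` across the nodes.

# Toy MPI workload: each rank computes a partial sum, root combines them.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each node works on its local slice of the domain...
local = sum(i * i for i in range(rank, 10_000_000, size))
# ...and the partial results are combined across nodes.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks, total = {total}")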
Monolithic liquid crystal waveguide Fourier transform spectrometer for gas species sensing
NASA Astrophysics Data System (ADS)
Chao, Tien-Hsin; Lu, Thomas T.; Davis, Scott R.; Rommel, Scott D.; Farca, George; Luey, Ben; Martin, Alan; Anderson, Michael H.
2011-04-01
Jet Propulsion Lab and Vescent Photonics Inc. are jointly developing an innovative ultracompact (volume < 10 cm³), ultra-low power (< 10⁻³ Watt-hours per measurement and zero power consumption when not measuring), completely non-mechanical Liquid Crystal Waveguide Fourier Transform Spectrometer (LCWFTS) that will be suitable for a variety of remote-platform, in-situ measurements. These devices are made possible by a novel electro-evanescent waveguide architecture, enabling "monolithic chip-scale" Electro-Optic FTS (EO-FTS) sensors. The potential performance of these EO-FTS sensors includes: i) a spectral range throughout 0.4-5 μm (25000-2000 cm⁻¹), ii) high resolution (Δλ ≤ 0.1 nm), iii) high-speed (< 1 ms) measurements, and iv) rugged integrated optical construction. This performance potential enables the detection and quantification of a large number of different atmospheric gases simultaneously in the same air mass, and the rugged construction will enable deployment on previously inaccessible platforms. The sensor construction is also amenable to analyzing aqueous samples on remote floating or submerged platforms. We report a proof-of-principle prototype LCWFTS sensor that has been demonstrated in the near-IR (range of 1450-1700 nm) with a 5 nm resolution. This performance is in good agreement with theoretical models, which are being used to design and build the next generation of LCWFTS devices.
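As a quick check of the quoted spectral range, the wavenumber in cm⁻¹ equals 10⁷ divided by the wavelength in nm:

# Wavelength <-> wavenumber conversion for the quoted range.
def to_wavenumber(nm):
    return 1e7 / nm

print(to_wavenumber(400))   # 0.4 um -> 25000 cm^-1
print(to_wavenumber(5000))  # 5 um   ->  2000 cm^-1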
TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data
NASA Astrophysics Data System (ADS)
LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.
2017-12-01
Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and is already available to early adopters through the beta-user program. We will describe the datasets and how to use them as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines. Finally, we will describe the National Data Service workbench, a cloud computing platform that can access the petabyte scale data while supporting reproducible research.
65 nm LP/GP mix low cost platform for multi-media wireless and consumer applications
NASA Astrophysics Data System (ADS)
Tavel, B.; Duriez, B.; Gwoziecki, R.; Basso, M. T.; Julien, C.; Ortolland, C.; Laplanche, Y.; Fox, R.; Sabouret, E.; Detcheverry, C.; Boeuf, F.; Morin, P.; Barge, D.; Bidaud, M.; Biénacel, J.; Garnier, P.; Cooper, K.; Chapon, J. D.; Trouiller, Y.; Belledent, J.; Broekaart, M.; Gouraud, P.; Denais, M.; Huard, V.; Rochereau, K.; Difrenza, R.; Planes, N.; Marin, M.; Boret, S.; Gloria, D.; Vanbergue, S.; Abramowitz, P.; Vishnubhotla, L.; Reber, D.; Stolk, P.; Woo, M.; Arnaud, F.
2006-04-01
A complete 65 nm CMOS platform, called LP/GP Mix, has been developed employing thick oxide transistor (IO), Low Power (LP) and General Purpose (GP) devices on the same chip. Dedicated to wireless multi-media and consumer applications, this new triple gate oxide platform is low cost (+1 mask only) and saves over 35% of dynamic power with the use of the low operating voltage GP. The LP/GP mix shows competitive digital performance with a ring oscillator (FO = 1) speed equal to 7 ps per stage (GP) and 6T-SRAM static power lower than 10 pA/cell (LP). Compatible with mixed-signal design requirements, transistors show high voltage gain, low mismatch factor and low flicker noise. Moreover, to address mobile phone demands, excellent RF performance has been achieved with fT = 160 GHz for LP and 280 GHz for GP nMOS transistors.
Verdijk, Noortje A; Kasteleyn, Marise J; Harmans, Lara M; Talboom, Irvin JSH; Numans, Mattijs E; Chavannes, Niels H
2017-01-01
Background Worldwide, nearly 3 million people die of chronic obstructive pulmonary disease (COPD) every year. Integrated disease management (IDM) improves disease-specific quality of life and exercise capacity for people with COPD, but can also reduce hospital admissions and hospital days. Self-management of COPD through eHealth interventions has been shown to be an effective method to improve the quality and efficiency of IDM in several settings, but it remains unknown which factors influence usage of eHealth and change in behavior of patients. Objective Our study, e-Vita COPD, compares different levels of integration of Web-based self-management platforms in IDM in three primary care settings. The main aim of this study is to analyze the factors that successfully promote the use of a self-management platform for COPD patients. Methods The e-Vita COPD study compares three different approaches to incorporating eHealth via Web-based self-management platforms into IDM of COPD using a parallel cohort design. Three groups integrated the platforms to different levels. In groups 1 (high integration) and 2 (medium integration), randomization was performed to two levels of personal assistance for patients (high and low assistance); in group 3 there was no integration into disease management. Every visit to the e-Vita and Zorgdraad COPD Web platforms was tracked objectively by collecting log data (sessions and services). At the first log-in, patients completed a baseline questionnaire. Baseline characteristics were automatically extracted from the log files including age, gender, education level, scores on the Clinical COPD Questionnaire (CCQ), dyspnea scale (MRC), and quality of life questionnaire (EQ5D). To predict the use of the platforms, multiple linear regression analyses for the different independent variables were performed: integration in IDM (high, medium, none), personal assistance for the participants (high vs low), educational level, and self-efficacy level (General Self-Efficacy Scale [GSES]). All analyses were adjusted for age and gender. Results Of the 702 invited COPD patients, 215 (30.6%) registered to a platform. Of the 82 patients in group 1 (high integration IDM), 36 were in group 1A (personal assistance) and 46 in group 1B (low assistance). Of the 96 patients in group 2 (medium integration IDM), 44 were in group 2A (telephone assistance) and 52 in group 2B (low assistance). A total of 37 patients participated in group 3 (no integration IDM). In all, 107 users (49.8%) visited the platform at least once in the 15-month period. The mean number of sessions differed between the three groups (group 1: mean 10.5, SD 1.3; group 2: mean 8.8, SD 1.4; group 3: mean 3.7, SD 1.8; P=.01). The mean number of sessions differed between the high-assistance and low-assistance groups in groups 1 and 2 (high: mean 11.8, SD 1.3; low: mean 6.7, SD 1.4; F(1,80)=6.55, P=.01). High-assistance participants used more services (mean 45.4, SD 6.2) than low-assistance participants (mean 21.2, SD 6.8; F(1,80)=6.82, P=.01). No association was found between educational level and usage, or between GSES and usage. Conclusions Use of a self-management platform is higher when participants receive adequate personal assistance about how to use the platform. Blended care, where digital health and usual care are integrated, will likely lead to increased use of the online program. Future research should provide additional insights into the preferences of different patient groups.
Trial Registration Nederlands Trial Register NTR4098; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4098 (Archived by WebCite at http://www.webcitation.org/6qO1hqiJ1) PMID:28566268
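A sketch of the usage analysis described above, assuming a hypothetical per-patient summary exported from the log data; the file name and column names are illustrative, not the study's actual export format.

# Multiple linear regression predicting platform usage, adjusted for age
# and gender as in the study; data layout is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evita_log_summary.csv")  # hypothetical log-derived summary

model = smf.ols(
    "sessions ~ C(integration) + C(assistance) + C(education) + gses + age + C(gender)",
    data=df,
).fit()
print(model.summary())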
Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
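The throughput figure quoted above follows directly from the stated roofline-style model; the short calculation below simply reproduces that arithmetic.

# Performance = arithmetic intensity x peak bandwidth x memory efficiency.
ops_per_byte = 130 / 64          # ~2.03 flops per byte of I/O
peak_bw_gbs = 76.8               # Convey HC-1 peak memory bandwidth, GB/s
mem_efficiency = 0.5             # achieved fraction of peak bandwidth
gflops = ops_per_byte * peak_bw_gbs * mem_efficiency
print(f"{gflops:.0f} Gflops")    # -> 78 Gflops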
Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance.
Zihni, Ahmed; Gerull, William D; Cavallo, Jaime A; Ge, Tianjia; Ray, Shuddhadeb; Chiu, Jason; Brunt, L Michael; Awad, Michael M
2018-03-01
Robotic platforms have the potential advantage of providing additional dexterity and precision to surgeons while performing complex laparoscopic tasks, especially for those in training. Few quantitative evaluations of surgical task performance comparing laparoscopic and robotic platforms among surgeons of varying experience levels have been done. We compared measures of quality and efficiency of Fundamentals of Laparoscopic Surgery task performance on these platforms in novices and experienced laparoscopic and robotic surgeons. Fourteen novices, 12 expert laparoscopic surgeons (>100 laparoscopic procedures performed, no robotics experience), and five expert robotic surgeons (>25 robotic procedures performed) performed three Fundamentals of Laparoscopic Surgery tasks on both laparoscopic and robotic platforms: peg transfer (PT), pattern cutting (PC), and intracorporeal suturing. All tasks were repeated three times by each subject on each platform in a randomized order. Mean completion times and mean errors per trial (EPT) were calculated for each task on both platforms. Results were compared using Student's t-test (P < 0.05 considered statistically significant). Among novices, greater errors were noted during laparoscopic PC (Lap 2.21 versus Robot 0.88 EPT, P < 0.001). Among expert laparoscopists, greater errors were noted during laparoscopic PT compared with robotic (PT: Lap 0.14 versus Robot 0.00 EPT, P = 0.04). Among expert robotic surgeons, greater errors were noted during laparoscopic PC compared with robotic (Lap 0.80 versus Robot 0.13 EPT, P = 0.02). Among expert laparoscopists, task performance was slower on the robotic platform compared with laparoscopy. In comparisons of expert laparoscopists performing tasks on the laparoscopic platform and expert robotic surgeons performing tasks on the robotic platform, expert robotic surgeons demonstrated fewer errors during the PC task (P = 0.009). Robotic assistance provided a reduction in errors at all experience levels for some laparoscopic tasks, but no benefit in the speed of task performance. Robotic assistance may provide some benefit in precision of surgical task performance. Copyright © 2017 Elsevier Inc. All rights reserved.
A space-based public service platform for terrestrial rescue operations
NASA Technical Reports Server (NTRS)
Fleisig, R.; Bernstein, J.; Cramblit, D. C.
1977-01-01
The space-based Public Service Platform (PSP) is a multibeam, high-gain communications relay satellite that can provide a variety of functions for a large number of people on earth equipped with extremely small, very low cost transceivers. This paper describes the PSP concept, the rationale used to derive the concept, the criteria for selecting specific communication functions to be performed, and the advantages of performing such functions via satellite. The discussion focuses on the benefits of using a PSP for natural disaster warning; control of attendant rescue/assistance operations; and rescue of people in downed aircraft, aboard sinking ships, lost or injured on land.
Non-Static error tracking control for near space airship loading platform
NASA Astrophysics Data System (ADS)
Ni, Ming; Tao, Fei; Yang, Jiandong
2018-01-01
A control scheme based on an internal model with non-static error is presented to address the uncertainty of the near space airship loading platform system. The uncertainty in the tracking task is represented as interval variations in the stability and control derivatives. By formulating the tracking problem of the uncertain system as a robust state feedback stabilization problem for an augmented system, a sufficient condition for the existence of a robust tracking controller is derived in the form of a linear matrix inequality (LMI). Finally, simulation results show that the new method not only has better anti-jamming performance, but also improves the dynamic performance of high-order systems.
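A hedged sketch of the kind of LMI synthesis the abstract refers to, posed with cvxpy on a hypothetical two-state system with two uncertainty vertices; the paper's actual airship model, augmentation and interval bounds are not reproduced here. Solving the vertex LMIs yields a common P and Y, and K = Y P⁻¹ is the robust state-feedback gain.

# Robust state feedback via vertex LMIs: find P > 0, Y such that
# A P + P A^T + B Y + Y^T B^T < 0 at every uncertainty vertex.
import numpy as np
import cvxpy as cp

A_vertices = [np.array([[0.0, 1.0], [-1.0, -0.5]]),   # hypothetical interval
              np.array([[0.0, 1.0], [-1.4, -0.3]])]   # vertex models
B = np.array([[0.0], [1.0]])
n, m = 2, 1

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
cons = [P >> eps * np.eye(n)]
for A in A_vertices:
    cons.append(A @ P + P @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n))

cp.Problem(cp.Minimize(0), cons).solve()
K = Y.value @ np.linalg.inv(P.value)   # state-feedback gain u = K x
print(K)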
Automated batch characterization of inkjet-printed elastomer lenses using a LEGO platform.
Sung, Yu-Lung; Garan, Jacob; Nguyen, Hoang; Hu, Zhenyu; Shih, Wei-Chuan
2017-09-10
Small, self-adhesive, inkjet-printed elastomer lenses have enabled smartphone cameras to image and resolve microscopic objects. However, the performance of different lenses within a batch is affected by hard-to-control environmental variables. We present a cost-effective platform to perform automated batch characterization of 300 lens units simultaneously for quality inspection. The system was designed and configured with LEGO bricks, 3D printed parts, and a digital camera. The scheme presented here may become the basis of a high-throughput, in-line inspection tool for quality control purposes and can also be employed for optimization of the manufacturing process.
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show the excellent prediction capabilities of our model, based on which we argue for the high scalability of our solver. Our pseudo-analytical performance model is based on an analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we use our model to predict the scalability of the Spike algorithm on up to 65,536 cores. This paper extends the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
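The following minimal sketch illustrates the flavor of such a pseudo-analytical performance model, not the paper's actual one: an assumed per-phase cost form T(n, p) = a·n/p + b·log2(p) + c is parameterized by least squares from hypothetical runtime measurements and then extrapolated to a large core count.

```python
# Minimal sketch under stated assumptions (not the paper's model):
# fit an assumed analytical form T(n, p) = a*n/p + b*log2(p) + c
# to measured runtimes, then extrapolate to larger core counts.
import numpy as np

# Hypothetical measurements: (problem size n, cores p, seconds)
runs = np.array([
    (1e6,   16, 2.10),
    (1e6,   64, 0.62),
    (1e6,  256, 0.21),
    (1e6, 1024, 0.09),
])
n, p, t = runs[:, 0], runs[:, 1], runs[:, 2]

# Linear least squares in the unknown coefficients (a, b, c).
A = np.column_stack([n / p, np.log2(p), np.ones_like(p)])
(a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)

# Extrapolated prediction, e.g. at 65,536 cores as in the paper's study.
p_big = 65536
print(a * 1e6 / p_big + b * np.log2(p_big) + c)
```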
A microfluidic cell culture array with various oxygen tensions.
Peng, Chien-Chung; Liao, Wei-Hao; Chen, Ying-Hua; Wu, Chueh-Yu; Tung, Yi-Chung
2013-08-21
Oxygen tension plays an important role in regulating various cellular functions in both normal physiology and disease states. Therefore, drug testing using conventional in vitro cell models under normoxia often possesses limited prediction capability. A traditional method of setting an oxygen tension in a liquid medium is by saturating it with a gas mixture at the desired level of oxygen, which requires bulky gas cylinders, sophisticated control, and tedious interconnections. Moreover, only a single oxygen tension can be tested at the same time. In this paper, we develop a microfluidic cell culture array platform capable of performing cell culture and drug testing under various oxygen tensions simultaneously. The device is fabricated using an elastomeric material, polydimethylsiloxane (PDMS) and the well-developed multi-layer soft lithography (MSL) technique. The prototype device has 4 × 4 wells, arranged in the same dimensions as a conventional 96-well plate, for cell culture. The oxygen tensions are controlled by spatially confined oxygen scavenging chemical reactions underneath the wells using microfluidics. The platform takes advantage of microfluidic phenomena while exhibiting the combinatorial diversities achieved by microarrays. Importantly, the platform is compatible with existing cell incubators and high-throughput instruments (liquid handling systems and plate readers) for cost-effective setup and straightforward operation. Utilizing the developed platform, we successfully perform drug testing using an anti-cancer drug, triapazamine (TPZ), on adenocarcinomic human alveolar basal epithelial cell line (A549) under three oxygen tensions ranging from 1.4% to normoxia. The developed platform is promising to provide a more meaningful in vitro cell model for various biomedical applications while maintaining desired high throughput capabilities.
NASA Astrophysics Data System (ADS)
Yan, Aidong; Huang, Sheng; Li, Shuo; Zaghloul, Mohamed; Ohodnicki, Paul; Buric, Michael; Chen, Kevin P.
2017-05-01
This paper demonstrates optical fibers as high-temperature sensor platforms. Through engineering and on-fiber integration of functional metal oxide sensory materials, we report the development of an integrated sensor solution to perform temperature and chemical measurements for high-temperature energy applications. Using the Rayleigh optical frequency domain reflectometry (OFDR) distributed sensing scheme, the temperature and hydrogen concentration were measured along the fiber. To overcome the weak Rayleigh backscattering intensity exhibited by conventional optical fibers, an ultrafast laser was used to enhance the Rayleigh scattering by a direct laser writing method. Using the Rayleigh-enhanced fiber as the sensor platform, both temperature and the hydrogen reaction were monitored at high temperatures up to 750°C with 4-mm spatial resolution.
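As a hedged sketch of the readout step, the snippet below converts measured Rayleigh spectral shifts into temperature changes through a linear calibration coefficient; the coefficient K_T and the shift values are hypothetical placeholders, since real coefficients are fiber-specific and obtained by calibration.

```python
# Hedged sketch: converting a measured Rayleigh spectral shift into a
# temperature change along the fiber. The calibration coefficient K_T
# is a hypothetical placeholder; real values are fiber-specific and
# are usually calibrated against a reference sensor.
import numpy as np

K_T = -1.25  # GHz per K, hypothetical calibration coefficient
shift_ghz = np.array([-12.5, -250.0, -900.0])  # measured shifts, hypothetical

delta_T = shift_ghz / K_T  # K above the reference temperature
print(delta_T)  # -> [ 10. 200. 720.]
```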
Constructing Cost-Effective and Targetable ICS Honeypots Suited for Production Networks
2015-03-26
introducing Honeyd+ has a marginal impact on performance. Notable findings are that the Raspberry Pi is the preferred hosting platform for the EtherNet/IP... Raspberry Pi or Gumstix, which is a low-cost approach to replicating multiple decoys. One hidden drawback to low-interaction honeypots is the extensive time... EtherNet/IP industrial protocol. Honeyd+ is hosted on a low-cost computing platform (Raspberry Pi running Raspbian, approximately $50) and a high-cost
Clark, Randy T; Famoso, Adam N; Zhao, Keyan; Shaff, Jon E; Craft, Eric J; Bustamante, Carlos D; McCouch, Susan R; Aneshansley, Daniel J; Kochian, Leon V
2013-02-01
High-throughput phenotyping of root systems requires a combination of specialized techniques and adaptable plant growth, root imaging and software tools. A custom phenotyping platform was designed to capture images of whole root systems, and novel software tools were developed to process and analyse these images. The platform and its components are adaptable to a wide range of root phenotyping studies using diverse growth systems (hydroponics, paper pouches, gel and soil) involving several plant species, including, but not limited to, rice, maize, sorghum, tomato and Arabidopsis. The RootReader2D software tool is free and publicly available and was designed with both user-guided and automated features that increase flexibility and enhance efficiency when measuring root growth traits from specific roots or entire root systems during large-scale phenotyping studies. To demonstrate the unique capabilities and high-throughput capacity of this phenotyping platform for studying root systems, genome-wide association studies on rice (Oryza sativa) and maize (Zea mays) root growth were performed and root traits related to aluminium (Al) tolerance were analysed on the parents of the maize nested association mapping (NAM) population. © 2012 Blackwell Publishing Ltd.
Multifunctional picoliter droplet manipulation platform and its application in single cell analysis.
Gu, Shu-Qing; Zhang, Yun-Xia; Zhu, Ying; Du, Wen-Bin; Yao, Bo; Fang, Qun
2011-10-01
We developed an automated and multifunctional microfluidic platform based on DropLab to perform flexible generation and complex manipulations of picoliter-scale droplets. Multiple manipulations including precise droplet generation, sequential reagent merging, and multistep solid-phase extraction for picoliter-scale droplets could be achieved in the present platform. The system precision in generating picoliter-scale droplets was significantly improved by minimizing the thermo-induced fluctuation of flow rate. A novel droplet fusion technique based on the difference of droplet interfacial tensions was developed without the need of special microchannel networks or external devices. It enabled sequential addition of reagents to droplets on demand for multistep reactions. We also developed an effective picoliter-scale droplet splitting technique with magnetic actuation. The difficulty in phase separation of magnetic beads from picoliter-scale droplets due to the high interfacial tension was overcome using ferromagnetic particles to carry the magnetic beads to pass through the phase interface. With this technique, multistep solid-phase extraction was achieved among picoliter-scale droplets. The present platform had the ability to perform complex multistep manipulations to picoliter-scale droplets, which is particularly required for single cell analysis. Its utility and potentials in single cell analysis were preliminarily demonstrated in achieving high-efficiency single-cell encapsulation, enzyme activity assay at the single cell level, and especially, single cell DNA purification based on solid-phase extraction.
Fuel Combustion Laboratory | Transportation Research | NREL
detection of compounds at sub-parts-per-billion-by-volume levels. A high-performance liquid chromatograph platform; a high-pressure (1,200-bar) direct-injection system to minimize spray physics effects; and a combustion chamber. A high-speed pressure transducer measures chamber pressure to detect fuel ignition. Air
Large Diffractive Optics for GEo-Based Earth Surveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyde, R A
2003-09-11
The natural vantage point for performing Earth-centric operations from space is geosynchronous orbit (GEO); a platform there moves at the same rate as the Earth's surface, so it appears to continually "hover" over a fixed site on the Earth. Unlike spacecraft in other orbits, which rapidly fly over targets, a GEO-based platform remains in position all the time. In order to ensure continual access to sites using low earth orbit (LEO) platforms, one needs a large enough constellation (≈50) of spacecraft so that one is always overhead; in contrast, a single GEO platform provides continuous coverage over sites throughout Euro-Asia. This permanent coverage comes, unfortunately, with a stiff price-tag; geosynchronous orbit is 36,000 km high, so space platforms there must operate at ranges roughly 100 times greater than ones located in LEO. For optical-based applications, this extreme range is difficult to deal with; for surveillance the price is a 100-fold loss of resolution, for laser weapons it is a 10,000-fold loss in flux-on-target. These huge performance penalties are almost always unacceptable, preventing us from successfully using GEO-based platforms. In practice, we are forced to either settle for brief, infrequent access to targets, or, if we demand continuous coverage, to invest in large, many-satellite constellations. There is, fortunately, a way to use GEO-based optical platforms without incurring the huge, range-dependent performance penalties; one must simply use bigger optics. As long as the aperture of a platform's optics increases as much as its operating range, then its performance (resolution and/or flux) does not suffer; the price for operating from GEO is simply 100-fold larger optics. This is, of course, a very stiff price; while meter-class optics may suffice for many low-earth-orbit applications, 100 meter apertures are needed in order to achieve similar performance from GEO. Since even the largest Earth-based telescope is only 10 meters in diameter, building ten-fold larger ones for GEO applications (let alone delivering and operating them there) presents major difficulties. However, since the challenges of fielding large platforms in GEO are matched by the benefits of continuous coverage, we propose a program to develop such optical platforms. In this section, we will examine a particular form of large aperture optic, using a flat diffractive lens instead of the more conventional curved reflectors considered elsewhere in this report. We will discuss both the development of this type of large aperture optics, as well as the steps necessary to use it for GEO-based Earth surveillance. In a later section of this report we will discuss another use for large diffractive optics, their application for global-reach laser weapons.
Villette, Vincent; Levesque, Mathieu; Miled, Amine; Gosselin, Benoit; Topolnik, Lisa
2017-01-01
Chronic electrophysiological recordings of neuronal activity combined with two-photon Ca2+ imaging give access to high resolution and cellular specificity. In addition, awake drug-free experimentation is required for investigating the physiological mechanisms that operate in the brain. Here, we developed a simple head fixation platform, which allows simultaneous chronic imaging and electrophysiological recordings to be obtained from the hippocampus of awake mice. We performed quantitative analyses of spontaneous animal behaviour, the associated network states and the cellular activities in the dorsal hippocampus as well as estimated the brain stability limits to image dendritic processes and individual axonal boutons. Ca2+ imaging recordings revealed a relatively stereotyped hippocampal activity despite a high inter-animal and inter-day variability in the mouse behavior. In addition to quiet state and locomotion behavioural patterns, the platform allowed the reliable detection of walking steps and fine speed variations. The brain motion during locomotion was limited to ~1.8 μm, thus allowing for imaging of small sub-cellular structures to be performed in parallel with recordings of network and behavioural states. This simple device extends the drug-free experimentation in vivo, enabling high-stability optophysiological experiments with single-bouton resolution in the mouse awake brain. PMID:28240275
ER-2 High Altitude Solar Cell Calibration Flights
NASA Technical Reports Server (NTRS)
Myers, Matthew; Wolford, David; Snyder, David; Piszczor, Michael
2015-01-01
Evaluation of space photovoltaics using ground-based simulators requires primary standard cells which have been characterized in a space or near-space environment. Due to the high cost inherent in testing cells in space, most primary standards are tested on high-altitude fixed-wing aircraft or balloons. The ER-2 test platform is the latest system developed by the Glenn Research Center (GRC) for near-space photovoltaic characterization. This system offers several improvements over GRC's current Learjet platform, including higher altitude, larger testing area, onboard spectrometers, and a longer flight season. The ER-2 system was developed by GRC in cooperation with NASA's Armstrong Flight Research Center (AFRC) as well as partners at the Naval Research Laboratory and Air Force Research Laboratory. The system was designed and built between June and September of 2014, with integration and first flights taking place at AFRC's Palmdale facility in October of 2014. Three flights were made testing cells from GRC as well as commercial industry partners. Cell performance data, as well as solar spectra, were successfully collected on all three flights. The data were processed using a Langley extrapolation method, and performance results showed less than half a percent variation between flights and less than one percent variation from GRC's current Learjet test platform.
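A minimal sketch of the Langley extrapolation named above, with invented numbers: the log of the measured short-circuit current is fit against relative air mass and extrapolated to zero air mass to estimate the exo-atmospheric (AM0) response.

```python
# Minimal sketch of a Langley extrapolation: fit ln(I) against relative
# air mass m and extrapolate to m = 0 to estimate the exo-atmospheric
# (AM0) cell response. All data values are hypothetical.
import numpy as np

airmass = np.array([2.0, 1.6, 1.3, 1.1])          # relative air mass
current = np.array([0.412, 0.428, 0.441, 0.450])  # measured Isc, A (hypothetical)

slope, intercept = np.polyfit(airmass, np.log(current), 1)
i_am0 = np.exp(intercept)  # extrapolated zero-air-mass short-circuit current
print(f"AM0 Isc ~ {i_am0:.3f} A")
```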
Integrated Microfluidic Lectin Barcode Platform for High-Performance Focused Glycomic Profiling
NASA Astrophysics Data System (ADS)
Shang, Yuqin; Zeng, Yun; Zeng, Yong
2016-02-01
Protein glycosylation is one of the key processes that play essential roles in biological functions and dysfunctions. However, progress in glycomics has considerably lagged behind genomics and proteomics, due in part to the enormous challenges in analysis of glycans. Here we present a new integrated and automated microfluidic lectin barcode platform to substantially improve the performance of lectin array for focused glycomic profiling. The chip design and flow control were optimized to promote the lectin-glycan binding kinetics and speed of lectin microarray. Moreover, we established an on-chip lectin assay which employs a very simple blocking method to effectively suppress the undesired background due to lectin binding of antibodies. Using this technology, we demonstrated focused differential profiling of tissue-specific glycosylation changes of a biomarker, CA125 protein purified from ovarian cancer cell line and different tissues from ovarian cancer patients in a fast, reproducible, and high-throughput fashion. Highly sensitive CA125 detection was also demonstrated with a detection limit much lower than the clinical cutoff value for cancer diagnosis. This microfluidic platform holds the potential to integrate with sample preparation functions to construct a fully integrated “sample-to-answer” microsystem for focused differential glycomic analysis. Thus, our technology should present a powerful tool in support of rapid advance in glycobiology and glyco-biomarker development.
Chen, Jian; Wang, Jun-Feng; Wu, Xue-Zhong; Rong, Zhen; Dong, Pei-Tao; Xiao, Rui
2018-06-01
We developed a high-performance surface-enhanced Raman scattering (SERS) sensing platform that can be used for specific and sensitive DNA detection. The SERS platform combines the advantages of an Au film over nanosphere (AuFON) substrate and an Ag@PATP@SiO2 SERS tag. SERS tag-on-AuFON is a sensing system that operates by the self-assembly of the SERS tag onto an AuFON substrate in the presence of target DNAs. The SERS signals can be dramatically enhanced by the formation of "hot spots" in the interstices between the assembled nanostructures, as confirmed by finite-difference time-domain (FDTD) simulation. As a new sensing platform, SERS tag-on-AuFON was utilized to detect Staphylococcus aureus (S. aureus) DNA with a limit of detection of 1 nM. A linear relationship was also observed between the SERS intensity at the 1439 cm-1 Raman peak and the logarithm of target DNA concentrations ranging from 1 μM to 1 nM. In addition, the sensing platform showed good homogeneity, with a relative standard deviation of about 1%. The sensitive SERS platform created in this study is a promising tool for detecting trace biochemical molecules because of its relatively simple and effective fabrication procedure, high sensitivity, and high reproducibility of the SERS effect.
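To illustrate the calibration the abstract describes, the hedged sketch below fits SERS intensity against the logarithm of DNA concentration over the stated 1 μM-1 nM range and inverts the fit for an unknown sample; all intensity values are hypothetical placeholders.

```python
# Illustrative sketch of the reported calibration: a linear fit of the
# 1439 cm^-1 SERS intensity against log10 of target DNA concentration,
# then inversion for an unknown sample. Intensities are hypothetical.
import numpy as np

conc_molar = np.array([1e-6, 1e-7, 1e-8, 1e-9])      # 1 uM down to 1 nM
intensity  = np.array([980.0, 760.0, 540.0, 330.0])  # hypothetical counts

slope, intercept = np.polyfit(np.log10(conc_molar), intensity, 1)

unknown_intensity = 650.0
log_c = (unknown_intensity - intercept) / slope
print(f"estimated concentration ~ {10**log_c:.2e} M")
```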
The CARIBU EBIS control and synchronization system
NASA Astrophysics Data System (ADS)
Dickerson, Clayton; Peters, Christopher
2015-01-01
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid-state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments real-time controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
Yu, Yao; Hu, Hao; Bohlender, Ryan J; Hu, Fulan; Chen, Jiun-Sheng; Holt, Carson; Fowler, Jerry; Guthery, Stephen L; Scheet, Paul; Hildebrandt, Michelle A T; Yandell, Mark; Huff, Chad D
2018-04-06
High-throughput sequencing data are increasingly being made available to the research community for secondary analyses, providing new opportunities for large-scale association studies. However, heterogeneity in target capture and sequencing technologies often introduce strong technological stratification biases that overwhelm subtle signals of association in studies of complex traits. Here, we introduce the Cross-Platform Association Toolkit, XPAT, which provides a suite of tools designed to support and conduct large-scale association studies with heterogeneous sequencing datasets. XPAT includes tools to support cross-platform aware variant calling, quality control filtering, gene-based association testing and rare variant effect size estimation. To evaluate the performance of XPAT, we conducted case-control association studies for three diseases, including 783 breast cancer cases, 272 ovarian cancer cases, 205 Crohn disease cases and 3507 shared controls (including 1722 females) using sequencing data from multiple sources. XPAT greatly reduced Type I error inflation in the case-control analyses, while replicating many previously identified disease-gene associations. We also show that association tests conducted with XPAT using cross-platform data have comparable performance to tests using matched platform data. XPAT enables new association studies that combine existing sequencing datasets to identify genetic loci associated with common diseases and other complex traits.
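XPAT itself is not reproduced here; the toy sketch below only illustrates the kind of gene-based case-control burden test such toolkits perform, comparing qualifying-variant carrier counts in cases and controls with Fisher's exact test. The cohort sizes echo the abstract; the carrier counts are invented.

```python
# Toy illustration (not XPAT): a gene-based case-control burden test
# comparing carrier counts of qualifying variants with Fisher's exact
# test. Carrier counts below are hypothetical.
from scipy.stats import fisher_exact

cases_carriers, cases_total = 19, 783          # hypothetical carriers among cases
controls_carriers, controls_total = 28, 3507   # hypothetical carriers among controls

table = [
    [cases_carriers, cases_total - cases_carriers],
    [controls_carriers, controls_total - controls_carriers],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.3g}")
```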
Evans, Scott R; Hujer, Andrea M; Jiang, Hongyu; Hujer, Kristine M; Hall, Thomas; Marzan, Christine; Jacobs, Michael R; Sampath, Rangarajan; Ecker, David J; Manca, Claudia; Chavda, Kalyan; Zhang, Pan; Fernandez, Helen; Chen, Liang; Mediavilla, Jose R; Hill, Carol B; Perez, Federico; Caliendo, Angela M; Fowler, Vance G; Chambers, Henry F; Kreiswirth, Barry N; Bonomo, Robert A
2016-01-15
Rapid molecular diagnostic (RMD) platforms may lead to better antibiotic use. Our objective was to develop analytical strategies to enhance the interpretation of RMDs for clinicians. We compared the performance characteristics of 4 RMD platforms for detecting resistance against β-lactams in 72 highly resistant isolates of Escherichia coli and Klebsiella pneumoniae (PRIMERS I). Subsequently, 2 platforms were used in a blinded study in which a heterogeneous collection of 196 isolates of E. coli and K. pneumoniae (PRIMERS II) were examined. We evaluated the genotypic results as predictors of resistance or susceptibility against β-lactam antibiotics. We designed analytical strategies and graphical representations of platform performance, including discrimination summary plots and susceptibility and resistance predictive values, that are readily interpretable by practitioners to inform decision-making. In PRIMERS I, the 4 RMD platforms detected β-lactamase (bla) genes and identified susceptibility or resistance in >95% of cases. In PRIMERS II, the 2 platforms identified susceptibility against extended-spectrum cephalosporins and carbapenems in >90% of cases; however, against piperacillin/tazobactam, susceptibility was identified in <80% of cases. Applying the analytical strategies to a population with 15% prevalence of ceftazidime-resistance and 5% imipenem-resistance, RMD platforms predicted susceptibility in >95% of cases, while prediction of resistance was 69%-73% for ceftazidime and 41%-50% for imipenem. RMD platforms can help inform empiric β-lactam therapy in cases where bla genes are not detected and the prevalence of resistance is known. Our analysis is a first step in bridging the gap between RMDs and empiric treatment decisions. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
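The predictive-value calculation sketched below follows the standard prevalence-dependent formulas that analyses of this kind rest on; the sensitivity and specificity figures are hypothetical placeholders, while the 15% resistance prevalence echoes the abstract's example.

```python
# Hedged sketch of prevalence-dependent predictive values: given a
# platform's sensitivity/specificity for detecting resistance and a
# local resistance prevalence, compute the probability that a
# "no bla gene detected" result means susceptible (SPV) and that a
# detection means resistant (RPV). Se/Sp values are hypothetical.
def predictive_values(sensitivity, specificity, prevalence):
    spv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    rpv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    return spv, rpv

# e.g. 15% ceftazidime-resistance prevalence, as in the abstract's example
spv, rpv = predictive_values(sensitivity=0.95, specificity=0.97, prevalence=0.15)
print(f"SPV = {spv:.2%}, RPV = {rpv:.2%}")
```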
A simple and reliable health monitoring system for shoulder health: proposal.
Liu, Shuo-Fang; Lee, Yann-Long
2014-02-26
The current health care system is complex and inefficient. A simple and reliable health monitoring system that can help patients perform medical self-diagnosis is seldom readily available. Because the medical system is vast and complex, patients are often hampered or delayed in seeking medical advice or treatment in a timely manner, which may affect their chances of recovery, especially for those with severe illnesses such as cancer and heart disease. The purpose of this paper is to propose a methodology for designing a simple, low-cost, Internet-based health-screening platform. This health-screening platform will enable patients to perform medical self-diagnosis over the Internet. Historical data have shown the importance of early detection in ensuring patients receive proper treatment and a speedy recovery. The platform is designed with special emphasis on the user interface. A standard Web-based user-interface design is adopted so that users can operate with ease in a familiar Web environment. In addition, graphics such as charts and graphs are used generously to help users visualize and understand the diagnostic results. The system is developed using the hypertext preprocessor (PHP) programming language. One important feature of this system platform is that it is built as a stand-alone platform, which tends to provide better user privacy and security. The prototype system platform was developed by the National Cheng Kung University Ergonomic and Design Laboratory. The completed prototype of this system platform was submitted to the Taiwan Medical Institute for evaluation. The evaluation by 120 participants showed that this platform system is a highly effective tool for health-screening applications and has great potential for improving the quality of medical care for the general public.
Evolutionary space platform concept study. Volume 2, part B: Manned space platform concepts
NASA Technical Reports Server (NTRS)
1982-01-01
Logical, cost-effective steps in the evolution of manned space platforms are investigated and assessed. Tasks included the analysis of requirements for a manned space platform, identifying alternative concepts, performing system analysis and definition of the concepts, comparing the concepts and performing programmatic analysis for a reference concept.
Parametric study of microwave-powered high-altitude airplane platforms designed for linear flight
NASA Technical Reports Server (NTRS)
Morris, C. E. K., Jr.
1981-01-01
The performance of a class of remotely piloted, microwave powered, high altitude airplane platforms is studied. The first part of each cycle of the flight profile consists of climb while the vehicle is tracked and powered by a microwave beam; this is followed by gliding flight back to a minimum altitude above a microwave station and initiation of another cycle. Parametric variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the energy transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.
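For intuition about the gliding half of each flight cycle, the sketch below applies the textbook small-angle relation (sink rate ≈ V / (L/D), endurance ≈ Δh·(L/D)/V) with invented numbers; it is not the paper's simplified endurance method, only the standard approximation behind such methods.

```python
# Back-of-the-envelope glide endurance, a standard approximation (not
# the paper's method): at airspeed V and lift-to-drag ratio L/D, the
# sink rate is roughly V / (L/D), so the time to descend through a
# height band dh is dh * (L/D) / V. All numbers are hypothetical.
def glide_endurance_s(delta_h_m, lift_to_drag, airspeed_ms):
    sink_rate = airspeed_ms / lift_to_drag  # m/s, small-angle approximation
    return delta_h_m / sink_rate

# e.g. gliding down through a 5 km band above the microwave station
print(glide_endurance_s(delta_h_m=5000, lift_to_drag=30, airspeed_ms=35) / 3600, "h")
```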
How does information congruence influence diagnosis performance?
Chen, Kejin; Li, Zhizhong
2015-01-01
Diagnosis performance is critical for the safety of high-consequence industrial systems. It depends highly on the information provided, perceived, interpreted and integrated by operators. This article examines the influence of information congruence (congruent information vs. conflicting information vs. missing information) and its interaction with time pressure (high vs. low) on diagnosis performance on a simulated platform. The experimental results reveal that the participants confronted with conflicting information spent significantly more time generating correct hypotheses and rated the results with lower probability values than when confronted with the other two levels of information congruence and were more prone to arrive at a wrong diagnosis result than when they were provided with congruent information. This finding stresses the importance of the proper processing of non-congruent information in safety-critical systems. Time pressure significantly influenced display switching frequency and completion time. This result indicates the decisive role of time pressure. Practitioner Summary: This article examines the influence of information congruence and its interaction with time pressure on human diagnosis performance on a simulated platform. For complex systems in the process control industry, the results stress the importance of the proper processing of non-congruent information in safety-critical systems.
Study on verifying the angle measurement performance of the rotary-laser system
NASA Astrophysics Data System (ADS)
Zhao, Jin; Ren, Yongjie; Lin, Jiarui; Yin, Shibin; Zhu, Jigui
2018-04-01
An angle verification method to verify the angle measurement performance of the rotary-laser system was developed. Angle measurement performance has a great impact on measuring accuracy. Although there is some previous research on the verification of angle measuring uncertainty for the rotary-laser system, there are still some limitations. High-precision reference angles are used in the method, and an integrated verification platform is set up to evaluate the performance of the system. This paper also probes the error that has the biggest influence on the verification system. Some errors of the verification system are avoided via the experimental method, and some are compensated for through the computational formula and curve fitting. Experimental results show that the angle measurement performance meets the requirement for coordinate measurement. The verification platform can efficiently evaluate the uncertainty of angle measurement for the rotary-laser system.
InP-based photonic integrated circuit platform on SiC wafer.
Takenaka, Mitsuru; Takagi, Shinichi
2017-11-27
We have numerically investigated the properties of an InP-on-SiC wafer as a photonic integrated circuit (PIC) platform. By bonding a thin InP-based semiconductor on a SiC wafer, SiC can be used as waveguide cladding, a heat sink, and a support substrate simultaneously. Since the refractive index of SiC is sufficiently low, PICs can be fabricated using InP-based strip and rib waveguides with a minimum bend radius of approximately 7 μm. High-thermal-conductivity SiC underneath an InP-based waveguide core markedly improves heat dissipation, resulting in superior thermal properties of active devices such as laser diodes. The InP-on-SiC wafer has significantly smaller thermal stress than an InP-on-SiO2/Si wafer, which prevents the thermal degradation of InP-based devices during high-temperature processes. Thus, InP on SiC provides an ideal platform for high-performance PICs.
A Systematic Approach for Obtaining Performance on Matrix-Like Operations
NASA Astrophysics Data System (ADS)
Veras, Richard Michael
Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, thus a considerable amount of human expert effort is spent towards obtaining performance for these scientific codes. However, this is no easy task because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation of the data being operated on, but instead depends significantly on the structure of the data.
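Purely as an illustration of the thesis's domain (not code from it), the sketch below shows the canonical DLA kernel whose structure expert tuning exploits: a cache-blocked matrix multiply. A real generated kernel would additionally vectorize and tune block sizes per platform.

```python
# Illustrative only: the canonical dense linear algebra kernel whose
# structure performance tuning exploits -- a cache-blocked matmul.
import numpy as np

def blocked_matmul(A, B, block=64):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # accumulate the product of two cache-resident tiles
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block])
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(blocked_matmul(A, B), A @ B)
```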
In Vivo High-Content Evaluation of Three-Dimensional Scaffolds Biocompatibility
Oliveira, Mariana B.; Ribeiro, Maximiano P.; Miguel, Sónia P.; Neto, Ana I.; Coutinho, Paula; Correia, Ilídio J.
2014-01-01
While developing tissue engineering strategies, inflammatory response caused by biomaterials is an unavoidable aspect to be taken into consideration, as it may be an early limiting step of tissue regeneration approaches. We demonstrate the application of flat and flexible films exhibiting patterned high-contrast wettability regions as implantable platforms for the high-content in vivo study of inflammatory response caused by biomaterials. Screening biomaterials by using high-throughput platforms is a powerful method to detect hit spots with promising properties and to exclude uninteresting conditions for targeted applications. High-content analysis of biomaterials has been mostly restricted to in vitro tests where crucial information is lost, as in vivo environment is highly complex. Conventional biomaterials implantation requires the use of high numbers of animals, leading to ethical questions and costly experimentation. Inflammatory response of biomaterials has also been highly neglected in high-throughput studies. We designed an array of 36 combinations of biomaterials based on an initial library of four polysaccharides. Biomaterials were dispensed onto biomimetic superhydrophobic platforms with wettable regions and processed as freeze-dried three-dimensional scaffolds with a high control of the array configuration. These chips were afterward implanted subcutaneously in Wistar rats. Lymphocyte recruitment and activated macrophages were studied on-chip, by performing immunocytochemistry in the miniaturized biomaterials after 24 h and 7 days of implantation. Histological cuts of the surrounding tissue of the implants were also analyzed. Localized and independent inflammatory responses were detected. The integration of these data with control data proved that these chips are robust platforms for the rapid screening of early-stage in vivo biomaterials' response. PMID:24568682
Platform for a Hydrocarbon Exhaust Gas Sensor Utilizing a Pumping Cell and a Conductometric Sensor
Biskupski, Diana; Geupel, Andrea; Wiesner, Kerstin; Fleischer, Maximilian; Moos, Ralf
2009-01-01
Very often, high-temperature operated gas sensors are cross-sensitive to oxygen and/or they cannot be operated in oxygen-deficient (rich) atmospheres. For instance, some metal oxides like Ga2O3 or doped SrTiO3 are excellent materials for conductometric hydrocarbon detection in the rough atmosphere of automotive exhausts, but have to be operated preferably at a constant oxygen concentration. We propose a modular sensor platform that combines a conductometric two-sensor-setup with an electrochemical pumping cell made of YSZ to establish a constant oxygen concentration in the ambient of the conductometric sensor film. In this paper, the platform is introduced, the two-sensor-setup is integrated into this new design, and sensing performance is characterized. Such a platform can be used for other sensor principles as well. PMID:22423212
Design of verification platform for wireless vision sensor networks
NASA Astrophysics Data System (ADS)
Ye, Juanjuan; Shang, Fei; Yu, Chuang
2017-08-01
At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transformation from theoretical research on WVSNs to practical application. Therefore, it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module to design a high-performance wireless vision sensor node whose core is an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully achieve the functions of image acquisition, coding and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for follow-up applications of WVSNs.
NASA Astrophysics Data System (ADS)
Richards, C. J.; Evans, B. J. K.; Wyborn, L. A.; Wang, J.; Trenham, C. E.; Druken, K. A.
2016-12-01
The Australian National Computational Infrastructure (NCI) has ingested over 10PB of national and international environmental, Earth systems science and geophysics reference data onto a single platform to advance high performance data (HPD) techniques that enable interdisciplinary Data-intensive Science. Improved Data Stewardship is critical to evolve both data and data services that support the increasing need for programmatic usability and that prioritises interoperability rather than just traditional data download or portal access. A data platform designed for programmatic access requires quality checked collections that better utilise interoperable data formats and standards. Achieving this involves strategies to meet both the technical and `social' challenges. Aggregating datasets used by different communities and organisations requires satisfying multiple use cases for the broader research community, whilst addressing existing BAU requirements. For NCI, this requires working with data stewards to manage the process of replicating data to the common platform, community representatives and developers to confirm their requirements, and with international peers to better enable globally integrated data communities. It is particularly important to engage with representatives from each community who can work collaboratively to a common goal, as well as capture their community needs, apply quality assurance, determine any barriers to change and to understand priorities. This is critical when managing the aggregation of data collections from multiple producers with different levels of stewardship maturity, technologies and standards, and where organisational barriers can impact the transformation to interoperable and performant data access. To facilitate the management, development and operation of the HPD platform, NCI coordinates technical and domain committees made up of user representatives, data stewards and informatics experts to provide a forum to discuss, learn and advise NCI's management. This experience has been a useful collaboration and suggests that in the age of interdisciplinary HPD research, Data Stewardship is evolving from a focus on the needs of a single community to one which helps balance priorities and navigates change for multiple communities.
NASA Astrophysics Data System (ADS)
Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.
2015-11-01
High-range measuring equipment like laser trackers needs large-dimension calibrated reference artifacts in its calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments, based on the generation and evaluation of virtual distances with an indexed metrology platform, is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge to be used as a reference. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and the knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. It is the measuring instrument, together with the indexed metrology platform, that remains still, while the virtual mesh rotates around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and further compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of the volumetric performance of the laser tracker prove the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening and expanding the possibilities for the definition of reference distances in these procedures.
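A hedged, simplified sketch of the virtual-points idea: if the platform's indexed rotations are known with high accuracy, one physically measured target maps to a mesh of virtual points, and any pair of them defines a reference length. The 60° indexing, the target coordinates, and the single-axis rotation are illustrative assumptions, not the paper's geometry.

```python
# Simplified illustration of virtual reference distances: a fixed target
# measured once maps, through precisely known indexed rotations of the
# platform, to a mesh of virtual points; any pair gives a reference
# length. Indexing angle and target position are hypothetical.
import numpy as np

def rot_z(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

target = np.array([1200.0, 300.0, 150.0])  # mm, hypothetical fixed target

# Virtual points generated by assumed 60-degree indexed rotations
virtual = [rot_z(k * 60.0) @ target for k in range(6)]

# Any pair yields a virtual reference length
print(np.linalg.norm(virtual[0] - virtual[1]))
```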
Ng, Fong-Lee; Phang, Siew-Moi; Periasamy, Vengadesh; Yunus, Kamran; Fisher, Adrian C.
2014-01-01
In photosynthesis, a very small amount of the solar energy absorbed is transformed into chemical energy, while the rest is wasted as heat and fluorescence. This excess energy can be harvested through biophotovoltaic platforms to generate electrical energy. In this study, algal biofilms formed on ITO anodes were investigated for use in algal biophotovoltaic platforms. Sixteen algal strains, comprising local isolates and two diatoms obtained from the Culture Collection of Marine Phytoplankton (CCMP), USA, were screened and eight were selected based on growth rate, biochemical composition and photosynthetic performance using suspension cultures. Differences in biofilm formation between the eight algal strains, as well as their rapid light curves (RLC) generated using a pulse amplitude modulation (PAM) fluorometer, were examined. The RLC provides detailed information on the saturation characteristics of electron transport and the overall photosynthetic performance of the algae. Four algal strains, belonging to the Cyanophyta (Cyanobacteria), Synechococcus elongatus (UMACC 105) and Spirulina platensis (UMACC 159), and the Chlorophyta, Chlorella vulgaris (UMACC 051) and Chlorella sp. (UMACC 313), were finally selected for investigation using biophotovoltaic platforms. Based on power output per Chl-a content, the algae can be ranked as follows: Synechococcus elongatus (UMACC 105) (6.38×10−5 W m−2/µg Chl-a) > Chlorella vulgaris (UMACC 051) (2.24×10−5 W m−2/µg Chl-a) > Chlorella sp. (UMACC 313) (1.43×10−5 W m−2/µg Chl-a) > Spirulina platensis (UMACC 159) (4.90×10−6 W m−2/µg Chl-a). Our study showed that local algal strains have potential for use in biophotovoltaic platforms due to their high photosynthetic performance, ability to produce biofilm and generation of electrical power. PMID:24874081
NASA Astrophysics Data System (ADS)
Vrancken, D.; Paijmans, B.; Fussen, D.; Neefs, E.; Loodts, N.; Dekemper, E.; Vahellemont, F.; Devos, L.; Moelans, W.; Nevejans, D.; Schroeven-Deceuninck, H.; Bernaerts, D.; Zender, J.
2008-08-01
There is growing interest in understanding and monitoring the physics and chemistry of the Earth's atmosphere and its impact on climate change. Currently, a significantly high number of sounders provide the data required to monitor changes in atmospheric composition, but a dramatic drop in operational atmosphere monitoring missions is expected around 2010. This drop is mainly visible in sounders capable of high vertical resolution. Instruments on ENVISAT and METOP currently provide relevant data, but this is envisaged to be insufficient to ensure full spatial and temporal coverage and redundancy in the measurement data set. ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) is a remote sounding experiment proposed by the Belgian Institute for Space Aeronomy (BIRA/IASB) for which a feasibility study was initiated with BELSPO (Belgian Science Policy) and ESA support. The main objective of this study phase was to establish a mission concept, to define the required payload and to establish a satellite platform design. The study was led by the BIRA/IASB team and performed in close collaboration with OIP (payload developer) and Verhaert Space (spacecraft developer). The mission scenario includes bright-limb observations in essentially all directions, solar occultations around the terminator passages and star occultations during eclipse. These observation modes allow imaging the atmosphere with a high vertical resolution. The spacecraft will be operated in a 10:00 sun-synchronous orbit at an altitude of 695 km, allowing a 3-day revisit time. The envisaged payload for the ALTIUS mission is an imaging spectrometer observing in the UV, VIS and NIR spectral ranges. For each spectral range, an AOTF (acousto-optical tunable filter) will permit observations of selectable small wavelength domains. A typical set of 10 wavelengths will be recorded within 1 second. The different operational modes impose a high-agility requirement on the platform. Furthermore, the quasi-continuous monitoring by the payload will drive the design of the platform in terms of power and downlink capabilities. The mission will be performed using a derivative of the PROBA platform, developed by Verhaert Space. This paper will present the mission requirements for the ALTIUS mission, the envisaged instrument, the spacecraft concept design and the related mission analysis.
On-chip generation of high-dimensional entangled quantum states and their coherent control
NASA Astrophysics Data System (ADS)
Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2017-06-01
Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.
Compact fiber optic gyroscopes for platform stabilization
NASA Astrophysics Data System (ADS)
Dickson, William C.; Yee, Ting K.; Coward, James F.; McClaren, Andrew; Pechner, David A.
2013-09-01
SA Photonics has developed a family of compact Fiber Optic Gyroscopes (FOGs) for platform stabilization applications. The use of short fiber coils enables the high update rates required for stabilization applications but presents challenges to maintain high performance. We are able to match the performance of much larger FOGs by utilizing several innovative technologies. These technologies include source noise reduction to minimize Angular Random Walk (ARW), advanced digital signal processing that minimizes bias drift at high update rates, and advanced passive thermal packaging that minimizes temperature induced bias drift while not significantly affecting size, weight, or power. In addition, SA Photonics has developed unique distributed FOG packaging technologies allowing the FOG electronics and photonics to be packaged remotely from the sensor head or independent axis heads to minimize size, weight, and power at the sensing location(s). The use of these technologies has resulted in high performance, including ARW less than 0.001 deg/rt-hr and bias drift less than 0.004 deg/hr at an update rate of 10 kHz, and total packaged volume less than 30 cu. in. for a 6 degree of freedom FOG-based IMU. Specific applications include optical beam stabilization for LIDAR and LADAR, beam stabilization for long-range free-space optical communication, Optical Inertial Reference Units for HEL stabilization, and Ka band antenna pedestal pointing and stabilization. The high performance of our FOGs also enables their use in traditional navigation and positioning applications. This paper will review the technologies enabling our high-performance compact FOGs, and will provide performance test results.
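For context on what the quoted noise figures imply, the sketch below applies the standard error-growth relations: random-walk angle error grows as ARW·√t, while an uncompensated constant bias grows linearly. The ARW and bias values come from the abstract; the time grid is arbitrary.

```python
# Standard gyro error-growth relations applied to the abstract's quoted
# figures: angle random walk grows as ARW * sqrt(t); a constant,
# uncompensated bias drift grows linearly in t.
import numpy as np

arw = 0.001   # deg/sqrt(hr), from the abstract
bias = 0.004  # deg/hr, from the abstract

t_hr = np.array([0.1, 1.0, 10.0])  # arbitrary integration times
sigma_rw = arw * np.sqrt(t_hr)     # deg, 1-sigma random-walk contribution
theta_bias = bias * t_hr           # deg, worst-case uncompensated bias term
print(sigma_rw, theta_bias)
```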
Crick, Alex J; Cammarota, Eugenia; Moulang, Katie; Kotar, Jurij; Cicuta, Pietro
2015-01-01
Live optical microscopy has become an essential tool for studying the dynamical behaviors and variability of single cells, and cell-cell interactions. However, experiments and data analysis in this area are often extremely labor intensive, and it has often not been achievable or practical to perform properly standardized experiments on a statistically viable scale. We have addressed this challenge by developing automated live imaging platforms, to help standardize experiments, increasing throughput, and unlocking previously impossible ones. Our real-time cell tracking programs communicate in feedback with microscope and camera control software, and they are highly customizable, flexible, and efficient. As examples of our current research which utilize these automated platforms, we describe two quite different applications: egress-invasion interactions of malaria parasites and red blood cells, and imaging of immune cells which possess high motility and internal dynamics. The automated imaging platforms are able to track a large number of motile cells simultaneously, over hours or even days at a time, greatly increasing data throughput and opening up new experimental possibilities. Copyright © 2015 Elsevier Inc. All rights reserved.
Linear phase conjugation for atmospheric aberration compensation
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Stappaerts, Eddy A.
1998-01-01
Atmosphere-induced aberrations can seriously degrade laser performance, greatly affecting the beam that finally reaches the target. Lasers propagated over any distance in the atmosphere suffer a significant decrease in fluence at the target due to these aberrations, especially over long propagation distances. This is due primarily to fluctuations in the atmosphere over the propagation path and to platform motion relative to the intended aimpoint. Also, delivery of high fluence to the target typically requires low beam divergence; thus, atmospheric turbulence, platform motion, or both result in a lack of fine aimpoint control to keep the beam directed at the target. To improve both the beam quality and the amount of laser energy delivered to the target, Northrop Grumman has developed the Active Tracking System (ATS), a novel linear phase conjugation aberration compensation technique. Utilizing a silicon spatial light modulator (SLM) as a dynamic wavefront-reversing element, ATS undoes aberrations induced by the atmosphere, platform motion, or both. ATS continually tracks the target while compensating for atmospheric and platform-motion-induced aberrations. This results in a high-fidelity, near-diffraction-limited beam delivered to the target.
Retention in porous layer pillar array planar separation platforms
Lincoln, Danielle R.; Lavrik, Nickolay V.; Kravchenko, Ivan I.; ...
2016-08-11
Here, this work presents the retention capabilities and surface area enhancement of highly ordered, high-aspect-ratio, open-platform, two-dimensional (2D) pillar arrays when coated with a thin layer of porous silicon oxide (PSO). Photolithographically prepared pillar arrays were coated with 50–250 nm of PSO via plasma-enhanced chemical vapor deposition and then functionalized with either octadecyltrichlorosilane or n-butyldimethylchlorosilane. Theoretical calculations indicate that a 50 nm layer of PSO increases the surface area of a pillar nearly 120-fold. Retention capabilities were tested by observing capillary-action-driven development under various conditions, as well as by running one-dimensional separations on varying thicknesses of PSO. Increasing the thickness of PSO on an array clearly resulted in greater retention of the analyte(s) in question in both experiments. In culmination, a two-dimensional separation of fluorescently derivatized amines was performed to further demonstrate the capabilities of these fabricated platforms.
Crescentini, Marco; Thei, Frederico; Bennati, Marco; Saha, Shimul; de Planque, Maurits R R; Morgan, Hywel; Tartagni, Marco
2015-06-01
Lipid bilayer membrane (BLM) arrays are required for high-throughput analysis, for example, drug screening or advanced DNA sequencing. Complex microfluidic devices are being developed, but these are restricted in terms of array size and structure or have integrated electronic sensing with limited noise performance. We present a compact and scalable multichannel electrophysiology platform based on a hybrid approach that combines integrated state-of-the-art microelectronics with low-cost disposable fluidics, providing a platform for high-quality parallel single ion channel recording. Specifically, we have developed a new integrated circuit amplifier based on a novel noise cancellation scheme that eliminates flicker noise derived from both the devices under test and the amplifiers. The system is demonstrated through the simultaneous recording of ion channel activity from eight bilayer membranes. The platform is scalable and could be extended to much larger array sizes, limited only by electronic data decimation and communication capabilities.
Cosmic microwave background science at commercial airline altitudes
NASA Astrophysics Data System (ADS)
Feeney, Stephen M.; Gudmundsson, Jon E.; Peiris, Hiranya V.; Verde, Licia; Errard, Josquin
2017-07-01
Obtaining high-sensitivity measurements of degree-scale cosmic microwave background (CMB) polarization is the most direct path to detecting primordial gravitational waves. Robustly recovering any primordial signal from the dominant foreground emission will require high-fidelity observations at multiple frequencies, with excellent control of systematics. We explore the potential for a new platform for CMB observations, the Airlander 10 hybrid air vehicle, to perform this task. We show that the Airlander 10 platform, operating at commercial airline altitudes, is well suited to mapping frequencies above 220 GHz, which are critical for cleaning CMB maps of dust emission. Optimizing the distribution of detectors across frequencies, we forecast the ability of Airlander 10 to clean foregrounds of varying complexity as a function of altitude, demonstrating its complementarity with both existing (Planck) and ongoing (C-BASS) foreground observations. This novel platform could play a key role in defining our ultimate view of the polarized microwave sky.
Development of an optical inspection platform for surface defect detection in touch panel glass
NASA Astrophysics Data System (ADS)
Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu
2016-04-01
An optical inspection platform combining parallel image processing with a high-resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark-field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and a 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of touch panel glass, which produced 386 megapixels of image data per sample, was completed in roughly 5 seconds. A high detection rate for surface scratches on the touch panel glass was realized, with a minimum detectable defect size of about 10 µm. The implementation of a custom illumination source significantly improved the scattering efficiency at the surface, thereby enhancing the contrast of the acquired images and the overall performance of the inspection system.
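The quoted camera and timing figures imply the data rates the parallel CPU/GPU pipeline must sustain; a quick back-of-the-envelope check, using only numbers stated in the abstract:

```python
pixels_per_line = 12288
line_rate_hz = 12_000
acquisition_rate = pixels_per_line * line_rate_hz        # ~147.5 Mpx/s off the line CCD

image_pixels = 386e6
inspection_seconds = 5
processing_rate = image_pixels / inspection_seconds      # ~77 Mpx/s sustained end to end

print(f"{acquisition_rate/1e6:.1f} Mpx/s acquired, {processing_rate/1e6:.1f} Mpx/s processed")
```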
NASA Astrophysics Data System (ADS)
Meng, Qizhi; Xie, Fugui; Liu, Xin-Jun
2018-06-01
This paper deals with the conceptual design, kinematic analysis and workspace identification of a novel four degrees-of-freedom (DOFs) high-speed spatial parallel robot for pick-and-place operations. The proposed spatial parallel robot consists of a base, four arms and a 1½ mobile platform. The mobile platform is a major innovation that avoids output singularity and offers the advantages of both single and double platforms. To investigate the characteristics of the robot's DOFs, a line graph method based on Grassmann line geometry is adopted in mobility analysis. In addition, the inverse kinematics is derived, and the constraint conditions to identify the correct solution are also provided. On the basis of the proposed concept, the workspace of the robot is identified using a set of presupposed parameters by taking input and output transmission index as the performance evaluation criteria.
The touchscreen operant platform for testing learning and memory in rats and mice
Horner, Alexa E.; Heath, Christopher J.; Hvoslef-Eide, Martha; Kent, Brianne A.; Kim, Chi Hun; Nilsson, Simon R. O.; Alsiö, Johan; Oomen, Charlotte A.; Holmes, Andrew; Saksida, Lisa M.; Bussey, Timothy J.
2014-01-01
An increasingly popular method of assessing cognitive functions in rodents is the automated touchscreen platform, on which a number of different cognitive tests can be run in a manner very similar to touchscreen methods currently used to test human subjects. This methodology is low stress (using appetitive, rather than aversive reinforcement), has high translational potential, and lends itself to a high degree of standardisation and throughput. Applications include the study of cognition in rodent models of psychiatric and neurodegenerative diseases (e.g., Alzheimer’s disease, schizophrenia, Huntington’s disease, frontotemporal dementia), and characterisation of the role of select brain regions, neurotransmitter systems and genes in rodents. This protocol describes how to perform four touchscreen assays of learning and memory: Visual Discrimination, Object-Location Paired-Associates Learning, Visuomotor Conditional Learning and Autoshaping. It is accompanied by two further protocols using the touchscreen platform to assess executive function, working memory and pattern separation. PMID:24051959
Placek, Sarah B; Franklin, Brenton R; Haviland, Sarah M; Wagner, Mercy D; O'Donnell, Mary T; Cryer, Chad T; Trinca, Kristen D; Silverman, Elliott; Matthew Ritter, E
2017-06-01
Using previously established mastery learning standards, this study compares outcomes of training on standard FLS equipment with training on an ergonomically different (ED-FLS), but more portable, lower-cost platform. Subjects completed a pre-training FLS skills test on the standard platform and were then randomized to train on the FLS training platform (n = 20) or the ED-FLS platform (n = 19). A post-training FLS skills test was administered to both groups on the standard FLS platform. Group performance on the pretest was similar. Fifty percent of FLS and 32% of ED-FLS subjects completed the entire curriculum, and 100% of subjects completing the curriculum achieved passing scores on the post-training test. There was no statistically discernible difference in scores on the final FLS exam (FLS 93.4, ED-FLS 93.3, p = 0.98) or in the number of training sessions required to complete the curriculum (FLS 7.4, ED-FLS 9.8, p = 0.13). These results show that when applying mastery learning theory to an ergonomically different platform, skill transfer occurs at a high level and prepares subjects to pass the standard FLS skills test.
Byeon, Ji-Yeon; Bailey, Ryan C
2011-09-07
High affinity capture agents recognizing biomolecular targets are essential in the performance of many proteomic detection methods. Herein, we report the application of a label-free silicon photonic biomolecular analysis platform for simultaneously determining kinetic association and dissociation constants for two representative protein capture agents: a thrombin-binding DNA aptamer and an anti-thrombin monoclonal antibody. The scalability and inherent multiplexing capability of the technology make it an attractive platform for simultaneously evaluating the binding characteristics of multiple capture agents recognizing the same target antigen, and thus a tool complementary to emerging high-throughput capture agent generation strategies.
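Kinetic screens of this type are usually fit to the 1:1 Langmuir binding model, in which the observed association rate grows linearly with analyte concentration. A minimal fitting sketch under that standard model (not the paper's exact analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def association(t, r_max, k_obs):
    """1:1 binding model during association: R(t) = R_max * (1 - exp(-k_obs * t))."""
    return r_max * (1.0 - np.exp(-k_obs * t))

def fit_kinetics(t, sensorgrams, concentrations):
    """k_obs = k_a * C + k_d, so a linear fit of k_obs vs C gives k_a (slope), k_d (intercept)."""
    k_obs = [curve_fit(association, t, r, p0=(r.max(), 0.01))[0][1] for r in sensorgrams]
    k_a, k_d = np.polyfit(concentrations, k_obs, 1)
    return k_a, k_d, k_d / k_a   # equilibrium constant K_D = k_d / k_a
```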
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost-effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best handled by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
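The cost-effectiveness argument reduces to comparing per-job cloud charges against the amortized cost of owned hardware. A toy break-even model follows; all prices are placeholder assumptions, not figures from the study:

```python
def cloud_cost(n_jobs, hours_per_job, usd_per_hour):
    """On-demand cost: pay only for wall-clock hours actually used."""
    return n_jobs * hours_per_job * usd_per_hour

def inhouse_cost_per_year(capital_usd, lifetime_years, power_kw, usd_per_kwh, busy_hours):
    """Amortized hardware cost plus electricity for the hours the machine is busy."""
    return capital_usd / lifetime_years + power_kw * usd_per_kwh * busy_hours

# Placeholder comparison: 500 small jobs of 2 h at $0.10/h vs. a $5000 workstation
print(cloud_cost(500, 2, 0.10))                          # $100 for the year's jobs
print(inhouse_cost_per_year(5000, 4, 0.4, 0.12, 1000))   # ~$1298 per year
```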
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Messer, II, Otis E
2017-01-02
The Bellerophon software system supports CHIMERA, a production-level HPC application that simulates the evolution of core-collapse supernovae. Bellerophon enables CHIMERA's geographically dispersed team of collaborators to perform job monitoring and real-time data analysis from multiple supercomputing resources, including platforms at OLCF, NERSC, and NICS. Its multi-tier architecture provides an encapsulated, end-to-end software solution that enables the CHIMERA team to quickly and easily access highly customizable animated and static views of results from anywhere in the world via a cross-platform desktop application.
Imaging cytometry in a plastic ultra-mobile system
NASA Astrophysics Data System (ADS)
Martínez Vázquez, R.; Trotta, G.; Paturzo, M.; Volpe, A.; Bernava, G.; Basile, V.; Ancona, A.; Ferraro, P.; Fassi, I.; Osellame, R.
2017-03-01
We present a cost-effective and highly portable plastic prototype that can be interfaced with a cell phone to implement an optofluidic imaging cytometry platform. It is based on a PMMA microfluidic chip that fits inside an opto-mechanical platform fabricated by a 3D printer. Fluorescence excitation and imaging are performed using the cell phone's LED and CMOS sensor, increasing the compactness of the system. A custom-developed application is used to analyze the images and provide a value of particle concentration.
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Zeise, Frederick F.
1993-07-01
A second-generation digital optical computer (DOC II) has been developed which utilizes a RISC-based operating system as its host. This 32-bit, high-performance (12.8 GByte/sec) computing platform demonstrates a number of basic principles that are inherent to parallel free-space optical interconnects, such as speed (up to 10¹² bit operations per second) and low power (1.2 fJ per bit). Although DOC II is a general-purpose machine, special-purpose applications have been developed and are currently being evaluated on the optical platform.
Jahan-Tigh, Richard R; Chinn, Garrett M; Rapini, Ronald P
2016-01-01
The incorporation of high-resolution cameras into smartphones has allowed for a variety of medical applications including the use of lens attachments that provide telescopic, macroscopic, and dermatoscopic data, but the feasibility and performance characteristics of such a platform for use in dermatopathology have not been described. To determine the diagnostic performance of a smartphone microscope compared to traditional light microscopy in dermatopathology specimens. A simple smartphone microscope constructed with a 3-mm ball lens was used to prospectively evaluate 1021 consecutive dermatopathology cases in a blinded fashion. Referred, consecutive specimens from the community were evaluated at a single university hospital. The performance characteristics of the smartphone platform were calculated by using conventional light microscopy as the gold standard. The sensitivity and specificity for the diagnosis of melanoma, nonmelanoma skin cancers, and other miscellaneous conditions by the phone microscopy platform, as compared with traditional light microscopy, were calculated. For basal cell carcinoma (n = 136), the sensitivity and specificity of smartphone microscopy were 95.6% and 98.1%, respectively. The sensitivity and specificity for squamous cell carcinoma (n = 94) were 89.4% and 97.3%, respectively. The lowest sensitivity was found in melanoma (n = 15) at 60%, although the specificity was high at 99.1%. The accuracy of diagnosis of inflammatory conditions and other neoplasms was variable. Mobile phone-based microscopy has excellent performance characteristics for the inexpensive diagnosis of nonmelanoma skin cancers in a setting where a traditional microscope is not available.
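Sensitivity and specificity here follow directly from the 2×2 confusion matrix against the light-microscopy gold standard. The helper below shows the calculation; the example counts are illustrative, chosen so that tp=130 of n=136 reproduces the reported 95.6% BCC sensitivity, while the tn/fp split is purely hypothetical.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), vs. the gold standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only (not the study's raw data):
sens, spec = sensitivity_specificity(tp=130, fn=6, tn=870, fp=15)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```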
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Anderson; David Culler; James Demmel
2000-02-16
The goal of the Castle project was to provide a parallel programming environment that enables the construction of high-performance applications that run portably across many platforms. The authors' approach was to design and implement a multilayered architecture, with higher levels building on lower ones to ensure portability, but with care taken not to introduce abstractions that sacrifice performance.
Vertical-cavity surface-emitting lasers come of age
NASA Astrophysics Data System (ADS)
Morgan, Robert A.; Lehman, John A.; Hibbs-Brenner, Mary K.
1996-04-01
This manuscript reviews our efforts in demonstrating state-of-the-art planar, batch-fabricable, high-performance vertical-cavity surface-emitting lasers (VCSELs). All performance requirements for short-haul data communication applications are clearly established. We concentrate on the flexibility of the established proton-implanted AlGaAs-based (emitting near 850 nm) technology platform, focusing on a standard device design. This structure is shown to meet or exceed performance and producibility requirements. These include > 99% device yield across 3-in-dia. metal-organic vapor phase epitaxy (MOVPE)-grown wafers and wavelength operation across a > 100-nm range. Recent progress in device performance [low threshold voltage (Vth = 1.53 V), threshold current (Ith = 0.68 mA), continuous wave (CW) power (Pcw = 59 mW), maximum and minimum CW lasing temperatures (T = 200 °C and 10 K), and wall-plug efficiency (ηwp = 28%)] should enable great advances in VCSEL-based technologies. We also discuss the viability of VCSELs in cryogenic and avionic/military environments. Also reviewed is a novel technique, modifying this established platform, to engineer low-threshold, high-speed, single-mode VCSELs.
Computerized dynamic posturography: the influence of platform stability on postural control.
Palm, Hans-Georg; Lang, Patricia; Strobel, Johannes; Riesner, Hans-Joachim; Friemert, Benedikt
2014-01-01
Postural stability can be quantified using posturography systems, which allow different foot platform stability settings to be selected. It is unclear, however, how platform stability and postural control are mathematically correlated. Twenty subjects performed tests on the Biodex Stability System at all 13 stability levels. Overall stability index, medial-lateral stability index, and anterior-posterior stability index scores were calculated, and data were analyzed using analysis of variance and linear regression analysis. A decrease in platform stability from the static level to the second least stable level was associated with a linear decrease in postural control. The overall stability index scores were 1.5 ± 0.8 degrees (static), 2.2 ± 0.9 degrees (level 8), and 3.6 ± 1.7 degrees (level 2). The slope of the regression lines was 0.17 for the men and 0.10 for the women. A linear correlation was demonstrated between platform stability and postural control. The influence of stability levels seems to be almost twice as high in men as in women.
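The reported slopes are ordinary least-squares fits of stability-index score against platform stability level. A minimal reproduction of that computation, using the three scores quoted in the abstract and encoding the static setting as level 13 (a labeling assumption, since the raw data are not given):

```python
import numpy as np

levels = np.array([13, 8, 2])     # platform stability level; 13 = static (assumed coding)
osi = np.array([1.5, 2.2, 3.6])   # overall stability index scores (deg), from the abstract
slope, intercept = np.polyfit(levels, osi, 1)
print(f"{abs(slope):.2f} deg of sway per stability level")  # magnitude comparable to 0.17/0.10
```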
Graphene-bimetal plasmonic platform for ultra-sensitive biosensing
NASA Astrophysics Data System (ADS)
Tong, Jinguang; Jiang, Li; Chen, Huifang; Wang, Yiqin; Yong, Ken-Tye; Forsberg, Erik; He, Sailing
2018-03-01
A graphene-bimetal plasmonic platform for surface plasmon resonance biosensing with ultra-high sensitivity was proposed and optimized. In this hybrid configuration, graphene nanosheets were employed to effectively absorb the excitation light and serve as biomolecular recognition elements for increased adsorption of analytes. Coating with an additional Au film prevents oxidation of the Ag substrate during the manufacturing process and enhances the sensitivity at the same time. Thus, a bimetal Au-Ag substrate enables improved sensing performance and promotes the stability of this plasmonic sensor. In this work we optimized the number of graphene layers as well as the thicknesses of the Au film and the Ag substrate based on the phase-interrogation sensitivity. We found an optimized configuration consisting of 6 layers of graphene coated on a bimetal surface consisting of a 5 nm Au film and a 30 nm Ag film. The calculation results showed that the configuration could achieve a phase sensitivity as high as 1.71 × 10⁶ deg/RIU, more than 2 orders of magnitude higher than those of the bimetal and graphene-silver structures. Due to this enhanced sensing performance, the graphene-bimetal plasmonic platform proposed in this paper holds potential for ultra-sensitive plasmonic sensing.
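Phase-interrogation sensitivity is the slope of the SPR phase response with respect to the sample refractive index, which the layer thicknesses are tuned to maximize (standard definition, not a formula quoted from the paper):

\[
S_\varphi = \frac{\partial \varphi_r}{\partial n_s}\bigg|_{\text{resonance}}
\approx \frac{\varphi_r(n_s + \Delta n) - \varphi_r(n_s)}{\Delta n}\;\;[\mathrm{deg/RIU}],
\]

where \(\varphi_r\) is the phase of the reflected p-polarized light and \(n_s\) the analyte refractive index; the optimized stack reaches \(S_\varphi \approx 1.71 \times 10^{6}\) deg/RIU.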
Zhang, Rujing; Li, Xiao; Zhang, Li; Lin, Shuyuan
2016-01-01
It is of great significance to design a platform with large surface area and high electrical conductivity for poorly conductive catalysts for the hydrogen evolution reaction (HER), such as molybdenum sulfide (MoSx), a promising and cost-effective nonprecious material. Here, the design and preparation of a free-standing and tunable graphene mesoporous structure/single-walled carbon nanotube (GMS/SWCNT) hybrid membrane is reported. Amorphous MoSx is electrodeposited on this platform through a wet chemical process at mild temperature. For the MoSx@GMS/SWCNT hybrid electrode with a low catalyst loading of 32 μg cm−2, the onset potential is near 113 mV versus the reversible hydrogen electrode (RHE), and a high current density of ≈71 mA cm−2 is achieved at 250 mV versus RHE. The excellent HER performance can be attributed to the large surface area for MoSx deposition, as well as the efficient electron transport and abundant active sites on the amorphous MoSx surface. This novel catalyst is found to outperform most previously reported MoSx-based HER catalysts. Moreover, the flexibility of the electrode facilitates its stable catalytic performance even in extremely distorted states. PMID:27980998
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common KDD Cup benchmark dataset and on a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
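The two core ingredients named in the abstract, data-seeded initialization and a weighted distance for best-matching-unit (BMU) search, can be illustrated with a plain SOM. The sketch below is a generic reconstruction under those assumptions, not the published IISOM algorithm (and it assumes more samples than neurons):

```python
import numpy as np

def wed(x, w, fw):
    """Weighted Euclidean distance: fw up-weights the metrics most indicative of anomalies."""
    return np.sqrt(((x - w) ** 2 * fw).sum(axis=-1))

def train_som(data, rows, cols, fw, epochs=20, lr0=0.5):
    """Plain SOM with heuristic (data-seeded) initialization and WED as the BMU metric."""
    n, d = data.shape
    w = data[np.random.choice(n, rows * cols, replace=False)].reshape(rows, cols, d).astype(float)
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
    sigma0 = max(rows, cols) / 2.0
    for t in range(epochs):
        lr, sigma = lr0 * np.exp(-t / epochs), sigma0 * np.exp(-t / epochs)
        for x in data:
            bmu = np.array(np.unravel_index(wed(x, w, fw).argmin(), (rows, cols)))
            h = np.exp(-((grid - bmu) ** 2).sum(-1) / (2 * sigma ** 2))  # neighborhood kernel
            w += lr * h[..., None] * (x - w)
    return w

def anomaly_score(x, w, fw):
    """Distance to the best-matching unit; large values flag anomalous VM states."""
    return wed(x, w, fw).min()
```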
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
Mixed-signal 0.18μm CMOS and SiGe BiCMOS foundry technologies for ROIC applications
NASA Astrophysics Data System (ADS)
Kar-Roy, Arjun; Howard, David; Racanelli, Marco; Scott, Mike; Hurwitz, Paul; Zwingman, Robert; Chaudhry, Samir; Jordan, Scott
2010-10-01
Today's readout integrated circuits (ROICs) require a high level of integration of high-performance analog and low-power digital logic. TowerJazz offers a commercial 0.18μm CMOS technology platform for mixed-signal, RF, and high-performance analog applications which can be used for ROIC applications. The commercial CA18HD dual gate oxide 1.8V/3.3V and CA18HA dual gate oxide 1.8V/5V RF/mixed-signal processes, consisting of six layers of metallization, have high-density stacked linear MIM capacitors, high-value resistors, triple-well isolation, and thick top aluminum metal. The CA18HA process also has scalable drain-extended LDMOS devices, up to 40V Vds, for high-voltage sensor applications, and high-performance bipolars for low-noise requirements in ROICs. Also discussed are the available features of the commercial SBC18 SiGe BiCMOS platform, with SiGe NPNs operating up to 200/200 GHz (fT/fMAX) in manufacturing and demonstrated to 270 GHz fT, for reduced noise and integrated RF capabilities that could be used in ROICs. Implementation of these technologies in a thick-film SOI process for integrated RF switch and power management, and the availability of high-fT vertical PNPs to enable complementary BiCMOS (CBiCMOS) for RF-enabled ROICs, are also described in this paper.
Status of development of LCOS projection displays for F-22A, F/A-18E/F, and JSF cockpits
NASA Astrophysics Data System (ADS)
Kalmanash, Michael H.
2001-09-01
Projection display technology has been found to be an attractive alternative to direct view flat panel displays in many avionics applications. The projection approach permits compact high performance systems to be tailored to specific platform needs while using a complement of commercial off the shelf (COTS) components, including liquid crystal on silicon (LCOS) microdisplay imagers. A common projection engine used on multiple platforms enables improved performance, lower cost and shorter development cycles. This paper provides a status update for projection displays under development for the F-22A, the F/A-18E/F and the Lockheed Joint Strike Fighter (JSF) aircraft.
On-chip photonic particle sensor
NASA Astrophysics Data System (ADS)
Singh, Robin; Ma, Danhao; Agarwal, Anu; Anthony, Brian
2018-02-01
We propose an on-chip photonic particle sensor design that can perform particle sizing and counting for various environmental applications. The sensor is based on micro photonic ring resonators that are able to detect the presence of free-space particles through the interaction with their evanescent electric field tail. The sensor can characterize a wide range of particle sizes, from a few nanometers up to about a micron. The photonic platform offers high sensitivity, compactness, and fast response. Further, FDTD simulations are performed to analyze different particle-light interactions. Such a compact and portable platform, packaged with an integrated photonic circuit, provides a useful sensing modality for space shuttle and environmental applications.
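The sensing reach of the resonator is set by the evanescent tail of the guided mode, which decays exponentially into the environment. For a mode of effective index \(n_{\mathrm{eff}}\) in a medium of index \(n_{\mathrm{env}}\) (the standard waveguide result, not a value from the paper):

\[
E(z) \propto e^{-\gamma z}, \qquad
\gamma = \frac{2\pi}{\lambda}\sqrt{n_{\mathrm{eff}}^{2} - n_{\mathrm{env}}^{2}},
\]

so only particles within roughly one penetration depth \(1/\gamma\) of the waveguide surface, typically on the order of 100 nm, perturb the resonance appreciably.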
George, Sherine; Chaudhery, Vikram; Lu, Meng; Takagi, Miki; Amro, Nabil; Pokhriyal, Anusha; Tan, Yafang; Ferreira, Placid; Cunningham, Brian T.
2013-01-01
Enhancement of the fluorescent output of surface-based fluorescence assays by performing them upon nanostructured photonic crystal (PC) surfaces has been demonstrated to increase signal intensities by >8000×. Using the multiplicative effects of optical resonant coupling to the PC in increasing the electric field intensity experienced by fluorescent labels (“enhanced excitation”) and the spatially biased funneling of fluorophore emissions through coupling to PC resonances (“enhanced extraction”), PC enhanced fluorescence (PCEF) can be adapted to reduce the limits of detection of disease biomarker assays, and to reduce the size and cost of high-sensitivity detection instrumentation. In this work, we demonstrate the first silicon-based PCEF detection platform for multiplexed biomarker assays. The sensor in this platform is a silicon-based PC structure, comprised of a SiO2 grating that is overcoated with a thin film of high-refractive-index TiO2, and is produced in a semiconductor foundry for low-cost, uniform, and reproducible manufacturing. The compact detection instrument that completes this platform was designed to efficiently couple fluorescence excitation from a semiconductor laser to the resonant optical modes of the PC, resulting in elevated electric field strength that is highly concentrated within the region <100 nm from the PC surface. This instrument utilizes a cylindrically focused line to scan a microarray in <1 minute. To demonstrate the capabilities of this sensor-detector platform, microspot fluorescent sandwich immunoassays using secondary antibodies labeled with Cy5 for two cancer biomarkers (TNF-α and IL-3) were performed. Biomarkers were detected at concentrations as low as 0.1 pM. In a fluorescent microarray for detection of the breast cancer miRNA biomarker miR-21, the miRNA was detectable at a concentration of 0.6 pM. PMID:23963502
Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.
Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo
2018-02-23
The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-cTnI, on the DxI platform with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD), and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD, and LoQ at 20% and 10% CV values of the Access hs-cTnI method were 0.6, 1.3, 2.1 and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and results similar to those of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions with both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA and heparin plasma samples. Access hs-cTnI has analytical sensitivity parameters significantly improved over those of the Access AccuTnI+3 method and similar to those of the high-sensitivity method on the ARCHITECT platform.
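The quoted limits follow the usual CLSI EP17-style definitions; a compact sketch of the standard calculation (the study's exact replicate structure may differ):

```python
import numpy as np

def limits(blank_reps, low_conc_reps):
    """LoB from blank replicates, LoD from low-concentration replicates (CLSI EP17 style)."""
    lob = np.mean(blank_reps) + 1.645 * np.std(blank_reps, ddof=1)
    lod = lob + 1.645 * np.std(low_conc_reps, ddof=1)
    return lob, lod

# LoQ at 20% or 10% CV is read from a precision profile: the lowest concentration
# whose replicate CV (sd/mean) stays at or below the target, e.g. 2.1 and 5.3 ng/L here.
```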
NASA Astrophysics Data System (ADS)
Siddiqui, Aleem; Reinke, Charles; Shin, Heedeuk; Jarecki, Robert L.; Starbuck, Andrew L.; Rakich, Peter
2017-05-01
The performance of electronic systems for radio-frequency (RF) spectrum analysis is critical for agile radar and communications systems, ISR (intelligence, surveillance, and reconnaissance) operations in challenging electromagnetic (EM) environments, and EM-environment situational awareness. While considerable progress has been made in size, weight, and power (SWaP) and performance metrics in conventional RF technology platforms, fundamental limits make continued improvements increasingly difficult. Alternatively, we propose employing cascaded transduction processes in a chip-scale nano-optomechanical system (NOMS) to achieve a spectral sensor with exceptional signal linearity, high dynamic range, narrow spectral resolution, and ultra-fast sweep times. By leveraging the optimal capabilities of photons and phonons, the system we pursue in this work has performance metrics scalable well beyond the fundamental limitations inherent to all-electronic systems. In our device architecture, information processing is performed on wide-bandwidth RF-modulated optical signals by photon-mediated phononic transduction of the modulation to the acoustical domain for narrow-band filtering, and then back to the optical domain by phonon-mediated phase modulation (the reverse process). Here, we rely on photonics to efficiently distribute signals for parallel processing, and on phononics for effective and flexible RF-frequency manipulation. This technology is used to create RF filters that are insensitive to the optical wavelength, with wide center-frequency selectivity (1-100 GHz), ultra-narrow filter bandwidth (1-100 MHz), and high dynamic range (70 dB), which we will present. Additionally, using this filter as a building block, we will discuss current results and progress toward demonstrating a multichannel filter with a bandwidth of < 10 MHz per channel, while minimizing cumulative optical/acoustic/optical transduced insertion loss to ideally < 10 dB. These proposed metrics represent significant improvements over conventional RF platforms.
Pine, P Scott; Munro, Sarah A; Parsons, Jerod R; McDaniel, Jennifer; Lucas, Anne Bergstrom; Lozach, Jean; Myers, Timothy G; Su, Qin; Jacobs-Helber, Sarah M; Salit, Marc
2016-06-24
Highly multiplexed assays for quantitation of RNA transcripts are being used in many areas of biology and medicine. Using data generated by these transcriptomic assays requires measurement assurance with appropriate controls. Methods to prototype and evaluate multiple RNA controls were developed as part of the External RNA Controls Consortium (ERCC) assessment process. These approaches included a modified Latin square design to provide a broad dynamic range of relative abundance with known differences between four complex pools of ERCC RNA transcripts spiked into a human liver total RNA background. ERCC pools were analyzed on four different microarray platforms: Agilent 1- and 2-color, Illumina bead, and NIAID lab-made spotted microarrays; and two different second-generation sequencing platforms: the Life Technologies 5500xl and the Illumina HiSeq 2500. Individual ERCC controls were assessed for reproducible performance in signal response to concentration among the platforms. Most demonstrated linear behavior if they were not located near one of the extremes of the dynamic range. Performance issues with any individual ERCC transcript could be attributed to detection limitations, platform-specific target probe issues, or potential mixing errors. Collectively, these pools of spike-in RNA controls were evaluated for suitability as surrogates for endogenous transcripts to interrogate the performance of the RNA measurement process of each platform. The controls were useful for establishing the dynamic range of the assay, as well as delineating the useable region of that range where differential expression measurements, expressed as ratios, would be expected to be accurate. The modified Latin square design presented here uses a composite testing scheme for the evaluation of multiple performance characteristics: linear performance of individual controls, signal response within dynamic range pools of controls, and ratio detection between pairs of dynamic range pools. This compact design provides an economical sample format for the evaluation of multiple external RNA controls within a single experiment per platform. These results indicate that well-designed pools of RNA controls, spiked into samples, provide measurement assurance for endogenous gene expression studies.
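The ratio-detection part of the design reduces to comparing observed inter-pool signal ratios against the ratios fixed by the Latin-square mixing. A minimal check of that comparison (a generic sketch, not ERCC consortium code):

```python
import numpy as np

def log_ratio_error(signal_pool_a, signal_pool_b, expected_fold_change):
    """Deviation of the observed log2 fold change from the designed spike-in ratio."""
    observed = np.log2(signal_pool_a / signal_pool_b)
    return observed - np.log2(expected_fold_change)

# Values near 0 across the usable dynamic range indicate that differential-expression
# ratios are measured accurately; systematic deviations mark the range's edges.
```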
Cevenini, Luca; Calabretta, Maria Maddalena; Lopreside, Antonia; Tarantino, Giuseppe; Tassoni, Annalisa; Ferri, Maura; Roda, Aldo; Michelini, Elisa
2016-12-01
The availability of smartphones with high-performance digital image sensors and processing power has completely reshaped the landscape of point-of-need analysis. Thanks to the high maturity level of reporter gene technology and the availability of several bioluminescent proteins with improved features, we were able to develop a bioluminescence smartphone-based biosensing platform exploiting the highly sensitive NanoLuc luciferase as the reporter. A 3D-printed smartphone-integrated cell biosensor based on genetically engineered Hek293T cells was developed. Quantitative assessment of (anti)-inflammatory activity and toxicity of liquid samples was performed with a simple and rapid add-and-measure procedure. White grape pomace extracts, known to contain several bioactive compounds, were analyzed, confirming the suitability of the smartphone biosensing platform for analysis of untreated complex biological matrices. Such an approach could meet the needs of small and medium enterprises lacking fully equipped laboratories for first-level safety tests and rapid screening of new bioactive products. Graphical abstract: Smartphone-based bioluminescence cell biosensor.
Sand-control completion design, installation, and performance in high-rate gas wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, R.C.; Boggan, S.A.
1998-09-01
The Jupiter fields consist of a number of separate Rotliegendes gas reservoirs located approximately 90 miles off the Lincolnshire coast of the UK. The fields that make up Jupiter are Ganymede, Calisto, Europa, Sinope, and Thebe. Originally discovered in 1970, initial appraisal wells indicated poor reservoir properties and low deliverabilities. Development was postponed until a reappraisal of the area in the 1990s indicated significant upside potential. The initial phase of the Jupiter development plan called for development of the Ganymede and Calisto fields, with subsequent phases tying in Europa and Thebe. Initial development planning indicated a need for high field deliverability at low capital cost to meet economic targets. A small number of high-rate-potential wells were to be used to deplete the reservoir. Ganymede would be developed by use of a 10-slot platform, and Calisto would be developed subsea and tied back to the Ganymede platform. The paper discusses the reservoir, formation assessment, productivity design, drilling design, screen installation, and completion performance.
Results from a new 193nm die-to-database reticle inspection platform
NASA Astrophysics Data System (ADS)
Broadbent, William H.; Alles, David S.; Giusti, Michael T.; Kvamme, Damon F.; Shi, Rui-fang; Sousa, Weston L.; Walsh, Robert; Xiong, Yalin
2010-05-01
A new 193nm wavelength high resolution reticle defect inspection platform has been developed for both die-to-database and die-to-die inspection modes. In its initial configuration, this innovative platform has been designed to meet the reticle qualification requirements of the IC industry for the 22nm logic and 3xhp memory generations (and shrinks) with planned extensions to the next generation. The 22nm/3xhp IC generation includes advanced 193nm optical lithography using conventional RET, advanced computational lithography, and double patterning. Further, EUV pilot line lithography is beginning. This advanced 193nm inspection platform has world-class performance and the capability to meet these diverse needs in optical and EUV lithography. The architecture of the new 193nm inspection platform is described. Die-to-database inspection results are shown on a variety of reticles from industry sources; these reticles include standard programmed defect test reticles, as well as advanced optical and EUV product and product-like reticles. Results show high sensitivity and low false and nuisance detections on complex optical reticle designs and small feature size EUV reticles. A direct comparison with the existing industry standard 257nm wavelength inspection system shows measurable sensitivity improvement for small feature sizes.
NASA Astrophysics Data System (ADS)
Esch, T.; Asamer, H.; Boettcher, M.; Brito, F.; Hirner, A.; Marconcini, M.; Mathot, E.; Metz, A.; Permana, H.; Soukop, T.; Stanek, F.; Kuchar, S.; Zeidler, J.; Balhar, J.
2016-06-01
The Sentinel fleet will provide so-far unique coverage with Earth observation data and therewith new opportunities for the implementation of methodologies to generate innovative geo-information products and services. It is here that the TEP Urban project is intended to initiate a step change by providing an open and participatory platform, based on modern ICT technologies and services, that enables any interested user to easily exploit Earth observation data pools, in particular those of the Sentinel missions, and to derive thematic information on the status and development of the built environment from these data. The key component of the TEP Urban project is the implementation of a web-based platform employing distributed high-level computing infrastructures and providing key functionalities for i) high-performance access to satellite imagery and derived thematic data, ii) modular and generic state-of-the-art pre-processing, analysis, and visualization techniques, iii) customized development and dissemination of algorithms, products and services, and iv) networking and communication. This contribution introduces the main facts about the TEP Urban project, including a description of the general objectives, the platform system design and functionalities, and the preliminary portfolio of products and services available on the TEP Urban platform.
Radiation Hardening by Software Techniques on FPGAs: Flight Experiment Evaluation and Results
NASA Technical Reports Server (NTRS)
Schmidt, Andrew G.; Flatley, Thomas
2017-01-01
We present our work on implementing Radiation Hardening by Software (RHBSW) techniques on the Xilinx Virtex5 FPGAs PowerPC 440 processors on the SpaceCube 2.0 platform. The techniques have been matured and tested through simulation modeling, fault emulation, laser fault injection and now in a flight experiment, as part of the Space Test Program- Houston 4-ISS SpaceCube Experiment 2.0 (STP-H4-ISE 2.0). This work leverages concepts such as heartbeat monitoring, control flow assertions, and checkpointing, commonly used in the High Performance Computing industry, and adapts them for use in remote sensing embedded systems. These techniques are extremely low overhead (typically <1.3%), enabling a 3.3x gain in processing performance as compared to the equivalent traditionally radiation hardened processor. The recently concluded STP-H4 flight experiment was an opportunity to upgrade the RHBSW techniques for the Virtex5 FPGA and demonstrate them on-board the ISS to achieve TRL 7. This work details the implementation of the RHBSW techniques, that were previously developed for the Virtex4-based SpaceCube 1.0 platform, on the Virtex5-based SpaceCube 2.0 flight platform. The evaluation spans the development and integration with flight software, remotely uploading the new experiment to the ISS SpaceCube 2.0 platform, and conducting the experiment continuously for 16 days before the platform was decommissioned. The experiment was conducted on two PowerPCs embedded within the Virtex5 FPGA devices and the experiment collected 19,400 checkpoints, processed 253,482 status messages, and incurred 0 faults. These results are highly encouraging and future work is looking into longer duration testing as part of the STP-H5 flight experiment.
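Conceptually, the RHBSW techniques wrap an application loop in liveness and recovery scaffolding. The Python sketch below illustrates only the heartbeat-plus-checkpoint pattern named in the abstract; the actual flight implementation runs as compiled code on the embedded PowerPC 440 processors, and all names here are illustrative.

```python
import pickle

def run(state, step, send_heartbeat, ckpt_path="ckpt.bin", interval=100):
    """Heartbeat + checkpoint pattern: a watchdog restarts from ckpt_path after an upset."""
    for i in range(state["iteration"], state["total_iterations"]):
        state = step(state)                    # one unit of application work
        state["iteration"] = i + 1
        if i % interval == 0:
            send_heartbeat(i)                  # proves liveness to the external monitor
            with open(ckpt_path, "wb") as f:   # recovery point for rollback on fault
                pickle.dump(state, f)
```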
Eguílaz, Marcos; Villalonga, Reynaldo; Yáñez-Sedeño, Paloma; Pingarrón, José M
2011-10-15
The design of a novel biosensing electrode surface, combining the advantages of magnetic ferrite nanoparticles (MNPs) functionalized with glutaraldehyde (GA) and poly(diallyldimethylammonium chloride) (PDDA)-coated multiwalled carbon nanotubes (MWCNTs) as platforms for the construction of high-performance multienzyme biosensors, is reported in this work. Before the immobilization of enzymes, GA-MNP/PDDA/MWCNT composites were prepared by wrapping carboxylated MWCNTs with positively charged PDDA and interaction with GA-functionalized MNPs. The nanoconjugates were characterized by scanning electron microscopy (SEM) and electrochemistry. The electrode platform was used to construct a bienzyme biosensor for the determination of cholesterol, which implied coimmobilization of cholesterol oxidase (ChOx) and peroxidase (HRP) and the use of hydroquinone as redox mediator. Optimization of all variables involved in the preparation and analytical performance of the bienzyme electrode was accomplished. At an applied potential of -0.05 V, a linear calibration graph for cholesterol was obtained in the 0.01-0.95 mM concentration range. The detection limit (0.85 μM), the apparent Michaelis-Menten constant (1.57 mM), the stability of the biosensor, and the calculated activation energy compare favorably with the analytical characteristics of other CNT-based cholesterol biosensors reported in the literature. Analysis of human serum spiked with cholesterol at different concentration levels yielded recoveries between 100% and 103%. © 2011 American Chemical Society
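An apparent Michaelis-Menten constant for an amperometric biosensor is typically extracted from the calibration data via the electrochemical Lineweaver-Burk form, 1/i = 1/i_max + (Km_app / i_max)(1/C). A sketch of that fit (the generic method, not necessarily the authors' exact procedure):

```python
import numpy as np

def apparent_km(conc_mM, current_uA):
    """Electrochemical Lineweaver-Burk fit: returns (Km_app in mM, i_max in uA)."""
    slope, intercept = np.polyfit(1.0 / conc_mM, 1.0 / current_uA, 1)
    i_max = 1.0 / intercept
    return slope * i_max, i_max   # Km_app = slope * i_max
```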
Chakrabortty, S; Sen, M; Pal, P
2014-03-01
Simulation software (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software to date. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs from the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting a high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and displays the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization, and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
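Both goodness-of-fit statistics quoted for ARRPA are standard; for reference, the Willmott index of agreement is computed as below (the textbook definition, not code from the paper):

```python
import numpy as np

def willmott_d(observed, predicted):
    """Willmott's index of agreement, d in [0, 1]; d -> 1 means perfect agreement."""
    o_bar = observed.mean()
    sse = ((predicted - observed) ** 2).sum()
    denom = ((np.abs(predicted - o_bar) + np.abs(observed - o_bar)) ** 2).sum()
    return 1.0 - sse / denom
```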
Krawczyk, Adalbert; Hintze, Christian; Ackermann, Jessica; Goitowski, Birgit; Trippler, Martin; Grüner, Nico; Neumann-Fraune, Maria; Verheyen, Jens; Fiedler, Melanie
2014-01-01
The fully automated and closed LIAISON(®)XL platform was developed for reliable detection of infection markers like hepatitis B virus (HBV) surface antigen (HBsAg), hepatitis C virus (HCV) antibodies (Ab), or human immunodeficiency virus (HIV) Ag/Ab. To date, little is known about the diagnostic performance of this system in direct comparison to the common Abbott ARCHITECT(®) platform. We compared the diagnostic performance and usability of the DiaSorin LIAISON(®)XL with the commonly used Abbott ARCHITECT(®) system. The qualitative performance of the above-mentioned assays was compared in about 500 sera. Quantitative tests were performed for HBsAg-positive samples from patients under therapy (n=289) and in vitro expressed mutants (n=37). For HCV-Ab, a total of 155 selected samples from patients chronically infected with different HCV genotypes were tested. The concordance between both systems was 99.4% for HBsAg, 98.81% for HCV-Ab, and 99.6% for HIV-Ab/Ag. The quantitative LIAISON(®)XL murex HBsAg assay detected all mutants in amounts comparable to wild-type HBsAg and yielded highly reliable HBsAg kinetics in patients treated with antiviral drugs. Dilution experiments using the 2nd International Standard for HBsAg (WHO) showed a high accuracy of this test. HCV-Ab from patients infected with genotypes 1-3 were equally detected in both systems. Interestingly, S/CO levels of HCV-Ab from patients infected with genotype 3 seem to be relatively low in both systems. The LIAISON(®)XL platform proved to be an excellent system for diagnostics of HBV, HCV, and HIV, with performance equal to that of the ARCHITECT(®) system. Copyright © 2013 Elsevier B.V. All rights reserved.
Rapid Prototyping of High Performance Signal Processing Applications
2011-01-01
In the domain of high-performance DSP, rapid prototyping is critical for faster time-to-market, for example in wireless communication and wireless sensor network applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Ryan T.; Wang, Chenchen; Rausch, Sarah J.
2014-07-01
A hybrid microchip/capillary CE system was developed to allow unbiased and lossless sample loading and high throughput repeated injections. This new hybrid CE system consists of a polydimethylsiloxane (PDMS) microchip sample injector featuring a pneumatic microvalve that separates a sample introduction channel from a short sample loading channel and a fused silica capillary separation column that connects seamlessly to the sample loading channel. The sample introduction channel is pressurized such that when the pneumatic microvalve opens briefly, a variable-volume sample plug is introduced into the loading channel. A high voltage for CE separation is continuously applied across the loading channel and the fused silica capillary separation column. Analytes are rapidly separated in the fused silica capillary with high resolution. High sensitivity MS detection after CE separation is accomplished via a sheathless CE/ESI-MS interface. The performance evaluation of the complete CE/ESI-MS platform demonstrated that reproducible sample injection with well controlled sample plug volumes could be achieved by using the PDMS microchip injector. The absence of band broadening from microchip to capillary indicated a minimum dead volume at the junction. The capabilities of the new CE/ESI-MS platform in performing high throughput and quantitative sample analyses were demonstrated by the repeated sample injection without interrupting an ongoing separation and a good linear dependence of the total analyte ion abundance on the sample plug volume using a mixture of peptide standards.
Overall design of imaging spectrometer on-board light aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhongqi, H.; Zhengkui, C.; Changhua, C.
1996-11-01
Aerial remote sensing is the earliest remote sensing technical system and has developed rapidly in recent years. The development of aerial remote sensing was dominated by high-to-medium-altitude platforms in the past, and it is now characterized by a diversity of platforms including planes of high, medium, and low flying altitude, helicopters, airships, remotely controlled airplanes, gliders, and balloons. The most widely used and rapidly developed platform recently is the light aircraft. In the late 1970s, Beijing Research Institute of Uranium Geology began aerial photography and geophysical survey using light aircraft, and put forward the overall design scheme of the light aircraft imaging spectral application system (LAISAS) in the 1990s. LAISAS is comprised of four subsystems: the measuring platform, the data acquisition subsystem, the ground testing subsystem, and the data processing subsystem. The principal instruments of LAISAS include a measuring platform controlled by an inertia gyroscope, an aerial spectrometer with high spectral resolution, an imaging spectrometer, a 3-channel scanner, a 128-channel imaging spectrometer, GPS, an illuminance-meter, and devices for atmospheric parameter measurement, ground testing, and data correction and processing. LAISAS has the features of integrity, from data acquisition to data processing and application; of stability, which guarantees image quality and is ensured by the measuring, ground testing, and indoor data correction systems; of exemplariness, integrating the technologies of GIS, GPS, and image processing systems; and of practicality, which endows LAISAS with flexibility and a high ratio of performance to cost. So it can be used in the fields of fundamental remote sensing research and large-scale mapping for resource exploration, environmental monitoring, calamity prediction, and military purposes.
The CARIBU EBIS control and synchronization system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickerson, Clayton, E-mail: cdickerson@anl.gov; Peters, Christopher, E-mail: cdickerson@anl.gov
2015-01-09
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high-voltage outputs are controlled. The high-voltage output pulses are created with either a combination of a function generator and a high-voltage amplifier, or two high-voltage DC power supplies and a high-voltage solid-state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments real-time controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
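A LabVIEW Functional Global Variable is essentially a subroutine that holds persistent state behind get/set actions. A rough Python analogue of the idiom follows, for readers unfamiliar with LabVIEW; it is illustrative only (the actual system is LabVIEW on NI real-time controllers, and the parameter name is hypothetical):

```python
def make_fgv(initial=None):
    """Closure-based stand-in for a LabVIEW FGV: one access point, persistent state."""
    state = {"value": initial}
    def fgv(action, value=None):
        if action == "set":
            state["value"] = value
        return state["value"]      # "get" (or any other action) returns current state
    return fgv

ebis_params = make_fgv({})
ebis_params("set", {"platform_1_kV": 10.0})   # hypothetical parameter name
print(ebis_params("get"))
```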
Impacts of high resolution data on traveler compliance levels in emergency evacuation simulations
Lu, Wei; Han, Lee D.; Liu, Cheng; ...
2016-05-05
In this article, we conducted a comparison study of evacuation assignment based on Traffic Analysis Zones (TAZ) and high-resolution LandScan USA Population Cells (LPC) with a detailed real-world road network. A platform for evacuation modeling built on high-resolution population distribution data and activity-based microscopic traffic simulation was proposed. This platform can be extended to any city in the world. The results indicated that evacuee compliance behavior affects evacuation efficiency with traditional TAZ assignment, but it did not significantly compromise performance with high-resolution LPC assignment. The TAZ assignment also underestimated the real travel time during evacuation. This suggests that high data resolution can improve the accuracy of traffic modeling and simulation. Evacuation managers should consider more diverse assignment during emergency evacuation to avoid congestion.
Collegial Activity Learning between Heterogeneous Sensors.
Feuz, Kyle D; Cook, Diane J
2017-11-01
Activity recognition algorithms have matured and become more ubiquitous in recent years. However, these algorithms are typically customized for a particular sensor platform. In this paper we introduce PECO, a Personalized activity ECOsystem, that transfers learned activity information seamlessly between sensor platforms in real time so that any available sensor can continue to track activities without requiring its own extensive labeled training data. We introduce a multi-view transfer learning algorithm that facilitates this information handoff between sensor platforms and provide theoretical performance bounds for the algorithm. In addition, we empirically evaluate PECO using datasets that utilize heterogeneous sensor platforms to perform activity recognition. These results indicate that not only can activity recognition algorithms transfer important information to new sensor platforms, but any number of platforms can work together as colleagues to boost performance.
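A minimal sketch of the information-handoff idea (not PECO's actual multi-view algorithm; dataset shapes and names below are hypothetical): a classifier trained on a labeled source sensor view pseudo-labels time-aligned windows from a new target view, which then trains its own model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical time-aligned feature windows from two sensor platforms.
    X_src = rng.normal(size=(200, 8))             # source view (labeled)
    y_src = (X_src[:, 0] > 0).astype(int)         # toy activity labels
    X_tgt = X_src + rng.normal(scale=0.3, size=X_src.shape)  # target view

    # 1) Train on the source platform's labeled data.
    teacher = RandomForestClassifier(n_estimators=50, random_state=0)
    teacher.fit(X_src, y_src)

    # 2) Pseudo-label the co-occurring windows; labels carry over via alignment.
    pseudo = teacher.predict(X_src)

    # 3) The new target platform learns from pseudo-labels only.
    student = RandomForestClassifier(n_estimators=50, random_state=0)
    student.fit(X_tgt, pseudo)
    print("student agreement with true labels:",
          (student.predict(X_tgt) == y_src).mean())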
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit; it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, the performance of advanced computer technologies was compared using benchmark software, with the metric being FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
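To make the FLOPS discussion concrete, a small timing sketch (our own illustration, not HOBBIES code) estimates sustained floating-point throughput from a dense matrix multiply, using the fact that an n-by-n matmul costs about 2*n**3 floating-point operations:

    import time
    import numpy as np

    n = 1024
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.perf_counter()
    c = a @ b                      # ~2*n**3 floating-point operations
    dt = time.perf_counter() - t0

    gflops = 2 * n**3 / dt / 1e9
    print(f"approx sustained throughput: {gflops:.1f} GFLOPS")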
An optimized protocol for generation and analysis of Ion Proton sequencing reads for RNA-Seq.
Yuan, Yongxian; Xu, Huaiqian; Leung, Ross Ka-Kit
2016-05-26
Previous studies have compared running cost, time, and other performance measures of popular sequencing platforms. However, a comprehensive assessment of library construction and analysis protocols for the Proton sequencing platform has been lacking. Unlike Illumina reads, Proton reads are heterogeneous in length and quality, and combining sequencing data from different platforms yields reads of varying lengths; whether the commonly used software handles such data satisfactorily is unknown. Using universal human reference RNA as the starting material, RNase III and chemical fragmentation methods for library construction gave similar results in the number of genes and junctions discovered and in expression-level estimation accuracy. In contrast, sequencing quality, read length, and the choice of software affected the mapping rate to a much larger extent. The unspliced aligner TMAP attained the highest mapping rate (97.27% to genome, 86.46% to transcriptome), though 47.83% of mapped reads were clipped. Long reads could paradoxically reduce mapping across junctions. With a reference annotation guide, the mapping rate of TopHat2 increased significantly from 75.79% to 92.09%, especially for long (>150 bp) reads. Sailfish, a k-mer-based gene expression quantifier, attained results highly consistent with those of the TaqMan array and the highest sensitivity. We provide, for the first time, reference statistics on library preparation methods, gene detection and quantification, and junction discovery for RNA-Seq on the Ion Proton platform. Chemical fragmentation performed as well as the enzyme-based method. The optimal Ion Proton sequencing options and analysis software have been evaluated.
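As a hedged sketch of how a mapping rate like those quoted above can be computed from an alignment file (using the pysam library; the BAM file name is a hypothetical placeholder):

    import pysam  # assumes pysam is installed; "proton_reads.bam" is hypothetical

    mapped = unmapped = 0
    with pysam.AlignmentFile("proton_reads.bam", "rb") as bam:
        for read in bam.fetch(until_eof=True):
            if read.is_secondary or read.is_supplementary:
                continue  # count each read once
            if read.is_unmapped:
                unmapped += 1
            else:
                mapped += 1

    print(f"mapping rate: {100 * mapped / (mapped + unmapped):.2f} %")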
NASA Astrophysics Data System (ADS)
Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.
2016-03-01
Radiotherapy treatments have changed at a tremendously rapid pace. The dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has grown due to very steep dose gradients, so intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms that take considerable time and effort to develop. To address this challenge we have developed an open software platform focused on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz), which is suitable for tracking tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from its focus on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential to enable the research community to rapidly perform patient studies or try new methods.
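The core idea of purely intensity-based registration can be sketched in a few lines. The toy below does a 2D translation search with normalized cross-correlation as the similarity metric; FLIRT itself performs full 2D/3D registration on the GPU, so this is only an illustration of the principle, with synthetic data:

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return (a * b).mean()

    def register_translation(fixed, moving, search=5):
        """Exhaustive search over integer shifts maximizing NCC (toy example)."""
        best = (-np.inf, (0, 0))
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
                best = max(best, (ncc(fixed, shifted), (dy, dx)))
        return best  # (similarity, (dy, dx))

    fixed = np.random.rand(64, 64)
    moving = np.roll(fixed, (2, -3), axis=(0, 1))  # known ground-truth shift
    print(register_translation(fixed, moving))     # recovers the inverse shift (-2, 3)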
The Ettention software package.
Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp
2016-02-01
We present a novel software package for the problem "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building-blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building-blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily, which makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing.
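For readers unfamiliar with the Kaczmarz method underlying the block-iterative reconstruction mentioned above, a minimal dense-matrix sketch follows (Ettention itself operates on projection data with GPU building blocks; this toy system merely stands in for a projection operator):

    import numpy as np

    def kaczmarz(A, b, sweeps=50, relax=1.0):
        """Classic Kaczmarz iteration: cyclically project x onto each
        hyperplane a_i . x = b_i (toy dense version of an ART-type solver)."""
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for a_i, b_i in zip(A, b):
                x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
        return x

    # Tiny consistent system standing in for a projection operator.
    A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 1.0]])
    x_true = np.array([2.0, -1.0])
    print(kaczmarz(A, A @ x_true))  # approaches [2, -1]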
How to Build a Hybrid Neurofeedback Platform Combining EEG and fMRI
Mano, Marsel; Lécuyer, Anatole; Bannier, Elise; Perronnet, Lorraine; Noorzadeh, Saman; Barillot, Christian
2017-01-01
Multimodal neurofeedback estimates brain activity using information acquired with more than one neurosignal measurement technology. In this paper we describe how to set up and use a hybrid platform based on simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), then we illustrate how to use it for conducting bimodal neurofeedback experiments. The paper is intended for those willing to build a multimodal neurofeedback system, to guide them through the different steps of the design, setup, and experimental applications, and help them choose a suitable hardware and software configuration. Furthermore, it reports practical information from bimodal neurofeedback experiments conducted in our lab. The platform presented here has a modular parallel processing architecture that promotes real-time signal processing performance and simple future addition and/or replacement of processing modules. Various unimodal and bimodal neurofeedback experiments conducted in our lab showed high performance and accuracy. Currently, the platform is able to provide neurofeedback based on electroencephalography and functional magnetic resonance imaging, but the architecture and the working principles described here are valid for any other combination of two or more real-time brain activity measurement technologies. PMID:28377691
Diversity Performance Analysis on Multiple HAP Networks.
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-06-30
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques.
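For reference, the average symbol error rate quoted above follows the standard construction (stated here as textbook background, not reproduced from the paper): for BPSK the conditional error probability Q(sqrt(2*gamma)) is averaged over the fading distribution of the received SNR, with f_gamma the PDF derived in the paper:

    \mathrm{ASER} = \int_0^\infty Q\!\left(\sqrt{2\gamma}\right) f_\gamma(\gamma)\, d\gamma,
    \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty e^{-t^2/2}\, dt .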
Tissue cell assisted fabrication of tubular catalytic platinum microengines
NASA Astrophysics Data System (ADS)
Wang, Hong; Moo, James Guo Sheng; Pumera, Martin
2014-09-01
We report a facile platform for mass production of robust self-propelled tubular microengines. Tissue cells extracted from fruits of banana and apple, Musa acuminata and Malus domestica, are used as the support on which a thin platinum film is deposited by means of physical vapor deposition. Upon sonication of the cells/Pt-coated substrate in water, microscrolls of highly uniform sizes are spontaneously formed. Tubular microengines fabricated with the fruit cell assisted method exhibit a fast motion of ~100 bodylengths per s (~1 mm s-1). An extremely simple and affordable platform for mass production of the micromotors is crucial for the envisioned swarms of thousands and millions of autonomous micromotors performing biomedical and environmental remediation tasks. Electronic supplementary information (ESI) available: Related video. See DOI: 10.1039/c4nr03720k
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
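The Energy-Delay-Product metric reported above combines energy and runtime into one figure of merit; a small sketch follows (the numbers are invented placeholders, not measurements from the paper):

    def edp(power_watts, runtime_s):
        """Energy-Delay Product: energy (J) times delay (s)."""
        energy_j = power_watts * runtime_s
        return energy_j * runtime_s

    # Hypothetical comparison: low-power many-core chip vs. commercial PC.
    print(edp(power_watts=25.0, runtime_s=12.0))   # SCC-like configuration
    print(edp(power_watts=120.0, runtime_s=6.0))   # PC-like configuration

A lower EDP rewards configurations that save energy without giving up too much speed, which is why the metric favors embedded many-core platforms in studies like this one.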
Progress in standoff surface contaminant detector platform
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Giblin, Jay; Dixon, John; Hensley, Joel; Mansur, David; Marinelli, William J.
2017-05-01
Progress towards the development of a longwave infrared quantum cascade laser (QLC) based standoff surface contaminant detection platform is presented. The detection platform utilizes reflectance spectroscopy with application to optically thick and thin materials including solid and liquid phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. The platform employs an ensemble of broadband QCLs with a spectrally selective detector to interrogate target surfaces at 10s of m standoff. A version of the Adaptive Cosine Estimator (ACE) featuring class based screening is used for detection and discrimination in high clutter environments. Detection limits approaching 0.1 μg/cm2 are projected through speckle reduction methods enabling detector noise limited performance. The design, build, and validation of a breadboard version of the QCL-based surface contaminant detector are discussed. Functional test results specific to the QCL illuminator are presented with specific emphasis on speckle reduction.
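The Adaptive Cosine Estimator named above has a standard closed form; the numpy sketch below computes the detection statistic under that textbook definition (the background statistics and target signature here are synthetic placeholders, not spectra from the instrument):

    import numpy as np

    def ace_statistic(x, s, bg_mean, bg_cov):
        """Standard ACE detector: squared cosine angle between target
        signature s and pixel x in background-whitened space."""
        ci = np.linalg.inv(bg_cov)
        xc, sc = x - bg_mean, s - bg_mean
        num = (sc @ ci @ xc) ** 2
        den = (sc @ ci @ sc) * (xc @ ci @ xc)
        return num / den  # in [0, 1]; threshold for detection

    rng = np.random.default_rng(1)
    bg = rng.normal(size=(500, 6))                    # synthetic clutter spectra
    mu, cov = bg.mean(axis=0), np.cov(bg, rowvar=False)
    s = mu + np.linspace(0.5, 1.5, 6)                 # placeholder signature
    print(ace_statistic(mu + 0.9 * (s - mu), s, mu, cov))  # ~1 for on-target pixel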
Yuan, Huiming; Zhou, Yuan; Zhang, Lihua; Liang, Zhen; Zhang, Yukui
2009-10-30
An integrated platform combining protein and peptide separation was established via an on-line protein digestion unit: proteins were sequentially separated by column-switch recycling size exclusion chromatography (csrSEC), digested on-line by an immobilized trypsin microreactor, trapped and desalted by two parallel C8 precolumns, separated by microRPLC with a linear gradient of organic modifier concentration, and identified by ESI-MS/MS. A six-protein mixture, with Mr ranging from 10 kDa to 80 kDa, was used to evaluate the performance of the integrated platform, and all proteins were identified with sequence coverage over 5.67%. Our experimental results demonstrate that the integrated platform offers advantages such as good time compatibility, high peak capacity, and facile automation, making it a promising approach for proteome studies.
Wireless Sensor Network-Based Service Provisioning by a Brokering Platform
Guijarro, Luis; Pla, Vicent; Vidal, Jose R.; Naldi, Maurizio; Mahmoodi, Toktam
2017-01-01
This paper proposes a business model for providing services based on the Internet of Things through a platform that intermediates between human users and Wireless Sensor Networks (WSNs). The platform seeks to maximize its profit through posting both the price charged to each user and the price paid to each WSN. A complete analysis of the profit maximization problem is performed in this paper. We show that the service provider maximizes its profit by incentivizing all users and all Wireless Sensor Infrastructure Providers (WSIPs) to join the platform. This is true not only when the number of users is high, but also when it is moderate, provided that the costs that the users bear do not surpass a cost ceiling. This cost ceiling depends on the number of WSIPs, on the intrinsic value of the service, and on the externality that the WSIPs have on the user utility. PMID:28498347
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations gives rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveal interesting insights into the new VM-PDES dynamics that come into play and also lead to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
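The cost-versus-runtime trade-off described above can be captured with a toy selection model (the prices and runtimes below are invented placeholders, not the paper's measurements):

    # Hypothetical VM configurations: (name, $/hour, measured runtime in hours)
    configs = [
        ("high-end", 4.00, 1.0),
        ("mid-tier", 1.20, 1.8),
        ("low-end-scaled", 0.40, 3.5),
    ]

    for name, price, hours in configs:
        print(f"{name:15s} runtime={hours:.1f} h  cost=${price * hours:.2f}")

    cheapest = min(configs, key=lambda c: c[1] * c[2])
    print("least total cost:", cheapest[0])

Even with these made-up numbers the high-end configuration wins on runtime but loses on total cost, which mirrors the counter-intuitive guideline the study reports.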
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared nothing parallel database architecture, which distributes data homogenously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. 
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
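A hedged sketch of the kind of spatial comparison such a platform executes as SQL queries: in Python with shapely, algorithm-versus-human boundary agreement can be scored by Jaccard overlap (the polygons here are toy placeholders, not PAIS data):

    from shapely.geometry import Polygon

    def jaccard(a, b):
        """Area of intersection over area of union for two boundaries."""
        inter = a.intersection(b).area
        union = a.union(b).area
        return inter / union if union else 0.0

    algo_nucleus = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
    human_nucleus = Polygon([(1, 1), (5, 1), (5, 5), (1, 5)])
    print(f"overlap ratio: {jaccard(algo_nucleus, human_nucleus):.2f}")  # 0.39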
A Flexible Annular-Array Imaging Platform for Micro-Ultrasound
Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei
2013-01-01
Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithms, wire phantom, and tissue-mimicking phantom measurements were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition. The depth-of-field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
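For reference, the contrast-to-noise ratio figure quoted above is conventionally computed from region statistics; a short sketch with synthetic patches follows (this is a common CNR definition, which may differ in detail from the authors' exact formula):

    import numpy as np

    def cnr(target_region, background_region):
        """CNR = |mean(target) - mean(background)| / std(background)."""
        return (abs(target_region.mean() - background_region.mean())
                / background_region.std())

    rng = np.random.default_rng(2)
    target = rng.normal(loc=1.6, scale=0.4, size=1000)      # lesion pixels
    background = rng.normal(loc=1.0, scale=0.4, size=1000)  # speckle background
    print(f"CNR = {cnr(target, background):.2f}")           # ~1.5 here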
Huang, Shu-Hong; Chang, Yu-Shin; Juang, Jyh-Ming Jimmy; Chang, Kai-Wei; Tsai, Mong-Hsun; Lu, Tzu-Pin; Lai, Liang-Chuan; Chuang, Eric Y; Huang, Nien-Tsu
2018-03-12
In this study, we developed an automated microfluidic DNA microarray (AMDM) platform for point mutation detection of genetic variants in inherited arrhythmic diseases. The platform allows for automated and programmable reagent sequencing under precise conditions of hybridization flow and temperature control. It is composed of a commercial microfluidic control system, a microfluidic microarray device, and a temperature control unit. The automated and rapid hybridization process can be performed in the AMDM platform using Cy3 labeled oligonucleotide exons of SCN5A genetic DNA, which produces proteins associated with sodium channels abundant in the heart (cardiac) muscle cells. We then introduce a graphene oxide (GO)-assisted DNA microarray hybridization protocol to enable point mutation detection. In this protocol, a GO solution is added after the staining step to quench dyes bound to single-stranded DNA or non-perfectly matched DNA, which can improve point mutation specificity. As proof-of-concept we extracted the wild-type and mutant of exon 12 and exon 17 of SCN5A genetic DNA from patients with long QT syndrome or Brugada syndrome by touchdown PCR and performed a successful point mutation discrimination in the AMDM platform. Overall, the AMDM platform can greatly reduce laborious and time-consuming hybridization steps and prevent potential contamination. Furthermore, by introducing the reciprocating flow into the microchannel during the hybridization process, the total assay time can be reduced to 3 hours, which is 6 times faster than the conventional DNA microarray. Given the automatic assay operation, shorter assay time, and high point mutation discrimination, we believe that the AMDM platform has potential for low-cost, rapid and sensitive genetic testing in a simple and user-friendly manner, which may benefit gene screening in medical practice.
High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.
Simonyan, Vahan; Mazumder, Raja
2014-09-30
The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.
NASA Astrophysics Data System (ADS)
Aghazadeh, Mustafa; Karimzadeh, Isa
2017-10-01
We provide a novel electrodeposition platform for undoped and Eu3+-doped iron oxide nanoparticles (Eu-IONPs) from an additive-free electrolyte containing Fe(NO3)3, FeCl2, and EuCl3. The prepared IONPs were analyzed using X-ray diffraction, field emission electron microscopy, and energy-dispersive X-ray techniques; the data showed successful electrosynthesis of magnetite nanoparticles (size ≈ 10 nm) doped with about 10 wt% Eu3+ ions. The Eu-IONPs were used as supercapacitor electrode materials and characterized by cyclic voltammetry and galvanostatic charge-discharge measurements. The as-synthesized Eu-IONPs exhibit remarkable pseudocapacitive activity, including high specific capacitances of 212.5 and 153.2 F g-1 at 0.5 and 2 A g-1, respectively, and excellent cycling stability of 93.9% and 86.5% after 2000 discharge cycles. Furthermore, vibrating sample magnetometer data confirmed better superparamagnetic performance of the Eu-IONPs (Ms = 72.8 emu g-1, Mr = 0.24 emu g-1, and Hci = 3.48 G) compared with pure IONPs (Ms = 51.92 emu g-1, Mr = 0.95 emu g-1, and Hci = 14.62 G), owing to their lower remanence (Mr) and coercivity (Hci). This novel synthetic platform for metal-ion-doped iron oxide is potentially a convenient way to fabricate high-performance iron oxide electrodes for energy storage systems.
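The specific capacitances quoted above follow from the standard galvanostatic discharge relation (a textbook formula, stated here for context rather than taken from the paper):

    C_s = \frac{I\,\Delta t}{m\,\Delta V}

where I is the discharge current, \Delta t the discharge time, m the active mass, and \Delta V the potential window. For example, at I/m = 0.5 A g^{-1}, the reported 212.5 F g^{-1} corresponds to \Delta t / \Delta V = 425 s V^{-1}.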
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
CBESW: sequence alignment on the Playstation 3.
Wirawan, Adrianto; Kwoh, Chee Keong; Hieu, Nim Tri; Schmidt, Bertil
2008-09-17
The exponential growth of available biological data has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing exponentially as well. The recent emergence of accelerator technologies has made it possible to achieve an excellent improvement in execution time for many bioinformatics applications, compared to current general-purpose platforms. In this paper, we demonstrate how the PlayStation 3, powered by the Cell Broadband Engine, can be used as a computational platform to accelerate the Smith-Waterman algorithm. For large datasets, our implementation on the PlayStation 3 provides a significant improvement in running time compared to other implementations such as SSEARCH, Striped Smith-Waterman and CUDA. Our implementation achieves a peak performance of up to 3,646 MCUPS. The results from our experiments demonstrate that the PlayStation 3 console can be used as an efficient low cost computational platform for high performance sequence alignment applications.
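For context, a compact pure-Python Smith-Waterman sketch with a linear gap penalty follows (the paper's Cell implementation is vectorized and far faster; MCUPS counts millions of matrix-cell updates per second):

    import time

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        """Local alignment score via the Smith-Waterman recurrence."""
        rows, cols = len(a) + 1, len(b) + 1
        h = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
                best = max(best, h[i][j])
        return best

    a, b = "GGTTGACTA" * 50, "TGTTACGG" * 50
    t0 = time.perf_counter()
    score = smith_waterman(a, b)
    dt = time.perf_counter() - t0
    mcups = len(a) * len(b) / dt / 1e6  # million cell updates per second
    print(f"score={score}  throughput={mcups:.2f} MCUPS")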
NASA Astrophysics Data System (ADS)
Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary
2015-10-01
Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
High strain FBG sensors for structural fatigue testing of military aircraft
NASA Astrophysics Data System (ADS)
Tejedor, S.; Kopczyk, J.; Nuyens, T.; Davis, C.
2012-02-01
This paper reports on a series of tests investigating the performance of Draw Tower Gratings (DTGs) combined with custom-designed broad area packaging and bonding techniques for high-strain sensing applications on Defence platforms. The sensors and packaging were subjected to a series of high-strain static and cyclic loading tests and a summary of these results is presented.
Haraksingh, Rajini R; Abyzov, Alexej; Urban, Alexander Eckehart
2017-04-24
High-resolution microarray technology is routinely used in basic research and clinical practice to efficiently detect copy number variants (CNVs) across the entire human genome. A new generation of arrays combining high probe densities with optimized designs will comprise essential tools for genome analysis in the coming years. We systematically compared the genome-wide CNV detection power of all 17 available array designs from the Affymetrix, Agilent, and Illumina platforms by hybridizing the well-characterized genome of 1000 Genomes Project subject NA12878 to all arrays, and performing data analysis using both manufacturer-recommended and platform-independent software. We benchmarked the resulting CNV call sets from each array using a gold standard set of CNVs for this genome derived from 1000 Genomes Project whole genome sequencing data. The arrays tested comprise both SNP and aCGH platforms with varying designs and contain between ~0.5 to ~4.6 million probes. Across the arrays CNV detection varied widely in number of CNV calls (4-489), CNV size range (~40 bp to ~8 Mbp), and percentage of non-validated CNVs (0-86%). We discovered strikingly strong effects of specific array design principles on performance. For example, some SNP array designs with the largest numbers of probes and extensive exonic coverage produced a considerable number of CNV calls that could not be validated, compared to designs with probe numbers that are sometimes an order of magnitude smaller. This effect was only partially ameliorated using different analysis software and optimizing data analysis parameters. High-resolution microarrays will continue to be used as reliable, cost- and time-efficient tools for CNV analysis. However, different applications tolerate different limitations in CNV detection. Our study quantified how these arrays differ in total number and size range of detected CNVs as well as sensitivity, and determined how each array balances these attributes. This analysis will inform appropriate array selection for future CNV studies, and allow better assessment of the CNV-analytical power of both published and ongoing array-based genomics studies. Furthermore, our findings emphasize the importance of concurrent use of multiple analysis algorithms and independent experimental validation in array-based CNV detection studies.
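Benchmarking CNV calls against a gold standard, as done above, typically uses a reciprocal-overlap criterion; a hedged sketch follows (the 50% threshold and interval data are illustrative, not the authors' exact pipeline):

    def reciprocal_overlap(call, truth, threshold=0.5):
        """True if the overlap covers >= threshold of BOTH intervals."""
        start = max(call[0], truth[0])
        end = min(call[1], truth[1])
        overlap = max(0, end - start)
        return (overlap >= threshold * (call[1] - call[0]) and
                overlap >= threshold * (truth[1] - truth[0]))

    # Toy CNV intervals (start, end) on one chromosome.
    calls = [(100, 500), (1_000, 1_200), (5_000, 5_050)]
    gold = [(120, 480), (900, 1_150)]

    validated = [c for c in calls if any(reciprocal_overlap(c, g) for g in gold)]
    print(f"validated {len(validated)} of {len(calls)} calls")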
Thiolene and SIFEL-based Microfluidic Platforms for Liquid-Liquid Extraction
Goyal, Sachit; Desai, Amit V.; Lewis, Robert W.; Ranganathan, David R.; Li, Hairong; Zeng, Dexing; Reichert, David E.; Kenis, Paul J.A.
2014-01-01
Microfluidic platforms provide several advantages for liquid-liquid extraction (LLE) processes over conventional methods, for example with respect to lower consumption of solvents and enhanced extraction efficiencies due to the inherent shorter diffusional distances. Here, we report the development of polymer-based parallel-flow microfluidic platforms for LLE. To date, parallel-flow microfluidic platforms have predominantly been made out of silicon or glass due to their compatibility with most organic solvents used for LLE. Fabrication of silicon and glass-based LLE platforms typically requires extensive use of photolithography, plasma or laser-based etching, high temperature (anodic) bonding, and/or wet etching with KOH or HF solutions. In contrast, polymeric microfluidic platforms can be fabricated using less involved processes, typically photolithography in combination with replica molding, hot embossing, and/or bonding at much lower temperatures. Here we report the fabrication and testing of microfluidic LLE platforms comprised of thiolene or a perfluoropolyether-based material, SIFEL, where the choice of materials was mainly guided by the need for solvent compatibility and fabrication amenability. Suitable designs for polymer-based LLE platforms that maximize extraction efficiencies within the constraints of the fabrication methods and feasible operational conditions were obtained using analytical modeling. To optimize the performance of the polymer-based LLE platforms, we systematically studied the effect of surface functionalization and of microstructures on the stability of the liquid-liquid interface and on the ability to separate the phases. As demonstrative examples, we report (i) a thiolene-based platform to determine the lipophilicity of caffeine, and (ii) a SIFEL-based platform to extract radioactive copper from an acidic aqueous solution. PMID:25246730
Design Strategy for a Formally Verified Reliable Computing Platform
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.
1991-01-01
This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
Task Assignment Heuristics for Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)
2001-01-01
CFD applications require high-performance computational platforms: (1) complex physics and domain configurations demand strongly coupled solutions; (2) applications are CPU and memory intensive; and (3) huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.
Development of Double-Pulsed Two-Micron Laser for Atmospheric Carbon Dioxide Measurements
NASA Technical Reports Server (NTRS)
Petros, Mulugeta; Singh, Upendra N.; Yu, Jirong; Refaat, Tamer F.
2017-01-01
A CO2 lidar double-pulse two-micron high-energy transmitter, tuned to on- and off-line absorption wavelengths, has been developed. Transmitter operation and performance have been verified on ground and airborne platforms.
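For context, an on/off-line double-pulse transmitter of this kind supports the standard differential-absorption lidar (DIAL) retrieval; in its textbook form (quoted as background, not from the abstract) the molecular number density between ranges r_1 and r_2 is

    N = \frac{1}{2\,\Delta\sigma\,(r_2 - r_1)}
        \ln\!\left[\frac{P_{\mathrm{off}}(r_2)\, P_{\mathrm{on}}(r_1)}
                        {P_{\mathrm{on}}(r_2)\, P_{\mathrm{off}}(r_1)}\right]

where \Delta\sigma is the on/off-line absorption cross-section difference and P_{\mathrm{on}}, P_{\mathrm{off}} are the received powers at the two wavelengths.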
Helicopter Flight Simulation Motion Platform Requirements
NASA Technical Reports Server (NTRS)
Schroeder, Jeffery Allyn
1999-01-01
To determine motion fidelity requirements, a series of piloted simulations was performed. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositioning. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
2015-01-01
A hybrid microchip/capillary electrophoresis (CE) system was developed to allow unbiased and lossless sample loading and high-throughput repeated injections. This new hybrid CE system consists of a poly(dimethylsiloxane) (PDMS) microchip sample injector featuring a pneumatic microvalve that separates a sample introduction channel from a short sample loading channel, and a fused-silica capillary separation column that connects seamlessly to the sample loading channel. The sample introduction channel is pressurized such that when the pneumatic microvalve opens briefly, a variable-volume sample plug is introduced into the loading channel. A high voltage for CE separation is continuously applied across the loading channel and the fused-silica capillary separation column. Analytes are rapidly separated in the fused-silica capillary, and following separation, high-sensitivity MS detection is accomplished via a sheathless CE/ESI-MS interface. The performance evaluation of the complete CE/ESI-MS platform demonstrated that reproducible sample injection with well controlled sample plug volumes could be achieved by using the PDMS microchip injector. The absence of band broadening from microchip to capillary indicated a minimum dead volume at the junction. The capabilities of the new CE/ESI-MS platform in performing high-throughput and quantitative sample analyses were demonstrated by the repeated sample injection without interrupting an ongoing separation and a linear dependence of the total analyte ion abundance on the sample plug volume using a mixture of peptide standards. The separation efficiency of the new platform was also evaluated systematically at different sample injection times, flow rates, and CE separation voltages. PMID:24865952
The AstroSat Production Line: From AstroSat 100 to AstroSat 1000
NASA Astrophysics Data System (ADS)
Maliet, E.; Pawlak, D.; Koeck, C.; Beaufumé, E.
2008-08-01
From the late 90s onward, Astrium Satellites has developed and improved several classes of high-resolution optical Earth observation satellites. The resulting product line ranges from micro-satellites (about 120 kg) to large satellites (in the range of 1 200 kg). They all make use of state-of-the-art technologies for optical payloads as well as for avionics. Several classes of platforms have thus been defined and standardised: AstroSat 100 for satellites up to 150 kg, allowing affordable but fully operational missions; AstroSat 500 for satellites up to 800 kg, allowing complex high-resolution missions; and AstroSat 1000 for satellites up to 1 200 kg, providing very high resolution and outstanding imaging and agility capabilities. A new class, AstroSat 250, has been developed by Astrium Satellites and is now proposed, offering a state-of-the-art 3-axis agile platform for high-resolution missions with a launch mass below 550 kg. The AstroSat platforms rely on a centralised avionics architecture based on an innovative AOCS hybridising measurements from GPS, stellar sensors, and an inertial reference unit. Operational safety has been emphasised through thruster-free safe modes. All optical payloads make use of all-Silicon Carbide (SiC) telescopes. High-performance, low-consumption linear CCD arrays provide state-of-the-art images. The satellites are designed for simple flight operations, large data collection capability, and great versatility of payloads and missions. They are adaptable to a large range of performance requirements. Astrium satellites have already been selected by various customers worldwide.
LOLA: a 40.000 km optical link between an aircraft and a geostationary satellite
NASA Astrophysics Data System (ADS)
Cazaubiel, Vincent; Planche, Gilles; Chorvalli, Vincent; Le Hors, Lénaïc.; Roy, Bernard; Giraud, Emmanuel; Vaillon, Ludovic; Carre, Francois; Decourbey, Eric
2017-11-01
The LOLA program aims at characterising a 40.000 km optical link through the atmosphere between a high-altitude aircraft and a geostationary platform. It opens a new area in the field of optical communications with moving platforms. A completely new optical terminal has been designed and manufactured for this program. The optical terminal architecture includes a specific pointing subsystem to acquire and stabilize the line of sight despite the vibrations induced by the aircraft and the motion of the received laser signal. The optical configuration features a silicon carbide telescope and optical bench to ensure high thermoelastic angular stability between the receive and transmit beams. The communications subsystem includes fibered laser diodes developed in Europe and high-performance avalanche photodetectors. Specific encoding patterns are used to maintain the performance of the link despite potential strong fading of the signal. A specific model of the optical link through the atmosphere has been developed and validated using optical link measurements performed between ARTEMIS and the Optical Ground Station located in the Canary Islands. This model will be used during the flight test campaign that is to start this summer.
Talboom-Kamp, Esther Pwa; Verdijk, Noortje A; Kasteleyn, Marise J; Harmans, Lara M; Talboom, Irvin Jsh; Numans, Mattijs E; Chavannes, Niels H
2017-05-31
Worldwide, nearly 3 million people die of chronic obstructive pulmonary disease (COPD) every year. Integrated disease management (IDM) improves disease-specific quality of life and exercise capacity for people with COPD, and can also reduce hospital admissions and hospital days. Self-management of COPD through eHealth interventions has been shown to be an effective method to improve the quality and efficiency of IDM in several settings, but it remains unknown which factors influence patients' usage of eHealth and changes in their behavior. Our study, e-Vita COPD, compares different levels of integration of Web-based self-management platforms in IDM in three primary care settings. The main aim of this study is to analyze the factors that successfully promote the use of a self-management platform for COPD patients. The e-Vita COPD study compares three different approaches to incorporating eHealth via Web-based self-management platforms into the IDM of COPD using a parallel cohort design. Three groups integrated the platforms to different levels. In groups 1 (high integration) and 2 (medium integration), randomization was performed to two levels of personal assistance for patients (high and low assistance); in group 3 there was no integration into disease management (no integration). Every visit to the e-Vita and Zorgdraad COPD Web platforms was tracked objectively by collecting log data (sessions and services). At the first log-in, patients completed a baseline questionnaire. Baseline characteristics were automatically extracted from the log files, including age, gender, education level, and scores on the Clinical COPD Questionnaire (CCQ), dyspnea scale (MRC), and quality of life questionnaire (EQ5D). To predict the use of the platforms, multiple linear regression analyses were performed for the different independent variables: integration in IDM (high, medium, none), personal assistance for the participants (high vs low), educational level, and self-efficacy level (General Self-Efficacy Scale [GSES]). All analyses were adjusted for age and gender. Of the 702 invited COPD patients, 215 (30.6%) registered to a platform. Of the 82 patients in group 1 (high integration IDM), 36 were in group 1A (personal assistance) and 46 in group 1B (low assistance). Of the 96 patients in group 2 (medium integration IDM), 44 were in group 2A (telephone assistance) and 52 in group 2B (low assistance). A total of 37 patients participated in group 3 (no integration IDM). In all, 107 users (49.8%) visited the platform at least once in the 15-month period. The mean number of sessions differed between the three groups (group 1: mean 10.5, SD 1.3; group 2: mean 8.8, SD 1.4; group 3: mean 3.7, SD 1.8; P=.01). The mean number of sessions differed between the high-assistance and low-assistance groups in groups 1 and 2 (high: mean 11.8, SD 1.3; low: mean 6.7, SD 1.4; F1,80=6.55, P=.01). High-assistance participants used more services (mean 45.4, SD 6.2) than low-assistance participants (mean 21.2, SD 6.8; F1,80=6.82, P=.01). No association was found between educational level and usage, or between GSES and usage. Use of a self-management platform is higher when participants receive adequate personal assistance in how to use the platform. Blended care, in which digital health and usual care are integrated, will likely lead to increased use of the online program. Future research should provide additional insights into the preferences of different patient groups.
Nederlands Trial Register NTR4098; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4098 (Archived by WebCite at http://www.webcitation.org/6qO1hqiJ1).
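As a hedged sketch of the regression analysis described above (not the study's actual code or data), the following fits a multiple linear regression of session counts on integration level, assistance level, education, and GSES score, adjusted for age and gender; the synthetic data frame and column names are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the platform log data; only the analysis
# structure mirrors the study design, none of the values do.
rng = np.random.default_rng(0)
n = 215
df = pd.DataFrame({
    "sessions": rng.poisson(8, n),
    "integration": rng.choice(["high", "medium", "none"], n),
    "assistance": rng.choice(["high", "low"], n),
    "education": rng.choice(["low", "middle", "high"], n),
    "gses": rng.normal(30, 5, n),
    "age": rng.normal(68, 9, n),
    "gender": rng.choice(["m", "f"], n),
})

# Multiple linear regression with categorical predictors, adjusted for
# age and gender, as in the analysis described above.
model = smf.ols(
    "sessions ~ C(integration) + C(assistance) + C(education)"
    " + gses + age + C(gender)", data=df).fit()
print(model.summary())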
Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations
2017-05-08
Briefing charts, 5 April 2017 - 8 May 2017. ERC Incorporated, RQRC, AFRL-West. Includes a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz) as an example of liquid rocket combustion simulation. Distribution A: approved for public release.
Boron dipyrromethene (BODIPY) functionalized carbon nano-onions for high resolution cellular imaging
NASA Astrophysics Data System (ADS)
Bartelmess, Juergen; de Luca, Elisa; Signorelli, Angelo; Baldrighi, Michele; Becce, Michele; Brescia, Rosaria; Nardone, Valentina; Parisini, Emilio; Echegoyen, Luis; Pompa, Pier Paolo; Giordani, Silvia
2014-10-01
Carbon nano-onions (CNOs) are an exciting class of carbon nanomaterials, which have recently demonstrated a facile cell-penetration capability. In the present work, highly fluorescent boron dipyrromethene (BODIPY) dyes were covalently attached to the surface of CNOs. The introduction of this new carbon nanomaterial-based imaging platform, made of CNOs and BODIPY fluorophores, allows for the exploration of synergetic effects between the two building blocks and for the elucidation of its performance in biological applications. The high fluorescence intensity exhibited by the functionalized CNOs translates into an excellent in vitro probe for the high resolution imaging of MCF-7 human breast cancer cells. It was also found that the CNOs, internalized by the cells by endocytosis, localized in the lysosomes and did not show any cytotoxic effects. The presented results highlight CNOs as excellent platforms for biological and biomedical studies due to their low toxicity, efficient cellular uptake and low fluorescence quenching of attached probes. Electronic supplementary information (ESI) available: additional experimental and crystallographic data, additional confocal microscopy and HR-TEM images and illustrations, EELS, TGA, DLS and Z-potential results, and Movie M1. See DOI: 10.1039/c4nr04533e
Novel remote sensor systems: design, prototyping, and characterization
NASA Astrophysics Data System (ADS)
Kayastha, V.; Gibbons, S.; Lamb, J. E.; Giedd, R. E.
2014-06-01
We have designed and tested a prototype TRL4 radio-frequency (RF) sensing platform containing a transceiver that interrogates a passive carbon nanotube (CNT)-based sensor platform. The transceiver can be interfaced to a server technology such as a Bluetooth® or Wi-Fi device for further connectivity. The novelty of a very-low-frequency (VLF) implementation in the transceiver design will ultimately enable deep penetration into the ground or metal structures to communicate with buried sensing platforms. The sensor platform generally consists of printed electronic devices made of CNTs on flexible poly(ethylene terephthalate) (PET) and Kapton® substrates. This novel remote sensing system can be integrated with both passive and active sensing platforms. It offers unique characteristics suitable for a variety of sensing applications. The proposed sensing platforms can take on different form factors and the RF output of the sensing platforms could be modulated by humidity, temperature, pressure, strain, or vibration signals. Resonant structures were designed and constructed to operate in the very-high-frequency (VHF) and VLF ranges. In this presentation, we will report results of our continued effort to develop a commercially viable transceiver capable of interrogating the conformally mounted sensing platforms made from CNTs or silver-based nanomaterials on polyimide substrates over a broad range of frequencies. The overall performance of the sensing system with different sensing elements and at different frequency ranges will be discussed.
Yarkoni, Tal
2012-01-01
Traditional pre-publication peer review of scientific output is a slow, inefficient, and unreliable process. Efforts to replace or supplement traditional evaluation models with open evaluation platforms that leverage advances in information technology are slowly gaining traction, but remain in the early stages of design and implementation. Here I discuss a number of considerations relevant to the development of such platforms. I focus particular attention on three core elements that next-generation evaluation platforms should strive to emphasize, including (1) open and transparent access to accumulated evaluation data, (2) personalized and highly customizable performance metrics, and (3) appropriate short-term incentivization of the userbase. Because all of these elements have already been successfully implemented on a large scale in hundreds of existing social web applications, I argue that development of new scientific evaluation platforms should proceed largely by adapting existing techniques rather than engineering entirely new evaluation mechanisms. Successful implementation of open evaluation platforms has the potential to substantially advance both the pace and the quality of scientific publication and evaluation, and the scientific community has a vested interest in shifting toward such models as soon as possible. PMID:23060783
An Application Development Platform for Neuromorphic Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, Mark; Chan, Jason; Daffron, Christopher
2016-01-01
Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.
NASA Technical Reports Server (NTRS)
Ollendorf, S.; Fowle, A.; Almgren, D.
1981-01-01
A system utilizing a pumped, two-phase, single-component working fluid for the heat exchange and transport services necessary to meet the temperature control requirements of typical orbiting instrument payloads on space platforms is described. The design characteristics of the system are presented, together with a laboratory apparatus for proof-of-concept demonstration. Results indicate that the pumped two-phase design concept can meet a wide range of thermal performance requirements, the only penalty being the requirement for a small liquid pump.
A Multicenter Study To Evaluate the Performance of High-Throughput Sequencing for Virus Detection
Ng, Siemon H. S.; Vandeputte, Olivier; Aljanahi, Aisha; Deyati, Avisek; Cassart, Jean-Pol; Charlebois, Robert L.; Taliaferro, Lanyn P.
2017-01-01
The capability of high-throughput sequencing (HTS) for detection of known and unknown viruses makes it a powerful tool for broad microbial investigations, such as evaluation of novel cell substrates that may be used for the development of new biological products. However, like any new assay, regulatory applications of HTS need method standardization. Therefore, our three laboratories initiated a study to evaluate performance of HTS for potential detection of viral adventitious agents by spiking model viruses in different cellular matrices to mimic putative materials for manufacturing of biologics. Four model viruses were selected based upon different physical and biochemical properties and commercial availability: human respiratory syncytial virus (RSV), Epstein-Barr virus (EBV), feline leukemia virus (FeLV), and human reovirus (REO). Additionally, porcine circovirus (PCV) was tested by one laboratory. Independent samples were prepared for HTS by spiking intact viruses or extracted viral nucleic acids, singly or mixed, into different HeLa cell matrices (resuspended whole cells, cell lysate, or total cellular RNA). Data were obtained using different sequencing platforms (Roche 454, Illumina HiSeq1500 or HiSeq2500). Bioinformatic analyses were performed independently by each laboratory using available tools, pipelines, and databases. The results showed that comparable virus detection was obtained in the three laboratories regardless of sample processing, library preparation, sequencing platform, and bioinformatic analysis: between 0.1 and 3 viral genome copies per cell were detected for all of the model viruses used. This study highlights the potential for using HTS for sensitive detection of adventitious viruses in complex biological samples containing cellular background. IMPORTANCE Recent high-throughput sequencing (HTS) investigations have resulted in unexpected discoveries of known and novel viruses in a variety of sample types, including research materials, clinical materials, and biological products. Therefore, HTS can be a powerful tool for supplementing current methods for demonstrating the absence of adventitious or unwanted viruses in biological products, particularly when using a new cell line. However, HTS is a complex technology with different platforms, which needs standardization for evaluation of biologics. This collaborative study was undertaken to investigate detection of different virus types using two different HTS platforms. The results of the independently performed studies demonstrated a similar sensitivity of virus detection, regardless of the different sample preparation and processing procedures and bioinformatic analyses done in the three laboratories. Comparable HTS detection of different virus types supports future development of reference virus materials for standardization and validation of different HTS platforms. PMID:28932815
Development of fast wireless detection system for fixed offshore platform
NASA Astrophysics Data System (ADS)
Li, Zhigang; Yu, Yan; Jiao, Dong; Wang, Jie; Li, Zhirui; Ou, Jinping
2011-04-01
The security of offshore platforms has been a concern since the 1950s and 1960s, and in the early 1980s important specifications and standards were established that provide the technical basis for fixed-platform design, construction, installation, and evaluation. As more and more platforms serve beyond their design life, research on evaluation and detection technology for offshore platforms has become a focus, especially underwater detection and assessment methods based on finite element calculation. For fixed platform structure detection, conventional NDT methods such as eddy current, magnetic particle, penetrant, X-ray, and ultrasonic testing are generally used. These techniques are mature and intuitive, but underwater detection requires an underwater robot, the necessary supporting auxiliary equipment, and a trained professional team; the resources and cost involved are considerable, and installation of test equipment is time-consuming. This project presents a new fast wireless detection and damage diagnosis system for fixed offshore platforms using wireless sensor networks: wireless sensor nodes can be deployed quickly on the offshore platform, the global status of the platform structure is detected via wireless communication, and a diagnosis is then made. The system is simple to operate and suitable for rapid assessment of offshore platform integrity. The designed system consists of an intelligent acquisition unit and 8 wireless collection nodes; the whole system has 64 collection channels, i.e., every wireless collection node has eight 16-bit A/D channels. Each wireless collection node integrates a vibration sensing unit, an embedded low-power microprocessing unit, a wireless transceiver unit, a large-capacity power unit, and a GPS time synchronization unit, and performs vibration data collection, initial analysis, data storage, and wireless data transmission. The intelligent acquisition unit integrates a high-performance computation unit, a wireless transceiver unit, a mobile power unit, and embedded data analysis software; it controls the wireless collection nodes, receives and analyzes data, and performs parameter identification. Data are transmitted over a 2.4 GHz wireless channel; each sensing data channel responsible for data transmission occupies a stable frequency band, while a control channel responsible for controlling power parameters occupies a public frequency band. Initial tests of the designed system show that it has good application prospects and practical value, with fast deployment, high sampling rate, high resolution, and low-frequency detection capability.
2016-11-10
A heavy-lift crane lifts the second half of the C-level work platforms, C north, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle of the Vehicle Assembly Building (VAB) at NASA's Kennedy Space Center in Florida. The C platform will be moved into High Bay 3 for installation on the north side of High Bay 3. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. In view below Platform C are several of the previously installed platforms. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
NASA Astrophysics Data System (ADS)
Wang, Rui
It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting multiple fault containment regions nearly simultaneously, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.
Fly's Eye camera system: optical imaging using a hexapod platform
NASA Astrophysics Data System (ADS)
Jaskó, Attila; Pál, András; Vida, Krisztián; Mészáros, László; Csépány, Gergely; Mező, György
2014-07-01
The Fly's Eye Project is a high resolution, high coverage time-domain survey in multiple optical passbands: our goal is to cover the entire visible sky above the 30° horizontal altitude with a cadence of ~3 min. Imaging will be performed by 19 wide-field cameras mounted on a hexapod platform resembling a fly's eye. Using a hexapod developed and built by our team allows us to create a highly fault-tolerant instrument that uses the sky as a reference to define its own tracking motion. The virtual axis of the platform is automatically aligned with the Earth's rotational axis; therefore the same mechanics can be used independently of the geographical location of the device. Its enclosure makes it capable of autonomous observing and withstanding harsh environmental conditions. We briefly introduce the electrical, mechanical and optical design concepts of the instrument and summarize our early results, focusing on sidereal tracking. Because the hexapod design makes the construction independent of the actual location, it is considerably easier to build, install, and operate a network of such devices around the world.
Current Status of a NASA High-Altitude Balloon-Based Observatory for Planetary Science
NASA Technical Reports Server (NTRS)
Varga, Denise M.; Dischner, Zach
2015-01-01
Recent studies have shown that progress can be made on over 20% of the key questions called out in the current Planetary Science Decadal Survey by a high-altitude balloon-borne observatory. Therefore, NASA has been assessing concepts for a gondola-based observatory that would achieve the greatest possible science return in a low-risk and cost-effective manner. This paper addresses results from the 2014 Balloon Observation Platform for Planetary Science (BOPPS) mission, namely successes in the design and performance of the Fine Pointing System. The paper also addresses technical challenges facing the new Gondola for High Altitude Planetary Science (GHAPS) reusable platform, including thermal control for the Optical Telescope Assembly, power generation and management, and weight-saving considerations that the team will be assessing in 2015 and beyond.
Application of tissue mesodissection to molecular cancer diagnostics.
Krizman, David; Adey, Nils; Parry, Robert
2015-02-01
To demonstrate clinical application of a mesodissection platform developed to combine the advantages of laser-based instrumentation with the speed and ease of manual dissection for automated dissection of tissue off standard glass slides. Genomic analysis for KRAS gene mutation was performed on formalin-fixed paraffin-embedded (FFPE) cancer patient tissue dissected using the mesodissection platform. Selected reaction monitoring proteomic analysis for quantitative Her2 protein expression was performed on FFPE patient tumour tissue dissected by a laser-based instrument and by the MilliSect instrument. Genomic analysis demonstrated highly confident detection of KRAS mutation specifically in lung cancer cells and not in the surrounding benign, non-tumour tissue. Proteomic analysis demonstrated quantitative Her2 protein expression in breast cancer cells dissected manually, by laser-based instrumentation, and by MilliSect instrumentation (mesodissection). Slide-mounted tissue dissection is commonly performed using laser-based instruments or by manually scraping tissue with a scalpel. Here we demonstrate that the mesodissection platform, as performed by the MilliSect instrument, is cost-effective, functions comparably to laser-based dissection, and can be adopted into a clinical diagnostic workflow.
High performance, durable polymers including poly(phenylene)
Fujimoto, Cy; Pratt, Harry; Anderson, Travis Mark
2017-02-28
The present invention relates to functionalized polymers including a poly(phenylene) structure. In some embodiments, the polymers and copolymers of the invention include a highly localized concentration of acidic moieties, which facilitate proton transport and conduction through networks formed from these polymers. In addition, the polymers can include functional moieties, such as electron-withdrawing moieties, to protect the polymeric backbone, thereby extending its durability. Such enhanced proton transport and durability can be beneficial for any high performance platform that employs proton exchange polymeric membranes, such as in fuel cells or flow batteries.
NASA Astrophysics Data System (ADS)
Franchetti, Franz; Sandryhaila, Aliaksei; Johnson, Jeremy R.
2014-06-01
In this paper we introduce High Assurance SPIRAL to solve the last mile problem for the synthesis of high assurance implementations of controllers for vehicular systems that are executed in today's and future embedded and high performance embedded system processors. High Assurance SPIRAL is a scalable methodology to translate a high level specification of a high assurance controller into a highly resource-efficient, platform-adapted, verified control software implementation for a given platform in a language like C or C++. High Assurance SPIRAL proves that the implementation is equivalent to the specification written in the control engineer's domain language. Our approach scales to problems involving floating-point calculations and provides highly optimized synthesized code. It is possible to estimate the available headroom to enable assurance/performance trade-offs under real-time constraints and to synthesize multiple implementation variants to make attacks harder. At the core of High Assurance SPIRAL is the Hybrid Control Operator Language (HCOL), which leverages advanced mathematical constructs expressing the controller specification to provide high quality translation capabilities. Combined with a verified/certified compiler, High Assurance SPIRAL provides a complete solution to the efficient synthesis of verifiable high assurance controllers. We demonstrate High Assurance SPIRAL's capability by co-synthesizing proofs and implementations for attack detection and sensor spoofing algorithms and deploy the code as ROS nodes on the Landshark unmanned ground vehicle and on a Synthetic Car in a real-time simulator.
A neurorobotic platform for locomotor prosthetic development in rats and mice
NASA Astrophysics Data System (ADS)
von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire
2016-04-01
Objectives. We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. Approach. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Main Results. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. Significance. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
Preliminary design of axial turbine discs for aircraft engines (Conception préliminaire de disques de turbine axiale pour moteurs d'aéronefs)
NASA Astrophysics Data System (ADS)
Ouellet, Yannick
The preliminary design phase of a turbine rotor has an important impact on the architecture of a new engine definition, as it sets the technical orientation right from the start and provides a good estimate of product performance, weight and cost. In addition, the execution speed of this preliminary phase has become critical to capturing business opportunities. Improving upfront accuracy also alleviates downstream detailed design work and therefore reduces overall product development cycle time. This preliminary phase contains elements that slow down the process, including low interoperability of currently used systems, incompatibility of software and ineffective management of data. In order to overcome these barriers, we have developed the first module of a new Design and Analysis (D&A) platform for the rotor disc. This complete platform integrates different tools running in batch mode, and is driven from a single graphical user interface. The platform has been linked with different optimization methods (algorithms, configuration) in order to automate the disc design and propose best practices for rotor structural optimization. This methodology allowed a reduction in design cycle time and an improvement in performance. It was applied to two reference P&WC axial discs. The platform's architecture was also used in the development of reference charts to better understand disc performance within a given design space. Four high pressure rotor discs of P&WC turbofan and turboprop engines were used to generate the technical charts and understand the effect of various parameters. The new tools supporting disc D&A, combined with the optimization process and reference charts, have proven to be profitable in terms of component performance and engineering effort.
The BioMedical Evidence Graph (BMEG)
The BMEG is a cancer data integration platform that utilizes methods collected from DREAM challenges, applies them to large datasets such as TCGA, and makes them available for analysis using a high-performance graph database.
ERIC Educational Resources Information Center
Blaser, Mark; Larsen, Jamie
1996-01-01
Presents five interactive, computer-based activities that mimic scientific tests used by sport researchers to help companies design high-performance athletic shoes, including impact tests, flexion tests, friction tests, video analysis, and computer modeling. Provides a platform for teachers to build connections between chemistry (polymer science),…
Performance calculation and simulation system of high energy laser weapon
NASA Astrophysics Data System (ADS)
Wang, Pei; Liu, Min; Su, Yu; Zhang, Ke
2014-12-01
High energy laser weapons are ready for some of today's most challenging military applications. Based on an analysis of the main tactical/technical indices and the engagement process of high energy laser weapons, a performance calculation and simulation system for high energy laser weapons was established. First, the index decomposition and workflow of the high energy laser weapon were defined. The entire system is composed of six parts, comprising the classical target, laser weapon platform, detection sensor, tracking and pointing control, laser atmosphere propagation, and damage assessment modules. Then, the index calculation modules were designed. Finally, an anti-missile interception simulation was performed. The system can provide a reference and basis for analyzing and evaluating the effectiveness of high energy laser weapons.
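As a hedged illustration of what the laser atmosphere propagation module in such a system might compute (an assumption, not the paper's model), the sketch below combines Beer-Lambert extinction with a diffraction-limited spot size to estimate irradiance on target; all parameter values are made up.

import numpy as np

def delivered_irradiance(P0, wavelength, aperture_d, rng, alpha):
    """Irradiance (W/m^2) on a target at range rng (m).

    P0          -- transmitted power (W)
    wavelength  -- laser wavelength (m)
    aperture_d  -- transmitter aperture diameter (m)
    rng         -- range to target (m)
    alpha       -- atmospheric extinction coefficient (1/m)
    """
    # Beer-Lambert attenuation along the propagation path
    P = P0 * np.exp(-alpha * rng)
    # Diffraction-limited spot radius ~ 1.22 * lambda * R / D
    spot_r = 1.22 * wavelength * rng / aperture_d
    return P / (np.pi * spot_r**2)

# Example: 100 kW at 1.06 um, 0.5 m aperture, 5 km range, alpha = 0.1 /km
print(delivered_irradiance(1e5, 1.06e-6, 0.5, 5e3, 1e-4))  # ~1e8 W/m^2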
Ultrahigh-sensitive sensing platform based on p-type dumbbell-like Co3O4 network
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Zhang, Tong; Zhang, Rui; Lou, Zheng; Deng, Jianan; Wang, Lili
2017-12-01
Development of high-performance room temperature sensors remains a grand challenge for practical applications. Metal oxide semiconductors (MOSs) have many advantages over other materials due to their easy functionalization, high surface area, and low cost. However, they typically require a high operating temperature during sensing. Here, a p-type sensing layer is reported, consisting of pore-rich dumbbell-like Co3O4 particles (DP-Co3O4) with intrinsically high catalytic activity. The gas sensor (GS) based on the DP-Co3O4 catalyst exhibits ultrahigh NH3 sensing activity at room temperature, along with excellent stability compared with NH3 GSs based on other structures. In addition, the unique pore-rich structure and high catalytic activity of DP-Co3O4 provide a fast gas diffusion rate and high sensitivity at room temperature. Taken together, the findings in this work highlight the merit of integrating highly active materials into p-type materials, offering a framework for developing high-sensitivity room-temperature sensing platforms.
Interactive Supercomputing’s Star-P Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelman, Alan; Husbands, Parry; Leibman, Steve
2006-09-19
The thesis of this extended abstract is simple. High productivity comes from high level infrastructures. To measure this, we introduce a methodology that goes beyond the tradition of timing software in serial and tuned parallel modes. We perform a classroom productivity study involving 29 students who have written a homework exercise in a low level language (MPI message passing) and a high level language (Star-P with MATLAB client). Our conclusions indicate what perhaps should be of little surprise: (1) the high level language is always far easier on the students than the low level language. (2) The early versions of the high level language perform inadequately compared to the tuned low level language, but later versions substantially catch up. Asymptotically, the analogy must hold that message passing is to high level language parallel programming as assembler is to high level environments such as MATLAB, Mathematica, Maple, or even Python. We follow the Kepner method that correctly realizes that traditional speedup numbers without some discussion of the human cost of reaching these numbers can fail to reflect the true human productivity cost of high performance computing. Traditional data compares low level message passing with serial computation. With the benefit of a high level language system in place, in our case Star-P running with MATLAB client, and with the benefit of a large data pool: 29 students, each running the same code ten times on three evolutions of the same platform, we can methodically demonstrate the productivity gains. To date we are not aware of any high level system as extensive and interoperable as Star-P, nor are we aware of an experiment of this kind performed with this volume of data.
Li, Zhijun; Munro, Kim; Narouz, Mina R; Lau, Andrew; Hao, Hongxia; Crudden, Cathleen M; Horton, J Hugh
2018-05-30
Sensor surfaces play a predominant role in the development of optical biosensor technologies for the analysis of biomolecular interactions. Thiol-based self-assembled monolayers (SAMs) on gold have been widely used as linker layers for sensor surfaces. However, the degradation of the thiol-gold bond can limit the performance and durability of such surfaces, directly impacting their performance and cost-effectiveness. To this end, a new family of materials based on N-heterocyclic carbenes (NHCs) has emerged as an alternative for surface modification, capable of self-assembling onto a gold surface with higher affinity and superior stability as compared to the thiol-based systems. Here we demonstrate three applications of NHC SAMs supporting a dextran layer as a tunable platform for developing various affinity-capture biosensor surfaces. We describe the development and testing of NHC-based dextran biosensor surfaces modified with each of streptavidin, nitrilotriacetic acid, and recombinant Protein A. These affinity-capture sensor surfaces enable oriented binding of ligands for optimal performance in biomolecular assays. Together, the intrinsic high stability and flexible design of the NHC biosensing platforms show great promise and open up exciting possibilities for future biosensing applications.
Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures
NASA Astrophysics Data System (ADS)
Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.
2016-12-01
The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which offers several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
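For orientation, a minimal vectorized k-means is sketched below; this is an illustration only, not the authors' accelerated, SIMD-optimized implementation, though the single-expression distance computation hints at why the kernel maps well onto wide SIMD lanes.

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Pairwise squared distances in one BLAS-friendly expression:
        # ||x||^2 - 2 x.c + ||c||^2, which vectorizes well on wide SIMD
        d2 = (X**2).sum(1)[:, None] - 2 * X @ centers.T + (centers**2).sum(1)
        labels = d2.argmin(1)
        # Recompute centroids; keep the old center if a cluster empties
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

X = np.random.default_rng(1).normal(size=(10000, 4))
centers, labels = kmeans(X, k=8)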
An integrated low phase noise radiation-pressure-driven optomechanical oscillator chipset
Luan, Xingsheng; Huang, Yongjun; Li, Ying; McMillan, James F.; Zheng, Jiangjun; Huang, Shu-Wei; Hsieh, Pin-Chun; Gu, Tingyi; Wang, Di; Hati, Archita; Howe, David A.; Wen, Guangjun; Yu, Mingbin; Lo, Guoqiang; Kwong, Dim-Lee; Wong, Chee Wei
2014-01-01
High-quality frequency references are the cornerstones in position, navigation and timing applications of both scientific and commercial domains. Optomechanical oscillators, with direct coupling to continuous-wave light and non-material-limited f × Q product, are long regarded as a potential platform for frequency reference in radio-frequency-photonic architectures. However, one major challenge is the compatibility with standard CMOS fabrication processes while maintaining optomechanical high quality performance. Here we demonstrate the monolithic integration of photonic crystal optomechanical oscillators and on-chip high speed Ge detectors based on the silicon CMOS platform. With the generation of both high harmonics (up to 59th order) and subharmonics (down to 1/4), our chipset provides multiple frequency tones for applications in both frequency multipliers and dividers. The phase noise is measured down to −125 dBc/Hz at 10 kHz offset at ~400 μW dropped-in powers, one of the lowest noise optomechanical oscillators to date and in room-temperature and atmospheric non-vacuum operating conditions. These characteristics enable optomechanical oscillators as a frequency reference platform for radio-frequency-photonic information processing. PMID:25354711
Lens-free shadow image based high-throughput continuous cell monitoring technique.
Jin, Geonsoo; Yoo, In-Hwa; Pack, Seung Pil; Yang, Ji-Woon; Ha, Un-Hwan; Paek, Se-Hwan; Seo, Sungkyu
2012-01-01
A high-throughput continuous cell monitoring technique which does not require any labeling reagents or destruction of the specimen is demonstrated. More than 6000 human alveolar epithelial A549 cells are monitored for up to 72 h simultaneously and continuously with a single digital image within a cost- and space-effective lens-free shadow imaging platform. In an experiment performed within a custom-built incubator integrated with the lens-free shadow imaging platform, the cell nucleus division process could be successfully characterized by calculating the signal-to-noise ratios (SNRs) and the shadow diameters (SDs) of the cell shadow patterns. The versatile nature of this platform also enabled a single cell viability test followed by live cell counting. This study is the first to show that the lens-free shadow imaging technique can provide continuous cell monitoring without any staining/labeling reagent or destruction of the specimen. This high-throughput continuous cell monitoring technique based on lens-free shadow imaging may be widely utilized as a compact, low-cost, and high-throughput cell monitoring tool in the fields of drug and food screening or cell proliferation and viability testing.
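A hedged sketch of the two shadow-pattern metrics mentioned above, SNR and shadow diameter, is given below; the specific definitions (dB-scaled contrast-to-noise, equivalent-circle diameter at half shadow depth) are illustrative assumptions rather than the paper's exact formulas.

import numpy as np

def shadow_metrics(roi, background):
    """roi: 2-D patch containing one cell shadow; background: shadow-free patch."""
    # SNR: shadow contrast against background, relative to background noise
    snr = 20 * np.log10(abs(roi.mean() - background.mean()) / background.std())
    # SD: equivalent-circle diameter of pixels darker than half shadow depth
    depth = background.mean() - roi.min()
    mask = roi < background.mean() - 0.5 * depth
    sd = 2 * np.sqrt(mask.sum() / np.pi)   # in pixels
    return snr, sd

# Synthetic demo: a 10x10-pixel shadow, 60 counts deep, on noisy background
rng = np.random.default_rng(0)
bg = 200 + rng.normal(0, 2, (40, 40))
cell = bg.copy()
cell[15:25, 15:25] -= 60
print(shadow_metrics(cell, bg))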
Integrated Design and Implementation of Embedded Control Systems with Scilab
Ma, Longhua; Xia, Feng; Peng, Zhe
2008-01-01
Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. PMID:27873827
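As a hedged illustration of the kind of control loop such a platform executes (a generic discrete PID sketch, not the paper's Scilab code), the example below closes the loop around a simulated first-order plant; in practice the read and write calls would go through the serial, Ethernet, or Modbus drivers described above.

class Plant:
    """Simple first-order process standing in for real serial/Modbus I/O."""
    def __init__(self):
        self.y = 20.0
    def read(self):
        return self.y
    def write(self, u, dt):
        self.y += dt * (u - 0.1 * self.y)

def pid_step(err, state, kp, ki, kd, dt):
    # Discrete PID: integral accumulation plus backward-difference derivative
    state["i"] += err * dt
    d = (err - state["e"]) / dt
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d

plant, state = Plant(), {"i": 0.0, "e": 0.0}
setpoint, dt = 50.0, 0.1
for _ in range(200):                # 20 s of simulated control
    y = plant.read()                # in practice: a driver call (serial/Modbus)
    u = pid_step(setpoint - y, state, kp=2.0, ki=0.5, kd=0.05, dt=dt)
    plant.write(u, dt)
print(round(plant.read(), 1))       # settles near the setpoint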
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, long computation times still hinder wide application in routine clinical practice. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem, with all the constraints considered rigorously, using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with an NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases, respectively, a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method that enables reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
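The consensus formulation described above can be sketched in a few lines. The following toy example, a hedged illustration rather than the authors' OpenCL implementation, splits a least-squares problem across several workers (standing in for GPUs) and ties their local solutions together with the consensus constraint via scaled ADMM updates; the TV regularization term is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
n, m, workers, rho = 8, 20, 4, 1.0
x_true = rng.normal(size=n)
A = [rng.normal(size=(m, n)) for _ in range(workers)]   # projection subsets
b = [Ai @ x_true for Ai in A]

x = [np.zeros(n) for _ in range(workers)]   # local copies (one per "GPU")
u = [np.zeros(n) for _ in range(workers)]   # scaled dual variables
z = np.zeros(n)                             # consensus variable
for _ in range(50):
    for i in range(workers):                # in the paper: parallel on GPUs
        # x_i-update: (A_i^T A_i + rho I) x_i = A_i^T b_i + rho (z - u_i)
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] + rho * (z - u[i]))
    z = np.mean([x[i] + u[i] for i in range(workers)], axis=0)  # consensus
    for i in range(workers):
        u[i] += x[i] - z                    # scaled dual update
print(np.linalg.norm(z - x_true))           # ~0 after convergence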
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited capability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: a web server, an image server, and a computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualization and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets, and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source dcm4chee PACS system. The computation server can be written in any programming language as long as it can send and receive HTTP requests. Our computation server was implemented in Delphi, Python, and PHP, and can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system clearly demonstrates the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
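As a hedged illustration of the three-tier pattern described above (not the authors' code), the sketch below shows a minimal web server that relays an image-processing request to a separate computation server over HTTP; Flask, the endpoint names, and the compute-server URL are all assumptions.

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
COMPUTE_URL = "http://compute-server:5001"   # hypothetical computation server

@app.route("/segment/<patient_id>", methods=["POST"])
def segment(patient_id):
    # The web server does no heavy lifting: it forwards the job to the
    # computation server via HTTP and relays the result to the browser.
    r = requests.post(f"{COMPUTE_URL}/segment", json={
        "patient": patient_id,
        "params": request.get_json(silent=True) or {},
    })
    return jsonify(r.json()), r.status_code

if __name__ == "__main__":
    app.run(port=5000)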
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dagher, Habib; Viselli, Anthony; Goupee, Andrew
The primary goal of the basin model test program discussed herein is to properly scale and accurately capture physical data of the rigid body motions, accelerations and loads for different floating wind turbine platform technologies. The intended use for this data is for performing comparisons with predictions from various aero-hydro-servo-elastic floating wind turbine simulators for calibration and validation. Of particular interest is validating the floating offshore wind turbine simulation capabilities of NREL's FAST open-source simulation tool. Once the validation process is complete, coupled simulators such as FAST can be used with a much greater degree of confidence in design processes for commercial development of floating offshore wind turbines. The test program subsequently described in this report was performed at MARIN (Maritime Research Institute Netherlands) in Wageningen, the Netherlands. The models considered consisted of the horizontal axis, NREL 5 MW Reference Wind Turbine (Jonkman et al., 2009) with a flexible tower affixed atop three distinct platforms: a tension leg platform (TLP), a spar-buoy modeled after the OC3 Hywind (Jonkman, 2010) and a semi-submersible. The three generic platform designs were intended to cover the spectrum of currently investigated concepts, each based on proven floating offshore structure technology. The models were tested under Froude scale wind and wave loads. The high-quality wind environments, unique to these tests, were realized in the offshore basin via a novel wind machine which exhibits negligible swirl and low turbulence intensity in the flow field. Recorded data from the floating wind turbine models included rotor torque and position, tower top and base forces and moments, mooring line tensions, six-axis platform motions and accelerations at key locations on the nacelle, tower, and platform. A large number of tests were performed ranging from simple free-decay tests to complex operating conditions with irregular sea states and dynamic winds.
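As a side note on the Froude scaling used in such basin tests, the mapping from model to full scale follows directly from preserving the Froude number; the sketch below lists the standard scale factors for a geometric scale factor lambda (the 1:50 value is an assumption for illustration, not the report's actual scale).

# Froude scaling: how basin-model quantities map to full scale.
lam = 50.0                  # full-scale length / model length (assumed)

scale = {
    "length":   lam,         # L
    "time":     lam ** 0.5,  # sqrt(L), from preserving Fr = U / sqrt(g L)
    "velocity": lam ** 0.5,  # sqrt(L)
    "force":    lam ** 3,    # rho * L^3 (same fluid assumed)
    "moment":   lam ** 4,
    "power":    lam ** 3.5,
}

# e.g., a 10 N mooring-line tension measured on the model corresponds to
print(10 * scale["force"] / 1e6, "MN at full scale")   # 1.25 MN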
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists.
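As one concrete example of a spectral analysis algorithm that such a comparison could cover (an assumption; the paper does not prescribe this one), the sketch below performs per-pixel linear unmixing by non-negative least squares against known endmember spectra.

import numpy as np
from scipy.optimize import nnls

# Synthetic example: 32 spectral bands, 3 fluorophore endmembers
rng = np.random.default_rng(0)
endmembers = rng.random((32, 3))              # columns = endmember spectra
abundances_true = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ abundances_true + 0.01 * rng.normal(size=32)

# Per-pixel unmixing: solve min ||E a - p|| subject to a >= 0
abundances, residual = nnls(endmembers, pixel)
print(abundances.round(2))                    # ~[0.6, 0.3, 0.1]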
Dynamo: A Model Transition Framework for Dynamic Stability Control and Body Mass Manipulation
2011-11-01
If the vehicle is driving at high speed and the steering wheel is turned hard to the right while braking hard, it will end up in the oversteer regime. The platform uses sensors (GPS, IMU, LIDAR) for vehicle control. [Figure 17: Dynamo high-speed small UGV hardware platform.] Experiments will be performed to measure the MTC.
Ten-kilogram vehicle autonomous operations
NASA Astrophysics Data System (ADS)
Rogers, John R.; Korpela, Christopher; Quigley, Kevin
2009-05-01
A low-cost unmanned ground vehicle designed to benchmark high-speed performance is presented. The E-Maxx four-wheel-drive radio-controlled vehicle equipped with a Robostix controller is proposed as a low-cost, high-speed robotic platform useful for military operations. The vehicle weighs less than ten kilograms making it easily portable by one person. Keeping cost low is a major consideration in the design with the aim of providing a disposable military robot. The suitability of the platform was evaluated and results are presented. Commercial-Off-The-Shelf (COTS) upgrades to the basic vehicle are recommended for durability. A procedure was established for bird's-eye-view video recording to document vehicle dynamics. Driver/vehicle performance is quantified by entry velocity, exit velocity and total time through a 90° turn on low-friction terrain. A setup for measuring these values is presented. Expert drivers use controlled skidding to minimize time through turns and the long term goal of the project is to automate such expert behaviors. Results of vehicle performance under human control are presented and stand as a reference for future autonomy.
Portable fiber-optic taper coupled optical microscopy platform
NASA Astrophysics Data System (ADS)
Wang, Weiming; Yu, Yan; Huang, Hui; Ou, Jinping
2017-04-01
An optical fiber taper coupled to a CMOS sensor offers high sensitivity, a compact structure, and low distortion in an imaging platform, and is therefore widely used in low-light, high-speed, and X-ray imaging systems. The properties of the coupled structure also meet the demands of microscopy imaging. Toward this end, we developed a microscopic imaging platform based on the coupling of a cellphone camera module and a fiber-optic taper for the measurement of human blood samples and Ascaris lumbricoides. The platform, weighing 70 grams, is based on the existing camera module of the smartphone and a fiber-optic array providing a magnification factor of 6x. The top facet of the taper, on which samples are placed, serves as an irregular sampling grid for contact imaging. The magnified images of the sample, formed on the bottom facet of the fiber, are then projected onto the CMOS sensor. This paper introduces the portable medical imaging system based on optical fiber coupling with CMOS and theoretically analyzes the feasibility of the system. The image data and processing results can either be stored in memory or transmitted to remote medical institutions for telemedicine. We validate the performance of this cellphone-based microscopy platform using human blood samples and a test target, achieving results comparable to a standard bench-top microscope.
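A quick back-of-envelope check of the sampling resolution implied by the quoted 6x taper magnification is sketched below; the 1.4 um sensor pixel pitch is an assumed typical phone-camera value, not a figure from the paper.

# With the taper magnifying the sample 6x onto the sensor (as stated above),
# the effective sampling pitch at the sample plane is the sensor pixel
# pitch divided by the taper magnification.
pixel_pitch_um = 1.4      # assumed CMOS pixel pitch (typical phone sensor)
taper_mag = 6.0           # magnification factor quoted above
effective_pitch = pixel_pitch_um / taper_mag
print(f"effective sampling pitch ~ {effective_pitch:.2f} um/pixel")  # ~0.23 um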
NASA Astrophysics Data System (ADS)
Esch, Thomas; Asamer, Hubert; Hirner, Andreas; Marconcini, Mattia; Metz, Annekatrin; Uereyen, Soner; Zeidler, Julian; Boettcher, Martin; Permana, Hans; Boissier, Enguerran; Mathot, Emmanuel; Soukop, Tomas; Balhar, Jakub; Svaton, Vaclav; Kuchar, Stepan
2017-04-01
The Sentinel fleet will provide thus far unique coverage with Earth observation (EO) data, and with it new opportunities to implement methodologies that generate innovative geo-information products and services supporting the SDG targets. It is here that the TEP Urban project aims to initiate a step change, by providing an open and participatory platform that allows any interested user to easily exploit large-volume EO data pools, in particular those of the European Sentinel and the US Landsat missions, and derive thematic geo-information, metrics and indicators related to the status and development of the built environment. The key component of the TEP Urban initiative is the implementation of a web-based platform (https://urban-tep.eo.esa.int) employing distributed high-level computing infrastructures and providing key functionalities for i) high-performance access to satellite imagery and other data sources such as statistics or topographic data, ii) state-of-the-art pre-processing, analysis, and visualization techniques, iii) customized development and dissemination of algorithms, products and services, and iv) networking and communication. This contribution introduces the TEP Urban platform, including its general objectives, system design and functionalities, and the available portfolio of products and services that can directly serve the global provision of indicators for SDG targets, in particular those related to SDG 11.
Heat-induced symmetry breaking in ant (Hymenoptera: Formicidae) escape behavior
Chung, Yuan-Kai
2017-01-01
The collective egress of social insects is important in dangerous situations such as natural disasters or enemy attacks. Some studies have described symmetry breaking in ants offered two exits when escape is induced by a repellent. However, whether symmetry breaking occurs under high-temperature conditions, a common abiotic stress, remained unknown. In our study, we deposited a group of Polyrhachis dives ants on a heated platform with two identical exits and counted the number of escaping ants. We discovered that ants escaped asymmetrically through the two exits when the temperature of the heated platform was >32.75°C. The degree of asymmetry increased linearly with the temperature of the platform. Furthermore, the higher the temperature of the heated platform, the more ants escaped from it. However, the number of ants escaping within 3 min decreased when the temperature was higher than the critical thermal limit (39.46°C), which is the threshold for ants to endure high temperature without a loss of performance. Moreover, the ants tended to form small groups to escape from the thermal stress. A preparatory formation of ant grouping was observed before they reached the exit, indicating that the ants actively clustered rather than accidentally gathered at the exits to escape. We suggest that a combination of individual and grouped escape may help to optimize the likelihood of survival during evacuation. PMID:28355235
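The degree of asymmetry in two-exit escape experiments is typically quantified with a simple index over the exit counts. A minimal sketch follows; the index definition is a common convention in the symmetry-breaking literature, not necessarily the exact statistic used in this paper, and the counts are hypothetical:

```python
def asymmetry_index(n1, n2):
    """|n1 - n2| / (n1 + n2): 0 means perfectly symmetric use of the
    two exits, 1 means all ants escaped through a single exit."""
    total = n1 + n2
    return abs(n1 - n2) / total if total else 0.0

print(asymmetry_index(38, 12))  # hypothetical exit counts -> 0.52
```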
The touchscreen operant platform for testing learning and memory in rats and mice.
Horner, Alexa E; Heath, Christopher J; Hvoslef-Eide, Martha; Kent, Brianne A; Kim, Chi Hun; Nilsson, Simon R O; Alsiö, Johan; Oomen, Charlotte A; Holmes, Andrew; Saksida, Lisa M; Bussey, Timothy J
2013-10-01
An increasingly popular method of assessing cognitive functions in rodents is the automated touchscreen platform, on which a number of different cognitive tests can be run in a manner very similar to touchscreen methods currently used to test human subjects. This methodology is low stress (using appetitive rather than aversive reinforcement), has high translational potential and lends itself to a high degree of standardization and throughput. Applications include the study of cognition in rodent models of psychiatric and neurodegenerative diseases (e.g., Alzheimer's disease, schizophrenia, Huntington's disease, frontotemporal dementia), as well as the characterization of the role of select brain regions, neurotransmitter systems and genes in rodents. This protocol describes how to perform four touchscreen assays of learning and memory: visual discrimination, object-location paired-associates learning, visuomotor conditional learning and autoshaping. It is accompanied by two further protocols (also published in this issue) that use the touchscreen platform to assess executive function, working memory and pattern separation.
Computer-operated analytical platform for the determination of nutrients in hydroponic systems.
Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier
2014-03-15
Hydroponics is a water-, energy-, space-, and cost-efficient system for growing plants in constrained spaces or areas with exhausted land. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time a new computer-operated analytical platform that can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., a peristaltic pump and solenoid valves) that are discretely computer-operated to automatically condition, calibrate, and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness, and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K⁺, Ca²⁺, NO₃⁻ and Cl⁻ during tomato plant growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.
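Solid-contact ISEs of this kind are normally calibrated against the Nernst equation, E = E0 + S·log10(a), and unknown samples are then read back through the fitted line. A minimal sketch of that calibrate-then-measure step, with hypothetical standards and potentials rather than the authors' data:

```python
import numpy as np

# Hypothetical calibration standards for a K+ electrode
conc = np.array([1e-4, 1e-3, 1e-2, 1e-1])      # mol/L
emf = np.array([152.0, 210.5, 268.9, 327.6])   # measured potentials, mV

# Fit E = E0 + S*log10(c); S near 59.2 mV/decade indicates Nernstian response
S, E0 = np.polyfit(np.log10(conc), emf, 1)

def concentration(e_mv):
    """Invert the fitted calibration line to estimate concentration (mol/L)."""
    return 10 ** ((e_mv - E0) / S)

print(f"slope = {S:.1f} mV/decade, sample ~ {concentration(240.0):.2e} mol/L")
```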
Approaches to a global quantum key distribution network
NASA Astrophysics Data System (ADS)
Islam, Tanvirul; Bedington, Robert; Ling, Alexander
2017-10-01
Progress in realising quantum computers threatens to weaken existing public key encryption infrastructure. A global quantum key distribution (QKD) network can play a role in computational-attack-resistant encryption. Such a network could use a constellation of high-altitude platforms, such as airships and satellites, as trusted nodes to facilitate QKD between any two points on the globe on demand. This requires both space-to-ground and inter-platform links. However, the prohibitive cost of traditional satellite development limits the experimental work demonstrating the relevant technologies. To accelerate progress towards a global network, we use an emerging class of shoebox-sized spacecraft known as CubeSats. We have designed a polarization-entangled photon-pair source that can operate on board CubeSats. The robustness and miniature form factor of our entanglement source make it especially suitable for pathfinder missions that study QKD between two high-altitude platforms. The technological outcomes of such a mission would be the essential building blocks for a global QKD network.
Dias-Santagata, Dora; Akhavanfard, Sara; David, Serena S; Vernovsky, Kathy; Kuhlmann, Georgiana; Boisvert, Susan L; Stubbs, Hannah; McDermott, Ultan; Settleman, Jeffrey; Kwak, Eunice L; Clark, Jeffrey W; Isakoff, Steven J; Sequist, Lecia V; Engelman, Jeffrey A; Lynch, Thomas J; Haber, Daniel A; Louis, David N; Ellisen, Leif W; Borger, Darrell R; Iafrate, A John
2010-01-01
Targeted cancer therapy requires the rapid and accurate identification of genetic abnormalities predictive of therapeutic response. We sought to develop a high-throughput genotyping platform that would allow prospective selection of patients for the best available therapies, and that could readily and inexpensively be adopted by most clinical laboratories. We developed a highly sensitive multiplexed clinical assay that performs very well with nucleic acid derived from formalin-fixed, paraffin-embedded (FFPE) tissue, and tests for 120 previously described mutations in 13 cancer genes. Genetic profiling of 250 primary tumours was consistent with the documented oncogene mutational spectrum and identified rare events in some cancer types. The assay is currently being used for clinical testing of tumour samples and contributing to cancer patient management. This work therefore establishes a platform for real-time targeted genotyping that can be widely adopted. We expect that efforts like this one will play an increasingly important role in cancer management. PMID:20432502
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system, based on an RTX + LabWindows/CVI platform, is proposed in this paper. Compared with the upper/lower-computer simulation architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on Windows, using the Windows RTX real-time extension subsystem together with a reflective-memory network to guarantee real-time performance for tasks such as computing the simulation model, transmitting simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run in an RTSS process. At the same time, LabWindows/CVI is used to build a graphical interface and to handle the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that the system achieves strong real-time performance, high stability, and high simulation accuracy, along with good human-computer interaction.
Field results from a new die-to-database reticle inspection platform
NASA Astrophysics Data System (ADS)
Broadbent, William; Yokoyama, Ichiro; Yu, Paul; Seki, Kazunori; Nomura, Ryohei; Schmalfuss, Heiko; Heumann, Jan; Sier, Jean-Paul
2007-05-01
A new die-to-database high-resolution reticle defect inspection platform, TeraScanHR, has been developed for advanced production use at the 45nm logic node, and is extendable for development use at the 32nm node (and the comparable memory nodes). These nodes will predominantly use ArF immersion lithography, although EUV may also be used. According to recent surveys, the predominant reticle types for the 45nm node are 6% simple tri-tone and COG. Other advanced reticle types may also be used for these nodes, including dark-field alternating, Mask Enhancer, complex tri-tone, high transmission, CPL, etc. Finally, aggressive model-based OPC will typically be used, which will include many small structures such as jogs, serifs, and SRAF (sub-resolution assist features) with accompanying very small gaps between adjacent structures. The current generation of inspection systems is inadequate to meet these requirements. The architecture and performance of the new TeraScanHR reticle inspection platform are described. The new platform is designed to inspect the aforementioned reticle types in die-to-database and die-to-die modes using both transmitted and reflected illumination. Recent results from field testing at two of the three beta sites are shown (Toppan Printing in Japan and the Advanced Mask Technology Center in Germany). The results include applicable programmed-defect test reticles and advanced 45nm product reticles (and comparable memory reticles). The results show high sensitivity and low false detections being achieved. The platform can also be configured for the current 65nm, 90nm, and 130nm nodes.
Linking molecular changes at multiple levels of biological organization using “omic” methods provides highly complementary, “data-dense” information for predicting outcomes for organisms exposed to environmental contaminants. However, performing separate ...
NASA Astrophysics Data System (ADS)
Liu, Meng-Wei; Chang, Hao-Jung; Lee, Shu-sheng; Lee, Chih-Kung
2016-03-01
Tuberculosis is a highly contagious disease; the global latent infection rate can be as high as one third of the world population. Currently, latent tuberculosis is diagnosed by stimulating T cells to produce the biomarker of tuberculosis, interferon-γ. In this paper, we developed a paraboloidal-mirror-enabled surface plasmon resonance (SPR) interferometer that also has the potential to integrate ellipsometry to analyze antibody-antigen reactions. To examine the feasibility of developing a platform for cross-calibrating the performance and detection limits of various bio-detection techniques, an electrochemical impedance spectroscopy (EIS) method was also implemented on a biochip that can be incorporated into this newly developed platform. The microfluidic channel of the biochip was functionalized by coating it with interferon-γ antibody to enhance detection specificity. To facilitate the processing steps needed when the biochip is used to detect antigens at vastly different concentrations, a kinematic mount was developed to guarantee biochip re-positioning accuracy whenever the biochip was removed and placed back for another round of detection. Alongside EIS, SPR was adopted to observe real-time signals on the computer in order to verify the success of each biochip processing step, such as functionalization and washing. Finally, the EIS results and the optical signals obtained from the newly developed optical detection platform were cross-calibrated. Preliminary experimental results demonstrate the accuracy and performance of SPR and EIS measurements on the newly integrated platform.
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity, and attitude updating operations are carried out using a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are not tied to the number of gyro incremental-angle samples and accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm therefore allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, improving system accuracy. Then, to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that the parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the real-time and high-precision requirements of systems in highly dynamic environments, relative to an existing DSP implementation. PMID:22164058
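For readers unfamiliar with coning compensation, the classic two-sample form accumulates a cross-product correction between successive gyro incremental angles. A minimal sketch of that textbook correction, not the generalized algorithm of this paper, with hypothetical gyro samples:

```python
import numpy as np

def two_sample_coning_update(dtheta1, dtheta2):
    """Classic two-sample coning correction: the attitude increment
    over the update interval is the sum of the two gyro incremental
    angles plus a cross-product coning term."""
    coning = (2.0 / 3.0) * np.cross(dtheta1, dtheta2)
    return dtheta1 + dtheta2 + coning

# Hypothetical gyro incremental angles (rad) from two sub-intervals
d1 = np.array([1.0e-3, 2.0e-5, 0.0])
d2 = np.array([1.0e-3, -2.0e-5, 0.0])
print(two_sample_coning_update(d1, d2))
```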
Reconfigurable, Cognitive Software-Defined Radio
NASA Technical Reports Server (NTRS)
Bhat, Arvind
2015-01-01
Software-defined radio (SDR) technology allows radios to be reconfigured to perform different communication functions without using multiple radios to accomplish each task. Intelligent Automation, Inc., has developed SDR platforms that switch adaptively between different operation modes. The innovation works by modifying both transmit waveforms and receiver signal processing tasks. In Phase I of the project, the company developed SDR cognitive capabilities, including adaptive modulation and coding (AMC), automatic modulation recognition (AMR), and spectrum sensing. In Phase II, these capabilities were integrated into SDR platforms. The reconfigurable transceiver design employs high-speed field-programmable gate arrays, enabling multimode operation and scalable architecture. Designs are based on commercial off-the-shelf (COTS) components and are modular in nature, making it easier to upgrade individual components rather than redesigning the entire SDR platform as technology advances.
Development of a New Robotic Ankle Rehabilitation Platform for Hemiplegic Patients after Stroke
Duan, Lihong
2018-01-01
A large number of hemiplegic survivors suffer from motor impairment. Ankle rehabilitation exercises play an important role in recovering patients' walking ability after stroke. Currently, patients mainly perform ankle exercises to regain range of motion (ROM) and strength of the ankle joint under a therapist's manual assistance. However, therapists suffer from high work intensity, and most existing rehabilitation devices focus on ankle functional training and ignore the importance of neurological rehabilitation in the early hemiplegic stage. In this paper, a new robotic ankle rehabilitation platform (RARP) is proposed to assist patients in performing ankle exercises. The robotic platform consists of two three-DOF symmetric layer-stacking mechanisms, which can execute ankle internal/external rotation, dorsiflexion/plantarflexion, and inversion/eversion exercises while the rotation center of the distal zone of the robotic platform always coincides with the patient's ankle pivot center. Three exercise modes, including constant-speed exercise, constant torque-impedance exercise, and awareness exercise, are developed to execute ankle training corresponding to different rehabilitation stages. Experiments corresponding to these three ankle exercise modes were performed; the results demonstrated that the RARP is capable of executing ankle rehabilitation, and the novel awareness exercise mode motivates patients to proactively participate in ankle training. PMID:29736231
A versatile modular bioreactor platform for Tissue Engineering
Schuerlein, Sebastian; Schwarz, Thomas; Krziminski, Steffan; Gätzner, Sabine; Hoppensack, Anke; Schwedhelm, Ivo; Schweinlin, Matthias; Walles, Heike
2016-01-01
Tissue Engineering (TE) bears potential to overcome the persistent shortage of donor organs in transplantation medicine. Additionally, TE products are applied as human test systems in pharmaceutical research to close the gap between animal testing and the administration of drugs to human subjects in clinical trials. However, generating a tissue requires complex culture conditions provided by bioreactors. Currently, the translation of TE technologies into clinical and industrial applications is limited due to a wide range of different tissue-specific, non-disposable bioreactor systems. To ensure a high level of standardization, a suitable cost-effectiveness, and a safe graft production, a generic modular bioreactor platform was developed. Functional modules provide robust control of culture processes, e.g., medium transport, gas exchange, heating, or trapping of floating air bubbles. Characterization revealed improved performance of the modules in comparison to traditional cell culture equipment such as incubators or peristaltic pumps. By combining the modules, a broad range of culture conditions can be achieved. The novel bioreactor platform allows the use of disposable components and facilitates tissue culture in closed fluidic systems. By sustaining native carotid arteries, engineering a blood vessel, and generating intestinal tissue models according to a previously published protocol, the feasibility and performance of the bioreactor platform were demonstrated. PMID:27492568
QCL-based standoff and proximal chemical detectors
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Hensley, Joel; Cosofret, Bogdan R.; Konno, Daisei; Mulhall, Phillip; Schmit, Thomas; Chang, Shing; Allen, Mark; Marinelli, William J.
2016-05-01
The development of two longwave-infrared quantum cascade laser (QCL) based surface-contaminant detection platforms supporting government programs will be discussed. The detection platforms utilize reflectance spectroscopy, with application to optically thick and thin materials including solid- and liquid-phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. Operation at standoff (tens of meters) and proximal (1 m) ranges will be reviewed, with consideration given to the spectral signatures contained in the specular and diffusely reflected components of the signal. The platforms comprise two variants: Variant 1 employs a spectrally tunable QCL source with a broadband imaging detector, and Variant 2 employs an ensemble of broadband QCLs with a spectrally selective detector. Each variant employs a version of the Adaptive Cosine Estimator for detection and discrimination in high-clutter environments. Detection limits of 5 μg/cm2 have been achieved through speckle-reduction methods enabling detector-noise-limited performance. Design considerations for QCL-based standoff and proximal surface-contaminant detectors are discussed, with specific emphasis on speckle-mitigated and detector-noise-limited performance sufficient for accurate detection and discrimination regardless of the surface-coverage morphology or underlying surface reflectivity. Prototype sensors and developmental test results will be reviewed for a range of application scenarios. Future development and transition plans for the QCL-based surface detector platforms are discussed.
Aspects of detection and tracking of ground targets from an airborne EO/IR sensor
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Daya, Zahir; Kirubarajan, Thiagalingam
2015-05-01
An airborne EO/IR (electro-optical/infrared) camera system comprises a suite of sensors, such as narrow and wide field-of-view (FOV) EO sensors and a mid-wave IR sensor. EO/IR camera systems are regularly employed on military and search-and-rescue aircraft. The EO/IR system can be used to detect and identify objects rapidly in daylight and at night, often with superior performance in challenging conditions such as fog. Several algorithms exist for detecting potential targets in the bearing-elevation grid. The nonlinear filtering problem is one of estimating the kinematic parameters from bearing and elevation measurements taken from a moving platform. In this paper, we develop a complete model for the state of a target as detected by an airborne EO/IR system and simulate a typical scenario with a single target and one or two airborne sensors. We demonstrate the ability to track the target with 'high precision' and note the improvement from using two sensors on a single platform or on separate platforms. The performance of the Extended Kalman Filter (EKF) is investigated on simulated data. Image/video data collected from an IR sensor on an airborne platform are processed using a tracking-by-detection algorithm.
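The EKF structure for bearing/elevation-only tracking is compact enough to sketch. The following is a minimal illustration under assumed conventions (constant-velocity motion model, numerical Jacobian for brevity, angle-residual wrapping omitted); all numbers are synthetic and are not the paper's scenario:

```python
import numpy as np

def h(x, sensor):
    """Bearing/elevation of target position x[:3] seen from sensor."""
    rx, ry, rz = x[:3] - sensor
    return np.array([np.arctan2(ry, rx), np.arctan2(rz, np.hypot(rx, ry))])

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x via forward differences."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_step(x, P, z, sensor, F, Q, R):
    x = F @ x                                  # predict (constant velocity)
    P = F @ P @ F.T + Q
    H = jacobian(lambda s: h(s, sensor), x)    # linearize measurement
    y = z - h(x, sensor)                       # innovation (wrapping omitted)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(x.size) - K @ H) @ P

dt = 1.0
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)
Q = 1e-3 * np.eye(6); R = np.deg2rad(0.1) ** 2 * np.eye(2)
x = np.array([1000., 500., -50., -20., 0., 0.]); P = 100. * np.eye(6)
sensor = np.array([0., 0., 300.])              # airborne platform position
z = h(np.array([980., 500., -52., 0., 0., 0.]), sensor)  # synthetic measurement
x, P = ekf_step(x, P, z, sensor, F, Q, R)
print(x[:3])
```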
A versatile modular bioreactor platform for Tissue Engineering.
Schuerlein, Sebastian; Schwarz, Thomas; Krziminski, Steffan; Gätzner, Sabine; Hoppensack, Anke; Schwedhelm, Ivo; Schweinlin, Matthias; Walles, Heike; Hansmann, Jan
2017-02-01
Tissue Engineering (TE) bears potential to overcome the persistent shortage of donor organs in transplantation medicine. Additionally, TE products are applied as human test systems in pharmaceutical research to close the gap between animal testing and the administration of drugs to human subjects in clinical trials. However, generating a tissue requires complex culture conditions provided by bioreactors. Currently, the translation of TE technologies into clinical and industrial applications is limited due to a wide range of different tissue-specific, non-disposable bioreactor systems. To ensure a high level of standardization, a suitable cost-effectiveness, and a safe graft production, a generic modular bioreactor platform was developed. Functional modules provide robust control of culture processes, e.g. medium transport, gas exchange, heating, or trapping of floating air bubbles. Characterization revealed improved performance of the modules in comparison to traditional cell culture equipment such as incubators, or peristaltic pumps. By combining the modules, a broad range of culture conditions can be achieved. The novel bioreactor platform allows using disposable components and facilitates tissue culture in closed fluidic systems. By sustaining native carotid arteries, engineering a blood vessel, and generating intestinal tissue models according to a previously published protocol the feasibility and performance of the bioreactor platform was demonstrated. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Heightened sense for sensing: recent advances in pathogen immunoassay sensing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, N; Tarasow, T; Tok, J B
2007-01-09
As part of their defense mechanisms, many bacteria have developed an innate ability to secrete toxins to ward off potential predators or invaders. However, this naturally occurring process can be abused, since overproduction of a bacterium's toxin molecules could render it a potential bioweapon. As these processes (also known as "black biology") can be clandestinely performed in a laboratory, the threat of enormous potential damage to a nation's security and economy is clear and present. Thus, efficient detection of these biothreat agents in a timely and accurate manner is highly desirable. A wealth of publications describing various pathogen immuno-sensing advances has appeared over the last few years, and it is not the intent of this review article to detail each reported approach. Instead, we aim to survey a few recent highlights in hopes of providing the reader an overall sense of the breadth of these sensing systems and platforms. Antigen targets are diverse and complex, encompassing proteins, whole viruses, and bacterial spores. The signaling processes for these reported immunoassays are usually based on colorimetric, optical, or electrochemical changes. Of equal interest is the type of platform in which the immunoassay can be performed. A few platforms suitable for pathogen detection are described.
2011-01-01
Background: Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In keeping with the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high-performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, to be used for large-scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results: This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full workflow of QSAR modeling, from descriptor calculation to automated model building, validation, and selection. The automated workflow relies upon customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high-performance machine learning algorithms are interfaced for efficient, data-set-specific selection of the statistical method, promoting model accuracy. Using the high-performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. Conclusions: AZOrange is a step towards meeting the need for an Open Source high-performance machine learning platform supporting the efficient development of highly accurate QSAR models that fulfill regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary, as well as public, data sets is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In keeping with the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high-performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, to be used for large-scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models, providing the full workflow of QSAR modeling, from descriptor calculation to automated model building, validation, and selection. The automated workflow relies upon customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high-performance machine learning algorithms are interfaced for efficient, data-set-specific selection of the statistical method, promoting model accuracy. Using the high-performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at a scripting level but also in a graphical programming environment. AZOrange is a step towards meeting the need for an Open Source high-performance machine learning platform supporting the efficient development of highly accurate QSAR models that fulfill regulatory requirements.
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource interoperability in a transparent manner for the end user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN platform, its current deployment and usage, and future directions. PMID:24904400
What is the Most Sensitive Measure of Water Maze Probe Test Performance?
Maei, Hamid R.; Zaslavsky, Kirill; Teixeira, Cátia M.; Frankland, Paul W.
2009-01-01
The water maze is commonly used to assay spatial cognition, or, more generally, learning and memory in experimental rodent models. In the water maze, mice or rats are trained to navigate to a platform located below the water's surface. Spatial learning is then typically assessed in a probe test, where the platform is removed from the pool and the mouse or rat is allowed to search for it. Performance in the probe test may then be evaluated using either occupancy-based (percent time in a virtual quadrant [Q] or zone [Z] centered on former platform location), error-based (mean proximity to former platform location [P]) or counting-based (platform crossings [X]) measures. While these measures differ in their popularity, whether they differ in their ability to detect group differences is not known. To address this question we compiled five separate databases, containing more than 1600 mouse probe tests. Random selection of individual trials from respective databases then allowed us to simulate experiments with varying sample and effect sizes. Using this Monte Carlo-based method, we found that the P measure consistently outperformed the Q, Z and X measures in its ability to detect group differences. This was the case regardless of sample or effect size, and using both parametric and non-parametric statistical analyses. The relative superiority of P over other commonly used measures suggests that it is the most appropriate measure to employ in both low- and high-throughput water maze screens. PMID:19404412
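The four probe-test measures compared in this study are straightforward to compute from a tracked swim path. A minimal sketch follows, under assumed conventions (pool centered at the origin, 20 cm zone radius, platform-sized crossing window); the exact parameters vary between labs and are not taken from the paper:

```python
import numpy as np

def probe_measures(xy, platform, zone_r=0.20, plat_r=0.05):
    """Q, Z, P, X measures from a probe-test path xy (shape [N, 2], m)
    given the former platform location `platform` (x, y in m)."""
    d = np.linalg.norm(xy - platform, axis=1)
    # Q: percent time in the quadrant containing the former platform
    quad = np.sign(platform)
    in_quad = (np.sign(xy[:, 0]) == quad[0]) & (np.sign(xy[:, 1]) == quad[1])
    Q = 100.0 * in_quad.mean()
    Z = 100.0 * (d < zone_r).mean()        # percent time in the zone
    P = d.mean()                           # mean proximity to platform site
    inside = d < plat_r
    X = int((inside & ~np.roll(inside, 1))[1:].sum())  # platform crossings
    return Q, Z, P, X

xy = np.random.uniform(-0.6, 0.6, size=(600, 2))  # hypothetical 60 s track, 10 Hz
print(probe_measures(xy, platform=np.array([0.3, 0.3])))
```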
CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research.
Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C
2014-01-01
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource interoperability in a transparent manner for the end user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, and Multiple Sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN platform, its current deployment and usage, and future directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling and 1.1TB bioimaging data. The data matrices are tall-and-skinny, which enables the algorithms to map conveniently into Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
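For tall-and-skinny matrices like these, Spark's distributed row-matrix abstraction makes PCA a few lines of driver code. The sketch below is generic PySpark MLlib usage rather than the authors' benchmark code; the file path and dimensionality are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.mllib.linalg.distributed import RowMatrix

spark = SparkSession.builder.appName("tall-skinny-pca").getOrCreate()

# Each input line holds one row of the tall-and-skinny matrix (CSV floats)
rows = (spark.sparkContext
        .textFile("hdfs:///data/matrix.csv")            # placeholder path
        .map(lambda line: [float(v) for v in line.split(",")]))

mat = RowMatrix(rows)
pcs = mat.computePrincipalComponents(k=10)   # small local k-column matrix
scores = mat.multiply(pcs)                   # distributed projection onto PCs
print(scores.numRows(), scores.numCols())
```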
Diversity Performance Analysis on Multiple HAP Networks
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-01-01
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide diversity and multiplexing gain, which can improve network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied separately for both perfect channel state information (CSI) and unknown CSI. The ergodic capacity with various SNRs and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAP network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
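The ergodic-capacity trend with SNR and Rician factor is easy to reproduce numerically. A minimal single-antenna Monte Carlo sketch follows; the paper's V-MIMO analysis with shadowing is more involved, and the parameters here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(snr_db, K, n=200_000):
    """E[log2(1 + SNR*|h|^2)] for a Rician channel with K-factor K."""
    los = np.sqrt(K / (K + 1))                       # line-of-sight component
    nlos = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = los + np.sqrt(1 / (K + 1)) * nlos            # unit-power Rician fading
    snr = 10 ** (snr_db / 10)
    return np.mean(np.log2(1 + snr * np.abs(h) ** 2))

for K in (0, 5, 10):                                 # Rician factors
    print(K, [round(ergodic_capacity(s, K), 2) for s in (0, 10, 20)])
```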
Hung, Tran Quang; Chin, Wai Hoe; Sun, Yi; Wolff, Anders; Bang, Dang Duong
2017-04-15
Solid-phase PCR (SP-PCR) has become increasingly popular for molecular diagnosis, and there have been a few attempts to incorporate SP-PCR into lab-on-a-chip (LOC) devices. However, their applicability for on-line diagnosis is hindered by the lack of sensitive and portable on-chip optical detection technology. In this paper, we addressed this challenge by combining SP-PCR with a supercritical angle fluorescence (SAF) microlens array embedded in a microchip. We fabricated a miniaturized SAF microlens array as part of a microfluidic chamber in thermoplastic material and performed multiplexed SP-PCR directly on top of the SAF microlens array. Owing to the high fluorescence collection efficiency of the SAF microlens array, the SP-PCR assay on the LOC platform demonstrated a high sensitivity of 1.6 copies/µL, comparable to off-chip detection using a conventional laser scanner. The combination of SP-PCR and the SAF microlens array allows for highly sensitive, multiplexed on-chip pathogen detection with low-cost and compact optical components. The LOC platform could be widely used as a high-throughput biosensor to analyze food, clinical, and environmental samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Chen, C C; Chang, M W; Chang, C P; Chan, S C; Chang, W Y; Yang, C L; Lin, M T
2014-10-01
We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise-training patterns controlled by a microcontroller. The proposed system improves on the traditional motorized running wheel, allowing rats to perform high-intensity training and enabling comparisons with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel to permit automatic acceleration training, which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, middle cerebral artery occlusion, the modified neurological severity score (mNSS), an inclined-plane test, and triphenyltetrazolium chloride staining were performed to evaluate the effectiveness of the proposed platform. The platform showed that improvements in motor function, mNSS, and infarct volume were significantly stronger in the FNESRW group than the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence from subjective human adjustment, and ease of use.
LEO to GEO (and Beyond) Transfers Using High Power Solar Electric Propulsion (HP-SEP)
NASA Technical Reports Server (NTRS)
Loghry, Christopher S.; Oleson, Steven R.; Woytach, Jeffrey M.; Martini, Michael C.; Smith, David A.; Fittje, James E.; Gyekenyesi, John Z.; Colozza, Anthony J.; Fincannon, James; Bogner, Aimee;
2017-01-01
Rideshare, or multi-payload, launch configurations are becoming more and more commonplace, but access to space is only one part of overall mission needs. Payloads can still find it difficult, and potentially infeasible, to achieve their target orbits or destinations given on-board propulsion limitations. The High Power Solar Electric Propulsion (HP-SEP) Orbital Maneuvering Vehicle (OMV) provides transfer capabilities for both large and small payloads in excess of what is possible with chemical propulsion. Leveraging existing secondary payload adapter technology like the ESPA provides a platform to support multi-payload launches and missions. When coupled with HP-SEP, meaning greater than 30 kW system power, very large delta-V maneuvers can be accomplished. The HP-SEP OMV concept is designed to perform a Low Earth Orbit to Geosynchronous Orbit (LEO-GEO) transfer of up to six payloads, each with 300 kg mass. The OMV has enough capability to perform this 6 km/s maneuver and residual capacity to extend an additional transfer from GEO to lunar orbit. This high delta-V capability is achieved using state-of-the-art 12.5 kW Hall Effect Thrusters (HET) coupled with high-power roll-up solar arrays. The HP-SEP OMV also provides a demonstration platform for other SEP technologies such as advanced Power Processing Units (PPU), Xenon Feed Systems (XFS), and other HET technologies. The HP-SEP OMV platform can also be leveraged for other missions such as interplanetary science missions and applications in resilient space architectures.
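The propellant fraction implied by a 6 km/s electric-propulsion transfer follows directly from the Tsiolkovsky rocket equation. A small worked sketch follows; the 2000 s specific impulse is an assumed round number for a 12.5 kW-class HET, not a figure from the paper:

```python
import math

g0 = 9.80665        # standard gravity, m/s^2
isp = 2000.0        # s, assumed HET specific impulse
dv = 6000.0         # m/s, LEO-GEO transfer from the abstract

mass_ratio = math.exp(dv / (isp * g0))        # m_initial / m_final
propellant_fraction = 1.0 - 1.0 / mass_ratio
print(f"mass ratio {mass_ratio:.3f}, propellant fraction {propellant_fraction:.1%}")
# roughly a quarter of the initial mass is xenon under these assumptions
```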
Chen, C.C.; Chang, M.W.; Chang, C.P.; Chan, S.C.; Chang, W.Y.; Yang, C.L.; Lin, M.T.
2014-01-01
We developed a forced non-electric-shock running wheel (FNESRW) system that provides rats with high-intensity exercise training using automatic exercise training patterns that are controlled by a microcontroller. The proposed system successfully makes a breakthrough in the traditional motorized running wheel to allow rats to perform high-intensity training and to enable comparisons with the treadmill at the same exercise intensity without any electric shock. A polyvinyl chloride runway with a rough rubber surface was coated on the periphery of the wheel so as to permit automatic acceleration training, and which allowed the rats to run consistently at high speeds (30 m/min for 1 h). An animal ischemic stroke model was used to validate the proposed system. FNESRW, treadmill, control, and sham groups were studied. The FNESRW and treadmill groups underwent 3 weeks of endurance running training. After 3 weeks, the experiments of middle cerebral artery occlusion, the modified neurological severity score (mNSS), an inclined plane test, and triphenyltetrazolium chloride were performed to evaluate the effectiveness of the proposed platform. The proposed platform showed that enhancement of motor function, mNSS, and infarct volumes was significantly stronger in the FNESRW group than the control group (P<0.05) and similar to the treadmill group. The experimental data demonstrated that the proposed platform can be applied to test the benefit of exercise-preconditioning-induced neuroprotection using the animal stroke model. Additional advantages of the FNESRW system include stand-alone capability, independence of subjective human adjustment, and ease of use. PMID:25140816
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi, and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has created demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
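To make the decoding algorithm concrete, here is a minimal sketch of min-sum decoding, a common hardware-friendly simplification of the sum-product algorithm named above; this is a generic textbook rendering, not the paper's GPU kernel, and the toy code and LLRs are placeholders:

```python
import numpy as np

def minsum_decode(H, llr, iters=20):
    """Min-sum LDPC decoding. H: parity-check matrix (m x n, 0/1);
    llr: channel log-likelihood ratios (+ means bit 0 more likely)."""
    m, n = H.shape
    msg = np.zeros((m, n))                    # check-to-variable messages
    for _ in range(iters):
        total = llr + msg.sum(axis=0)         # posterior LLR per bit
        v2c = np.where(H, total - msg, 0.0)   # variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            mags, signs = np.abs(v2c[i, idx]), np.sign(v2c[i, idx])
            prod = np.prod(signs)
            for k, j in enumerate(idx):
                # sign product and min magnitude over the *other* edges
                msg[i, j] = prod * signs[k] * np.delete(mags, k).min()
    return ((llr + msg.sum(axis=0)) < 0).astype(int)

# (7,4) Hamming code parity-check matrix as a toy example
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
llr = np.array([2.0, -1.5, 3.0, 0.5, 1.0, 2.5, -0.8])
print(minsum_decode(H, llr))
```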
Application of the GNU Radio platform in the multistatic radar
NASA Astrophysics Data System (ADS)
Szlachetko, Boguslaw; Lewandowski, Andrzej
2009-06-01
This document presents the application of a Software Defined Radio (SDR) based platform in a multistatic radar. The platform consists of a four-sensor linear antenna, Universal Software Radio Peripheral (USRP) hardware (the radio frequency front end), and GNU Radio PC software. The paper describes the architecture of the digital signal processing performed by the USRP's FPGA (digital down-converting blocks) and by the PC host (implementation of multichannel digital beamforming). Preliminary results of signal recording performed with our experimental platform are presented.
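In the narrowband case, multichannel digital beamforming on the host reduces to applying per-element phase weights and summing. A minimal sketch for a four-sensor uniform linear array follows; the half-wavelength spacing and steering angles are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def steer_weights(n_elem, d_over_lambda, theta_deg):
    """Narrowband phase-shift weights for a uniform linear array."""
    k = np.arange(n_elem)
    phase = -2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def beamform(x, theta_deg, d_over_lambda=0.5):
    """Delay-and-sum output. x: complex baseband, shape [n_elem, n_samples]."""
    w = steer_weights(x.shape[0], d_over_lambda, theta_deg)
    return w.conj() @ x / x.shape[0]

# Synthetic plane wave from 20 degrees hitting a 4-element array
sig = np.exp(2j * np.pi * 0.01 * np.arange(1000))     # narrowband tone
x = steer_weights(4, 0.5, 20.0)[:, None] * sig
# Steered to the source vs. steered away: large vs. small output
print(np.abs(beamform(x, 20.0)).mean(), np.abs(beamform(x, -40.0)).mean())
```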
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-04-10
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow-rate signal and efficiently converts the analog signal, upgrading the execution performance of a microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it can calculate the current flow-rate signal immediately (m/s). The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification in hardware reduces the demand for ADC accuracy, which reduces the cost.
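Electromagnetic flowmeters commonly drive the coil with a square-wave excitation and recover the flow signal by differencing electrode samples captured synchronously in the two excitation half-cycles. The sketch below illustrates that generic synchronous-demodulation idea, not the paper's DSC specifics; the calibration factor and synthetic signal are placeholders:

```python
import numpy as np

def sync_demod_flow(samples, phase, k_cal=0.05):
    """Estimate flow rate from electrode ADC samples captured in sync
    with a square-wave coil excitation. `phase` is +1 or -1 per sample,
    marking the excitation half-cycle in which it was captured."""
    samples, phase = np.asarray(samples, float), np.asarray(phase)
    v_pos = samples[phase > 0].mean()    # settled level, + half-cycle
    v_neg = samples[phase < 0].mean()    # settled level, - half-cycle
    return k_cal * (v_pos - v_neg)       # m/s via calibration factor

# Synthetic electrode signal: flow term flips with excitation, offset doesn't
phase = np.tile([1, 1, -1, -1], 50)
samples = 2.0 + 0.4 * phase + 0.02 * np.random.default_rng(1).standard_normal(200)
print(f"flow ~ {sync_demod_flow(samples, phase):.3f} m/s")
```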
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-01-01
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow-rate signal and efficiently converts the analog signal, upgrading the execution performance of a microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it can calculate the current flow-rate signal immediately (m/s). The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification in hardware reduces the demand for ADC accuracy, which reduces the cost. PMID:28394306
Blueberry supplementation improves memory in middle-aged mice fed a high-fat diet.
Carey, Amanda N; Gomes, Stacey M; Shukitt-Hale, Barbara
2014-05-07
Consuming a high-fat diet may result in behavioral deficits similar to those observed in aging animals. It has been demonstrated that blueberry supplementation can allay age-related behavioral deficits. To determine whether supplementation of a high-fat diet with blueberries offers protection against putative high-fat-diet-related declines, 9-month-old C57Bl/6 mice were maintained on low-fat (10% fat calories) or high-fat (60% fat calories) diets with and without 4% freeze-dried blueberry powder. Novel object recognition memory was impaired by the high-fat diet; after 4 months on the high-fat diet, mice spent 50% of their time on the novel object in the testing trial, performing no better than chance. Blueberry supplementation prevented recognition memory deficits after 4 months on the diets, as mice on this diet spent 67% of their time on the novel object. After 5 months on the diets, mice consuming the high-fat diet passed through the platform location less often than mice on low-fat diets during probe trials on days 2 and 3 of Morris water maze testing, whereas mice consuming the high-fat blueberry diet passed through the platform location as often as mice on the low-fat diets. This study is a first step in determining whether incorporating more nutrient-dense foods into a high-fat diet can allay cognitive dysfunction.
Controlling Differentiation of Stem Cells for Developing Personalized Organ-on-Chip Platforms.
Geraili, Armin; Jafari, Parya; Hassani, Mohsen Sheikh; Araghi, Behnaz Heidary; Mohammadi, Mohammad Hossein; Ghafari, Amir Mohammad; Tamrin, Sara Hasanpour; Modarres, Hassan Pezeshgi; Kolahchi, Ahmad Rezaei; Ahadian, Samad; Sanati-Nezhad, Amir
2018-01-01
Organ-on-chip (OOC) platforms have attracted the attention of pharmaceutical companies as powerful tools for the screening of existing drugs and the development of new drug candidates. OOCs have primarily used human cell lines or primary cells to develop biomimetic tissue models. However, the ability of human stem cells to self-renew without limit and differentiate into multiple lineages has made them attractive for OOCs. Microfluidic technology has enabled precise control of stem cell differentiation using soluble factors, biophysical cues, and electromagnetic signals. This study discusses different tissue- and organ-on-chip platforms (i.e., skin, brain, blood-brain barrier, bone marrow, heart, liver, lung, tumor, and vascular), with an emphasis on the critical role of stem cells in the synthesis of complex tissues. It further recaps the design, fabrication, high-throughput performance, and improved functionality of stem-cell-based OOCs, the technical challenges, the obstacles to implementing their potential applications, and future perspectives related to the different experimental platforms. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spiral: Automated Computing for Linear Transforms
NASA Astrophysics Data System (ADS)
Püschel, Markus
2010-09-01
Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
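Spiral's breakdown rules are rewrite rules over transform algebras; their flavor can be illustrated by the radix-2 Cooley-Tukey rule, which rewrites an n-point DFT into smaller DFTs plus twiddle factors. A toy recursive rendering of that single rule follows (Spiral itself generates, optimizes, and searches over many such rules rather than fixing one recursion):

```python
import numpy as np

def dft2(x):
    """Radix-2 Cooley-Tukey: DFT_n rewritten as two DFT_{n/2} plus
    twiddle factors, applied recursively (n must be a power of two)."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even, odd = dft2(x[0::2]), dft2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddles
    return np.concatenate([even + tw, even - tw])

x = np.random.default_rng(0).standard_normal(16)
print(np.allclose(dft2(x), np.fft.fft(x)))   # True
```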
A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data
NASA Astrophysics Data System (ADS)
Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.
2017-12-01
Big Data is becoming the norm in geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data to produce new information and knowledge is desired. This paper introduces our latest effort to develop such a platform, based on our past years' experience with cloud and high-performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the second layer is a cloud computing layer, based on virtualization, that provides on-demand computing services to the upper layers; c) the third layer consists of big data containers customized for dealing with different types of data and functionalities; d) the fourth layer is a big data presentation layer that supports the efficient management, access, analysis, mining, and learning of big geospatial data.
ePix: a class of architectures for second generation LCLS cameras
Dragone, A.; Caragiulo, P.; Markovic, B.; ...
2014-03-31
ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules, meeting different requirements, can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra-low-noise applications, and ePix10k, for high-dynamic-range applications.
Reconfigurable microfluidic hanging drop network for multi-tissue interaction and analysis.
Frey, Olivier; Misun, Patrick M; Fluri, David A; Hengstler, Jan G; Hierlemann, Andreas
2014-06-30
Integration of multiple three-dimensional microtissues into microfluidic networks enables new insights into how different organs or tissues of an organism interact. Here, we present a platform that extends the hanging-drop technology used for multi-cellular spheroid formation to multifunctional complex microfluidic networks. Engineered as a completely open, 'hanging' microfluidic system at the bottom of a substrate, the platform features high flexibility in microtissue arrangements and interconnections, while fabrication is simple and operation is robust. Multiple spheroids of different cell types are formed in parallel on the same platform; the different tissues are then connected in physiological order for multi-tissue experiments through reconfiguration of the fluidic network. Liquid flow is precisely controlled through the hanging drops, which enables nutrient supply, substance dosage, and inter-organ metabolic communication. The possibility of performing parallelized microtissue formation on the same chip that is subsequently used for complex multi-tissue experiments renders the developed platform a promising technology for 'body-on-a-chip'-related research.
Aloisio, Elena; Carnevale, Assunta; Pasqualetti, Sara; Birindelli, Sarah; Dolci, Alberto; Panteghini, Mauro
2018-01-16
Automatic photometric determination of the hemolysis index (HI) on serum and plasma samples is central to detecting potential interference of in vitro hemolysis with laboratory tests. When HI is above an established cut-off for interference, results may suffer from a significant bias that undermines the clinical reliability of the test. Despite its undeniable importance for patient safety, the analytical performance of HI estimation is not usually checked in laboratories. Here we evaluated for the first time the random source of measurement uncertainty of HI determination on the two Abbott Architect c16000 platforms in use in our laboratory. From January 2016 to September 2017, we collected data from daily photometric determination of HI on a fresh-frozen serum pool with a predetermined HI value of ~100 (corresponding to ~1 g/L of free hemoglobin). Monthly and cumulative CVs were calculated. During the 21 months, 442 and 451 measurements were performed on the two platforms, respectively. Monthly CVs ranged from 0.7% to 2.7% on c16000-1 and from 0.8% to 2.5% on c16000-2, with a between-platform cumulative CV of 1.82% (corresponding to an expanded uncertainty of 3.64%). Mean HI values on the two platforms were only slightly biased (101.3 vs. 103.1, 1.76%), but, owing to the high precision of the measurements, this difference reached statistical significance (p<0.0001). Even though no quality specifications are available to date, our study shows that HI measurement on the Architect c16000 platform has good reproducibility, which could be considered in establishing the state of the art of the measurement. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
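The statistics reported here are simple to reproduce: the CV is the ratio of the standard deviation to the mean of the repeated HI readings, and the expanded uncertainty doubles the relative standard uncertainty (coverage factor k = 2, consistent with the 1.82% to 3.64% figures above). A minimal sketch with hypothetical daily readings:

```python
import numpy as np

hi = np.array([99.8, 101.2, 100.5, 102.0, 98.9, 101.7])  # hypothetical daily HI readings

mean = hi.mean()
cv = 100.0 * hi.std(ddof=1) / mean        # coefficient of variation, %
expanded_u = 2.0 * cv                     # expanded uncertainty, k = 2

print(f"mean HI {mean:.1f}, CV {cv:.2f}%, expanded uncertainty {expanded_u:.2f}%")
```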
Hardie, Diana Ruth; Korsman, Stephen N; Hsiao, Nei-Yuan; Morobadi, Molefi Daniel; Vawda, Sabeehah; Goedhals, Dominique
2017-01-01
In South Africa, where the prevalence of HIV infection is very high, 4th-generation HIV antibody/p24 antigen combo immunoassays are the tests of choice for laboratory-based screening. Testing is usually performed in clinical pathology laboratories on automated analysers. To investigate the cause of false-positive results on 4th-generation HIV testing platforms in public sector laboratories, the performance of two automated platforms was compared in a clinical pathology setting, firstly on routine diagnostic specimens and secondly on known sero-negative samples. Firstly, 1181 routine diagnostic specimens were sequentially tested on Siemens and Roche automated 4th-generation platforms. HIV viral load, western blot and follow-up testing were used to determine the true status of inconclusive specimens. Subsequently, known HIV-seronegative samples from a single donor were repeatedly tested on both platforms, and an analyser was tested for surface contamination with HIV-positive serum to identify how suspected specimen contamination could be occurring. Serial testing of diagnostic specimens yielded 163 weakly positive or discordant results. Only 3 of 163 were conclusively shown to indicate true HIV infection. Specimen contamination with HIV antibody was suspected, based on the following evidence: the proportion of positive specimens increased on repeated passage through the analysers; viral loads were low or undetectable and western blots negative or indeterminate on problem specimens; screen-negative, 2nd-test-positive specimens tested positive when reanalysed on the screening assay; and follow-up specimens (where available) were negative. Similarly, an increasing number of known negative specimens became (repeatedly) sero-positive on serial passage through one of the analysers. Internal and external analyser surfaces were contaminated with HIV serum, evidence that sample splashes occur during testing. Owing to the extreme sensitivity of these assays, contamination with minute amounts of HIV antibody can cause a negative sample to test positive. Better contamination control measures are needed on analysers used in clinical pathology environments, especially in regions where HIV sero-prevalence is high.
NASA Astrophysics Data System (ADS)
Li, J.; Zhang, T.; Huang, Q.; Liu, Q.
2014-12-01
Today's climate datasets feature large volume, a high degree of spatiotemporal complexity, and fast evolution over time. As visualizing large distributed climate datasets is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address this issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver the results to clients over the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform was built on ParaView, one of the most popular open-source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three types of data access methods are supported: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server, and accessing local datasets. Regardless of the data access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
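As a concrete illustration of the first data access method, the sketch below opens a remote NetCDF dataset through an OpenDAP endpoint with the netCDF4 Python library; the URL and variable name are hypothetical placeholders, not part of the platform described above.

    # Minimal sketch: server-side subsetting over OpenDAP with netCDF4.
    # Only the requested slice travels over the network.
    from netCDF4 import Dataset

    URL = "http://opendap.example.org/climate/tas_monthly.nc"  # hypothetical endpoint

    ds = Dataset(URL)                # opens the remote dataset like a local file
    tas = ds.variables["tas"]        # assumed variable: air temperature (time, lat, lon)
    first_month = tas[0, :, :]       # only this slice is transferred
    print(first_month.shape, float(first_month.mean()))
    ds.close()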
Modeling and simulation of continuous wave velocity radar based on third-order DPLL
NASA Astrophysics Data System (ADS)
Di, Yan; Zhu, Chen; Hong, Ma
2015-02-01
The second-order digital phase-locked loop (DPLL) widely used in traditional continuous wave (CW) velocity radar performs poorly in high dynamic conditions; using a third-order DPLL can improve the performance. Firstly, the echo signal model of CW radar is given. Secondly, theoretical derivations of the tracking performance under different velocity conditions are given. Finally, a simulation model of CW radar is established with the Simulink tool. The tracking performance of the two kinds of DPLL under different acceleration and jerk conditions is studied with this model. The results show that the third-order DPLL has better performance in high dynamic conditions. This model provides a platform for further research on CW radar.
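The advantage of the third-order loop can be reproduced in a few lines: a type-3 loop (two integrators in the loop filter plus the NCO) tracks a constant Doppler frequency ramp with vanishing steady-state phase error, where a second-order loop leaves a constant residual. The sketch below is a minimal baseband model with hand-picked gains and an assumed Doppler rate, not the paper's Simulink model.

    # Third-order DPLL tracking a frequency ramp (constant acceleration), a sketch.
    import numpy as np

    dt = 1e-4                              # 10 kHz loop update rate (assumed)
    t = np.arange(0.0, 1.0, dt)
    accel = 2000.0                         # Doppler rate, Hz/s (hypothetical)
    f_in = 100.0 + accel * t               # input frequency ramp
    theta_in = 2 * np.pi * np.cumsum(f_in) * dt

    k1, k2, k3 = 400.0, 4e4, 2e6           # loop-filter gains, hand-tuned for stability
    theta, i1, i2 = 0.0, 0.0, 0.0
    err = np.empty_like(t)
    for n, th in enumerate(theta_in):
        e = np.angle(np.exp(1j * (th - theta)))   # wrapped phase error
        i1 += e * dt                              # first integrator
        i2 += i1 * dt                             # second integrator -> type-3 loop
        f_ctrl = k1 * e + k2 * i1 + k3 * i2
        theta += 2 * np.pi * f_ctrl * dt          # NCO phase accumulation
        err[n] = e
    print(f"final phase error: {err[-1]:.2e} rad")  # decays toward zero for a ramp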
NASA Astrophysics Data System (ADS)
Arafa, Safia; Bouchemat, Mohamed; Bouchemat, Touraya; Benmerkhi, Ahlem; Hocini, Abdesselam
2017-02-01
A biosensing platform based on an infiltrated photonic crystal cavity with ring-shaped holes, coupled to a waveguide, is proposed for glucose concentration detection. Considering silicon-on-insulator (SOI) technology, it has been demonstrated that the ring-shaped-hole configuration provides excellent optical confinement within the cavity region, which further enhances the light-matter interactions at the precise location of the analyte medium. Thus, the sensitivity and the quality factor (Q) can be significantly improved. The transmission characteristics of light in the biosensor under different refractive indices, corresponding to changes in the analyte glucose concentration, are analyzed by performing finite-difference time-domain (FDTD) simulations. Accordingly, an improved sensitivity of 462 nm/RIU and a Q factor as high as 1.11 × 10^5 have been achieved, resulting in a detection limit of 3.03 × 10^-6 RIU. Such a combination of attributes makes the designed structure a promising element for performing label-free biosensing in medical diagnosis and environmental monitoring.
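The reported figures are linked by the usual refractive-index sensing relation: the detection limit is the smallest resolvable wavelength shift divided by the sensitivity, DL = Δλ_min / S. The snippet below back-calculates the spectral resolution implied by the numbers above; the ~1.4 pm value is an inference, not a figure stated in the abstract.

    # Back-of-the-envelope check of the sensing figures of merit.
    S = 462.0          # sensitivity, nm/RIU (reported)
    DL = 3.03e-6       # detection limit, RIU (reported)
    dlam_min = S * DL  # implied minimum resolvable wavelength shift, nm
    print(f"implied spectral resolution: {dlam_min * 1e3:.2f} pm")  # ~1.40 pm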
NASA Astrophysics Data System (ADS)
Bruschetta, M.; Maran, F.; Beghi, A.
2017-06-01
The use of dynamic driving simulators is constantly increasing in the automotive community, with applications ranging from vehicle development to rehabilitation and driver training. The effectiveness of such devices depends on their capability to reproduce driving sensations faithfully, so it is crucial that the motion control strategies generate inputs to the platform that are both realistic and feasible. Such strategies are called motion cueing algorithms (MCAs). In recent years, several MCAs based on model predictive control (MPC) techniques have been proposed. The main drawback associated with MPC is its computational burden, which may limit its application to high-performance dynamic simulators. In this paper, a fast, real-time implementation of an MPC-based MCA for a 9-DOF, high-performance platform is proposed. The effectiveness of the approach in managing the available working area is illustrated by experimental results from an implementation on a real device with a 200 Hz control frequency.
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
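The hierarchical MPI pattern described above, one bulk-system process coordinating many feeder processes each time step, can be sketched with mpi4py. Everything below (the per-step broadcast/reduce exchange, the toy feeder response, the numbers) is an illustrative assumption about the structure, not IGMS code.

    # Sketch of a bulk/feeder co-simulation step pattern with mpi4py.
    # Run with e.g.: mpirun -n 8 python cosim_sketch.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    for step in range(24):                        # e.g., 24 market intervals
        # bulk solve (rank 0) publishes a boundary condition to all feeders
        voltage = comm.bcast(1.0 if rank == 0 else None, root=0)   # p.u., placeholder

        # each feeder rank responds with its aggregate load (toy model)
        load = 0.0 if rank == 0 else 5.0 + 0.1 * rank * voltage    # MW, placeholder
        total = comm.reduce(load, op=MPI.SUM, root=0)              # feeders -> bulk

        if rank == 0:
            print(f"step {step}: aggregate distribution load = {total:.1f} MW")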
NASA Technical Reports Server (NTRS)
Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel
2015-01-01
The use of the cognitive capabilities of humans to help guide the autonomy of robotics platforms, in what is typically called 'supervised autonomy', is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a 'Supervised Remote Robot with Guided Autonomy and Teleoperation' (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of 'behaviors' to chain together sequences of 'actions' for the robot to perform, which are then executed in real time.
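The behavior/action chaining idea lends itself to a very small abstraction: a behavior is an ordered list of actions, each of which reports success or failure, and execution aborts on the first failure. The class and action names below are invented for illustration and are not from the SURROGATE codebase.

    # Toy behavior-chaining abstraction: a behavior runs actions in order.
    from typing import Callable, List

    Action = Callable[[], bool]          # an action returns True on success

    class Behavior:
        def __init__(self, name: str, actions: List[Action]):
            self.name, self.actions = name, actions

        def execute(self) -> bool:
            for act in self.actions:     # abort the chain on the first failure
                if not act():
                    print(f"{self.name}: {act.__name__} failed, aborting")
                    return False
            return True

    def approach(): print("approaching object"); return True
    def grasp():    print("closing gripper");    return True
    def lift():     print("lifting object");     return True

    Behavior("pick_up", [approach, grasp, lift]).execute()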
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cong, Yongzheng; Katipamula, Shanta; Geng, Tao
2016-02-01
A microfluidic platform was developed to perform online electrokinetic sample preconcentration and rapid hydrodynamic sample injection for electrophoresis using a single microvalve. The PDMS microchip consists of a separation channel, a side channel for sample introduction, and a control channel which is used as a pneumatic microvalve aligned at the intersection of the two flow channels. The closed microvalve, created by multilayer soft lithography, can serve as a preconcentrator under an applied electric potential, enabling current to pass through while blocking bulk flow. Once analytes are concentrated, the valve is briefly opened and the stacked sample is pressure injected into the separation channel for electrophoretic separation. Fluorescently labeled peptides were enriched by a factor of ~450 in 230 s. The performance of the platform was validated by the online preconcentration, injection and electrophoretic separation of fluorescently labeled peptides. This method enables both rapid analyte concentration and controlled injection volume for high sensitivity, high resolution capillary electrophoresis.
High performance wash-free magnetic bioassays through microfluidically enhanced particle specificity
Bechstein, Daniel J B; Lee, Jung-Rok; Ooi, Chin Chun; Gani, Adi W; Kim, Kyunglok; Wilson, Robert J; Wang, Shan X
2015-06-30
Magnetic biosensors have emerged as a sensitive and versatile platform for high performance medical diagnostics. These magnetic biosensors require well-tailored magnetic particles as detection probes, which need to give rise to a large and specific biological signal while showing very low nonspecific binding. This is especially important in wash-free bioassay protocols, which do not require removal of particles before measurement, often a necessity in point of care diagnostics. Here we show that magnetic interactions between magnetic particles and magnetized sensors dramatically impact particle transport and magnetic adhesion to the sensor surfaces. We investigate the dynamics of magnetic particles' biomolecular binding and magnetic adhesion to the sensor surface using microfluidic experiments. We elucidate how flow forces can inhibit magnetic adhesion, greatly diminishing or even eliminating nonspecific signals in wash-free magnetic bioassays, and enhancing signal to noise ratios by several orders of magnitude. Our method is useful for selecting and optimizing magnetic particles for a wide range of magnetic sensor platforms.
A droplet microfluidics platform for rapid microalgal growth and oil production analysis.
Kim, Hyun Soo; Guzman, Adrian R; Thapa, Hem R; Devarenne, Timothy P; Han, Arum
2016-08-01
Microalgae have emerged as a promising source for producing future renewable biofuels. Developing better microalgal strains with faster growth and higher oil production rates is one of the major routes towards economically viable microalgal biofuel production. In this work, we present a droplet microfluidics-based microalgae analysis platform capable of measuring growth and oil content of various microalgal strains with single-cell resolution in a high-throughput manner. The platform allows for encapsulating a single microalgal cell into a water-in-oil emulsion droplet and tracking the growth and division of the encapsulated cell over time, followed by on-chip oil quantification. The key feature of the developed platform is its capability to fluorescently stain microalgae within microdroplets for oil content quantification. The performance of the developed platform was characterized using the unicellular microalga Chlamydomonas reinhardtii and the colonial microalga Botryococcus braunii. The application of the platform in quantifying growth and oil accumulation was successfully confirmed using C. reinhardtii under different culture conditions, namely nitrogen-replete and nitrogen-limited conditions. These results demonstrate the capability of this platform as a rapid screening tool that can be applied to a wide range of microalgal strains for analyzing growth and oil accumulation characteristics relevant to biofuel strain selection and development.
ADX: a high field, high power density, Advanced Divertor test eXperiment
NASA Astrophysics Data System (ADS)
Vieira, R.; Labombard, B.; Marmar, E.; Irby, J.; Shiraiwa, S.; Terry, J.; Wallace, G.; Whyte, D. G.; Wolfe, S.; Wukitch, S.; ADX Team
2014-10-01
The MIT PSFC and collaborators are proposing an advanced divertor experiment (ADX) - a tokamak specifically designed to address critical gaps in the world fusion research program on the pathway to FNSF/DEMO. This high field (6.5 tesla, 1.5 MA), high power density (P/S ~ 1.5 MW/m2) facility would utilize Alcator magnet technology to test innovative divertor concepts for next-step DT fusion devices (FNSF, DEMO) at reactor-level boundary plasma pressures and parallel heat flux densities while producing high performance core plasma conditions. The experimental platform would also test advanced lower hybrid current drive (LHCD) and ion-cyclotron range of frequency (ICRF) actuators and wave physics at the plasma densities and magnetic field strengths of a DEMO, with the unique ability to deploy launcher structures both on the low-magnetic-field side and the high-field side - a location where energetic plasma-material interactions can be controlled and wave physics is most favorable for efficient current drive, heating and flow drive. This innovative experiment would perform plasma science and technology R&D necessary to inform the conceptual development and accelerate the readiness-for-deployment of FNSF/DEMO - in a timely manner, on a cost-effective research platform. Supported by DE-FC02-99ER54512.
Contemporary engagement with social media amongst hernia surgery specialists.
Lui, D H; McDonald, J J; de Beaux, A; Tulloh, B; Brady, R R W
2017-08-01
Engagement with social media among healthcare professionals is increasing. This study aimed to identify levels of adoption and engagement of several social media platforms by a large international cohort of hernia surgery specialists. Hernia specialists attending the 38th International Congress of the European Hernia Society were identified. A manual search was then performed on Twitter, ResearchGate and LinkedIn to identify those who had named accounts. Where accounts were identified, data on markers of utilisation were assessed. 759 surgeons (88.5% male) from 57 countries were identified. 334 surgeons (44%) engaged with a social media platform. 39 (5.1%) had Twitter accounts, 189 (24.9%) had ResearchGate accounts and 265 (34.9%) had LinkedIn accounts. 137 surgeons (18.1%) had accounts on 2 or more social media platforms. There was no gender association with social media account ownership (p > 0.05). Engagement with one social media platform was associated with increased engagement and utilisation on other platforms; LinkedIn users were more likely to have Twitter accounts (p < 0.001) and ResearchGate profiles (p < 0.001). Surgeons on all three SM platforms were more likely to have high markers of engagement across all SM platforms (multiple outcomes, p < 0.05). Geographical variation was noted, with UK and South American surgeons being more likely to be present on Twitter than their counterparts (p = 0.031). The level of engagement with social media amongst hernia surgeons is similar to other surgical specialities. Geographical variation in SM engagement is seen. Engagement with one SM platform is associated with presence on multiple platforms.
Development of jacket platform tsunami risk rating system in waters offshore North Borneo
NASA Astrophysics Data System (ADS)
Lee, H. E.; Liew, M. S.; Mardi, N. H.; Na, K. L.; Toloue, Iraj; Wong, S. K.
2016-09-01
This work details the simulation of tsunami waves generated by seaquakes in the Manila Trench and their effect on fixed oil and gas jacket platforms in waters offshore North Borneo. For this study, a four-leg living-quarter jacket platform located in a water depth of 63 m is modelled in SACS v5.3. Malaysia has traditionally been perceived to be safe from the hazards of earthquakes and tsunamis; local design practices tend to neglect tsunami waves and include no such provisions. In 2004, a 9.3 Mw seaquake occurred off the northwest coast of Aceh, which generated tsunami waves that caused destruction in Malaysia totalling US$25 million and 68 deaths. This event prompted an awareness of the need to study the reliability of fixed offshore platforms scattered throughout Malaysian waters. In this paper, we present a review of research on the seismicity of the Manila Trench, which is perceived to be a high risk for Southeast Asia. From the tsunami numerical model TUNA-M2, we extract computer-simulated tsunami waves at prescribed grid points in the vicinity of the platforms in the region. Using the wave heights as input, we simulate the tsunami loading in SACS v5.3, a structural analysis package for offshore platforms widely accepted by the industry. We employ nonlinear solitary wave theory in our tsunami loading calculations for the platforms, and formulate a platform-specific risk quantification system. We then perform an intensive structural sensitivity analysis and derive a corresponding platform-specific risk rating model.
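For orientation, first-order solitary wave theory gives a closed-form free-surface profile, eta(x, t) = H sech^2(k (x - c t)) with k = sqrt(3H / (4 d^3)) and celerity c = sqrt(g (d + H)). The sketch below evaluates it for the 63 m water depth quoted above; the 5 m wave height is an assumed value for illustration, not a figure from the study.

    # First-order solitary wave profile and celerity, a sketch.
    import numpy as np

    g, d, H = 9.81, 63.0, 5.0               # gravity; depth from the study; assumed height (m)
    c = np.sqrt(g * (d + H))                # wave celerity, m/s
    k = np.sqrt(3.0 * H / (4.0 * d**3))     # effective wavenumber, 1/m

    x = np.linspace(-2000.0, 2000.0, 4001)  # horizontal coordinate, m
    eta = H / np.cosh(k * x)**2             # free surface at t = 0 (crest at x = 0)
    print(f"celerity ~ {c:.1f} m/s, crest elevation = {eta.max():.1f} m")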
Flexible nanopillar-based electrochemical sensors for genetic detection of foodborne pathogens
NASA Astrophysics Data System (ADS)
Park, Yoo Min; Lim, Sun Young; Jeong, Soon Woo; Song, Younseong; Bae, Nam Ho; Hong, Seok Bok; Choi, Bong Gill; Lee, Seok Jae; Lee, Kyoung G.
2018-06-01
Flexible and highly ordered nanopillar-arrayed electrodes have attracted great interest for many electrochemical applications, especially biosensors, because of their unique mechanical and topological properties. Herein, we report an advanced method to fabricate highly ordered nanopillar electrodes produced by soft-/photo-lithography and metal evaporation. The highly ordered nanopillar array exhibited superior electrochemical and mechanical properties owing to the wide surface available for interaction with electrolytes, enabling sensitive analysis. As-prepared gold and silver electrodes on nanopillar arrays exhibit excellent and stable electrochemical performance in detecting the amplified gene from the foodborne pathogen Escherichia coli O157:H7. Additionally, the lightweight, flexible, and USB-connectable nanopillar-based electrochemical sensor platform improves connectivity, portability, and sensitivity. Moreover, we successfully confirmed the performance of genetic analysis using real food, a specially designed intercalator, and amplified genes from foodborne pathogens, with high reproducibility (6% standard deviation) and sensitivity (10 × 1.01 CFU) within 25 s, based on the square wave voltammetry principle. This study confirmed that the excellent mechanical and chemical characteristics of the nanopillar electrodes provide considerable electrochemical activity for application as a genetic biosensor platform in the field of point-of-care testing (POCT).
Hong, Chien-Chong; Wang, Chih-Ying; Peng, Kuo-Ti; Chu, I-Ming
2011-04-15
This paper presents a microfluidic chip platform with electrochemical carbon nanotube electrodes for preclinical evaluation of antibiotic nanocapsules. Currently, there is increasing interest in the development of nanocapsules for drug delivery applications for localized treatment of diseases. So far, the methods used to detect antibiotics are liquid chromatography (LC), high performance liquid chromatography (HPLC) and mass spectrometry (MS); these conventional instruments are bulky, expensive, not easily accessible, and require skilled operators. To support the development of nanocapsules and understand the drug release profile before planning clinical experiments, it is important to set up a biosensing platform that can monitor and evaluate the real-time drug release profile of nanocapsules with high sensitivity and long-term measurement capability. In this work, a microfluidic chip platform with electrochemical carbon nanotube electrodes has been developed and characterized for rapid detection of the antibiotic teicoplanin in nanocapsules. Multi-walled carbon nanotubes are used to modify the gold electrode surfaces to enhance the performance of the electrochemical biosensors. Experimental results show that the limit of detection of the developed platform using carbon nanotube electrodes is 0.1 μg/ml, with a linear range from 1 μg/ml to 10 μg/ml. The sensitivity of the developed system is 0.023 mA ml/μg at 37°C. The drug release profile of teicoplanin nanocapsules in PBS shows that the antibiotic nanocapsules significantly increased the release of drug on the 4th day, measuring 0.4858 μg/(ml hr). The release of drug from the antibiotic nanocapsules reached 34.98 μg/ml on the 7th day. The results showed a similar trend to measurements obtained with an HPLC instrument. Compared with traditional HPLC measurements, the electrochemical sensing platform we developed offers increased flexibility in controlling experimental factors for long-term, real-time, low-cost preclinical measurement of nanocapsules.
Understanding and Improving High-Performance I/O Subsystems
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James
1996-01-01
This research program has been conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to the many research findings important to NASA and the prestigious publications, the program has helped orient the doctoral research of two students towards parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful and helpful to MasPar, with whose technical management the P.I. has had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in 3 segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This third study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned 3 studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.
Kreutz, Jason E; Munson, Todd; Huynh, Toan; Shen, Feng; Du, Wenbin; Ismagilov, Rustem F
2011-11-01
This paper presents a protocol using theoretical methods and free software to design and analyze multivolume digital PCR (MV digital PCR) devices; the theory and software are also applicable to the design and analysis of dilution series in digital PCR. MV digital PCR minimizes the total number of wells required for "digital" (single molecule) measurements while maintaining high dynamic range and high resolution. In some examples, multivolume designs with fewer than 200 total wells are predicted to provide dynamic range with 5-fold resolution similar to that of single-volume designs requiring 12,000 wells. Mathematical techniques were utilized and expanded to maximize the information obtained from each experiment and to quantify device performance, and were experimentally validated using the SlipChip platform. MV digital PCR was demonstrated to perform reliably, and results from wells of different volumes agreed with one another. No artifacts due to different surface-to-volume ratios were observed, and single molecule amplification in volumes ranging from 1 to 125 nL was self-consistent. The device presented here was designed to meet the testing requirements for measuring clinically relevant levels of HIV viral load at the point of care (in plasma, <500 molecules/mL to >1,000,000 molecules/mL), and the predicted resolution and dynamic range were experimentally validated using a control sequence of DNA. This approach simplifies digital PCR experiments, saves space, and thus enables multiplexing using separate areas for each sample on one chip, and facilitates the development of new high-performance diagnostic tools for resource-limited applications. The theory and software presented here are general and are applicable to designing and analyzing other digital analytical platforms, including digital immunoassays and digital bacterial analysis. The approach is not limited to SlipChip and could also be useful for the design of systems on valve-based and droplet-based platforms. In a separate publication by Shen et al. (J. Am. Chem. Soc., 2011, DOI: 10.1021/ja2060116), this approach is used to design and test digital RT-PCR devices for quantifying RNA.
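The statistical core of a multivolume design is a Poisson model: a well of volume v is positive with probability 1 - exp(-c v) at concentration c, and the counts from all volumes are combined in one maximum-likelihood estimate. The sketch below illustrates that combination on an invented design and invented counts; it is not the paper's software.

    # Maximum-likelihood concentration estimate for a multivolume digital PCR design.
    import numpy as np
    from scipy.optimize import minimize_scalar

    v = np.array([1.0, 5.0, 25.0, 125.0]) * 1e-6   # well volumes, mL (1-125 nL, assumed design)
    n = np.array([160, 160, 160, 160])             # wells per volume (assumed)
    pos = np.array([2, 9, 42, 118])                # positive wells observed (invented counts)

    def neg_log_likelihood(c):
        p = 1.0 - np.exp(-c * v)                   # P(well positive) under the Poisson model
        return -np.sum(pos * np.log(p) + (n - pos) * (-c * v))

    res = minimize_scalar(neg_log_likelihood, bounds=(1.0, 1e8), method="bounded")
    print(f"MLE concentration ~ {res.x:.3g} molecules/mL")   # on the order of 1e4 here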
Wioland, Liên
2013-10-01
Statistics from the French Employee National Health Insurance Fund indicate high accident levels in the transport sector. This study represents initial thinking on a new approach to transport sector prevention based on the assumption that a work situation could be improved by acting on another interconnected work situation. Ergonomic analysis of two connected work situations, involving the road haulage drivers and cross-docking platform employees, was performed to test this assumption. Our results show that drivers are exposed to a number of identified risks, but their multiple tasks raise the question of activity intensification. The conditions, under which the drivers will perform their work and take to the road, are partly determined by the quality and organisation of the platform with which they interact. We make a number of recommendations (e.g. changing handling equipment, re-appraising certain jobs) to improve platform organisation and employee working conditions with the aim of also improving driver conditions. These initial steps in this prevention approach appear promising, but more detailed investigation is required.
Engagement Patterns of High and Low Academic Performers on Facebook Anatomy Pages.
Jaffar, Akram Abood; Eladl, Mohamed Ahmed
2016-01-01
Only a few studies have investigated how students use and respond to social networks in the educational context as opposed to social use. In this study, the engagement of medical students on anatomy Facebook pages was evaluated in view of their academic performance. High performers contributed to most of the engagements. They also had a particular preference for higher levels of engagement. Although the students were deeply involved in the educational element of the pages, they continued to appreciate the inherent social element. The profound engagement of the high performers indicated a consistency between Facebook use in the educational context and better student performance. At the same time, the deeper engagement of high performers refutes the opinion that Facebook use is a distractor. Instead, it supports the notion that Facebook could be a suitable platform to engage students in an educational context.
A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission
Parker, Jon; Epstein, Joshua M.
2013-01-01
The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability.
Hybrid Integrated Platforms for Silicon Photonics
Liang, Di; Roelkens, Gunther; Baets, Roel; Bowers, John E.
2010-01-01
A review of recent progress in hybrid integrated platforms for silicon photonics is presented. Integration of III-V semiconductors onto silicon-on-insulator substrates based on two different bonding techniques is compared, one comprising only inorganic materials, the other technique using an organic bonding agent. Issues such as bonding process and mechanism, bonding strength, uniformity, wafer surface requirement, and stress distribution are studied in detail. The application in silicon photonics to realize high-performance active and passive photonic devices on low-cost silicon wafers is discussed. Hybrid integration is believed to be a promising technology in a variety of applications of silicon photonics.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques for precision agriculture and farming that use imagery acquired with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a panchromatic band was simulated from the RGB data as a linear combination of the spectral channels. Next, for the simulated bands and the multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracy of the processed images.
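The panchromatic simulation step is just a per-pixel weighted sum of the R, G and B channels. The sketch below shows that step only; equal weights are an assumption (in practice the weights would be fitted to the satellite sensor's spectral response), and the Gram-Schmidt step itself is left to a remote sensing package.

    # Simulate a panchromatic band as a linear combination of RGB channels.
    import numpy as np

    def simulate_pan(rgb, weights=(1/3, 1/3, 1/3)):
        """Weighted per-pixel sum of the channels; equal weights are an assumption."""
        w = np.asarray(weights, dtype=float)
        return rgb.astype(float) @ (w / w.sum())   # contracts the last (channel) axis

    uav_frame = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in image
    pan = simulate_pan(uav_frame)
    print(pan.shape)   # (512, 512): input to the Gram-Schmidt pansharpening step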
Tiong, Ho Yee; Goh, Benjamin Yen Seow; Chiong, Edmund; Tan, Lincoln Guan Lim; Vathsala, Anatharaman
2018-03-31
Robotic-assisted kidney transplantation (RKT) with the Da Vinci (Intuitive, USA) platform has recently been developed to improve outcomes by decreasing surgical site complications and morbidity, especially in obese patients. This potential paradigm shift in the surgical technique of kidney transplantation is performed in only a few centers. For wider adoption of this high-stakes, complex operation, we aimed to develop a procedure-specific simulation platform in a porcine model for training in robotic intracorporeal vascular anastomosis and for evaluating vascular anastomosis patency. This paper describes the requirements and steps developed for the above training purpose. Over a series of four animal-ethics-approved experiments, the technique of robotic-assisted laparoscopic autotransplantation of the kidney was developed in live Amsterdam pigs (60-70 kg). The surgery was based on the vascular anastomosis technique described by Menon et al. This non-survival porcine training model is targeted at transplant surgeons with robotic surgery experience. Under general anesthesia, each pig was placed in the lateral decubitus position with placement of one robotic camera port, two robotic 8 mm ports and one assistant port. Robotic docking over the pig posteriorly was performed. The training platform involved the following procedural steps. First, ipsilateral iliac vessel dissection was performed. Second, robotic-assisted laparoscopic donor nephrectomy was performed with in situ perfusion of the kidney with cold Hartmann's solution prior to complete division of the hilar vessels and ureter and kidney mobilization. Third, the kidney was either kept in situ for orthotopic autotransplantation or mobilized to the pelvis and orientated for the vascular anastomosis, which was performed end-to-end or end-to-side after vessel-loop clamping of the iliac vessels, respectively, using 6/0 Gore-Tex sutures. Following autotransplantation and release of the vessel loops, perfusion of the graft was assessed using intraoperative indocyanine green imaging and by monitoring urine output after unclamping. This training platform demonstrates adequate face and content validity. With practice, arterial anastomotic time could be improved, showing its construct validity. This porcine training model can be useful in providing training for robotic intracorporeal vascular anastomosis and may facilitate confident translation into human transplant recipients.
Meyer, Folker; Bagchi, Saurabh; Chaterji, Somali; Gerlach, Wolfgang; Grama, Ananth; Harrison, Travis; Paczian, Tobias; Trimble, William L; Wilke, Andreas
2017-09-26
As technologies change, MG-RAST is adapting. Newly available software is being included to improve accuracy and performance. As a computational service constantly running large-volume scientific workflows, MG-RAST is the right location to perform benchmarking and implement algorithmic or platform improvements, in many cases involving trade-offs between specificity, sensitivity and run-time cost. The work in [Glass EM, Dribinsky Y, Yilmaz P, et al. ISME J 2014;8:1-3] is an example; we use existing well-studied data sets as gold standards representing different environments and different technologies to evaluate any changes to the pipeline. Currently, we use well-understood data sets in MG-RAST as the platform for benchmarking. The use of artificial data sets for pipeline performance optimization has not added value, as these data sets do not present the same challenges as real-world data sets. In addition, the MG-RAST team welcomes suggestions for improvements to the workflow. We are currently working on versions 4.02 and 4.1, both of which contain significant input from the community and our partners; they will enable double barcoding, support stronger inferences with longer-read technologies, and increase throughput while maintaining sensitivity by using Diamond and SortMeRNA. On the technical platform side, the MG-RAST team intends to support the Common Workflow Language as a standard to specify bioinformatics workflows, both to facilitate development and to enable efficient high-performance implementation of the community's data analysis tasks.
Helicopter flight simulation motion platform requirements
NASA Astrophysics Data System (ADS)
Schroeder, Jeffery Allyn
Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Zhang, Xing; Romm, Michelle; Zheng, Xueyun; ...
2016-12-29
Characterization of endogenous metabolites and xenobiotics is essential to deconvoluting the genetic and environmental causes of disease. However, surveillance of chemical exposure and disease-related changes in large cohorts requires an analytical platform that offers rapid measurement, high sensitivity, efficient separation, broad dynamic range, and application to an expansive chemical space. Here we present a novel platform for small molecule analyses that addresses these requirements by combining solid-phase extraction with ion mobility spectrometry and mass spectrometry (SPE-IMS-MS). This platform is capable of performing both targeted and global measurements of endogenous metabolites and xenobiotics in human biofluids with high reproducibility (CV ≤ 3%), sensitivity (LODs in the pM range in biofluids) and throughput (10-s sample-to-sample duty cycle). We report application of this platform to the analysis of human urine from patients with and without type 1 diabetes, where we observed statistically significant variations in the concentration of disaccharides and previously unreported chemical isomers. This SPE-IMS-MS platform overcomes many of the current challenges of large-scale metabolomic and exposomic analyses and offers a viable option for population and patient-cohort screening in an effort to gain insights into disease processes and human environmental chemical exposure.
e-Collaboration for Earth observation (E-CEO): the Cloud4SAR interferometry data challenge
NASA Astrophysics Data System (ADS)
Casu, Francesco; Manunta, Michele; Boissier, Enguerran; Brito, Fabrice; Aas, Christina; Lavender, Samantha; Ribeiro, Rita; Farres, Jordi
2014-05-01
The e-Collaboration for Earth Observation (E-CEO) project addresses the technologies and architectures needed to provide a collaborative research platform for automating data mining, processing, and information extraction experiments. The platform serves for the implementation of data challenge contests focusing on information extraction for Earth observation (EO) applications. The possibility of implementing multiple processors within a common software environment facilitates validation, evaluation and transparent peer comparison among different methodologies, which is one of the main requirements raised by scientists who develop algorithms in the EO field. In this scenario, we set up a data challenge, referred to as Cloud4SAR (http://wiki.services.eoportal.org/tiki-index.php?page=ECEO), to foster the deployment of interferometric SAR (InSAR) processing chains within a cloud computing platform. While a large variety of InSAR processing software tools are available, they require a high level of expertise and complex user interaction to be run effectively. Computing a co-seismic interferogram or a 20-year deformation time series over a volcanic area is not an easy task to perform in a fully unsupervised way and/or in a very short time (hours or less). Benefiting from ESA's E-CEO platform, participants can optimise algorithms in a Virtual Sandbox environment without being expert programmers, and compute results on high-performance cloud platforms. Cloud4SAR requires solving a relatively easy InSAR problem while trying to maximize the exploitation of the processing capabilities provided by a cloud computing infrastructure. The proposed challenge offers two different frameworks, each dedicated to participants with different skills, identified as Beginners and Experts. For both of them, the contest mainly resides in the degree of automation of the deployed algorithms, no matter which one is used, as well as in the capability of taking effective benefit from a parallel computing environment.
NASA Astrophysics Data System (ADS)
Shukitt-Hale, Barbara; Miller, Marshall; Carrihill-Knoll, Kirsty; Rabin, Bernard; Joseph, James
Previous research has shown that radiation exposure, particularly to particles of high energy and charge (HZE particles) that will be encountered on long-term space missions, can adversely affect the ability of rats to perform a variety of behavioral tasks. This outcome has implications for an astronaut's ability to successfully complete the requirements associated with these missions. Both aged and irradiated rats display cognitive impairment in tests of spatial learning and memory such as the Morris water maze and the radial arm maze. Therefore, in the present study, we used a combination of these two tests, the 8-arm radial water maze (RAWM), to measure spatial learning in rats irradiated at the NSRL with 0, 150, or 200 cGy of 56Fe radiation. Following irradiation, the rats were shipped to the HNRCA and tested in the RAWM (2-3 months later) for 5 days, 3 trials/day. In this version of the RAWM, there were 4 hidden platforms that the rat needed to locate to successfully solve a trial. Once the rat located a platform, it was allowed to remain there for 15 s before the platform sank, at which point the rat tried to locate the remaining ones. Reference memory errors (entering an arm that never contained a platform) and working memory errors (re-entering an arm in which the platform had already been found) were tabulated. Results showed that the irradiated rats made more reference and working memory errors while learning the maze, particularly on Day 3 of testing. Additionally, they utilized non-spatial strategies to solve the RAWM task, whereas the control animals used spatial strategies. These results show that irradiation with 56Fe high-energy particles produces age-like decrements in cognitive behavior that may impair the ability of astronauts to perform critical tasks during long-term space travel beyond the magnetosphere. Supported by USDA Intramural funds and NASA Grant NNX08AM66G.
Ghofrani, Mohiedean; Zhao, Chengquan; Davey, Diane D; Fan, Fang; Husain, Mujtaba; Laser, Alice; Ocal, Idris T; Shen, Rulong Z; Goodrich, Kelly; Souers, Rhona J; Crothers, Barbara A
2016-12-01
Since 2008, the College of American Pathologists has provided the human papillomavirus for cytology laboratories (CHPV) proficiency testing program to help laboratories meet the requirements of the Clinical Laboratory Improvement Amendments of 1988. This report provides an update on trends in proficiency testing performance in the College of American Pathologists CHPV program during the 4-year period from 2011 through 2014 and compares those trends with the preceding first 3 years of the program. Responses of laboratories participating in the CHPV program from 2011 through 2014 were analyzed using a nonlinear mixed model to compare different combinations of testing medium and platform. In total, 818 laboratories participated in the CHPV program at least once during the 4 years, with participation increasing during the study period. Concordance of participant responses with the target result was more than 98% (38 280 of 38 892). Overall performance with all 3 testing media (ThinPrep [Hologic, Bedford, Massachusetts], SurePath [Becton, Dickinson and Company, Franklin Lakes, New Jersey], or Digene [Qiagen, Valencia, California]) was equivalent (P = .51), and all 4 US Food and Drug Administration (FDA)-approved platforms (Hybrid Capture 2 [Qiagen], Cervista [Hologic], Aptima [Hologic], and cobas [Roche Molecular Systems, Pleasanton, California]) outperformed laboratory-developed tests, unspecified commercial kits, and other (noncommercial) methods in ThinPrep medium (P < .001). However, certain off-label combinations of platform and medium, most notably Cervista with SurePath, demonstrated suboptimal performance (P < .001). Laboratories demonstrated proficiency in using various combinations of testing media and platforms offered in the CHPV program, with statistically significant performance differences in certain combinations. These observations may be relevant in the current discussions about FDA oversight of laboratory-developed tests.
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Photonic crystal fiber technology for compact fiber-delivered high-power ultrafast fiber lasers
NASA Astrophysics Data System (ADS)
Triches, Marco; Michieletto, Mattia; Johansen, Mette M.; Jakobsen, Christian; Olesen, Anders S.; Papior, Sidsel R.; Kristensen, Torben; Bondue, Magalie; Weirich, Johannes; Alkeskjold, Thomas T.
2018-02-01
Photonic crystal fiber (PCF) technology has radically impacted the scientific and industrial ultrafast laser market. Reducing platform dimensions is important to decrease cost and footprint while maintaining high optical efficiency. We present our recent work on short 85 μm core ROD-type fiber amplifiers that maintain single-mode performance and excellent beam quality. Robust long-term performance at 100 W average power and 250 kW peak power in 20 ps pulses at 1030 nm wavelength is presented, exceeding 500 h of stable operation in terms of both polarization and power. In addition, we present our recent results on hollow-core ultrafast fiber delivery maintaining high beam quality and polarization purity.
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine (GEE) has been developed for global-scale analysis based on Earth observation data. In this work, the geometric accuracy of the two most widely used nearly global free DSMs (SRTM and ASTER) has been evaluated over the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphology, land cover and slopes. The assessment was performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSM accuracy was evaluated through computation of standard statistical parameters, both at global scale (considering the whole state/region) and as a function of terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges, for SRTM, from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the analysis shows better accuracy for SRTM in flat areas, whereas the ASTER GDEM is more reliable in steep areas, where the slopes increase. These preliminary results highlight the potential of GEE for performing DSM assessment on a global scale.
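NMAD, the robust dispersion measure used above, is simply 1.4826 times the median absolute deviation of the elevation differences, which makes it far less sensitive to outliers than the standard deviation. A minimal sketch, run on synthetic heavy-tailed errors rather than the SRTM/ASTER data:

    # NMAD vs standard deviation on synthetic, outlier-prone elevation errors.
    import numpy as np

    def nmad(errors):
        """Normalized median absolute deviation (robust dispersion, in meters)."""
        med = np.median(errors)
        return 1.4826 * np.median(np.abs(errors - med))

    rng = np.random.default_rng(0)
    dh = rng.standard_t(df=3, size=10_000) * 5.0   # synthetic heavy-tailed errors, m
    print(f"std = {dh.std():.1f} m, NMAD = {nmad(dh):.1f} m")  # NMAD sits well below std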
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senthilkumar, S
2014-06-01
Purpose: The main purpose of this work was to develop an in-house, low-cost respiratory motion phantom platform for testing the accuracy of a gated radiotherapy system and to analyze dosimetric differences during gated radiotherapy. Methods: An in-house respiratory motion platform (RMP) was designed and constructed for testing the targeting accuracy of the respiratory tracking system. The RMP consists of an acrylic chest wall platform, 2 DC motors, 4 IR sensors, a speed controller circuit, 2 LEDs and 2 moving rods inside the RMP. The velocity of the movement can be varied from 0 to 30 cycles per minute. The platform is mounted to a base using precision linear bearings. The base and platform are made of clear, 15 mm thick polycarbonate plastic, and the linear ball bearings are oriented to restrict the platform to a movement of approximately 50 mm up and down with very little friction. Results: The targeting accuracy of the respiratory tracking system was evaluated using the phantom with and without respiratory movement of varied amplitude. We found a 5% dose difference to the PTV during movement in comparison with the static PTV. The RMP can perform sinusoidal motion in 1D with a peak-to-peak amplitude of 5 to 50 mm and a cycle interval from 2 to 6 seconds. The RMP was designed to simulate the gross anatomical anterior-posterior motion attributable to respiration-induced motion of the thoracic region. Conclusion: The RMP simulates breathing, providing the means to create a comprehensive program for commissioning, training, quality assurance and dose verification of gated radiotherapy treatments. It creates anterior/posterior movement of a target over a 5 to 50 mm distance to replicate tumor motion. The targeting error of the respiratory tracking system is less than 1.0 mm, which shows it is suitable for clinical treatment with high performance.
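The programmable motion the phantom produces is a single sinusoid in the anterior-posterior direction; the sketch below generates one trace at an assumed setting within the reported ranges (25 mm amplitude, 4 s period), e.g. for driving a motion stage or checking a tracking log.

    # One sinusoidal breathing trace within the phantom's reported ranges.
    import numpy as np

    amplitude_mm = 25.0                # half of a 50 mm peak-to-peak stroke (assumed setting)
    period_s = 4.0                     # within the 2-6 s cycle interval (assumed setting)
    t = np.linspace(0.0, 20.0, 2001)   # 20 s of motion sampled at 100 Hz
    z = amplitude_mm * np.sin(2 * np.pi * t / period_s)
    print(f"peak-to-peak excursion: {z.max() - z.min():.1f} mm")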
Song, Jiao; Liu, Xuejun; Wu, Jiejun; Meehan, Michael J; Blevitt, Jonathan M; Dorrestein, Pieter C; Milla, Marcos E
2013-02-15
We have developed an ultra-performance liquid chromatography-multiple reaction monitoring/mass spectrometry (UPLC-MRM/MS)-based, high-content, high-throughput platform that enables simultaneous profiling of multiple lipids produced ex vivo in human whole blood (HWB) on treatment with calcium ionophore and its modulation with pharmacological agents. HWB samples were processed in a 96-well plate format compatible with high-throughput sample processing instrumentation. We employed a scheduled MRM (sMRM) method, with a triple-quadrupole mass spectrometer coupled to a UPLC system, to measure absolute amounts of 122 distinct eicosanoids using deuterated internal standards. In a 6.5-min run, we resolved and detected with high sensitivity (lower limit of quantification in the range of 0.4-460 pg) all targeted analytes from a very small HWB sample (2.5 μl). Approximately 90% of the analytes exhibited a dynamic range exceeding 1000. We also developed a tailored software package that dramatically sped up the overall data quantification and analysis process with superior consistency and accuracy. Matrix effects from HWB and precision of the calibration curve were evaluated using this newly developed automation tool. This platform was successfully applied to the global quantification of changes on all 122 eicosanoids in HWB samples from healthy donors in response to calcium ionophore stimulation.
2016-11-10
A heavy-lift crane lifts the second half of the C-level work platforms, C north, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle of the Vehicle Assembly Building (VAB) at NASA's Kennedy Space Center in Florida. The C platform will be moved into High Bay 3 for installation on the north side of High Bay 3. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
Music-Games: A Case Study of Their Impact
ERIC Educational Resources Information Center
Cassidy, Gianna G.; Paisley, Anna M. J. M.
2013-01-01
Music-games present a highly pervasive new platform to create, perform, appreciate and transmit music through peer and online communities (e.g., Peppler, Downton, Lindsay, & Hay, 2011). While learners are increasingly engaged with such digital music participation outside the classroom, evidence indicates learners are increasingly disengaged…
SCEC Earthquake System Science Using High Performance Computing
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.
2008-12-01
The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high-performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re-usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at ever higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1 Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10 Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1 Hz deterministic simulation results with 10 Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new PSHA hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high-performance codes were run on NSF TeraGrid sites, including simulations that used the full PSC Big Ben supercomputer (4096 cores) and simulations that ran on more than 10K cores at TACC Ranger. The SCEC/CME group used scientific workflow tools and grid computing to run more than 1.5 million jobs at NCSA for the CyberShake project. Visualizations produced by a SCEC/CME researcher of the 10 Hz ShakeOut 1.2 scenario simulation data were used by USGS in ShakeOut publications and public outreach efforts. OpenSHA was ported onto an NSF supercomputer and was used to produce very high-resolution PSHA maps containing more than 1.6 million hazard curves.
2016-11-10
A heavy-lift crane lowers the second half of the C-level work platforms, C north, for NASA’s Space Launch System (SLS) rocket, into High Bay 3 of the Vehicle Assembly Building (VAB) at NASA's Kennedy Space Center in Florida. The C platform will be installed on the north side of High Bay 3. In view below are several of the previously installed levels of platforms. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
The application of a Web-geographic information system for improving urban water cycle modelling.
Mair, M; Mikovits, C; Sengthaler, M; Schöpf, M; Kinzel, H; Urich, C; Kleidorfer, M; Sitzenfrei, R; Rauch, W
2014-01-01
Research in urban water management has experienced a transition from traditional model applications to modelling water cycles as an integrated part of urban areas. This includes the interlinking of models from many research areas (e.g. urban development, socio-economy, urban water management). The integration and simulation are realized in newly developed frameworks (e.g. DynaMind and OpenMI) and often assume advanced programming knowledge. This work presents a Web-based urban water management modelling platform which simplifies the setup and usage of complex integrated models. The platform is demonstrated with a small application example on a case study within the Alpine region. The model used is a DynaMind model benchmarking the impact of newly connected catchments on the flooding behaviour of an existing combined sewer system. The workflow of the user within a Web browser is demonstrated, and benchmark results are shown. The presented platform hides implementation-specific aspects behind Web-service-based technologies so that users can focus on their main aim, which is urban water management modelling and benchmarking. Moreover, this platform offers centralized data management, automatic software updates and access to high-performance computers from desktop computers and mobile devices.
Leng, Yumin; Qian, Sihua; Wang, Yuhui; Lu, Cheng; Ji, Xiaoxu; Lu, Zhiwen; Lin, Hengwei
2016-01-01
Multidimensional sensing offers advantages in accuracy, diversity and capability for the simultaneous detection and discrimination of multiple analytes; however, previous reports usually require complicated synthesis/fabrication processes and/or a variety of techniques (or instruments) to acquire signals. Therefore, to take full advantage of this concept, simple designs are highly desirable. Herein, a novel concept is conceived to construct multidimensional sensing platforms based on a single indicator that is capable of showing diverse color/fluorescence responses upon the addition of different analytes. By extracting hidden information from these responses, such as red, green and blue (RGB) alterations, a triple-channel-based multidimensional sensing platform can be fabricated, and the RGB alterations are further amenable to standard statistical methods. As a proof-of-concept study, a triple-channel sensing platform is fabricated solely using dithizone, with the assistance of cetyltrimethylammonium bromide (CTAB) for hyperchromicity and sensitization, which demonstrates superior capabilities in the detection and identification of ten common heavy metal ions at the standard wastewater-discharge concentrations of China. Moreover, this sensing platform also shows promise for semi-quantitative and even quantitative analysis of these heavy metal ions individually, with high sensitivity. Finally, density functional theory calculations are performed to reveal the foundations of this analysis. PMID:27146105
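A minimal sketch of the triple-channel read-out idea (illustrative values, not the authors' code): each analyte response is reduced to a vector of red, green and blue alterations relative to the blank indicator, and the vectors are then stacked for standard statistical pattern recognition.

    import numpy as np

    def rgb_alteration(blank_rgb, sample_rgb):
        # (dR, dG, dB) of a sample relative to the blank indicator
        return np.asarray(sample_rgb, float) - np.asarray(blank_rgb, float)

    blank = (120, 200, 90)   # dithizone/CTAB blank color (made up)
    responses = {
        "Hg2+": rgb_alteration(blank, (180, 150, 60)),
        "Pb2+": rgb_alteration(blank, (140, 190, 120)),
    }
    # Matrix of per-analyte (dR, dG, dB) vectors for PCA/HCA/LDA etc.
    X = np.vstack(list(responses.values()))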
An organophosphonate strategy for functionalizing silicon photonic biosensors
Shang, Jing; Cheng, Fang; Dubey, Manish; Kaplan, Justin M.; Rawal, Meghana; Jiang, Xi; Newburg, David S.; Sullivan, Philip A.; Andrade, Rodrigo B.; Ratner, Daniel M.
2012-01-01
Silicon photonic microring resonators have established their potential for label-free and low-cost biosensing applications. However, the long-term performance of this optical sensing platform requires robust surface modification and biofunctionalization. Herein, we demonstrate a conjugation strategy based on an organophosphonate surface coating and vinyl sulfone linker to biofunctionalize silicon resonators for biomolecular sensing. To validate this method, a series of glycans, including carbohydrates and glycoconjugates, were immobilized on divinyl sulfone (DVS)/organophosphonate-modified microrings and used to characterize carbohydrate-protein and norovirus particle interactions. This biofunctional platform was able to orthogonally detect multiple specific carbohydrate-protein interactions simultaneously. Additionally, the platform was capable of reproducible binding after multiple regenerations by high-salt, high-pH or low-pH solutions and after 1-month storage in ambient conditions. This remarkable stability and durability of the organophosphonate immobilization strategy will facilitate the application of silicon microring resonators in various sensing conditions, prolong their lifetime, and minimize the cost for storage and delivery; these characteristics are requisite for developing biosensors for point-of-care and distributed diagnostics and other biomedical applications. In addition, the platform demonstrated its ability to characterize carbohydrate-mediated host-virus interactions, providing a facile method for discovering new anti-viral agents to prevent infectious disease. PMID:22220731
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitarka, A.
In this project we developed GEN_SRF4, a computer program for generating kinematic rupture models, compatible with the SRF format, using the Irikura and Miyake (2011) asperity-based earthquake rupture model (IM2011 hereafter). IM2011, also known as Irikura's recipe, has been widely used to model and simulate ground motion from earthquakes in Japan. An essential part of the method is its kinematic rupture generation technique, which is based on a deterministic rupture asperity modeling approach. The simplicity of the source model and the efficiency of IM2011 at reproducing ground motion from earthquakes recorded in Japan make it attractive to developers and users of the Southern California Earthquake Center Broadband Platform (SCEC BB platform). Besides writing the code, the objective of our study was to test the transportability of IM2011 to broadband simulation methods used by the SCEC BB platform. Here we test it using the Graves and Pitarka (2010) method, implemented in the platform. We performed broadband (0.1-10 Hz) ground motion simulations for an M6.7 scenario earthquake using rupture models produced with both GEN_SRF4 and the rupture generator of Graves and Pitarka (2016) (GP2016 hereafter). In the simulations we used the same Green's functions and the same approaches for calculating the low-frequency and high-frequency parts of the ground motion, respectively.
3D hybrid integrated lasers for silicon photonics
NASA Astrophysics Data System (ADS)
Song, B.; Pinna, S.; Liu, Y.; Megalini, L.; Klamkin, J.
2018-02-01
A novel 3D hybrid integration platform that combines group III-V materials with silicon photonics to yield high-performance lasers is presented. This platform is based on flip-chip bonding and vertical optical coupling. In this work, indium phosphide (InP) devices with monolithic vertical total-internal-reflection turning mirrors were bonded to active silicon photonic circuits containing vertical grating couplers. Greater than 2 mW of optical power was coupled into a silicon waveguide from an InP laser. The InP devices can also be bonded directly to the silicon substrate, providing an efficient path for heat dissipation owing to the higher thermal conductance of silicon compared to InP. Lasers realized with this technique demonstrated a thermal impedance as low as 6.2°C/W, allowing for high efficiency and operation at high temperature. InP reflective semiconductor optical amplifiers were also integrated by 3D hybrid integration to form integrated external-cavity lasers. These lasers demonstrated a wavelength tuning range of 30 nm, relative intensity noise lower than -135 dB/Hz and a laser linewidth of 1.5 MHz. This platform is promising for the integration of InP lasers and photonic integrated circuits on silicon photonics.
Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix
NASA Astrophysics Data System (ADS)
Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian
2015-07-01
We demonstrate lensless quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ~3.7 μm and an axial resolution of ~5 μm over a large imaging field of view (FOV) of 24 mm2. The resolution and FOV can be further improved straightforwardly by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.
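For context, the transport-of-intensity equation (TIE) underlying this phase-retrieval approach relates the axial intensity derivative to the in-plane phase (standard form, not specific to this paper):

\[
\frac{\partial I(\mathbf{r})}{\partial z} = -\frac{\lambda}{2\pi}\,\nabla_{\perp}\cdot\left[I(\mathbf{r})\,\nabla_{\perp}\phi(\mathbf{r})\right],
\]

where I is the intensity, \phi the phase, \lambda the wavelength and \nabla_{\perp} the transverse gradient; in multi-wavelength variants, varying the illumination wavelength provides the diversity needed to estimate the axial derivative without mechanical defocusing.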
Assembly Platform For Use In Outer Space
NASA Technical Reports Server (NTRS)
Rao, Niranjan S.; Buddington, Patricia A.
1995-01-01
Report describes conceptual platform or framework for use in assembling other structures and spacecraft in outer space. Consists of three fixed structural beams comprising central beam and two cross beams. Robotic manipulators spaced apart on platform to provide telerobotic operation of platform by either space-station or ground crews. Platform and attached vehicles function synergistically to achieve maximum performance for intended purposes.
VAB Platform K(2) Lift & Install into Highbay 3
2016-03-07
A 250-ton crane is used to lift the second half of the K-level work platforms for NASA’s Space Launch System (SLS) rocket high above the transfer aisle inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. The platform is being lifted up for transfer into High Bay 3 for installation. The platform will be secured about 86 feet above the VAB floor, on tower E of the high bay. The K work platforms will provide access to the SLS core stage and solid rocket boosters during processing and stacking operations on the mobile launcher. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to High Bay 3 to support processing of the SLS and Orion spacecraft. A total of 10 levels of new platforms, 20 platform halves altogether, will surround the SLS rocket and Orion spacecraft.
Towards an Open, Distributed Software Architecture for UxS Operations
NASA Technical Reports Server (NTRS)
Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette
2015-01-01
To address the growing need to evaluate, test, and certify an ever-expanding ecosystem of UxS platforms in preparation for cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system that provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission-execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third-party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
A remark on copy number variation detection methods.
Li, Shuo; Dou, Xialiang; Gao, Ruiqi; Ge, Xinzhou; Qian, Minping; Wan, Lin
2018-01-01
Copy number variations (CNVs) are gains and losses of DNA sequence in a genome. High-throughput platforms such as microarrays and next-generation sequencing (NGS) technologies have been applied to detect genome-wide copy number losses. Although progress has been made with both approaches, the accuracy and consistency of CNV calling from the two platforms remain in dispute. In this study, we perform a deep analysis of copy number losses on 254 human DNA samples, which have both SNP microarray data and NGS data publicly available from the HapMap Project and the 1000 Genomes Project, respectively. We show that the copy number losses reported by the HapMap Project and the 1000 Genomes Project have less than 30% overlap, even though these reports were required by their corresponding projects to have cross-platform (e.g. PCR, microarray and high-throughput sequencing) experimental support and state-of-the-art calling methods were employed. On the other hand, copy number losses called directly from the HapMap microarray data by an accurate algorithm, CNVhac, almost all show lower read mapping depth in the NGS data; furthermore, 88% of them can be supported by sequences with breakpoints in the NGS data. Our results suggest that microarrays can call CNVs reliably and that the unessential requirement of additional cross-platform support may introduce false negatives. The inconsistency of CNV reports between the HapMap Project and the 1000 Genomes Project might result from the limited information contained in microarray data, inconsistent detection criteria, or the filtering effect of cross-platform support. The statistical tests on CNVs called by CNVhac show that microarray data can offer reliable CNV reports, and the majority of CNV candidates can be confirmed by raw sequences. Therefore, CNV candidates given by a good caller can be highly reliable without cross-platform support, and additional experimental validation should be applied as needed rather than as a blanket requirement.
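To make the read-depth criterion concrete, a minimal sketch (hypothetical windowed depths, assuming NumPy; not CNVhac itself) that flags candidate copy-number losses as windows whose NGS mapping depth falls well below the genome-wide baseline:

    import numpy as np

    def flag_losses(depth_per_window, min_ratio=0.6):
        # Flag windows whose depth is below min_ratio times the
        # genome-wide median -- a crude proxy for a copy-number loss.
        depth = np.asarray(depth_per_window, float)
        return np.where(depth < min_ratio * np.median(depth))[0]

    depth = [31, 30, 29, 14, 13, 30, 32]   # illustrative per-window depths
    print(flag_losses(depth))              # -> [3 4]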
Wilkinson, Samuel L.; John, Shibu; Walsh, Roddy; Novotny, Tomas; Valaskova, Iveta; Gupta, Manu; Game, Laurence; Barton, Paul J R.; Cook, Stuart A.; Ware, James S.
2013-01-01
Background Molecular genetic testing is recommended for diagnosis of inherited cardiac disease, to guide prognosis and treatment, but access is often limited by cost and availability. Recently introduced high-throughput bench-top DNA sequencing platforms have the potential to overcome these limitations. Methodology/Principal Findings We evaluated two next-generation sequencing (NGS) platforms for molecular diagnostics. The protein-coding regions of six genes associated with inherited arrhythmia syndromes were amplified from 15 human samples using parallelised multiplex PCR (Access Array, Fluidigm), and sequenced on the MiSeq (Illumina) and Ion Torrent PGM (Life Technologies). Overall, 97.9% of the target was sequenced adequately for variant calling on the MiSeq, and 96.8% on the Ion Torrent PGM. Regions missed tended to be of high GC-content, and most were problematic for both platforms. Variant calling was assessed using 107 variants detected using Sanger sequencing: within adequately sequenced regions, variant calling on both platforms was highly accurate (Sensitivity: MiSeq 100%, PGM 99.1%. Positive predictive value: MiSeq 95.9%, PGM 95.5%). At the time of the study the Ion Torrent PGM had a lower capital cost and individual runs were cheaper and faster. The MiSeq had a higher capacity (requiring fewer runs), with reduced hands-on time and simpler laboratory workflows. Both provide significant cost and time savings over conventional methods, even allowing for adjunct Sanger sequencing to validate findings and sequence exons missed by NGS. Conclusions/Significance MiSeq and Ion Torrent PGM both provide accurate variant detection as part of a PCR-based molecular diagnostic workflow, and provide alternative platforms for molecular diagnosis of inherited cardiac conditions. Though there were performance differences at this throughput, platforms differed primarily in terms of cost, scalability, protocol stability and ease of use. Compared with current molecular genetic diagnostic tests for inherited cardiac arrhythmias, these NGS approaches are faster, less expensive, and yet more comprehensive. PMID:23861798
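As a reminder of how the reported concordance metrics are defined (standard formulas; the counts below are illustrative, not the paper's raw tallies):

    def sensitivity(tp, fn):
        # fraction of true (Sanger-confirmed) variants that were called
        return tp / (tp + fn)

    def ppv(tp, fp):
        # positive predictive value: fraction of calls that are true
        return tp / (tp + fp)

    # e.g. 107 Sanger variants, 106 called, 5 extra false-positive calls
    print(sensitivity(tp=106, fn=1))   # ~0.991, cf. the PGM's 99.1%
    print(ppv(tp=106, fp=5))           # ~0.955, cf. ~95.5%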
Richardson, R. Mark; Kells, Adrian P.; Martin, Alastair J.; Larson, Paul S.; Starr, Philip A.; Piferi, Peter G.; Bates, Geoffrey; Tansey, Lisa; Rosenbluth, Kathryn H.; Bringas, John R.; Berger, Mitchel S.; Bankiewicz, Krystof S.
2011-01-01
Background/Aims A skull-mounted aiming device and integrated software platform has been developed for MRI-guided neurological interventions. In anticipation of upcoming gene therapy clinical trials, we adapted this device for real-time convection-enhanced delivery of therapeutics via a custom-designed infusion cannula. The targeting accuracy of this delivery system and the performance of the infusion cannula were validated in nonhuman primates. Methods Infusions of gadoteridol were delivered to multiple brain targets and the targeting error was determined for each cannula placement. Cannula performance was assessed by analyzing gadoteridol distributions and by histological analysis of tissue damage. Results The average targeting error for all targets (n = 11) was 0.8 mm (95% CI = 0.14). For clinically relevant volumes, the distribution volume of gadoteridol increased as a linear function (R2 = 0.97) of the infusion volume (average slope = 3.30, 95% CI = 0.2). No infusions in any target produced occlusion, cannula reflux or leakage from adjacent tracts, and no signs of unexpected tissue damage were observed. Conclusions This integrated delivery platform allows real-time convection-enhanced delivery to be performed with a high level of precision, predictability and safety. This approach may improve the success rate for clinical trials involving intracerebral drug delivery by direct infusion. PMID:21494065
SwaMURAy - Swapping Memory Unit for Radio Astronomy
NASA Astrophysics Data System (ADS)
Winberg, Simon
2016-03-01
This paper concerns the design and performance testing of an HDL module called SwaMURAy, a configurable, high-speed data sequencing and flow-control module serving as an intermediary between data acquisition and subsequent processing stages. While a FIFO suffices for many applications, our case needed a more elaborate solution to overcome legacy design limitations. The SwaMURAy is designed around a system in which a block of sampled data is acquired at a fast rate and then distributed among multiple processing paths to achieve a desired overall processing rate. This architecture provides an effective design pattern around which various software-defined radio (SDR) and radio astronomy applications can be built. This solution was partly a response to legacy design restrictions of the SDR platform we used, a difficulty likely experienced by many developers whereby new sampling peripherals are inhibited by legacy characteristics of an underlying reconfigurable platform. Our SDR platform had a planned lifetime of at least five years, as a complete redesign and refabrication would be too costly. While the SwaMURAy overcame some performance problems, other problems arose. This paper overviews the SwaMURAy design and the performance improvements achieved in an SDR case study, and discusses remaining limitations and the workarounds we expect will achieve further improvements.
Observing the ocean with different platforms/methods. Advantages, disadvantages and lessons learnt
NASA Astrophysics Data System (ADS)
Petihakis, George; Potiris, Manolis; Ntoumas, Manolis; Frangoulis, Kostas; Tsiaras, Kostas; Triantafyllou, George; Pollani, Annika
2015-04-01
Methods for observing and measuring the ocean present remarkable diversity. In situ sampling and remote sensing, with automated or manual measurements using sensing probes, utilize different measuring principles, sample different parts of the system, are characterized by different accuracy/precision, and sample over a large range of spatial and temporal scales with variable resolution. Measurements quite often depend on the platform design and on the platform's interaction with the highly variable ambient environment. In addition to the aforementioned issues, which make the combination of data from different sources scientifically challenging, there are also a number of technical and data issues. These are important for the good operational status of the platforms, the smooth data flow and the collection of appropriate metadata. Finally, the raw data files need to be processed into a user-friendly output format so that the operator can identify sensor drift and failures as early as possible. In this work, data from different observation platforms/sensors are analysed and compared, and the mechanisms and processes responsible for differences are identified. In more detail, temperature, salinity and chlorophyll data from four fixed observing stations, one FerryBox, satellites and a monthly in situ sampling program are used. The main results indicate that (a) regular calibration according to the expected parameter range and a well-defined, consistent deployment plan for proven sensors is sufficient for acquiring high-quality data in the long term, while better knowledge of the site-specific response of new instrumentation is required for producing consistent long-term data; (b) duplicate sensors on one platform considerably improve data flow and data quality; (c) if an area is sampled by multiple platforms, platform-dependent errors can be quantified; (d) fixed-point observatories are efficient tools for assessing the regional performance of satellite products, and higher vertical and temporal sampling rates in the upper 20 m of the water column increase the inter-comparability between the two platforms; (e) delayed-mode, lower-processing-level data/metadata should be archived and disseminated in addition to standard formatted files, owing to analysis artifacts and loss of information during transmission and processing.
NASA Astrophysics Data System (ADS)
Leakeas, Charles L.; Capehart, Shay R.; Bartell, Richard J.; Cusumano, Salvatore J.; Whiteley, Matthew R.
2011-06-01
Laser weapon systems comprised of tiled subapertures are rapidly emerging in importance in the directed energy community. Performance models of these laser weapon systems have been developed from numerical simulations with WaveTrain, a high-fidelity wave-optics code developed by MZA Associates. System characteristics such as mutual coherence, differential jitter, and beam-quality rms wavefront error are defined for a focused beam on the target. Engagement scenarios are defined for various platform and target altitudes, speeds, headings, and slant ranges, along with the natural wind speed and heading. Inputs to the performance model include platform and target heights and velocities, Fried coherence length, Rytov number, isoplanatic angle, thermal blooming distortion number, Greenwood and Tyler frequencies, and atmospheric transmission. The performance model is fitted to power-in-the-bucket (PIB) values from the simulation results, with the vacuum diffraction-limited spot size taken as the bucket. The goal is to develop robust performance models for aperture phase error, turbulence, and thermal blooming effects in tiled-subaperture systems.
Designing algorithm visualization on mobile platform: The proposed guidelines
NASA Astrophysics Data System (ADS)
Supli, A. A.; Shiratuddin, N.
2017-09-01
This paper describes an ongoing study of design guidelines for algorithm visualization (AV) on mobile platforms, aimed at helping students learn the data structures and algorithms (DSA) subject effectively. Our previous review indicated that design guidelines for AV on mobile platforms are still few; most previous AV guidelines were developed for desktop and website platforms. In fact, mobile learning has been shown to enhance engagement in learning and thus affect students' performance. In addition, researchers highly recommend including UI design and interactivity when designing effective AV systems. However, the discussion of these two aspects in previous AV design guidelines is not comprehensive. UI design in this paper describes the arrangement of AV features in a mobile environment, whereas interactivity concerns active-learning-strategy features based on learning experiences (how to engage learners). Thus, the main objective of this study is to propose design guidelines for AV on mobile platforms (AVOMP) that comprehensively cover UI design and interactivity. These guidelines are developed through content analysis and comparative analysis of various related studies, and they are useful for AV designers in constructing AVOMP for various DSA topics.
Development of a novel automated cell isolation, expansion, and characterization platform.
Franscini, Nicola; Wuertz, Karin; Patocchi-Tenzer, Isabel; Durner, Roland; Boos, Norbert; Graf-Hausner, Ursula
2011-06-01
Implementation of regenerative medicine in the clinical setting requires not only biological inventions but also the development of reproducible and safe methods for cell isolation and expansion. As the currently used manual techniques do not fulfill these requirements, there is a clear need to develop an adequate robotic platform for automated, large-scale production of cells or cell-based products. Here, we demonstrate an automated liquid-handling cell-culture platform that can be used to isolate, expand, and characterize human primary cells (e.g., from intervertebral disc tissue) with results that are comparable to the manual procedure. Specifically, no differences could be observed for cell yield, viability, aggregation rate, growth rate, and phenotype. Importantly, all steps, from the enzymatic isolation of cells from the biopsy to the final quality control, can be performed completely by the automated system because of novel tools that were incorporated into the platform. This automated cell-culture platform can therefore entirely replace manual processes in areas that require high throughput while maintaining stability and safety, such as clinical or industrial settings.
Broadband set-top box using MAP-CA processor
NASA Astrophysics Data System (ADS)
Bush, John E.; Lee, Woobin; Basoglu, Chris
2001-12-01
Advances in broadband access are expected to exert a profound impact on our everyday life. Broadband will be the key to the digital convergence of communication, computer and consumer equipment, and a common thread that facilitates this convergence comprises digital media and the Internet. To address this market, Equator Technologies, Inc., is developing the Dolphin broadband set-top box reference platform using its MAP-CA Broadband Signal Processor™ chip. The Dolphin reference platform is a universal media platform for the display and presentation of digital content on end-user entertainment systems. The objective of the Dolphin reference platform is to provide a complete set-top box system based on the MAP-CA processor. It includes all the necessary hardware and software components for the emerging broadcast and broadband digital media market based on IP protocols. Such a reference design requires broadband Internet access and high-performance digital signal processing. Because it uses the MAP-CA processor, the Dolphin reference platform is completely programmable, allowing various codecs to be implemented in software, such as MPEG-2, MPEG-4, H.263 and proprietary codecs. The software implementation also enables field upgrades to keep pace with evolving technology and industry demands.
The Effects of Filter Cutoff Frequency on Musculoskeletal Simulations of High-Impact Movements.
Tomescu, Sebastian; Bakker, Ryan; Beach, Tyson A C; Chandrashekar, Naveen
2018-02-12
Estimation of muscle forces through musculoskeletal simulation is important in understanding human movement and injury. Unmatched cutoff frequencies used to low-pass filter marker and force platform data can create artifacts during inverse dynamics analysis, but their effects on muscle force calculations are unknown. The objective of this study was to determine the effects of filter cutoff frequency on simulation parameters and on the magnitudes of lower extremity muscle forces and resultant joint contact forces during a high-impact maneuver. Eight participants performed a single-leg jump-landing. Kinematics were captured with a 3D motion capture system, and ground reaction forces were recorded with a force platform. The marker and force platform data were filtered using two matched filter cutoff combinations (10-10 Hz, 15-15 Hz) and two unmatched combinations (10-50 Hz, 15-50 Hz). Musculoskeletal simulations using Computed Muscle Control were performed in OpenSim. The results revealed significantly higher peak quadriceps (13%), hamstrings (48%), and gastrocnemius (69%) forces in the unmatched (10-50 Hz, 15-50 Hz) conditions than in the matched (10-10 Hz, 15-15 Hz) conditions (p<0.05). Resultant joint contact forces and reserve (non-physiologic) moments were similarly larger in the unmatched filter categories (p<0.05). This study demonstrated that artifacts created by filtering with unmatched cutoffs result in altered muscle forces and dynamics that are not physiologic.
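A minimal sketch of the matched-cutoff preprocessing these results argue for (assuming SciPy; the arrays and sampling rates are hypothetical): marker trajectories and ground-reaction forces are passed through the same zero-lag low-pass Butterworth filter before inverse dynamics.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def lowpass(x, cutoff_hz, fs_hz, order=4):
        # Zero-lag (forward-backward) low-pass Butterworth filter
        b, a = butter(order, cutoff_hz / (0.5 * fs_hz), btype="low")
        return filtfilt(b, a, x, axis=0)

    markers = np.random.randn(400, 3)    # synthetic marker data, 200 Hz
    grf = np.random.randn(4000, 3)       # synthetic force data, 2000 Hz
    cutoff = 15.0                        # matched cutoff, as in 15-15 Hz
    markers_f = lowpass(markers, cutoff, fs_hz=200.0)
    grf_f = lowpass(grf, cutoff, fs_hz=2000.0)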
NASA Astrophysics Data System (ADS)
Hofzumahaus, Andreas; Holland, Frank; Oebel, Andreas; Rohrer, Franz; Mentel, Thomas; Kiendler-Scharr, Astrid; Wahner, Andreas; Brauchle, Artur; Steinlein, Klaus; Gritzbach, Robert
2014-05-01
The planetary boundary layer (PBL) is the chemically most active and complex part of the atmosphere, where freshly emitted reactive trace gases, tropospheric radicals, atmospheric oxidation products and aerosols exhibit large variability and spatial gradients. In order to investigate the chemical degradation of trace gases and the formation of secondary pollutants in the PBL, a commercial Zeppelin NT was modified for use as an airborne measurement platform for chemical and physical observations with high spatial resolution. The Zeppelin NT was developed by Zeppelin Luftschifftechnik (ZLT) and is operated by Deutsche Zeppelin Reederei (DZR) in Friedrichshafen, Germany. The modification was performed in cooperation between Forschungszentrum Jülich and ZLT. The airship has a length of 75 m, can lift about 1 ton of scientific payload and can be manoeuvered with high precision by propeller engines. The modified Zeppelin can carry measurement instruments mounted on a platform on top of the Zeppelin or inside the gondola beneath the airship. Three different instrument packages were developed to investigate (a) gas-phase oxidation processes involving free radicals (OH, HO2), (b) the formation of secondary organic aerosols (SOA), and (c) new particle formation (nucleation). The presentation will describe the modified airship and provide an overview of its technical performance. Examples of its application during the recent PEGASOS flight campaigns in Europe will be given.
2013-10-21
Platform for Testing a Space Robotic System to Perform Contact Tasks in Zero-Gravity Environment. Contract number: FA9453-11-1-0306. Subject terms: microgravity, zero gravity, test platform, simulation, gravity offloading.
Parallel Climate Data Assimilation PSAS Package
NASA Technical Reports Server (NTRS)
Ding, Hong Q.; Chan, Clara; Gennery, Donald B.; Ferraro, Robert D.
1996-01-01
We have designed and implemented a set of highly efficient and highly scalable algorithms for an unstructured computational package, the PSAS data assimilation package, as demonstrated by detailed performance analysis of systematic runs on up to 512 nodes of an Intel Paragon. The equation solver achieves a sustained performance of 18 Gflops. As a result, we achieved an unprecedented 100-fold reduction in solution time on the Intel Paragon parallel platform over the Cray C90. This not only meets and exceeds the DAO time requirements, but also significantly enlarges the window of exploration in climate data assimilation.
Advances in DNA sequencing technologies for high resolution HLA typing.
Cereb, Nezih; Kim, Hwa Ran; Ryu, Jaejun; Yang, Soo Young
2015-12-01
This communication describes our experience in large-scale, G-group-level high resolution HLA typing using three different DNA sequencing platforms: ABI 3730xl, Illumina MiSeq and PacBio RS II. Recent advances in DNA sequencing technologies, so-called next-generation sequencing (NGS), have brought breakthroughs in deciphering genetic information in all living species at a large scale and at an affordable level. The NGS DNA indexing system allows sequencing of multiple genes for a large number of individuals in a single run. Our laboratory has adopted and used these technologies for HLA molecular testing services. We found that each sequencing technology has its own strengths and weaknesses, and their sequencing performances complement each other. HLA genes are highly complex, and genotyping them is quite challenging. Using these three sequencing platforms, we were able to meet all requirements for G-group-level high resolution and high volume HLA typing.
A bunch to bucket phase detector for the RHIC LLRF upgrade platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.S.; Harvey, M.; Hayes, T.
2011-03-28
As part of the overall development effort for the RHIC LLRF Upgrade Platform [1,2,3], a generic four-channel 16-bit analog-to-digital converter (ADC) daughter module was developed to provide high-speed, wide-dynamic-range digitizing and processing of signals from DC to several hundred megahertz. The first operational use of this card was to implement the bunch-to-bucket phase detector for the RHIC LLRF beam control feedback loops. This paper describes the design and performance features of this daughter module as a bunch-to-bucket phase detector, and also provides an overview of its place within the overall LLRF platform architecture as a high-performance digitizer and signal processing module suitable for a variety of applications. In modern digital control and signal processing systems, ADCs provide the interface between the analog and digital signal domains. Once digitized, signals are typically processed using algorithms implemented in field-programmable gate array (FPGA) logic, general-purpose processors (GPPs), digital signal processors (DSPs) or a combination of these. For the recently developed and commissioned RHIC LLRF Upgrade Platform, we have developed a four-channel ADC daughter module based on the Linear Technology LTC2209 16-bit, 160-MSPS ADC and the Xilinx V5FX70T FPGA. The module is designed to be relatively generic in application and, with minimal analog filtering on board, is capable of processing signals from DC to 500 MHz or more. The module's first application was to implement the bunch-to-bucket phase detector (BTB-PD) for the RHIC LLRF system. The same module also provides DC digitizing of analog-processed BPM signals used by the LLRF system for radial feedback.
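As background on how such a digital phase detector typically works (a generic I/Q demodulation sketch with illustrative frequencies, not the RHIC firmware):

    import numpy as np

    def iq_phase(samples, f_rf, fs):
        # Phase of a digitized beam signal relative to the RF reference,
        # recovered by quadrature (I/Q) demodulation and atan2.
        n = np.arange(len(samples))
        i = np.mean(samples * np.cos(2 * np.pi * f_rf * n / fs))
        q = np.mean(samples * np.sin(2 * np.pi * f_rf * n / fs))
        return np.arctan2(-q, i)

    fs, f_rf, true_phase = 160e6, 28e6, 0.3   # Hz, Hz, rad (illustrative)
    n = np.arange(1024)
    x = np.cos(2 * np.pi * f_rf * n / fs + true_phase)
    print(iq_phase(x, f_rf, fs))              # ~0.3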
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools into the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented in the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism introduces no changes to the numerical results, so the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory access. By choosing an appropriate combination of the available partitioning and coalescing capabilities at execution time, the PENS solver adapts to different computer architectures, from shared-memory to distributed-memory platforms, with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent perfectly load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be reported as this ongoing study progresses.
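The coarse-grain level of such a scheme, one block of the decomposed domain per MPI rank with ghost-cell (halo) exchange between neighbours, can be sketched as follows (a toy 1D decomposition with mpi4py, not the PENS source):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns one "block" plus one ghost cell on each side.
    u = np.full(102, float(rank))
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for _ in range(10):                  # pseudo time-stepping loop
        # Halo exchange: send edge values, receive into ghost cells
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        u[1:-1] += 0.1 * (u[:-2] - 2 * u[1:-1] + u[2:])  # stencil update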
Inspection of Pole-Like Structures Using a Visual-Inertial Aided VTOL Platform with Shared Autonomy
Sa, Inkyu; Hrabar, Stefan; Corke, Peter
2015-01-01
This paper presents an algorithm and a system for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task that is time-consuming, dangerous and expensive. Recently, micro VTOL platforms (i.e., quad-, hexa- and octa-rotors) have been rapidly gaining interest in research, military and even public domains. The unmanned, low-cost and VTOL properties of these platforms make them ideal for situations where inspection would otherwise be time-consuming and/or hazardous to humans. There are, however, challenges involved in developing such an inspection system, for example flying in close proximity to a target while maintaining a fixed stand-off distance from it, being immune to wind gusts and exchanging useful information with the remote user. To overcome these challenges, we require accurate, high-update-rate state estimation and high-performance controllers implemented onboard the vehicle. Ease of control and a live video feed are required for the human operator. We demonstrate a VTOL platform that can operate at close quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. Two approaches are presented: Position-Based Visual Servoing (PBVS) using an Extended Kalman Filter (EKF) and estimator-free Image-Based Visual Servoing (IBVS). Both use monocular visual, inertial, and sonar data, allowing the approaches to be applied in indoor or GPS-impaired environments. We extensively compare the performance of PBVS and IBVS in terms of accuracy, robustness and computational cost. Results from simulations and indoor/outdoor (day and night) flight experiments demonstrate that the system is able to successfully inspect and circumnavigate a vertical pole. PMID:26340631
HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.
Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo
2014-10-01
To investigate an accelerometer-based wearable system, named the Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. The design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were performed. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods, which are fundamental for the analysis of human sexual intercourse performance. HuMOVE was validated through characterization on a controlled experimental test bench and evaluated in a human model under simulated sexual intercourse conditions. HuMOVE was demonstrated to be a robust, quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization analysis on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model under simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The obtained outcomes also met the project expectations in terms of usability and comfort, evidenced by questionnaires that highlighted the low invasiveness and high acceptance of the device. To the best of our knowledge, the HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation.
Rohles, Christina Maria; Gießelmann, Gideon; Kohlstedt, Michael; Wittmann, Christoph; Becker, Judith
2016-09-13
The steadily growing world population and our ever more luxurious lifestyle, along with simultaneously decreasing fossil resources, have confronted modern society with the need to find renewable routes to accommodate our demands. Shifting the production pipeline from raw oil to biomass requires efficient processes for numerous platform chemicals, produced with high yield, high titer and high productivity. In the present work, we established a de novo bio-based production process for the two carbon-5 platform chemicals 5-aminovalerate and glutarate on the basis of the lysine-hyperproducing strain Corynebacterium glutamicum LYS-12. Upon heterologous implementation of the Pseudomonas putida genes davA, encoding 5-aminovaleramidase, and davB, encoding lysine monooxygenase, 5-aminovalerate production was established. Owing to the presence of endogenous genes coding for 5-aminovalerate transaminase (gabT) and glutarate semialdehyde dehydrogenase, 5-aminovalerate was partially converted to glutarate. Moreover, residual L-lysine was secreted as a by-product. The issue of by-product formation was then addressed by deletion of the lysE gene, encoding the L-lysine exporter. Additionally, a putative gabT gene was deleted to enhance 5-aminovalerate production. To fully exploit the performance of the optimized strain, fed-batch fermentation was carried out, producing 28 g L(-1) 5-aminovalerate with a maximal space-time yield of 0.9 g L(-1) h(-1). The present study describes the construction of a recombinant microbial cell factory for the production of carbon-5 platform chemicals. Beyond a basic proof of concept, we were able to specifically increase the production flux of 5-aminovalerate, thereby generating a strain with excellent production performance. Additional improvement can be expected from removal of the remaining by-product formation and of bottlenecks associated with the terminal pathway, to generate a strain applicable as the centerpiece of a bio-based production of 5-aminovalerate.
qPortal: A platform for data-driven biomedical research.
Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven
2018-01-01
Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce big amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality-control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.
Moore, J A; Nemat-Gorgani, M; Madison, A C; Sandahl, M A; Punnamaraju, S; Eckhardt, A E; Pollack, M G; Vigneault, F; Church, G M; Fair, R B; Horowitz, M A; Griffin, P B
2017-01-01
This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dielectric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on the benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on the benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols.
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii
2015-01-01
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
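The precision constraint studied above can be illustrated with a minimal Python sketch that rounds a weight vector to a signed fixed-point grid of a given bit width; the function and constants are ours for illustration, and real neuromorphic hardware mappings differ.

    import numpy as np

    def quantize_weights(w, n_bits):
        # Round weights to a symmetric signed fixed-point grid with
        # n_bits of precision (illustrative only).
        w = np.asarray(w, dtype=float)
        w_max = np.max(np.abs(w)) or 1.0
        levels = 2 ** (n_bits - 1) - 1
        step = w_max / levels
        return np.clip(np.round(w / step), -levels, levels) * step

    # The abstract reports tolerance down to roughly two bits of weight precision.
    rng = np.random.default_rng(0)
    print(quantize_weights(rng.normal(0.0, 0.1, size=5), n_bits=2))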
Lu, Luyao; Chen, Wei; Xu, Tao; Yu, Luping
2015-06-04
The integration of multiple materials with complementary absorptions into a single junction device is regarded as an efficient way to enhance the power conversion efficiency (PCE) of organic solar cells (OSCs). However, because of the increased complexity of adding one more component, only a limited number of high-performance ternary systems have been demonstrated previously. Here we report an efficient ternary blend OSC with a PCE of 9.2%. We show that the third component can reduce surface trap densities in the ternary blend. Detailed studies reveal that the improved performance results from synergistic effects of enlarged open-circuit voltage, suppressed trap-assisted recombination, enhanced light absorption, increased hole extraction, efficient energy transfer and better morphology. The working mechanism and high device performance demonstrate new insights and design guidelines for high-performance ternary blend solar cells and suggest that the ternary structure is a promising platform to boost the efficiency of OSCs.
Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node
NASA Astrophysics Data System (ADS)
Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi
2015-03-01
In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed NSR-S630D with extremely accurate overlay while maintaining throughput in various conditions resembling a real production environment. In addition, NSR-S630D has been equipped with enhanced capabilities to maintain long-term overlay stability and an improved user interface, all enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance in various conditions similar to real production environments. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we have performed several experiments with high-dose conditions to demonstrate the NSR's thermal aberration capabilities in order to achieve world-class overlay performance. Furthermore, we introduce our new software that maintains long-term overlay performance.
Huang, Chen-Yu; Keall, Paul; Rice, Adam; Colvill, Emma; Ng, Jin Aun; Booth, Jeremy T
2017-09-01
Inter-fraction and intra-fraction motion management methods are increasingly applied clinically and require the development of advanced motion platforms to facilitate testing and quality assurance program development. The aim of this study was to assess the performance of a 5 degrees-of-freedom (DoF) programmable motion platform, HexaMotion (ScandiDos, Uppsala, Sweden), against clinically observed tumor motion ranges, velocities, and accelerations and the accuracy requirements for SABR prescribed in AAPM Task Group 142. Performance specifications for the motion platform were derived from literature regarding the motion characteristics of prostate and lung tumor targets required for real-time motion management. The performance of the programmable motion platform was evaluated against (1) maximum range, velocity and acceleration (5 DoF), (2) static position accuracy (5 DoF) and (3) dynamic position accuracy using patient-derived prostate and lung tumor motion traces (3 DoF). Translational motion accuracy was compared against electromagnetic transponder measurements. Rotation was benchmarked with a digital inclinometer. The static accuracy and reproducibility for translation and rotation were <0.1 mm and <0.1°, respectively. The accuracy of reproducing dynamic patient motion was <0.3 mm. The motion platform's range met the need to reproduce clinically relevant translation and rotation ranges and its accuracy met the TG 142 requirements for SABR. The range, velocity and acceleration of the motion platform are sufficient to reproduce lung and prostate tumor motion for motion management. Programmable motion platforms are valuable tools in the investigation, quality assurance and commissioning of motion management systems in radiation oncology.
Extending the BEAGLE library to a multi-FPGA platform
2013-01-01
Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
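The throughput model quoted above can be checked with simple arithmetic; a quick Python sketch using the abstract's own numbers (variable names are ours):

    # performance ≈ arithmetic intensity x peak bandwidth x memory efficiency
    ops_per_64_bytes = 130
    intensity = ops_per_64_bytes / 64            # ≈ 2.03 ops/byte
    peak_bw_gb_s = 76.8                          # Convey HC-1 peak memory bandwidth
    mem_eff = 0.50                               # reported memory efficiency
    print(intensity * peak_bw_gb_s * mem_eff)    # ≈ 78 Gflops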
Improvement in the amine glass platform by bubbling method for a DNA microarray.
Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo
2015-01-01
A glass platform with high sensitivity for sexually transmitted diseases microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using bubbling method can be used as a clinical diagnostic tool.
High-power lightweight external-cavity quantum cascade lasers
NASA Astrophysics Data System (ADS)
Day, Timothy; Takeuchi, Eric B.; Weida, Miles; Arnone, David; Pushkarsky, Michael; Boyden, David; Caffey, David
2009-05-01
Commercially available quantum cascade gain media has been integrated with advanced coating and die attach technologies, mid-IR micro-optics and telecom-style assembly and packaging to yield cutting edge performance. When combined into Daylight's external-cavity quantum cascade laser (ECqcL) platform, multi-Watt output power has been obtained. Daylight will describe their most recent results obtained from this platform, including high cw power from compact hermetically sealed packages and narrow spectral linewidth devices. Fiber-coupling and direct amplitude modulation from such multi-Watt lasers will also be described. In addition, Daylight will present the most recent results from their compact, portable, battery-operated "thermal laser pointers" that are being used for illumination and aiming applications. When combined with thermal imaging technology, such devices provide significant benefits in contrast and identification.
NASA Astrophysics Data System (ADS)
Puche, William S.; Sierra, Javier E.; Moreno, Gustavo A.
2014-08-01
The convergence of new technologies in the digital world has driven steady growth in internet-connected devices such as televisions, smartphones, tablets, Blu-ray players, and game consoles. Major research centers are therefore working to improve network performance and mitigate bottlenecks in capacity and transmission rates for information and data. The HbbTV (Hybrid Broadcast Broadband TV) standard and OTT (Over the Top) platforms can distribute video, audio, TV, and other Internet services to devices connected directly to the cloud. We therefore propose a model, based on high-capacity optical networks, to improve the transmission capacity required by content distribution networks (CDNs) for online TV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, Willie
2016-02-11
Final report on the work performed under DESC0005240 at Boston College. The report details research into metamaterial absorber theory, thermophotovoltaics, and a dynamic three-state material capable of switching between transmissive, reflective, and absorptive states. High-temperature NIR metamaterials are also explored.
Scientists from two U.S. national laboratories, industry, and academia today launched an unprecedented effort to transform the way cancer drugs are discovered by creating an open and sharable platform that integrates high-performance computing, share
Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao
2015-08-14
This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS®; then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB® SIMULINK® controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance.
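The optimization loop described above can be mimicked with a toy stand-in: a hedged Python sketch that scores PID gains on an invented second-order axis model. All plant constants and names are ours; the actual study uses ADAMS/SIMULINK co-simulation of the full mechatronic model.

    def step_response_cost(kp, ki, kd, dt=0.001, t_end=2.0):
        # Integral-absolute-error score of a PID loop on a toy axis model
        # with inertia J and viscous damping b (invented constants).
        J, b = 0.02, 0.1
        theta = omega = integ = 0.0
        prev_err, cost = 1.0, 0.0
        for _ in range(int(t_end / dt)):
            err = 1.0 - theta                  # unit step setpoint
            integ += err * dt
            deriv = (err - prev_err) / dt
            torque = kp * err + ki * integ + kd * deriv
            omega += (torque - b * omega) / J * dt
            theta += omega * dt
            prev_err = err
            cost += abs(err) * dt
        return cost

    # Coarse grid search standing in for the co-simulation optimization loop.
    grid = [(kp, ki, kd) for kp in (1, 5, 10) for ki in (0, 1) for kd in (0.1, 0.5)]
    print(min(grid, key=lambda g: step_response_cost(*g)))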
Agnolet, Sara; Wiese, Stefanie; Verpoorte, Robert; Staerk, Dan
2012-11-02
Here, proof-of-concept of a new analytical platform used for the comprehensive analysis of a small set of commercial willow bark products is presented, and compared with a traditional standardization solely based on analysis of salicin and salicin derivatives. The platform combines principal component analysis (PCA) of two chemical fingerprints, i.e., HPLC and ¹H NMR data, and a pharmacological fingerprint, i.e., high-resolution 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonate) radical cation (ABTS⁺) reduction profile, with targeted identification of constituents of interest by hyphenated HPLC-solid-phase extraction-tube transfer NMR, i.e., HPLC-SPE-ttNMR. Score plots from PCA of HPLC and ¹H NMR fingerprints showed the same distinct grouping of preparations formulated as capsules of Salix alba bark and separation of S. alba cortex. Loading plots revealed this to be due to high amount of salicin in capsules and ampelopsin, taxifolin, 7-O-methyltaxifolin-3'-O-glucoside, and 7-O-methyltaxifolin in S. alba cortex, respectively. PCA of high-resolution radical scavenging profiles revealed clear separation of preparations along principal component 1 due to the major radical scavengers (+)-catechin and ampelopsin. The new analytical platform allowed identification of 16 compounds in commercial willow bark extracts, and identification of ampelopsin, taxifolin, 7-O-methyltaxifolin-3'-O-glucoside, and 7-O-methyltaxifolin in S. alba bark extract is reported for the first time. The detection of the novel compound, ethyl 1-hydroxy-6-oxocyclohex-2-enecarboxylate, is also described.
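The chemometric core of the platform, PCA over stacked fingerprints, can be sketched in a few lines of Python; the data below are random placeholders, whereas in the study the rows would be willow bark preparations and the columns binned HPLC or ¹H NMR intensities.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    fingerprints = rng.random((8, 200))        # 8 preparations x 200 fingerprint bins

    pca = PCA(n_components=2)
    scores = pca.fit_transform(fingerprints)   # score plot: sample grouping
    loadings = pca.components_                 # loading plot: influential bins
    print(scores.shape, loadings.shape, pca.explained_variance_ratio_)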
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments.
NASA Astrophysics Data System (ADS)
Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian
2016-11-01
Impact-focused studies of extreme weather require coupling accurate simulations of weather and climate systems with impact-measuring hydrological models, which together demand large computing resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, aimed at utilizing and maximizing HPC resources, to support the study of extreme weather impacts due to climate change. Four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with conventional gauge- or radar-driven forecasts, with the added value of a much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of the tested flood events, but the uncertainties increase markedly as the forecast horizon extends beyond 3 days. This study takes a step toward identifying how an online-mode approach can be used, in which both the numerical weather prediction and the hydrological model are executed simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. We conclude, however, that running the entire system on a reasonably powerful HPC platform does not yet allow real-time simulations, even without the most complex and demanding data simulation part.
Design distributed simulation platform for vehicle management system
NASA Astrophysics Data System (ADS)
Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua
2006-11-01
Next-generation military aircraft require high performance from the airborne management system. General modules, data integration, and high-speed data buses are needed to share and manage information across the subsystems efficiently. The subsystems include the flight control, propulsion, hydraulic power, environmental control, fuel management, and electrical power systems. The architecture is changing from unattached or mixed architectures to an integrated one: the whole airborne system is managed as a single system, so the physical devices remain distributed but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and signal-processing functions are shared, which also lays a foundation for power sharing. We establish a distributed vehicle management system using a 1553B bus and distributed processors, which provides a validation platform for research on integrated airborne system management. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.
Open systems storage platforms
NASA Technical Reports Server (NTRS)
Collins, Kirby
1992-01-01
The building blocks for an open storage system include a system platform, a selection of storage devices and interfaces, system software, and storage applications. CONVEX storage systems are based on the DS Series Data Server systems. These systems are a variant of the C3200 supercomputer with expanded I/O capabilities. These systems support a variety of medium and high speed interfaces to networks and peripherals. System software is provided in the form of ConvexOS, a POSIX compliant derivative of 4.3BSD UNIX. Storage applications include products such as UNITREE and EMASS. With the DS Series of storage systems, Convex has developed a set of products which provide open system solutions for storage management applications. The systems are highly modular, assembled from off-the-shelf components with industry standard interfaces. The C Series system architecture provides a stable base, with the performance and reliability of a general purpose platform. This combination of a proven system architecture with a variety of choices in peripherals and application software allows wide flexibility in configurations, and delivers the benefits of open systems to the mass storage world.
Design and use of a sparged platform for energy flux measurements over lakes
NASA Astrophysics Data System (ADS)
Gijsbers, S.; Wenker, K.; van Emmerik, T.; de Jong, S.; Annor, F.; Van De Giesen, N.
2012-12-01
Energy flux measurements over lakes or reservoirs demand relatively stable platforms. Platforms cannot be stabilized by fixing them on the bottom of the lake when the water body is too deep or when water levels show significant fluctuations. We present the design and first operational results of a sparged platform. The structure consists of a long PVC pipe, the sparge, which is closed at the bottom. On the PVC pipe rests an aluminum frame platform that carries the instrumentation and a solar power panel. In turn, the platform rests partially on a large inflated tire. At the bottom of the PVC pipe, lead weights and batteries were placed to ensure a very low center of gravity to minimize wave impact on the platform movement. The tire ensures a large second moment of the water plane. The overall volume of displacement is small in this sparged design. The combination of a large second moment of the water plane and a small displacement ensures a high placement of the metacenter. The distance between the center of gravity and the metacenter is relatively long and the weight is large due to the weights and batteries. This ensures that the eigenfrequency of the platform is very low. The instrumentation load consisted of a WindMaster Pro (sonic anemometer for 3D wind speed and air temperature to perform eddy covariance measurements of sensible heat flux), an NR Lite (net radiometer), and air temperature and relative humidity sensors. The platform had a wind vane and the sparge could turn freely around its anchor cable to ensure that the anemometer always faced upwind. A compass in the logger completed this setup. The stability was measured with an accelerometer. In addition to the design and its stability, some first energy flux results will be presented.
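The stability argument follows the standard small-angle naval-architecture relations, summarized here in LaTeX (notation ours; the abstract states the argument only in words):

    BM = \frac{I_{wp}}{V}, \qquad GM = KB + BM - KG, \qquad
    T_{\mathrm{roll}} = 2\pi\sqrt{\frac{I_{xx}}{m\,g\,GM}}

Here I_wp is the second moment of the waterplane area (kept large by the tire), V the displaced volume (kept small by the sparge), KB and KG the heights of the centers of buoyancy and gravity, and I_xx the platform's mass moment of inertia, which the deep lead weights and batteries make large, lengthening the natural period.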
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
2016-10-19
A heavy-lift crane lifts the first half of the C-level work platforms, C south, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle floor of the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida. The C platform will be installed on the south side of High Bay 3. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
2016-11-10
A heavy-lift crane lifts the second half of the C-level work platforms, C north, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle floor of the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida. The C platform will be installed on the north side of High Bay 3. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
2016-10-19
A heavy-lift crane lifts the first half of the C-level work platforms, C south, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle floor of the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida. The C platform will be moved into High Bay 3 for installation on the south wall. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
2016-11-10
A heavy-lift crane lifts the second half of the C-level work platforms, C north, for NASA’s Space Launch System (SLS) rocket, high up from the transfer aisle floor of the Vehicle Assembly Building (VAB) at NASA’s Kennedy Space Center in Florida. The C platform will be moved into High Bay 3 for installation on the north wall. The C platforms are the eighth of 10 levels of work platforms that will surround and provide access to the SLS rocket and Orion spacecraft for Exploration Mission 1. The Ground Systems Development and Operations Program is overseeing upgrades and modifications to VAB High Bay 3, including installation of the new work platforms, to prepare for NASA’s Journey to Mars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohlbrenner, R; Kolli, KP; Taylor, A
2014-06-01
Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity imaging acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm², p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) was achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav Kolli and Robert G. Gould for time devoted to the study. Data acquisition and analysis was performed by the authors independent of the funding source.
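The reported reductions can be verified directly from the quoted means; a quick check in Python (values from the abstract):

    dap_before, dap_after = 3033.2, 1733.6   # mGy·cm²
    ak_before, ak_after = 1445.4, 994.2      # mGy
    print(f"DAP reduction: {(1 - dap_after / dap_before) * 100:.1f}%")        # 42.8%
    print(f"air kerma reduction: {(1 - ak_after / ak_before) * 100:.1f}%")    # 31.2%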
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.
Evaluation of commercially available small RNASeq library preparation kits using low input RNA.
Yeri, Ashish; Courtright, Amanda; Danielson, Kirsty; Hutchins, Elizabeth; Alsop, Eric; Carlson, Elizabeth; Hsieh, Michael; Ziegler, Olivia; Das, Avash; Shah, Ravi V; Rozowsky, Joel; Das, Saumya; Van Keuren-Jensen, Kendall
2018-05-05
Evolving interest in comprehensively profiling the full range of small RNAs present in small tissue biopsies and in circulating biofluids, and how the profile differs with disease, has launched small RNA sequencing (RNASeq) into more frequent use. However, known biases associated with small RNASeq, compounded by low RNA inputs, have been both a significant concern and a hurdle to widespread adoption. As RNASeq is becoming a viable choice for the discovery of small RNAs in low input samples and more labs are employing it, there should be benchmark datasets to test and evaluate the performance of new sequencing protocols and operators. In a recent publication from the National Institute of Standards and Technology (Pine et al., 2018), the investigators used a commercially available set of three tissues and tested performance across labs and platforms. In this paper, we further tested the performance of low RNA input in three commonly used and commercially available RNASeq library preparation kits: NEB Next, NEXTFlex, and TruSeq small RNA library preparation. We evaluated the performance of the kits at two different sites, using three different tissues (brain, liver, and placenta) with high (1 μg) and low (10 ng) RNA input from tissue samples, or 5.0, 3.0, 2.0, 1.0, 0.5, and 0.2 ml starting volumes of plasma. As there has been a lack of robust validation platforms for differentially expressed miRNAs, we also compared low input RNASeq data with their expression profiles on three different platforms (Abcam Fireplex, HTG EdgeSeq, and Qiagen miRNome). The concordance of RNASeq results on these three platforms was dependent on the RNA expression level; the higher the expression, the better the reproducibility. The results provide an extensive analysis of small RNASeq kit performance using low RNA input, and replication of these data on three downstream technologies.
The partial coherence modulation transfer function in testing lithography lens
NASA Astrophysics Data System (ADS)
Huang, Jiun-Woei
2018-03-01
Because lithography demands high performance in projecting the semiconductor mask onto the wafer, the lens must be almost free of spherical and coma aberration. In situ optical testing for diagnosing lens performance therefore has to be established to verify performance and to guide further improvement before the lens is built and integrated with the light source. The modulation transfer function (MTF) at the critical dimension (CD) is the main performance parameter for evaluating the smallest line width the platform can fabricate when producing tiny integrated circuits. Although the MTF is widely used to evaluate optical systems, in lithography the contrast of each line pair is measured in one or two dimensions, and when the lens is tested on a bench integrated with a coherent or near-coherent light source at dimensions near the diffraction limit, the measured MTF is determined not only by the lens but also by the illumination of the platform. In this study, the partial coherence modulation transfer function (PCMTF) is proposed for testing a lithography lens by measuring the MTF at high spatial frequencies of the in situ lens, blended with partially coherent and incoherent illumination. The PCMTF can serve as one measurement for evaluating an imperfect lithography lens and guiding further improvement of lens performance.
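For reference, the contrast that any MTF measurement reduces to is commonly defined as (standard definition, not specific to this paper):

    M(\nu) = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}

per line pair at spatial frequency ν. Under partially coherent illumination the measured curve also depends on the partial coherence factor σ (illumination NA divided by projection NA), which is why the PCMTF characterizes the lens together with its illuminator rather than the lens alone.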
Flight Simulator Platform Motion and Air Transport Pilot Training
NASA Technical Reports Server (NTRS)
Lee, Alfred T.; Bussolari, Steven R.
1989-01-01
The influence of flight simulator platform motion on pilot training and performance was examined in two studies utilizing a B-727-200 aircraft simulator. The simulator, located at Ames Research Center, is certified by the FAA for upgrade and transition training in air carrier operations. Subjective ratings and objective performance of experienced B-727 pilots did not reveal any reliable effects of wide variations in platform motion design. Motion platform variations did, however, affect the acquisition of control skill by pilots with no prior heavy aircraft flying experience. The effect was limited to pitch attitude control inputs during the early phase of landing training. Implications for the definition of platform motion requirements in air transport pilot training are discussed.
NASA Astrophysics Data System (ADS)
Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.
2013-12-01
In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial for understanding the runtime behavior, identifying optimum model settings, and efficiently distinguishing potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5x speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM, of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed on JUQUEEN with processor counts on the order of 10,000. The instrumentation is used in weak and strong scaling studies with real-data cases and hypothetical idealized numerical experiments for detailed profiling and tracing analysis. The profiling is useful not only in identifying wait states that are due to the MPMD execution model, but also in fine-tuning resource allocation to the component models in search of the most suitable load balancing. This is especially necessary because, in numerical experiments that cover multiple (high-resolution) spatial scales, the time stepping, coupling frequencies, and communication overheads are constantly shifting, which makes it necessary to re-determine the model setup with each new experimental design.
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating Continuous-Time Markov Chains (CTMCs), and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two embedded controllers with low cost and high performance, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that, in terms of both power consumption and interrupt overhead, Cortex-M3 performs better than ARM7.
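The GSPN-to-CTMC-to-MRM pipeline can be illustrated with a small Python sketch; the generator matrix and rewards below are invented placeholders, not the paper's models.

    import numpy as np

    # Toy 3-state CTMC standing in for one generated from a GSPN.
    Q = np.array([[-3.0,  2.0,  1.0],    # state 0: idle
                  [ 4.0, -5.0,  1.0],    # state 1: servicing an interrupt
                  [ 2.0,  3.0, -5.0]])   # state 2: nested interrupt
    rewards = np.array([0.1, 1.0, 1.5])  # e.g. power draw per state (W)

    # Steady state: solve pi Q = 0 subject to sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("steady-state probabilities:", pi.round(4))
    print("expected reward (mean power, W):", float(pi @ rewards))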
Experimental verification of Space Platform battery discharger design optimization
NASA Astrophysics Data System (ADS)
Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.
The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.
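For context, the textbook origin of that RHP zero in a continuous-conduction-mode boost stage is the duty-cycle-to-output transfer function (a standard result; symbols ours, not from the paper):

    G_{vd}(s) = G_{d0}\,\frac{1 - s/\omega_z}{1 + \frac{s}{Q\,\omega_0} + \frac{s^2}{\omega_0^2}},
    \qquad \omega_z = \frac{(1-D)^2 R}{L}

The zero frequency ω_z falls as the duty cycle D rises, capping the usable loop crossover frequency; this is why the FMBC control design demands particular care.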
Adaptive Tunable Laser Spectrometer for Space Applications
NASA Technical Reports Server (NTRS)
Flesch, Gregory; Keymeulen, Didier
2010-01-01
An architecture and process for the rapid prototyping and subsequent development of an adaptive tunable laser absorption spectrometer (TLS) are described. Our digital hardware/firmware/software platform is both reconfigurable at design time and autonomously adaptive in real time, for both post-integration and post-launch situations. The design expands the range of viable target environments and enhances tunable laser spectrometer performance in extreme and even unpredictable environments. Through rapid prototyping with a commercial RTOS/FPGA platform, we have implemented a fully operational tunable laser spectrometer (using a highly sensitive second-harmonic technique). With this prototype, we have demonstrated autonomous real-time adaptivity in the lab with simulated extreme environments.
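A minimal Python sketch of the second-harmonic (2f) detection idea follows; the absorption line and all constants are invented for illustration, and the flight implementation is FPGA firmware rather than Python.

    import numpy as np

    fs, f_mod = 100_000.0, 1_000.0                    # sample rate, modulation (Hz)
    t = np.arange(0, 0.1, 1 / fs)

    nu = 0.3 * np.sin(2 * np.pi * f_mod * t)          # laser detuning sweep (a.u.)
    absorption = np.exp(-((nu - 0.05) ** 2) / 0.02)   # invented Gaussian line
    signal = 1.0 - 0.01 * absorption                  # detected intensity

    # Lock-in at 2f: multiply by the reference and low-pass (here, a mean).
    ref_2f = np.cos(2 * np.pi * 2 * f_mod * t)
    print(f"2f component: {2 * np.mean(signal * ref_2f):.3e}")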