REopt: A Platform for Energy System Integration and Optimization: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, T.; Cutler, D.; Anderson, K.
2014-08-01
REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site?s energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, andmore » energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.« less
A self optimizing synthetic organic reactor system using real-time in-line NMR spectroscopy.
Sans, Victor; Porwol, Luzian; Dragone, Vincenza; Cronin, Leroy
2015-02-01
A configurable platform for synthetic chemistry incorporating an in-line benchtop NMR that is capable of monitoring and controlling organic reactions in real-time is presented. The platform is controlled via a modular LabView software control system for the hardware, NMR, data analysis and feedback optimization. Using this platform we report the real-time advanced structural characterization of reaction mixtures, including 19 F, 13 C, DEPT, 2D NMR spectroscopy (COSY, HSQC and 19 F-COSY) for the first time. Finally, the potential of this technique is demonstrated through the optimization of a catalytic organic reaction in real-time, showing its applicability to self-optimizing systems using criteria such as stereoselectivity, multi-nuclear measurements or 2D correlations.
Development of Decision Analysis Specifically for Arctic Offshore Drilling Islands.
1985-12-01
the decision analysis method will - give tradeoffs between costs and design wave height, production and depth • :of water for an oil platform , etc...optimizing the type of platform that is best suited for a particular site has become an extremely difficult decision. Over fifty- one different types of...drilling and production platforms have been identified for the Arctic environment, with new concepts being developed - every year, Boslov et al (198j
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate a H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platforms stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator when satisfying the requirement of the system performance. First, we introduce a dynamic model of offshore platform with low order main modes based on mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model by optimization algorithm, we use a relaxation method with matrix operations to transform this non-convex optimization model to be a convex optimization model. Thus, it could be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
Steyer, Benjamin; Carlson-Stevermer, Jared; Angenent-Mari, Nicolas; Khalil, Andrew; Harkness, Ty; Saha, Krishanu
2016-04-01
Non-viral gene-editing of human cells using the CRISPR-Cas9 system requires optimized delivery of multiple components. Both the Cas9 endonuclease and a single guide RNA, that defines the genomic target, need to be present and co-localized within the nucleus for efficient gene-editing to occur. This work describes a new high-throughput screening platform for the optimization of CRISPR-Cas9 delivery strategies. By exploiting high content image analysis and microcontact printed plates, multi-parametric gene-editing outcome data from hundreds to thousands of isolated cell populations can be screened simultaneously. Employing this platform, we systematically screened four commercially available cationic lipid transfection materials with a range of RNAs encoding the CRISPR-Cas9 system. Analysis of Cas9 expression and editing of a fluorescent mCherry reporter transgene within human embryonic kidney cells was monitored over several days after transfection. Design of experiments analysis enabled rigorous evaluation of delivery materials and RNA concentration conditions. The results of this analysis indicated that the concentration and identity of transfection material have significantly greater effect on gene-editing than ratio or total amount of RNA. Cell subpopulation analysis on microcontact printed plates, further revealed that low cell number and high Cas9 expression, 24h after CRISPR-Cas9 delivery, were strong predictors of gene-editing outcomes. These results suggest design principles for the development of materials and transfection strategies with lipid-based materials. This platform could be applied to rapidly optimize materials for gene-editing in a variety of cell/tissue types in order to advance genomic medicine, regenerative biology and drug discovery. CRISPR-Cas9 is a new gene-editing technology for "genome surgery" that is anticipated to treat genetic diseases. This technology uses multiple components of the Cas9 system to cut out disease-causing mutations in the human genome and precisely suture in therapeutic sequences. Biomaterials based delivery strategies could help transition these technologies to the clinic. The design space for materials based delivery strategies is vast and optimization is essential to ensuring the safety and efficacy of these treatments. Therefore, new methods are required to rapidly and systematically screen gene-editing efficacy in human cells. This work utilizes an innovative platform to generate and screen many formulations of synthetic biomaterials and components of the CRISPR-Cas9 system in parallel. On this platform, we watch genome surgery in action using high content image analysis. These capabilities enabled us to identify formulation parameters for Cas9-material complexes that can optimize gene-editing in a specific human cell type. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Xiao-xu; Han, Jun-feng; Wei, Yu; Zhang, Jing; Xie, Mei-lin; Yue, Peng
2016-01-01
High precision tracking platform of celestial navigation with control mirror servo structure form, to solve the disadvantages of big volume and rotational inertia, slow response speed, and so on. It improved the stability and tracking accuracy of platform. Due to optical sensor and mirror are installed on the middle-gimbal, stiffness and resonant frequency requirement for high. Based on the application of finite element modality analysis theory, doing Research on dynamic characteristics of the middle-gimbal, and ANSYS was used for the finite element dynamic emulator analysis. According to the result of the computer to find out the weak links of the structure, and Put forward improvement suggestions and reanalysis. The lowest resonant frequency of optimization middle-gimbal avoid the bandwidth of the platform servo mechanism, and much higher than the disturbance frequency of carrier aircraft, and reduces mechanical resonance of the framework. Reaching provides a theoretical basis for the whole machine structure optimization design of high-precision of autonomous Celestial navigation tracking mirror system.
Sans, Victor; Porwol, Luzian; Dragone, Vincenza
2015-01-01
A configurable platform for synthetic chemistry incorporating an in-line benchtop NMR that is capable of monitoring and controlling organic reactions in real-time is presented. The platform is controlled via a modular LabView software control system for the hardware, NMR, data analysis and feedback optimization. Using this platform we report the real-time advanced structural characterization of reaction mixtures, including 19F, 13C, DEPT, 2D NMR spectroscopy (COSY, HSQC and 19F-COSY) for the first time. Finally, the potential of this technique is demonstrated through the optimization of a catalytic organic reaction in real-time, showing its applicability to self-optimizing systems using criteria such as stereoselectivity, multi-nuclear measurements or 2D correlations. PMID:29560211
Design and development of a microfluidic platform for use with colorimetric gold nanoprobe assays
NASA Astrophysics Data System (ADS)
Bernacka-Wojcik, Iwona
Due to the importance and wide applications of the DNA analysis, there is a need to make genetic analysis more available and more affordable. As such, the aim of this PhD thesis is to optimize a colorimetric DNA biosensor based on gold nanoprobes developed in CEMOP by reducing its price and the needed volume of solution without compromising the device sensitivity and reliability, towards the point of care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector by a low cost, solution processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that enabled to increase TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics that offer advantages of miniaturization, much lower sample/reagents consumption, enhanced system performance and functionality by integrating different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel and the light was transmitted by optical fibres enabling to guide the light very close to the analysed solution. Microfluidic chip of high aspect ratio ( 13), smooth and nearly vertical sidewalls was fabricated in PDMS using a SU-8 mould for patterning. The platform coupled to the gold nanoprobe assay enabled detection of Mycobacterium tuberculosis using 3 mul on DNA solution, i.e. 20 times less than in the previous state-of-the-art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing and sensitivity to colour variation, yielding 160% improvement of colorimetric AuNPs analysis. Planar microlenses were incorporated to converge light into the sample and then to the output fibre core increasing 6 times the signal-to-losses ratio. The optimized platform enabled detection of single nucleotide polymorphism related with obesity risk (FTO) using target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/mul) with 10 times lower solution volume (3 mul). The combination of the unique optical properties of gold nanoprobes with microfluidic platform resulted in sensitive and accurate sensor for single nucleotide polymorphism detection operating using small volumes of solutions and without the need for substrate functionalization or sophisticated instrumentation. Simultaneously, to enable on chip reagents mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop and short mixing length. The optimized device shows 80% of mixing efficiency at Re = 0.1 in 2.5 mm long mixer with the pressure drop of 6 Pa, satisfying requirements for the application in the microfluidic platform for DNA analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Yang, Yanling; Li, Yuxin
2015-02-06
Development of high resolution liquid chromatography (LC) is essential for improving the sensitivity and throughput of mass spectrometry (MS)-based proteomics. Here we present systematic optimization of a long gradient LC-MS/MS platform to enhance protein identification from a complex mixture. The platform employed an in-house fabricated, reverse phase column (100 μm x 150 cm) coupled with Q Exactive MS. The column was capable of achieving a peak capacity of approximately 700 in a 720 min gradient of 10-45% acetonitrile. The optimal loading level was about 6 micrograms of peptides, although the column allowed loading as many as 20 micrograms. Gas phasemore » fractionation of peptide ions further increased the number of peptide identification by ~10%. Moreover, the combination of basic pH LC pre-fractionation with the long gradient LC-MS/MS platform enabled the identification of 96,127 peptides and 10,544 proteins at 1% protein false discovery rate in a postmortem brain sample of Alzheimer’s disease. As deep RNA sequencing of the same specimen suggested that ~16,000 genes were expressed, current analysis covered more than 60% of the expressed proteome. Further improvement strategies of the LC/LC-MS/MS platform were also discussed.« less
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1992-01-01
A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.
Tolstikhin, Valery; Saeidi, Shayan; Dolgaleva, Ksenia
2018-05-01
We report on the design optimization and tolerance analysis of a multistep lateral-taper spot-size converter based on indium phosphide (InP), performed using the Monte Carlo method. Being a natural fit to (and a key building block of) the regrowth-free taper-assisted vertical integration platform, such a spot-size converter enables efficient and displacement-tolerant fiber coupling to InP-based photonic integrated circuits at a wavelength of 1.31 μm. An exemplary four-step lateral-taper design featuring 0.35 dB coupling loss at optimal alignment of a standard single-mode fiber; ≥7 μm 1 dB displacement tolerance in any direction in a facet plane; and great stability against manufacturing variances is demonstrated.
MetaNET--a web-accessible interactive platform for biological metabolic network analysis.
Narang, Pankaj; Khan, Shawez; Hemrom, Anmol Jaywant; Lynn, Andrew Michael
2014-01-01
Metabolic reactions have been extensively studied and compiled over the last century. These have provided a theoretical base to implement models, simulations of which are used to identify drug targets and optimize metabolic throughput at a systemic level. While tools for the perturbation of metabolic networks are available, their applications are limited and restricted as they require varied dependencies and often a commercial platform for full functionality. We have developed MetaNET, an open source user-friendly platform-independent and web-accessible resource consisting of several pre-defined workflows for metabolic network analysis. MetaNET is a web-accessible platform that incorporates a range of functions which can be combined to produce different simulations related to metabolic networks. These include (i) optimization of an objective function for wild type strain, gene/catalyst/reaction knock-out/knock-down analysis using flux balance analysis. (ii) flux variability analysis (iii) chemical species participation (iv) cycles and extreme paths identification and (v) choke point reaction analysis to facilitate identification of potential drug targets. The platform is built using custom scripts along with the open-source Galaxy workflow and Systems Biology Research Tool as components. Pre-defined workflows are available for common processes, and an exhaustive list of over 50 functions are provided for user defined workflows. MetaNET, available at http://metanet.osdd.net , provides a user-friendly rich interface allowing the analysis of genome-scale metabolic networks under various genetic and environmental conditions. The framework permits the storage of previous results, the ability to repeat analysis and share results with other users over the internet as well as run different tools simultaneously using pre-defined workflows, and user-created custom workflows.
Wind Turbine Optimization with WISDEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykes, Katherine L; Damiani, Rick R; Graf, Peter A
This presentation for the Fourth Wind Energy Systems Engineering Workshop explains the NREL wind energy systems engineering initiative-developed analysis platform and research capability to capture important system interactions to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. Topics include Wind-Plant Integrated System Design and Engineering Model (WISDEM) and multidisciplinary design analysis and optimization.
NASA Astrophysics Data System (ADS)
Crockett, Derick
Detailed observations of geosynchronous satellites from earth are very limited. To better inspect these high altitude satellites, the use of small, refuelable satellites is proposed. The small satellites are stationed on a carrier platform in an orbit near the population of geosynchronous satellites. A carrier platform equipped with deployable, refuelable SmallSats is a viable option to inspect geosynchronous satellites. The propellant requirement to transfer to a targeted geosynchronous satellite, perform a proximity inspection mission, and transfer back to the carrier platform in a nearby orbit is determined. Convex optimization and traditional optimization techniques are explored, determining minimum propellant trajectories. Propellant is measured by the total required change in velocity, delta-v. The trajectories were modeled in a relative reference frame using the Clohessy-Wiltshire equations. Mass estimations for the carrier platform and the SmallSat were determined by using the rocket equation. The mass estimates were compared to the mass of a single, non-refuelable satellite performing the same geosynchronous satellite inspection missions. From the minimum delta-v trajectories and the mass analysis, it is determined that using refuelable SmallSats and a carrier platform in a nearby orbit can be more efficient than using a single non-refuelable satellite to perform multiple geosynchronous satellite inspections.
OTG-snpcaller: An Optimized Pipeline Based on TMAP and GATK for SNP Calling from Ion Torrent Data
Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y. Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun
2014-01-01
Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology’s Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences. PMID:24824529
OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.
Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun
2014-01-01
Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technology's Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variants Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.
A Framework for Daylighting Optimization in Whole Buildings with OpenStudio
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-08-12
We present a toolkit and workflow for leveraging the OpenStudio (Guglielmetti et al. 2010) platform to perform daylighting analysis and optimization in a whole building energy modeling (BEM) context. We have re-implemented OpenStudio's integrated Radiance and EnergyPlus functionality as an OpenStudio Measure. The OpenStudio Radiance Measure works within the OpenStudio Application and Parametric Analysis Tool, as well as the OpenStudio Server large scale analysis framework, allowing a rigorous daylighting simulation to be performed on a single building model or potentially an entire population of programmatically generated models. The Radiance simulation results can automatically inform the broader building energy model, andmore » provide dynamic daylight metrics as a basis for decision. Through introduction and example, this paper illustrates the utility of the OpenStudio building energy modeling platform to leverage existing simulation tools for integrated building energy performance simulation, daylighting analysis, and reportage.« less
Chakrabortty, S; Sen, M; Pal, P
2014-03-01
A simulation software (ARRPA) has been developed in Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant in the backdrop of absence of such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model, developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs from computer simulation program and the experimental findings are excellent and consistent under varying operating conditions reflecting high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R (2) = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing performance of the plant. The software permits pre-analysis, manipulation of input data, helps in optimization and exhibits performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software first of its kind in its domain and in the well-known Microsoft Excel environment is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on the user programming and several commercial software including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, the particle swarm optimization, is incorporated into the platform. To verify the strategy proposed, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; the length of the fore-body, maximum body radius and after-body's minimum radius as the design variables. With the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. By analyses of the simulation results, it is concluded that the platform is of high efficiency and feasibility. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the searching solutions.
Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M
2003-01-01
Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing.
Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M
2003-01-01
Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing. PMID:14519201
Cost-Effectiveness Analysis of Aerial Platforms and Suitable Communication Payloads
2014-03-01
High altitude long endurance (HALE) platforms for tactical wireless communications and sensor use in military operations. (Master’s thesis, Naval...the ground, which can offer near limitless endurance. Additionally, running data over wired networks reduces wireless congestion. The most...system that utilizes different wind speeds and wind directions at different altitudes in an attempt to position the balloons for optimal communications
Multi-Criterion Preliminary Design of a Tetrahedral Truss Platform
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey
1995-01-01
An efficient method is presented for multi-criterion preliminary design and demonstrated for a tetrahedral truss platform. The present method requires minimal analysis effort and permits rapid estimation of optimized truss behavior for preliminary design. A 14-m-diameter, 3-ring truss platform represents a candidate reflector support structure for space-based science spacecraft. The truss members are divided into 9 groups by truss ring and position. Design variables are the cross-sectional area of all members in a group, and are either 1, 3 or 5 times the minimum member area. Non-structural mass represents the node and joint hardware used to assemble the truss structure. Taguchi methods are used to efficiently identify key points in the set of Pareto-optimal truss designs. Key points identified using Taguchi methods are the maximum frequency, minimum mass, and maximum frequency-to-mass ratio truss designs. Low-order polynomial curve fits through these points are used to approximate the behavior of the full set of Pareto-optimal designs. The resulting Pareto-optimal design curve is used to predict frequency and mass for optimized trusses. Performance improvements are plotted in frequency-mass (criterion) space and compared to results for uniform trusses. Application of constraints to frequency and mass and sensitivity to constraint variation are demonstrated.
Maleknia, Laleh; Dilamian, Mandana; Pilehrood, Mohammad Kazemi; Sadeghi-Aliabadi, Hojjat; Hekmati, Amir Houshang
2018-06-01
In this paper, polyurethane (PU), chitosan (Cs)/polyethylene oxide (PEO), and core-shell PU/Cs nanofibers were produced at the optimal processing conditions using electrospinning technique. Several methods including SEM, TEM, FTIR, XRD, DSC, TGA and image analysis were utilized to characterize these nanofibrous structures. SEM images exhibited that the core-shell PU/Cs nanofibers were spun without any structural imperfections at the optimized processing conditions. TEM image confirmed the PU/Cs core-shell nanofibers were formed apparently. It that seems the inclusion of Cs/PEO to the shell, did not induce the significant variations in the crystallinity in the core-shell nanofibers. DSC analysis showed that the inclusion of Cs/PEO led to the glass temperature of the composition increased significantly compared to those of neat PU nanofibers. The thermal degradation of core-shell PU/Cs was similar to PU nanofibers degradation due to the higher PU concentration compared to other components. It was hypothesized that the core-shell PU/Cs nanofibers can be used as a potential platform for the bioactive scaffolds in tissue engineering. Further biological tests should be conducted to evaluate this platform as a three dimensional scaffold with the capabilities of releasing the bioactive molecules in a sustained manner.
Sung, Wen-Tsai; Chiang, Yen-Chun
2012-12-01
This study examines wireless sensor network with real-time remote identification using the Android study of things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance physiological multi-sensors data fusion measurement precision in the Internet of Things (IOT) system. Improved PSO (IPSO) includes: inertia weight factor design, shrinkage factor adjustment to allow improved PSO algorithm data fusion performance. The Android platform is employed to build multi-physiological signal processing and timely medical care of things analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.
An optimized protocol for generation and analysis of Ion Proton sequencing reads for RNA-Seq.
Yuan, Yongxian; Xu, Huaiqian; Leung, Ross Ka-Kit
2016-05-26
Previous studies compared running cost, time and other performance measures of popular sequencing platforms. However, comprehensive assessment of library construction and analysis protocols for Proton sequencing platform remains unexplored. Unlike Illumina sequencing platforms, Proton reads are heterogeneous in length and quality. When sequencing data from different platforms are combined, this can result in reads with various read length. Whether the performance of the commonly used software for handling such kind of data is satisfactory is unknown. By using universal human reference RNA as the initial material, RNaseIII and chemical fragmentation methods in library construction showed similar result in gene and junction discovery number and expression level estimated accuracy. In contrast, sequencing quality, read length and the choice of software affected mapping rate to a much larger extent. Unspliced aligner TMAP attained the highest mapping rate (97.27 % to genome, 86.46 % to transcriptome), though 47.83 % of mapped reads were clipped. Long reads could paradoxically reduce mapping in junctions. With reference annotation guide, the mapping rate of TopHat2 significantly increased from 75.79 to 92.09 %, especially for long (>150 bp) reads. Sailfish, a k-mer based gene expression quantifier attained highly consistent results with that of TaqMan array and highest sensitivity. We provided for the first time, the reference statistics of library preparation methods, gene detection and quantification and junction discovery for RNA-Seq by the Ion Proton platform. Chemical fragmentation performed equally well with the enzyme-based one. The optimal Ion Proton sequencing options and analysis software have been evaluated.
Next-generation simulation and optimization platform for forest management and analysis
Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki
2009-01-01
Late developments in the objectives and the data collection methods of forestry create new challenges and possibilities in forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...
Thege, Fredrik I; Lannin, Timothy B; Saha, Trisha N; Tsai, Shannon; Kochman, Michael L; Hollingsworth, Michael A; Rhim, Andrew D; Kirby, Brian J
2014-05-21
We have developed and optimized a microfluidic device platform for the capture and analysis of circulating pancreatic cells (CPCs) and pancreatic circulating tumor cells (CTCs). Our platform uses parallel anti-EpCAM and cancer-specific mucin 1 (MUC1) immunocapture in a silicon microdevice. Using a combination of anti-EpCAM and anti-MUC1 capture in a single device, we are able to achieve efficient capture while extending immunocapture beyond single marker recognition. We also have detected a known oncogenic KRAS mutation in cells spiked in whole blood using immunocapture, RNA extraction, RT-PCR and Sanger sequencing. To allow for downstream single-cell genetic analysis, intact nuclei were released from captured cells by using targeted membrane lysis. We have developed a staining protocol for clinical samples, including standard CTC markers; DAPI, cytokeratin (CK) and CD45, and a novel marker of carcinogenesis in CPCs, mucin 4 (MUC4). We have also demonstrated a semi-automated approach to image analysis and CPC identification, suitable for clinical hypothesis generation. Initial results from immunocapture of a clinical pancreatic cancer patient sample show that parallel capture may capture more of the heterogeneity of the CPC population. With this platform, we aim to develop a diagnostic biomarker for early pancreatic carcinogenesis and patient risk stratification.
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
address fundamental research problems of representation and invariant description of3D data, human motion modeling and applications of human activity analysis, and computational optimization of large-scale 3D data.
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
address fundamental research problems of representation and invariant description of 3D data, human motion modeling and applications of human activity analysis, and computational optimization of large-scale 3D data.
Long Read Alignment with Parallel MapReduce Cloud Platform
Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki
2015-01-01
Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing genes sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts a widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 considering BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman results in reducing the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms. PMID:26839887
Long Read Alignment with Parallel MapReduce Cloud Platform.
Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki
2015-01-01
Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing genes sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts a widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 considering BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman results in reducing the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.
Modeling and analysis of a flywheel microvibration isolation system for spacecrafts
NASA Astrophysics Data System (ADS)
Wei, Zhanji; Li, Dongxu; Luo, Qing; Jiang, Jianping
2015-01-01
The microvibrations generated by flywheels running at full speed onboard high precision spacecrafts will affect stability of the spacecraft bus and further degrade pointing accuracy of the payload. A passive vibration isolation platform comprised of multi-segment zig-zag beams is proposed to isolate disturbances of the flywheel. By considering the flywheel and the platform as an integral system with gyroscopic effects, an equivalent dynamic model is developed and verified through eigenvalue and frequency response analysis. The critical speeds of the system are deduced and expressed as functions of system parameters. The vibration isolation performance of the platform under synchronal and high-order harmonic disturbances caused by the flywheel is investigated. It is found that the speed range within which the passive platform is effective and the disturbance decay rate of the system are greatly influenced by the locations of the critical speeds. Structure optimization of the platform is carried out to enhance its performance. Simulation results show that a properly designed vibration isolation platform can effectively reduce disturbances emitted by the flywheel operating above the critical speeds of the system.
Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D
2014-02-01
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
Scaling Support Vector Machines On Modern HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Fu, Haohuan; Song, Shuaiwen
2015-02-01
We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.
Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Carter, Jonathan; Oliker, Leonid
2008-02-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single core Intel Itanium2. Rather than hand-tuning LBMHDmore » for each system, we develop a code generator that allows us identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.« less
Lattice Boltzmann simulation optimization on leading multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, S.; Carter, J.; Oliker, L.
2008-01-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of searchbased performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single core Intel Itanium2. Rather than hand-tuning LBMHDmore » for each system, we develop a code generator that allows us identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our autotuned LBMHD application achieves up to a 14 improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.« less
ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.
Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B
2016-01-01
Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Trujillo, Susie
During calendar year 2017, Sandia National Laboratories (SNL) made strides towards developing an open portable design platform rich in highperformance computing (HPC) enabled modeling, analysis and synthesis tools. The main focus was to lay the foundations of the core interfaces that will enable plug-n-play insertion of synthesis optimization technologies in the areas of modeling, analysis and synthesis.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications.
Kalinin, Alexandr A; Palanimalai, Selvam; Dinov, Ivo D
2017-04-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis.
Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J
2014-10-01
This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud. Copyright © 2014 Elsevier Inc. All rights reserved.
An architecture for genomics analysis in a clinical setting using Galaxy and Docker
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A
2017-01-01
Abstract Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from the generation to the analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular bioinformatics analysis support open-source software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Along the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures the data traceability by recording analytical actions and by associating inputs and outputs of the tools to EDAM ontology through ReGaTe. The source code is freely available on Github at https://github.com/CARPEM/GalaxyDocker. PMID:29048555
An architecture for genomics analysis in a clinical setting using Galaxy and Docker.
Digan, W; Countouris, H; Barritault, M; Baudoin, D; Laurent-Puig, P; Blons, H; Burgun, A; Rance, B
2017-11-01
Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from the generation to the analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular bioinformatics analysis support open-source software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Along the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures the data traceability by recording analytical actions and by associating inputs and outputs of the tools to EDAM ontology through ReGaTe. The source code is freely available on Github at https://github.com/CARPEM/GalaxyDocker. © The Author 2017. Published by Oxford University Press.
Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring
Lazarescu, Mihai T.
2015-01-01
Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery. PMID:25912349
Selection and optimization of mooring cables on floating platform for special purposes
NASA Astrophysics Data System (ADS)
Ma, Guang-ying; Yao, Yun-long; Zhao, Chen-yao
2017-08-01
This paper studied a new type of assembled marine floating platform for special purposes. The selection and optimization of mooring cables on the floating platform are studied. By using ANSYS AQWA software, the hydrodynamic model of the platform was established to calculate the time history response of the platform motion under complex water environments, such as wind, wave, current and mooring. On this basis, motion response and cable tension were calculated with different cable mooring states under the designed environmental load. Finally, the best mooring scheme to meet the cable strength requirements was proposed, which can lower the motion amplitude of the platform effectively.
The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality
NASA Technical Reports Server (NTRS)
Conway, Darrel J.; Hughes, Steven P.
2010-01-01
The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).
Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Carter, Jonathan; Oliker, Leonid
2009-04-10
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well asmore » a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveal surprising hardware bottlenecks and software challenges for future multicore systems and applications.« less
Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao
2015-08-14
This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS(®); then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB(®) SIMULINK(®) controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance.
Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao
2015-01-01
This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS®; then, to analyze the system’s kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB® SIMULINK® controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance. PMID:26287210
A Decision Analysis Tool for Climate Impacts, Adaptations, and Vulnerabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Parish, Esther S; Nugent, Philip J
Climate change related extreme events (such as flooding, storms, and drought) are already impacting millions of people globally at a cost of billions of dollars annually. Hence, there are urgent needs for urban areas to develop adaptation strategies that will alleviate the impacts of these extreme events. However, lack of appropriate decision support tools that match local applications is limiting local planning efforts. In this paper, we present a quantitative analysis and optimization system with customized decision support modules built on geographic information system (GIS) platform to bridge this gap. This platform is called Urban Climate Adaptation Tool (Urban-CAT). Formore » all Urban-CAT models, we divide a city into a grid with tens of thousands of cells; then compute a list of metrics for each cell from the GIS data. These metrics are used as independent variables to predict climate impacts, compute vulnerability score, and evaluate adaptation options. Overall, the Urban-CAT system has three layers: data layer (that contains spatial data, socio-economic and environmental data, and analytic data), middle layer (that handles data processing, model management, and GIS operation), and application layer (that provides climate impacts forecast, adaptation optimization, and site evaluation). The Urban-CAT platform can guide city and county governments in identifying and planning for effective climate change adaptation strategies.« less
Analytical investigation of the dynamics of tethered constellations in earth orbit
NASA Technical Reports Server (NTRS)
Lorenzini, Enrico C.; Gullahorn, Gordon E.; Estes, Robert D.
1988-01-01
This Quarterly Report on Tethering in Earth Orbit deals with three topics: (1) Investigation of the propagation of longitudinal and transverse waves along the upper tether. Specifically, the upper tether is modeled as three massive platforms connected by two perfectly elastic continua (tether segments). The tether attachment point to the station is assumed to vibrate both longitudinally and transversely at a given frequency. Longitudinal and transverse waves propagate along the tethers affecting the acceleration levels at the elevator and at the upper platform. The displacement and acceleration frequency-response functions at the elevator and at the upper platform are computed for both longitudinal and transverse waves. An analysis to optimize the damping time of the longitudinal dampers is also carried out in order to select optimal parameters. The analytical evaluation of the performance of tuned vs. detuned longitudinal dampers is also part of this analysis. (2) The use of the Shuttle primary Reaction Control System (RCS) thrusters for blowing away a recoiling broken tether is discussed. A microcomputer system was set up to support this operation. (3) Most of the effort in the tether plasma physics study was devoted to software development. A particle simulation code has been integrated into the Macintosh II computer system and will be utilized for studying the physics of hollow cathodes.
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve performance of the hydraulic turbine draft tube in its design process, the optimization for draft tube is performed based on multi-disciplinary collaborative design optimization platform by combining the computation fluid dynamic (CFD) and the design of experiment (DOE) in this paper. The geometrical design variables are considered as the median section in the draft tube and the cross section in its exit diffuser and objective function is to maximize the pressure recovery factor (Cp). Sample matrixes required for the shape optimization of the draft tube are generated by optimal Latin hypercube (OLH) method of the DOE technique and their performances are evaluated through computational fluid dynamic (CFD) numerical simulation. Subsequently the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are accomplished. Then, the design optimization of the geometrical design variables is determined using the response surface method. The optimization result of the draft tube shows a marked performance improvement over the original.
Platform-dependent optimization considerations for mHealth applications
NASA Astrophysics Data System (ADS)
Kaghyan, Sahak; Akopian, David; Sarukhanyan, Hakob
2015-03-01
Modern mobile devices contain integrated sensors that enable multitude of applications in such fields as mobile health (mHealth), entertainment, sports, etc. Human physical activity monitoring is one of such the emerging applications. There exists a range of challenges that relate to activity monitoring tasks, and, particularly, exploiting optimal solutions and architectures for respective mobile software application development. This work addresses mobile computations related to integrated inertial sensors for activity monitoring, such as accelerometers, gyroscopes, integrated global positioning system (GPS) and WLAN-based positioning, that can be used for activity monitoring. Some of the aspects will be discussed in this paper. Each of the sensing data sources has its own characteristics such as specific data formats, data rates, signal acquisition durations etc., and these specifications affect energy consumption. Energy consumption significantly varies as sensor data acquisition is followed by data analysis including various transformations and signal processing algorithms. This paper will address several aspects of more optimal activity monitoring implementations exploiting state-of-the-art capabilities of modern platforms.
Haraksingh, Rajini R; Abyzov, Alexej; Urban, Alexander Eckehart
2017-04-24
High-resolution microarray technology is routinely used in basic research and clinical practice to efficiently detect copy number variants (CNVs) across the entire human genome. A new generation of arrays combining high probe densities with optimized designs will comprise essential tools for genome analysis in the coming years. We systematically compared the genome-wide CNV detection power of all 17 available array designs from the Affymetrix, Agilent, and Illumina platforms by hybridizing the well-characterized genome of 1000 Genomes Project subject NA12878 to all arrays, and performing data analysis using both manufacturer-recommended and platform-independent software. We benchmarked the resulting CNV call sets from each array using a gold standard set of CNVs for this genome derived from 1000 Genomes Project whole genome sequencing data. The arrays tested comprise both SNP and aCGH platforms with varying designs and contain between ~0.5 to ~4.6 million probes. Across the arrays CNV detection varied widely in number of CNV calls (4-489), CNV size range (~40 bp to ~8 Mbp), and percentage of non-validated CNVs (0-86%). We discovered strikingly strong effects of specific array design principles on performance. For example, some SNP array designs with the largest numbers of probes and extensive exonic coverage produced a considerable number of CNV calls that could not be validated, compared to designs with probe numbers that are sometimes an order of magnitude smaller. This effect was only partially ameliorated using different analysis software and optimizing data analysis parameters. High-resolution microarrays will continue to be used as reliable, cost- and time-efficient tools for CNV analysis. However, different applications tolerate different limitations in CNV detection. Our study quantified how these arrays differ in total number and size range of detected CNVs as well as sensitivity, and determined how each array balances these attributes. This analysis will inform appropriate array selection for future CNV studies, and allow better assessment of the CNV-analytical power of both published and ongoing array-based genomics studies. Furthermore, our findings emphasize the importance of concurrent use of multiple analysis algorithms and independent experimental validation in array-based CNV detection studies.
Kafle, Amol; Klaene, Joshua; Hall, Adam B; Glick, James; Coy, Stephen L; Vouros, Paul
2013-07-15
There is continued interest in exploring new analytical technologies for the detection and quantitation of DNA adducts, biomarkers which provide direct evidence of exposure and genetic damage in cells. With the goal of reducing clean-up steps and improving sample throughput, a Differential Mobility Spectrometry/Mass Spectrometry (DMS/MS) platform has been introduced for adduct analysis. A DMS/MS platform has been utilized for the analysis of dG-ABP, the deoxyguanosine adduct of the bladder carcinogen 4-aminobiphenyl (4-ABP). After optimization of the DMS parameters, each sample was analyzed in just 30 s following a simple protein precipitation step of the digested DNA. A detection limit of one modification in 10^6 nucleosides has been achieved using only 2 µg of DNA. A brief comparison (quantitative and qualitative) with liquid chromatography/mass spectrometry is also presented highlighting the advantages of using the DMS/MS method as a high-throughput platform. The data presented demonstrate the successful application of a DMS/MS/MS platform for the rapid quantitation of DNA adducts using, as a model analyte, the deoxyguanosine adduct of the bladder carcinogen 4-aminobiphenyl. Copyright © 2013 John Wiley & Sons, Ltd.
Microgravity vibration isolation: An optimal control law for the one-dimensional case
NASA Technical Reports Server (NTRS)
Hampton, Richard D.; Grodsinsky, Carlos M.; Allaire, Paul E.; Lewis, David W.; Knospe, Carl R.
1991-01-01
Certain experiments contemplated for space platforms must be isolated from the accelerations of the platform. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega) exp 4. Low frequency accelerations are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.
Microgravity vibration isolation: An optimal control law for the one-dimensional case
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.; Knospe, C. R.
1991-01-01
Certain experiments contemplated for space platforms must be isolated from the accelerations of the platforms. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward (preview) gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega)(exp 4). Low frequency accelerations (less than 50 Hz) are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.
Joshi, Varun; Srinivasan, Manoj
2015-02-08
Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass- spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations.
Joshi, Varun; Srinivasan, Manoj
2015-01-01
Understanding how humans walk on a surface that can move might provide insights into, for instance, whether walking humans prioritize energy use or stability. Here, motivated by the famous human-driven oscillations observed in the London Millennium Bridge, we introduce a minimal mathematical model of a biped, walking on a platform (bridge or treadmill) capable of lateral movement. This biped model consists of a point-mass upper body with legs that can exert force and perform mechanical work on the upper body. Using numerical optimization, we obtain energy-optimal walking motions for this biped, deriving the periodic body and platform motions that minimize a simple metabolic energy cost. When the platform has an externally imposed sinusoidal displacement of appropriate frequency and amplitude, we predict that body motion entrained to platform motion consumes less energy than walking on a fixed surface. When the platform has finite inertia, a mass- spring-damper with similar parameters to the Millennium Bridge, we show that the optimal biped walking motion sustains a large lateral platform oscillation when sufficiently many people walk on the bridge. Here, the biped model reduces walking metabolic cost by storing and recovering energy from the platform, demonstrating energy benefits for two features observed for walking on the Millennium Bridge: crowd synchrony and large lateral oscillations. PMID:25663810
NASA Astrophysics Data System (ADS)
Dong, J. Y.; Cheng, W.; Ma, C. P.; Xin, L. S.; Tan, Y. T.
2017-06-01
In order to study the issue of rural residential energy consumption in cold regions of China, modeled an architecture prototype based on BIM platform according to the affecting factors of rural residential thermal environment, and imported the virtual model which contains building information into energy analysis tools and chose the appropriate building orientation. By analyzing the energy consumption of the residential buildings with different enclosure structure forms, we designed the optimal energy-saving residence form. There is a certain application value of this method for researching the energy consumption and energy-saving design for the rural residence in cold regions of China.
Multispectral tissue characterization for intestinal anastomosis optimization.
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N D; Decker, Ryan; Kim, Peter C W; Kang, Jin U; Krieger, Axel
2015-10-01
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement.
Multispectral tissue characterization for intestinal anastomosis optimization
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N. D.; Decker, Ryan; Kim, Peter C. W.; Kang, Jin U.; Krieger, Axel
2015-01-01
Abstract. Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement. PMID:26440616
Improvement in the amine glass platform by bubbling method for a DNA microarray
Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo
2015-01-01
A glass platform with high sensitivity for sexually transmitted diseases microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using bubbling method can be used as a clinical diagnostic tool. PMID:26468293
Improvement in the amine glass platform by bubbling method for a DNA microarray.
Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo
2015-01-01
A glass platform with high sensitivity for sexually transmitted diseases microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using bubbling method can be used as a clinical diagnostic tool.
Multidimensional bioseparation with modular microfluidics
Chirica, Gabriela S.; Renzi, Ronald F.
2013-08-27
A multidimensional chemical separation and analysis system is described including a prototyping platform and modular microfluidic components capable of rapid and convenient assembly, alteration and disassembly of numerous candidate separation systems. Partial or total computer control of the separation system is possible. Single or multiple alternative processing trains can be tested, optimized and/or run in parallel. Examples related to the separation and analysis of human bodily fluids are given.
NASA Astrophysics Data System (ADS)
Klesh, Andrew T.
This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and on the other hand is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths including the Watchtower, Solar and Drag regime, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircrafts design and determine an aircrafts capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for analysis using the coupled model. Flight-paths found with this platform are presented that display the optimal exploration problem characteristics. These characteristics are used to form heuristics, such as a Generalized Traveling Salesman Problem solver, to simplify the exploration problem. These heuristics are used to empirically show the successful completion of an exploration mission by a physical explorer.
An integrated platform for biomolecule interaction analysis
NASA Astrophysics Data System (ADS)
Jan, Chia-Ming; Tsai, Pei-I.; Chou, Shin-Ting; Lee, Shu-Sheng; Lee, Chih-Kung
2013-02-01
We developed a new metrology platform which can detect real-time changes in both a phase-interrogation mode and intensity mode of a SPR (surface plasmon resonance). We integrated a SPR and ellipsometer to a biosensor chip platform to create a new biomolecular interaction measurement mechanism. We adopted a conductive ITO (indium-tinoxide) film to the bio-sensor platform chip to expand the dynamic range and improve measurement accuracy. The thickness of the conductive film and the suitable voltage constants were found to enhance performance. A circularly polarized ellipsometry configuration was incorporated into the newly developed platform to measure the label-free interactions of recombinant human C-reactive protein (CRP) with immobilized biomolecule target monoclonal human CRP antibody at various concentrations. CRP was chosen as it is a cardiovascular risk biomarker and is an acute phase reactant as well as a specific prognostic indicator for inflammation. We found that the sensitivity of a phaseinterrogation SPR is predominantly dependent on the optimization of the sample incidence angle. The effect of the ITO layer effective index under DC and AC effects as well as an optimal modulation were experimentally performed and discussed. Our experimental results showed that the modulated dynamic range for phase detection was 10E-2 RIU based on a current effect and 10E-4 RIU based on a potential effect of which a 0.55 (°/RIU) measurement was found by angular-interrogation. The performance of our newly developed metrology platform was characterized to have a higher sensitivity and less dynamic range when compared to a traditional full-field measurement system.
Conception preliminaire de disques de turbine axiale pour moteurs d'aeronefs
NASA Astrophysics Data System (ADS)
Ouellet, Yannick
The preliminary design phase of a turbine rotor has an important impact on the architecture of a new engine definition, as it sets the technical orientation right from start and provides a good estimate of product performance, weight and cost. In addition, the execution speed at this preliminary phase has become critical into capturing business opportunities. Improving upfront accuracy also alleviates downstream detailed design work and therefore reduces overall product development cycle time. This preliminary phase contains elements slowing down its process, including low interoperability of currently used systems, incompatibility of software and ineffective management of data. In order to overcome these barriers, we have developed the first module of a new Design and Analysis (D&A) platform for the rotor disc. This complete platform ensures integration of different tools processing in batch mode, and is driven from a single graphical user interface. The platform developed has been linked with different optimization methods (algorithms, configuration) in order to automate the disc design and propose best practices for rotor structural optimization. This methodology allowed reduction in design cycle time and improvement of performance. It was applied on two reference P&WC axial discs. The platform's architecture was also used in the development of reference charts to better understand disc performance within given design space. Four high pressure rotor discs of P&WC turbofan and turboprop engines were used to generate the technical charts and understand the effect of various parameters. The new tools supporting disc D&A, combined with the optimization process and reference charts, has proven to be profitable in terms of component performance and engineering effort inputs.
Object-Oriented Multi-Disciplinary Design, Analysis, and Optimization Tool
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2011-01-01
An Object-Oriented Optimization (O3) tool was developed that leverages existing tools and practices, and allows the easy integration and adoption of new state-of-the-art software. At the heart of the O3 tool is the Central Executive Module (CEM), which can integrate disparate software packages in a cross platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. This object-oriented framework can integrate the analysis codes for multiple disciplines instead of relying on one code to perform the analysis for all disciplines. The CEM was written in FORTRAN and the script commands for each performance index were submitted through the use of the FORTRAN Call System command. In this CEM, the user chooses an optimization methodology, defines objective and constraint functions from performance indices, and provides starting and side constraints for continuous as well as discrete design variables. The structural analysis modules such as computations of the structural weight, stress, deflection, buckling, and flutter and divergence speeds have been developed and incorporated into the O3 tool to build an object-oriented Multidisciplinary Design, Analysis, and Optimization (MDAO) tool.
Smart Water: Energy-Water Optimization in Drinking Water Systems
This project aims to develop and commercialize a Smart Water Platform – Sensor-based Data-driven Energy-Water Optimization technology in drinking water systems. The key technological advances rely on cross-platform data acquisition and management system, model-based real-time sys...
Multidisciplinary Shape Optimization of a Composite Blended Wing Body Aircraft
NASA Astrophysics Data System (ADS)
Boozer, Charles Maxwell
A multidisciplinary shape optimization tool coupling aerodynamics, structure, and performance was developed for battery powered aircraft. Utilizing high-fidelity computational fluid dynamics analysis tools and a structural wing weight tool, coupled based on the multidisciplinary feasible optimization architecture; aircraft geometry is modified in the optimization of the aircraft's range or endurance. The developed tool is applied to three geometries: a hybrid blended wing body, delta wing UAS, the ONERA M6 wing, and a modified ONERA M6 wing. First, the optimization problem is presented with the objective function, constraints, and design vector. Next, the tool's architecture and the analysis tools that are utilized are described. Finally, various optimizations are described and their results analyzed for all test subjects. Results show that less computationally expensive inviscid optimizations yield positive performance improvements using planform, airfoil, and three-dimensional degrees of freedom. From the results obtained through a series of optimizations, it is concluded that the newly developed tool is both effective at improving performance and serves as a platform ready to receive additional performance modules, further improving its computational design support potential.
BESIII Physics Data Storing and Processing on HBase and MapReduce
NASA Astrophysics Data System (ADS)
LEI, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu
2015-12-01
In the past years, we have successfully applied Hadoop to high-energy physics analysis. Although, it has not only improved the efficiency of data analysis, but also reduced the cost of cluster building so far, there are still some spaces to be optimized, like inflexible pre-selection, low-efficient random data reading and I/O bottleneck caused by Fuse that is used to access HDFS. In order to change this situation, this paper presents a new analysis platform for high-energy physics data storing and analysing. The data structure is changed from DST tree-like files to HBase according to the features of the data itself and analysis processes, since HBase is more suitable for processing random data reading than DST files and enable HDFS to be accessed directly. A few of optimization measures are taken for the purpose of getting a good performance. A customized protocol is defined for data serializing and desterilizing for the sake of decreasing the storage space in HBase. In order to make full use of locality of data storing in HBase, utilizing a new MapReduce model and a new split policy for HBase regions are proposed in the paper. In addition, a dynamic pluggable easy-to-use TAG (event metadata) based pre-selection subsystem is established. It can assist physicists even to filter out 999%o uninterested data, if the conditions are set properly. This means that a lot of I/O resources can be saved, the CPU usage can be improved and consuming time for data analysis can be reduced. Finally, several use cases are designed, the test results show that the new platform has an excellent performance with 3.4 times faster with pre-selection and 20% faster without preselection, and the new platform is stable and scalable as well.
10th Annual Systems Engineering Conference: Volume 2 Wednesday
2007-10-25
intelligently optimize resource performance. Self - Healing Detect hardware/software failures and reconfigure to permit continued operations. Self ...Types Wake Ice WEAPON/PLATFORM ACOUSTICS Self -Noise Radiated Noise Beam Forming Pulse Types Submarines, surface ships, and platform sensors P r o p P r o...Computing Self -Protecting Detect internal/external attacks and protect it’s resources from exploitation. Self -Optimizing Detect sub-optimal behaviors and
Moller, Arlen C.; Merchant, Gina; Conroy, David E.; West, Robert; Hekler, Eric B.; Kugler, Kari C.; Michie, Susan
2017-01-01
As more behavioral health interventions move from traditional to digital platforms, the application of evidence-based theories and techniques may be doubly advantageous. First, it can expedite digital health intervention development, improving efficacy, and increasing reach. Second, moving behavioral health interventions to digital platforms presents researchers with novel (potentially paradigm shifting) opportunities for advancing theories and techniques. In particular, the potential for technology to revolutionize theory refinement is made possible by leveraging the proliferation of “real-time” objective measurement and “big data” commonly generated and stored by digital platforms. Much more could be done to realize this potential. This paper offers proposals for better leveraging the potential advantages of digital health platforms, and reviews three of the cutting edge methods for doing so: optimization designs, dynamic systems modeling, and social network analysis. PMID:28058516
Energy efficiency analysis and optimization for mobile platforms
NASA Astrophysics Data System (ADS)
Metri, Grace Camille
The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computer (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015. Users of mobile devices expect and mandate that their mobile devices have maximized performance while consuming minimal possible power. However, due to the battery size constraints, the amount of energy stored in these devices is limited and is only growing by 5% annually. As a result, we focused in this dissertation on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies in order to determine energy inefficiencies of mobile applications. Through our case studies, we were able to propose optimization techniques in order to increase the energy efficiency of mobile devices and proposed guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of System-on-Chips (SoCs) and observed the impact on the energy efficiency in the event of offloading tasks from the CPU to the specialized custom engines. Based on our case studies, we were able to demonstrate that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to be able to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible direction for future research in this field.
Heileman, K L; Tabrizian, M
2017-05-02
3-Dimensional cell cultures are more representative of the native environment than traditional cell cultures on flat substrates. As a result, 3-dimensional cell cultures have emerged as a very valuable model environment to study tumorigenesis, organogenesis and tissue regeneration. Many of these models encompass the formation of cell aggregates, which mimic the architecture of tumor and organ tissue. Dielectric impedance spectroscopy is a non-invasive, label free and real time technique, overcoming the drawbacks of established techniques to monitor cell aggregates. Here we introduce a platform to monitor cell aggregation in a 3-dimensional extracellular matrix using dielectric spectroscopy. The MCF10A breast epithelial cell line serves as a model for cell aggregation. The platform maintains sterile conditions during the multi-day assay while allowing continuous dielectric spectroscopy measurements. The platform geometry optimizes dielectric measurements by concentrating cells within the electrode sensing region. The cells show a characteristic dielectric response to aggregation which corroborates with finite element analysis computer simulations. By fitting the experimental dielectric spectra to the Cole-Cole equation, we demonstrated that the dispersion intensity Δε and the characteristic frequency f c are related to cell aggregate growth. In addition, microscopy can be performed directly on the platform providing information about cell position, density and morphology. This platform could yield many applications for studying the electrophysiological activity of cell aggregates.
Livermore Compiler Analysis Loop Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, R. D.
2013-03-01
LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performances of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assissing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermoremore » Loop suites, plus various others representative of loops found in current production appkication codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run and their lengths. It generates timing statistics for analysis and comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.« less
Potentials and capabilities of the Extracellular Vesicle (EV) Array.
Jørgensen, Malene Møller; Bæk, Rikke; Varming, Kim
2015-01-01
Extracellular vesicles (EVs) and exosomes are difficult to enrich or purify from biofluids, hence quantification and phenotyping of these are tedious and inaccurate. The multiplexed, highly sensitive and high-throughput platform of the EV Array presented by Jørgensen et al., (J Extracell Vesicles, 2013; 2: 10) has been refined regarding the capabilities of the method for characterization and molecular profiling of EV surface markers. Here, we present an extended microarray platform to detect and phenotype plasma-derived EVs (optimized for exosomes) for up to 60 antigens without any enrichment or purification prior to analysis.
High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms
Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.
2014-01-01
Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546
OCEAN THERMAL ENERGY CONVERSION (OTEC) PROGRAMMATIC ENVIRONMENTAL ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sands, M. D.
1980-01-01
This programmatic environmental analysis is an initial assessment of OTEC technology considering development, demonstration and commercialization; it is concluded that the OTEC development program should continue because the development, demonstration, and commercialization on a single-plant deployment basis should not present significant environmental impacts. However, several areas within the OTEC program require further investigation in order to assess the potential for environmental impacts from OTEC operation, particularly in large-scale deployments and in defining alternatives to closed-cycle biofouling control: (1) Larger-scale deployments of OTEC clusters or parks require further investigations in order to assess optimal platform siting distances necessary to minimize adversemore » environmental impacts. (2) The deployment and operation of the preoperational platform (OTEC-1) and future demonstration platforms must be carefully monitored to refine environmental assessment predictions, and to provide design modifications which may mitigate or reduce environmental impacts for larger-scale operations. These platforms will provide a valuable opportunity to fully evaluate the intake and discharge configurations, biofouling control methods, and both short-term and long-term environmental effects associated with platform operations. (3) Successful development of OTEC technology to use the maximal resource capabilities and to minimize environmental effects will require a concerted environmental management program, encompassing many different disciplines and environmental specialties.« less
Droplet Microarray Based on Superhydrophobic-Superhydrophilic Patterns for Single Cell Analysis.
Jogia, Gabriella E; Tronser, Tina; Popova, Anna A; Levkin, Pavel A
2016-12-09
Single-cell analysis provides fundamental information on individual cell response to different environmental cues and is a growing interest in cancer and stem cell research. However, current existing methods are still facing challenges in performing such analysis in a high-throughput manner whilst being cost-effective. Here we established the Droplet Microarray (DMA) as a miniaturized screening platform for high-throughput single-cell analysis. Using the method of limited dilution and varying cell density and seeding time, we optimized the distribution of single cells on the DMA. We established culturing conditions for single cells in individual droplets on DMA obtaining the survival of nearly 100% of single cells and doubling time of single cells comparable with that of cells cultured in bulk cell population using conventional methods. Our results demonstrate that the DMA is a suitable platform for single-cell analysis, which carries a number of advantages compared with existing technologies allowing for treatment, staining and spot-to-spot analysis of single cells over time using conventional analysis methods such as microscopy.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp-GelApp 2.0-is freely available on both Google Play Store (for Android platform), and Apple App Store (for iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modern Computational Techniques for the HMMER Sequence Analysis
2013-01-01
This paper focuses on the latest research and critical reviews on modern computing architectures, software and hardware accelerated algorithms for bioinformatics data analysis with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show the detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics society. The characteristics of the sequence analysis, such as data and compute-intensive natures, make it very attractive to optimize and parallelize by using both traditional software approach and innovated hardware acceleration technologies. PMID:25937944
Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures
NASA Astrophysics Data System (ADS)
Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.
2016-12-01
The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
An Optimization Framework for Dynamic Hybrid Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis
A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problemmore » takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.« less
Program optimizations: The interplay between power, performance, and energy
Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...
2016-05-16
Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizationsmore » impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.« less
Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen
2016-09-02
Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group has developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-wells plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of a full GC chromatograms in up to 384 nanofractions of 7s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced band broadening by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed demonstrating the possibility of up-concentration of isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications { the Louvain method for community detection (Grappolo), and high-performance conjugate gradient (HPCCG) {more » on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.« less
An Efficient Reachability Analysis Algorithm
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2008-01-01
A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a power tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.
Bernacka-Wojcik, Iwona; Águas, Hugo; Carlos, Fabio Ferreira; Lopes, Paulo; Wojcik, Pawel Jerzy; Costa, Mafalda Nascimento; Veigas, Bruno; Igreja, Rui; Fortunato, Elvira; Baptista, Pedro Viana; Martins, Rodrigo
2015-06-01
The use of microfluidics platforms combined with the optimal optical properties of gold nanoparticles has found plenty of application in molecular biosensing. This paper describes a bio-microfluidic platform coupled to a non-cross-linking colorimetric gold nanoprobe assay to detect a single nucleotide polymorphism associated with increased risk of obesity fat-mass and obesity-associated (FTO) rs9939609 (Carlos et al., 2014). The system enabled significant discrimination between positive and negative assays using a target DNA concentration of 5 ng/µL below the limit of detection of the conventionally used microplate reader (i.e., 15 ng/µL) with 10 times lower solution volume (i.e., 3 µL). A set of optimization of our previously reported bio-microfluidic platform (Bernacka-Wojcik et al., 2013) resulted in a 160% improvement of colorimetric analysis results. Incorporation of planar microlenses increased 6 times signal-to-loss ratio reaching the output optical fiber improving by 34% the colorimetric analysis of gold nanoparticles, while the implementation of an optoelectronic acquisition system yielded increased accuracy and reduced noise. The microfluidic chip was also integrated with a miniature fiber spectrometer to analyze the assays' colorimetric changes and also the LEDs transmission spectra when illuminating through various solutions. Furthermore, by coupling an optical microscope to a digital camera with a long exposure time (30 s), we could visualise the different scatter intensities of gold nanoparticles within channels following salt addition. These intensities correlate well to the expected difference in aggregation between FTO positive (none to small aggregates) and negative samples (large aggregates). © 2015 Wiley Periodicals, Inc.
ADMS State of the Industry and Gap Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.
2016-03-31
An Advanced distribution management system (ADMS) is a platform for optimized distribution system operational management. This platform comprises of distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), outage management system (OMS), and distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems, SCADA, GISs, DMSs, AMRs/AMIs, OMSs, and DERMS, are discussed in this report. Their current generation technologies are analyzed, and their integration (or evolution) with an ADMS technology is discussed. An ADMS technology statemore » of the art and gap analysis is also presented. There are two technical gaps observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge noted is concerning essential ADMS applications, for instance, fault location, isolation, and service restoration (FLISR), volt-var optimization (VVO), etc. There are a relatively small number of ADMS application developers as ADMS software platform is not open source. There is another critical gap and while not being technical in nature (when compared the two above) is still important to consider. The data models currently residing in utility GIS systems are either incomplete or inaccurate or both. This data is essential for planning and operations because it is typically one of the primary sources from which power system model are created. To achieve the full potential of ADMS, the ability to execute acute Power Flow solution is an important pre-requisite. These critical gaps are hindering wider Utility adoption of an ADMS technology. The development of an open architecture platform can eliminate many of these barriers and also aid seamless integration of distribution Utility legacy systems with an ADMS.« less
Nejdl, Lukas; Kudr, Jiri; Cihalova, Kristyna; Chudobova, Dagmar; Zurek, Michal; Zalud, Ludek; Kopecny, Lukas; Burian, Frantisek; Ruttkay-Nedecky, Branislav; Krizkova, Sona; Konecna, Marie; Hynek, David; Kopel, Pavel; Prasek, Jan; Adam, Vojtech; Kizek, Rene
2014-08-01
Remote-controlled robotic systems are being used for analysis of various types of analytes in hostile environment including those called extraterrestrial. The aim of our study was to develop a remote-controlled robotic platform (ORPHEUS-HOPE) for bacterial detection. For the platform ORPHEUS-HOPE a 3D printed flow chip was designed and created with a culture chamber with volume 600 μL. The flow rate was optimized to 500 μL/min. The chip was tested primarily for detection of 1-naphthol by differential pulse voltammetry with detection limit (S/N = 3) as 20 nM. Further, the way how to capture bacteria was optimized. To capture bacterial cells (Staphylococcus aureus), maghemite nanoparticles (1 mg/mL) were prepared and modified with collagen, glucose, graphene, gold, hyaluronic acid, and graphene with gold or graphene with glucose (20 mg/mL). The most up to 50% of the bacteria were captured by graphene nanoparticles modified with glucose. The detection limit of the whole assay, which included capturing of bacteria and their detection under remote control operation, was estimated as 30 bacteria per μL. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Jiang, Qi-Jie; Jin, Mao-Zhu; Ren, Pei-Yu
2017-04-01
How to optimize agro-product supply chain to promote its operating efficiency so as to enhance the competitiveness of regional agricultural products has posed a problem to academic circles, business circles and governments of various levels. One way to solve this problem is to introduce an information platform into the supply chain, which this essay focuses on. Firstly, a review of existing research findings concerning the agro-product competitiveness, agro-product supply chain (ASC) and information platform was given. Secondly, we constructed a mathematical model to analyze the impact of information platform on the bullwhip effect in ASC. Thirdly, another mathematical model was constructed to help compare and analyze the impact of information platform on information acquisition of members in ASC. The research results show that the implantation of information platform can mitigate the bullwhip effect in ASC, and members can determine order amount or production more close to the actual market demand. And also the information platform can reduce the time for members in ASC to get information from other members. Besides, information platform can help ASC to alleviate information asymmetry among upstream and downstream members. Furthermore, researches about the operating mechanism and pattern, technical feature and running structure of the information platform, along with their impacts on agro-product supply chain and the competitiveness of agricultural products need to be advanced.
Morphology and FT-IR analysis of anti-pollution flashover coatings with adding nano SiO2 particles
NASA Astrophysics Data System (ADS)
Guo, Kai; Du, Yishu; Wu, Yaping; Mi, Xuchun; Li, Xingeng; Chen, Suhong
2017-12-01
By adding nano SiO2 particles, an enhanced K-PRTV anti-pollution flashover coating had been prepared. Optical profile meter (GT-K), atomic force microscopy (AFM), infrared spectrometer (FT-IR) and EDS characterization were carried out on the coating surface analysis. Those results has been use to optimize the further design and platform of the enhanced K-PRTV pollution flash coating experiment. It is also to improve the plan formulation, formulation optimization and preparation of the hydrophobic modified K-PRTV which is based on anti-pollution coating experiment. More importantly, the anti-pollution flashover K-PRTV coating with super hydrophobic modified is the great significance for K-PRTV coating.
Graph processing platforms at scale: practices and experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C
2015-01-01
Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road network, social networks, genomic information, and scientific collaboration. Unfortunately, a wide diversity in the characteristics of graphs and graph operations make it challenging to find a right combination of tools and implementation of algorithms to discover desired knowledge from the target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution,more » connected components, and PageRank over a variety of real-world graphs. Our experiments show that each graph processing platform shows different strength, depending the type of graph operations. While Urika performs the best in non-iterative operations like degree distribution, GraphX outputforms iterative operations like connected components and PageRank. In addition, we discuss challenges to optimize the performance of each platform over large scale real world graphs.« less
European research platform IPANEMA at the SOLEIL synchrotron for ancient and historical materials.
Bertrand, L; Languille, M-A; Cohen, S X; Robinet, L; Gervais, C; Leroy, S; Bernard, D; Le Pennec, E; Josse, W; Doucet, J; Schöder, S
2011-09-01
IPANEMA, a research platform devoted to ancient and historical materials (archaeology, cultural heritage, palaeontology and past environments), is currently being set up at the synchrotron facility SOLEIL (Saint-Aubin, France; SOLEIL opened to users in January 2008). The new platform is open to French, European and international users. The activities of the platform are centred on two main fields: increased support to synchrotron projects on ancient materials and methodological research. The IPANEMA team currently occupies temporary premises at SOLEIL, but the platform comprises construction of a new building that will comply with conservation and environmental standards and of a hard X-ray imaging beamline today in its conceptual design phase, named PUMA. Since 2008, the team has supported synchrotron works at SOLEIL and at European synchrotron facilities on a range of topics including pigment degradation in paintings, composition of musical instrument varnishes, and provenancing of medieval archaeological ferrous artefacts. Once the platform is fully operational, user support will primarily take place within medium-term research projects for `hosted' scientists, PhDs and post-docs. IPANEMA methodological research is focused on advanced two-dimensional/three-dimensional imaging and spectroscopy and statistical image analysis, both optimized for ancient materials.
BESIU Physical Analysis on Hadoop Platform
NASA Astrophysics Data System (ADS)
Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing
2014-06-01
In the past 20 years, computing cluster has been widely used for High Energy Physics data processing. The jobs running on the traditional cluster with a Data-to-Computing structure, have to read large volumes of data via the network to the computing nodes for analysis, thereby making the I/O latency become a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability and high fault tolerance, and it can benefit us in dealing with Big Data. This paper brings the idea of using MapReduce model to do BESIII physical analysis, and presents a new data analysis system structure based on Hadoop platform, which not only greatly improve the efficiency of data analysis, but also reduces the cost of system building. Moreover, this paper establishes an event pre-selection system based on the event level metadata(TAGs) database to optimize the data analyzing procedure.
Experimental verification of Space Platform battery discharger design optimization
NASA Astrophysics Data System (ADS)
Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.
The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.
Johnsen Lind, Andreas; Helge Johnsen, Bjorn; Hill, Labarron K; Sollers Iii, John J; Thayer, Julian F
2011-01-01
The aim of the present manuscript is to present a user-friendly and flexible platform for transforming Kubios HRV output files to an .xls-file format, used by MS Excel. The program utilizes either native or bundled Java and is platform-independent and mobile. This means that it can run without being installed on a computer. It also has an option of continuous transferring of data indicating that it can run in the background while Kubios produces output files. The program checks for changes in the file structure and automatically updates the .xls- output file.
Experimental verification of Space Platform battery discharger design optimization
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.
1991-01-01
The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.
NASA Astrophysics Data System (ADS)
Sahbaee, Pooyan; Abadi, Ehsan; Sanders, Jeremiah; Becchetti, Marc; Zhang, Yakun; Agasthya, Greeshma; Segars, Paul; Samei, Ehsan
2016-03-01
The purpose of this study was to substantiate the interdependency of image quality, radiation dose, and contrast material dose in CT towards the patient-specific optimization of the imaging protocols. The study deployed two phantom platforms. First, a variable sized phantom containing an iodinated insert was imaged on a representative CT scanner at multiple CTDI values. The contrast and noise were measured from the reconstructed images for each phantom diameter. Linearly related to iodine-concentration, contrast to noise ratio (CNR), was calculated for different iodine-concentration levels. Second, the analysis was extended to a recently developed suit of 58 virtual human models (5D-XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal image procedure and targeting a peak-enhancement in aorta, each XCAT phantom was "imaged" using a CT simulation platform. 3D surfaces for each patient/size established the relationship between iodine-concentration, dose, and CNR. The Sensitivity of Ratio (SR), defined as ratio of change in iodine-concentration versus dose to yield a constant change in CNR was calculated and compared at high and low radiation dose for both phantom platforms. The results show that sensitivity of CNR to iodine concentration is larger at high radiation dose (up to 73%). The SR results were highly affected by radiation dose metric; CTDI or organ dose. Furthermore, results showed that the presence of contrast material could have a profound impact on optimization results (up to 45%).
VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.
Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon
2013-01-01
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.
VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies
Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon
2013-01-01
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431
OnCampus: a mobile platform towards a smart campus.
Dong, Xin; Kong, Xiangjie; Zhang, Fulin; Chen, Zhen; Kang, Jialiang
2016-01-01
An increasing number of researchers and practitioners are working to develop smart cities. Considerable attention has been paid to the college campus as it is an important component of smart cities. Consequently, the question of how to construct a smart campus has become a topical one. Here, we propose a scheme that can facilitate the construction of a smart and friendly campus. We primarily focus on three aspects of smart campuses. These are: the formation of social circles based on interests mining, the provision of educational guidance based on emotion analysis of information posted on a platform, and development of a secondary trading platform aimed at optimizing the allocation of campus resources. Based on these objectives, we designed and implemented a mobile platform called OnCampus as the first step towards the development of a smart campus that has been introduced in some colleges. We found that OnCampus could successfully accomplish the three above mentioned functions of a smart campus.
Ocean Thermal Energy Conversion (OTEC) Programmatic Environmental Analysis--Appendices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Authors, Various
1980-01-01
The programmatic environmental analysis is an initial assessment of Ocean Thermal Energy Conversion (OTEC) technology considering development, demonstration and commercialization. It is concluded that the OTEC development program should continue because the development, demonstration, and commercialization on a single-plant deployment basis should not present significant environmental impacts. However, several areas within the OTEC program require further investigation in order to assess the potential for environmental impacts from OTEC operation, particularly in large-scale deployments and in defining alternatives to closed-cycle biofouling control: (1) Larger-scale deployments of OTEC clusters or parks require further investigations in order to assess optimal platform siting distancesmore » necessary to minimize adverse environmental impacts. (2) The deployment and operation of the preoperational platform (OTEC-1) and future demonstration platforms must be carefully monitored to refine environmental assessment predictions, and to provide design modifications which may mitigate or reduce environmental impacts for larger-scale operations. These platforms will provide a valuable opportunity to fully evaluate the intake and discharge configurations, biofouling control methods, and both short-term and long-term environmental effects associated with platform operations. (3) Successful development of OTEC technology to use the maximal resource capabilities and to minimize environmental effects will require a concerted environmental management program, encompassing many different disciplines and environmental specialties. This volume contains these appendices: Appendix A -- Deployment Scenario; Appendix B -- OTEC Regional Characterization; and Appendix C -- Impact and Related Calculations.« less
Optimal design and experimental analyses of a new micro-vibration control payload-platform
NASA Astrophysics Data System (ADS)
Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen
2016-07-01
This paper presents a new payload-platform, for precision devices, which possesses the capability of isolating the complex space micro-vibration in low frequency range below 5 Hz. The novel payload-platform equipped with smart material actuators is investigated and designed through optimization strategy based on the minimum energy loss rate, for the aim of achieving high drive efficiency and reducing the effect of the magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established by using the Lagrange method and the performance of the designed payload-platform is further discussed through the combination of the controlled auto regressive moving average (CARMA) model with modified generalized prediction control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has an impressive potential of micro-vibration isolation.
AITSO: A Tool for Spatial Optimization Based on Artificial Immune Systems
Zhao, Xiang; Liu, Yaolin; Liu, Dianfeng; Ma, Xiaoya
2015-01-01
A great challenge facing geocomputation and spatial analysis is spatial optimization, given that it involves various high-dimensional, nonlinear, and complicated relationships. Many efforts have been made with regard to this specific issue, and the strong ability of artificial immune system algorithms has been proven in previous studies. However, user-friendly professional software is still unavailable, which is a great impediment to the popularity of artificial immune systems. This paper describes a free, universal tool, named AITSO, which is capable of solving various optimization problems. It provides a series of standard application programming interfaces (APIs) which can (1) assist researchers in the development of their own problem-specific application plugins to solve practical problems and (2) allow the implementation of some advanced immune operators into the platform to improve the performance of an algorithm. As an integrated, flexible, and convenient tool, AITSO contributes to knowledge sharing and practical problem solving. It is therefore believed that it will advance the development and popularity of spatial optimization in geocomputation and spatial analysis. PMID:25678911
USDA-ARS?s Scientific Manuscript database
The stem cell walls of alfalfa [Medicago sativa (L.) ssp. sativa] genotype 252 have high cellulose and lignin concentrations, while stem cell walls of genotype 1283 have low cellulose and lignin concentrations. The GeneChip® Medicago Genome Array, developed for Medicago truncatula, is a suitable pla...
Harnessing the power of emerging petascale platforms
NASA Astrophysics Data System (ADS)
Mellor-Crummey, John
2007-07-01
As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 503 domain.
Spiral: Automated Computing for Linear Transforms
NASA Astrophysics Data System (ADS)
Püschel, Markus
2010-09-01
Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
Feasibility of Floating Platform Systems for Wind Turbines: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musial, W.; Butterfield, S.; Boone, A.
This paper provides a general technical description of several types of floating platforms for wind turbines. Platform topologies are classified into multiple- or single-turbine floaters and by mooring method. Platforms using catenary mooring systems are contrasted to vertical mooring systems and the advantages and disadvantages are discussed. Specific anchor types are described in detail. A rough cost comparison is performed for two different platform architectures using a generic 5-MW wind turbine. One platform is a Dutch study of a tri-floater platform using a catenary mooring system, and the other is a mono-column tension-leg platform developed at the National Renewable Energymore » Laboratory. Cost estimates showed that single unit production cost is $7.1 M for the Dutch tri-floater, and $6.5 M for the NREL TLP concept. However, value engineering, multiple unit series production, and platform/turbine system optimization can lower the unit platform costs to $4.26 M and $2.88 M, respectively, with significant potential to reduce cost further with system optimization. These foundation costs are within the range necessary to bring the cost of energy down to the DOE target range of $0.05/kWh for large-scale deployment of offshore floating wind turbines.« less
2012-01-01
Background High-throughput methods are widely-used for strain screening effectively resulting in binary information regarding high or low productivity. Nevertheless achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production. Here, the nature of the foreign protein makes it impossible to predict the, e.g. best expression construct, secretion signal peptide, inductor concentration, induction time, temperature and substrate feed rate in fed-batch operation to name only a few. Therefore, a high number of systematic experiments are necessary to elucidate the best conditions for heterologous expression of each new protein of interest. Results To increase the throughput in bioprocess development, we used a microtiter plate based cultivation system (Biolector) which was fully integrated into a liquid-handling platform enclosed in laminar airflow housing. This automated cultivation platform was used for optimization of the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. The online monitoring of biomass, dissolved oxygen and pH in each of the microtiter plate wells enables to trigger sampling or dosing events with the pipetting robot used for a reliable selection of best performing cutinase producers. In addition to this, further automated methods like media optimization and induction profiling were developed and validated. All biological and bioprocess parameters were exclusively optimized at microtiter plate scale and showed perfect scalable results to 1 L and 20 L stirred tank bioreactor scale. Conclusions The optimization of heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess engineering parameters. This can be efficiently boosted by using a microtiter plate cultivation setup embedded into a liquid-handling system, providing more throughput by parallelization and automation. Due to improved statistics by replicate cultivations, automated downstream analysis, and scalable process information, this setup has superior performance compared to standard microtiter plate cultivation. PMID:23113930
The Trip Itinerary Optimization Platform: A Framework for Personalized Travel Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwasnik, Ted; Carmichael, Scott P.; Arent, Douglas J
The New Concepts Incubator team at the National Renewable Energy Laboratory (NREL) developed a three-stage online platform for travel diary collection, personal travel plan optimization and travel itinerary visualization. In the first stage, users provide a travel diary for the previous day through an interactive map and calendar interface and survey for travel attitudes and behaviors. One or more days later, users are invited via email to engage in a second stage where they view a personal mobility dashboard displaying recommended travel itineraries generated from a novel framework that optimizes travel outcomes over a sequence of interrelated trips. A weekmore » or more after viewing these recommended travel itineraries on the dashboard, users are emailed again to engage in a third stage where they complete a final survey about travel attitudes and behaviors. A usability study of the platform conducted online showed that, in general, users found the system valuable for informing their travel decisions. A total of 274 individuals were recruited through Amazon Mechanical Turk, an online survey platform, to participate in a transportation study using this platform. On average, the platform distilled 65 feasible travel plans per individual into two recommended itineraries, each optimal according to one or more outcomes and dependent on the fixed times and locations from the travel diary. For 45 percent of users, the trip recommendation algorithm returned only a single, typically automobile-centric, itinerary because there were no other viable alternative transportation modes available. Platform users generally agreed that the dashboard was enjoyable and easy to use, and that it would be a helpful tool in adopting new travel behaviors. Users generally agreed most that the time, cost and user preferred recommendations 'made sense' to them, and were most willing to implement these itineraries. Platform users typically expressed low willingness to try the carbon and calories optimized itineraries. Of the platform users who viewed the dashboard, 13 percent reported changing their travel behavior, most adopting the time, calories or carbon optimized itineraries. While the algorithm incorporates a wealth of travel data obtained from online APIs pertaining to a travelers route such as historic traffic condition data, public transit time-tables, and bike path routes, open-ended responses from users expressed an interest in the integration of even more fine-grained traffic data and the ability to dynamically model the effect of changes in travel times. Users also commonly expressed concerns over the safety of walking and biking recommendations. Responses indicate that more information about the amenities available to cyclists and pedestrians (sidewalks, shade from trees, access to food) and routes that avoid areas of perceived elevated danger would reduce barriers to implementing these recommendations. More accurate representations of personal vehicle trips (based on vehicle make and model, implications of parking) and the identification of routes that optimize caloric intensity (seeking out elevation changes or longer walks to public transit) are promising avenues for future research.« less
NASA Astrophysics Data System (ADS)
Sono, Tleyane J.; Riziotis, Christos; Mailis, Sakellaris; Eason, Robert W.
2017-09-01
Fabrication capabilities of high optical quality hexagonal superstructures by chemical etching of inverted ferroelectric domains in lithium niobate platform suggests a route for efficient implementation of compact hexagonal microcavities. Such nonlinear optical hexagonal micro-resonators are proposed as a platform for second harmonic generation (SHG) by the combined mechanisms of total internal reflection (TIR) and quasi-phase-matching (QPM). The proposed scheme for SHG via TIR-QPM in a hexagonal microcavity can improve the efficiency and also the compactness of SHG devices compared to traditional linear-type based devices. A simple theoretical model based on six-bounce trajectory and phase matching conditions was capable for obtaining the optimal cavity size. Furthermore numerical simulation results based on finite difference time domain beam propagation method analysis confirmed the solutions obtained by demonstrating resonant operation of the microcavity for the second harmonic wave produced by TIR-QPM. Design aspects, optimization issues and characteristics of the proposed nonlinear device are presented.
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Microfluidic platform for optimization of crystallization conditions
NASA Astrophysics Data System (ADS)
Zhang, Shuheng; Gerard, Charline J. J.; Ikni, Aziza; Ferry, Gilles; Vuillard, Laurent M.; Boutin, Jean A.; Ferte, Nathalie; Grossier, Romain; Candoni, Nadine; Veesler, Stéphane
2017-08-01
We describe a universal, high-throughput droplet-based microfluidic platform for crystallization. It is suitable for a multitude of applications, due to its flexibility, ease of use, compatibility with all solvents and low cost. The platform offers four modular functions: droplet formation, on-line characterization, incubation and observation. We use it to generate droplet arrays with a concentration gradient in continuous long tubing, without using surfactant. We control droplet properties (size, frequency and spacing) in long tubing by using hydrodynamic empirical relations. We measure droplet chemical composition using both an off-line and a real-time on-line method. Applying this platform to a complicated chemical environment, membrane proteins, we successfully handle crystallization, suggesting that the platform is likely to perform well in other circumstances. We validate the platform for fine-gradient screening and optimization of crystallization conditions. Additional on-line detection methods may well be integrated into this platform in the future, for instance, an on-line diffraction technique. We believe this method could find applications in fields such as fluid interaction engineering, live cell study and enzyme kinetics.
NASA Astrophysics Data System (ADS)
Aminfar, Ali; Mojtahedi, Alireza; Ahmadi, Hamid; Aminfar, Mohammad Hossain
2017-06-01
Among numerous offshore structures used in oil extraction, jacket platforms are still the most favorable ones in shallow waters. In such structures, log piles are used to pin the substructure of the platform to the seabed. The pile's geometrical and geotechnical properties are considered as the main parameters in designing these structures. In this study, ANSYS was used as the FE modeling software to study the geometrical and geotechnical properties of the offshore piles and their effects on supporting jacket platforms. For this purpose, the FE analysis has been done to provide the preliminary data for the fuzzy-logic post-process. The resulting data were implemented to create Fuzzy Inference System (FIS) classifications. The resultant data of the sensitivity analysis suggested that the orientation degree is the main factor in the pile's geometrical behavior because piles which had the optimal operational degree of about 5° are more sustained. Finally, the results showed that the related fuzzified data supported the FE model and provided an insight for extended offshore pile designs.
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1993-01-01
A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.
Open source Modeling and optimization tools for Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, S.
Open source modeling and optimization tools for planning The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward tomore » complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.; Pin, F.G.
This paper addresses the problem of time-optional motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin's maximum principle (PMP), we formally demonstrate that all time optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the accelerationmore » on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.; Pin, F.G.
This paper addresses the problem of time-optional motions for a mobile platform in a planar environment. The platform has two non-steerable independently driven wheels. The overall mission of the robot is expressed in terms of a sequence of via points at which the platform must be at rest in a given configuration (position and orientation). The objective is to plan time-optimal trajectories between these configurations assuming an unobstructed environment. Using Pontryagin`s maximum principle (PMP), we formally demonstrate that all time optimal motions of the platform for this problem occur for bang-bang controls on the wheels (at each instant, the accelerationmore » on each wheel is either at its upper or lower limit). The PMP, however, only provides necessary conditions for time optimality. To find the time optimal robot trajectories, we first parameterize the bang-bang trajectories using the switch times on the wheels (the times at which the wheel accelerations change sign). With this parameterization, we can fully search the robot trajectory space and find the switch times that will produce particular paths to a desired final configuration of the platform. We show numerically that robot trajectories with three switch times (two on one wheel, one on the other) can reach any position, while trajectories with four switch times can reach any configuration. By numerical comparison with other trajectories involving similar or greater numbers of switch times, we then identify the sets of time-optimal trajectories. These are uniquely defined using ranges of the parameters, and consist of subsets of trajectories with three switch times for the problem when the final orientation of the robot is not specified, and four switch times when a full final configuration is specified. We conclude with a description of the use of the method for trajectory planning for one of our robots.« less
Ding, Jianhua; Yang, Shuiping; Liang, Dapeng; Chen, Huanwen; Wu, Zhuanzhang; Zhang, Lili; Ren, Yulin
2009-10-01
In metabolomics studies and clinical diagnosis, interest is increasing in the rapid analysis of exhaled breath. In vivo breath analysis offers a unique, unobtrusive, non-invasive method of investigating human metabolism. To analyze breath in vivo, we constructed a novel platform of extractive electrospray ionization (EESI) ion trap mass spectrometry (ITMS) using a home-made EESI source coupled to a linear trap quadrupole mass spectrometer. A reference compound (authentic n-octyl amine) was used to evaluate effects of systematically varying selected characteristics of the EESI source on signal intensity. Under the optimized working conditions, metabolic changes of human bodies were in vivo followed by performing rapid breath analysis using the multi-stage EESI-ITMS tandem mass spectrometry platform. For nicotine, a limit of determination was found to be 0.05 fg mL(-1) (S/N = 3, RSD = 5.0 %, n = 10) for nicotine in aerosol standard samples; the dynamic response range was from 0.0155 pg mL(-1) to 155 pg mL(-1). The concentration of nicotine in the exhaled breath of a regular smoker was in vivo determined to be 5.8 pg mL(-1), without any sample pre-treatment. Our results show that EESI-ITMS is a powerful analytical platform to provide high sensitivity, high specificity and high throughput for semi-quantitative analysis of complex samples in life science, particularly for in vivo metabolomics studies.
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitation, and heating, waypoint, and no-fly zone constraints.
Collaborative Platform for DFM
2007-12-20
generation litho hotspot checkers have also been implemented in automated hotspot fixers that can automatically fix designs by making small changes...processing side (ex. new CMP models, etch models, litho models) and on the circuit side (ex. Process aware circuit analysis or yield optimization...Since final gate CD is a function of not only litho , but Post Exposure Bake, ashing, and etch, the processing module can be augmented with more
Roadmap for Agriculture Biomass Feedstock Supply in the United States
2003-11-01
the high-priority areas for biomass supply forecasts and analysis . Top research needs in sustainability and plant sciences areas are listed in the...petroleum. Lignocellulosic biomass is the nonstarch, fibrous part of plant material that is inherently moist and lightweight. The sugar platform...include: •“Biotechnology, genetics and plant physiology for improved feedstocks, • Optimize agronomic practices, including land use availability and soil
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source soft- ware development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2018-01-23
Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate realworld conditions for the most accurate ADMS evaluation and experimentation.
Mahler, Cornelia; Seidling, Hanna Marita; Stützle, Marion; Ose, Dominik; Baudendistel, Ines; Wensing, Michel; Szecsenyi, Joachim
2018-01-01
Background Information technology tools such as shared patient-centered, Web-based medication platforms hold promise to support safe medication use by strengthening patient participation, enhancing patients’ knowledge, helping patients to improve self-management of their medications, and improving communication on medications among patients and health care professionals (HCPs). However, the uptake of such platforms remains a challenge also due to inadequate user involvement in the development process. Employing a user-centered design (UCD) approach is therefore critical to ensure that user’ adoption is optimal. Objective The purpose of this study was to identify what patients with type 2 diabetes mellitus (T2DM) and their HCPs regard necessary requirements in terms of functionalities and usability of a shared patient-centered, Web-based medication platform for patients with T2DM. Methods This qualitative study included focus groups with purposeful samples of patients with T2DM (n=25), general practitioners (n=13), and health care assistants (n=10) recruited from regional health care settings in southwestern Germany. In total, 8 semistructured focus groups were conducted. Sessions were audio- and video-recorded, transcribed verbatim, and subjected to a computer-aided qualitative content analysis. Results Appropriate security and access methods, supported data entry, printing, and sending information electronically, and tracking medication history were perceived as the essential functionalities. Although patients wanted automatic interaction checks and safety alerts, HCPs on the contrary were concerned that unspecific alerts confuse patients and lead to nonadherence. Furthermore, HCPs were opposed to patients’ ability to withhold or restrict access to information in the platform. To optimize usability, there was consensus among participants to display information in a structured, chronological format, to provide information in lay language, to use visual aids and customize information content, and align the platform to users’ workflow. Conclusions By employing a UCD, this study provides insight into the desired functionalities and usability of patients and HCPs regarding a shared patient-centered, Web-based medication platform, thus increasing the likelihood to achieve a functional and useful system. Substantial and ongoing engagement by all intended user groups is necessary to reconcile differences in requirements of patients and HCPs, especially regarding medication safety alerts and access control. Moreover, effective training of patients and HCPs on medication self-management (support) and optimal use of the tool will be a prerequisite to unfold the platform’s full potential. PMID:29588269
NASA Astrophysics Data System (ADS)
Şahingil, Mehmet C.; Aslan, Murat Š.
2013-10-01
Infrared guided missile seekers utilizing pulse width modulation in target tracking is one of the threats against air platforms. To be able to achieve a "soft-kill" protection of own platform against these type of threats, one needs to examine carefully the seeker operating principle with its special electronic counter-counter measure (ECCM) capability. One of the cost-effective ways of soft kill protection is to use flare decoys in accordance with an optimized dispensing program. Such an optimization requires a good understanding of the threat seeker, capabilities of the air platform and engagement scenario information between them. Modeling and simulation is very powerful tool to achieve a valuable insight and understand the underlying phenomenology. A careful interpretation of simulation results is crucial to infer valuable conclusions from the data. In such an interpretation there are lots of factors (features) which affect the results. Therefore, powerful statistical tools and pattern recognition algorithms are of special interest in the analysis. In this paper, we show how self-organizing maps (SOMs), which is one of those powerful tools, can be used in analyzing the effectiveness of various flare dispensing programs against a PWM seeker. We perform several Monte Carlo runs for a typical engagement scenario in a MATLAB-based simulation environment. In each run, we randomly change the flare dispending program and obtain corresponding class: "successful" or "unsuccessful", depending on whether the corresponding flare dispensing program deceives the seeker or not, respectively. Then, in the analysis phase, we use SOMs to interpret and visualize the results.
A Platform to Optimize the Field Emission Properties of Carbon Nanotube Based Fibers (Postprint)
2016-08-25
University of Dayton Research Institute 300 College Park Ave., Dayton, OH 45469 6) AFRL /RD, Kirtland AFB, Albuquerque, NM 8717... AFRL -RX-WP-JA-2017-0351 A PLATFORM TO OPTIMIZE THE FIELD EMISSION PROPERTIES OF CARBON-NANOTUBE-BASED FIBERS (POSTPRINT) Steven B...Fairchild AFRL /RX M. Cahay and W. Zhu University of Cincinnati K.L. Jensen Naval Research Laboratory R.G. Forbes University of Surrey
DART-MS analysis of inorganic explosives using high temperature thermal desorption†‡
Sisco, Edward; Staymates, Matthew; Gillen, Greg
2018-01-01
An ambient mass spectrometry (MS) platform coupling resistive Joule heating thermal desorption (JHTD) and direct analysis in real time (DART) was implemented for the analysis of inorganic nitrite, nitrate, chlorate, and perchlorate salts. The resistive heating component generated discrete and rapid heating ramps and elevated temperatures, up to approximately 400 °C s−1 and 750 °C, by passing a few amperes of DC current through a nichrome wire. JHTD enhanced the utility and capabilities of traditional DART-MS for the trace detection of previously difficult to detect inorganic compounds. A partial factorial design of experiments (DOE) was implemented for the systematic evaluation of five system parameters. A base set of conditions for JHTD-DART-MS was derived from this evaluation, demonstrating sensitive detection of a range of inorganic oxidizer salts, down to single nanogram levels. DOE also identified JHTD filament current and in-source collision induced dissociation (CID) energy as inducing the greatest effect on system response. Tuning of JHTD current provided a method for controlling the relative degrees of thermal desorption and thermal decomposition. Furthermore, in-source CID provided manipulation of adduct and cluster fragmentation, optimizing the detection of molecular anion species. Finally, the differential thermal desorption nature of the JHTD-DART platform demonstrated efficient desorption and detection of organic and inorganic explosive mixtures, with each desorbing at its respective optimal temperature. PMID:29651308
MC-GenomeKey: a multicloud system for the detection and annotation of genomic variants.
Elshazly, Hatem; Souilmi, Yassine; Tonellato, Peter J; Wall, Dennis P; Abouelhoda, Mohamed
2017-01-20
Next Generation Genome sequencing techniques became affordable for massive sequencing efforts devoted to clinical characterization of human diseases. However, the cost of providing cloud-based data analysis of the mounting datasets remains a concerning bottleneck for providing cost-effective clinical services. To address this computational problem, it is important to optimize the variant analysis workflow and the used analysis tools to reduce the overall computational processing time, and concomitantly reduce the processing cost. Furthermore, it is important to capitalize on the use of the recent development in the cloud computing market, which have witnessed more providers competing in terms of products and prices. In this paper, we present a new package called MC-GenomeKey (Multi-Cloud GenomeKey) that efficiently executes the variant analysis workflow for detecting and annotating mutations using cloud resources from different commercial cloud providers. Our package supports Amazon, Google, and Azure clouds, as well as, any other cloud platform based on OpenStack. Our package allows different scenarios of execution with different levels of sophistication, up to the one where a workflow can be executed using a cluster whose nodes come from different clouds. MC-GenomeKey also supports scenarios to exploit the spot instance model of Amazon in combination with the use of other cloud platforms to provide significant cost reduction. To the best of our knowledge, this is the first solution that optimizes the execution of the workflow using computational resources from different cloud providers. MC-GenomeKey provides an efficient multicloud based solution to detect and annotate mutations. The package can run in different commercial cloud platforms, which enables the user to seize the best offers. The package also provides a reliable means to make use of the low-cost spot instance model of Amazon, as it provides an efficient solution to the sudden termination of spot machines as a result of a sudden price increase. The package has a web-interface and it is available for free for academic use.
Bernhard, Gerda; Mahler, Cornelia; Seidling, Hanna Marita; Stützle, Marion; Ose, Dominik; Baudendistel, Ines; Wensing, Michel; Szecsenyi, Joachim
2018-03-27
Information technology tools such as shared patient-centered, Web-based medication platforms hold promise to support safe medication use by strengthening patient participation, enhancing patients' knowledge, helping patients to improve self-management of their medications, and improving communication on medications among patients and health care professionals (HCPs). However, the uptake of such platforms remains a challenge also due to inadequate user involvement in the development process. Employing a user-centered design (UCD) approach is therefore critical to ensure that user' adoption is optimal. The purpose of this study was to identify what patients with type 2 diabetes mellitus (T2DM) and their HCPs regard necessary requirements in terms of functionalities and usability of a shared patient-centered, Web-based medication platform for patients with T2DM. This qualitative study included focus groups with purposeful samples of patients with T2DM (n=25), general practitioners (n=13), and health care assistants (n=10) recruited from regional health care settings in southwestern Germany. In total, 8 semistructured focus groups were conducted. Sessions were audio- and video-recorded, transcribed verbatim, and subjected to a computer-aided qualitative content analysis. Appropriate security and access methods, supported data entry, printing, and sending information electronically, and tracking medication history were perceived as the essential functionalities. Although patients wanted automatic interaction checks and safety alerts, HCPs on the contrary were concerned that unspecific alerts confuse patients and lead to nonadherence. Furthermore, HCPs were opposed to patients' ability to withhold or restrict access to information in the platform. To optimize usability, there was consensus among participants to display information in a structured, chronological format, to provide information in lay language, to use visual aids and customize information content, and align the platform to users' workflow. By employing a UCD, this study provides insight into the desired functionalities and usability of patients and HCPs regarding a shared patient-centered, Web-based medication platform, thus increasing the likelihood to achieve a functional and useful system. Substantial and ongoing engagement by all intended user groups is necessary to reconcile differences in requirements of patients and HCPs, especially regarding medication safety alerts and access control. Moreover, effective training of patients and HCPs on medication self-management (support) and optimal use of the tool will be a prerequisite to unfold the platform's full potential. ©Gerda Bernhard, Cornelia Mahler, Hanna Marita Seidling, Marion Stützle, Dominik Ose, Ines Baudendistel, Michel Wensing, Joachim Szecsenyi. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.03.2018.
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, that is intended for practical three-dimensional aerodynamic applications, has been developed. It is based on the quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations.The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, that is based on surface parameterization and platform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in the computational time, the results indicate that shape optimization, which require large numbers of grid points can be resolved with a gradient-based approach.
Based on the rainfall system platform raindrops research and analysis of pressure loss
NASA Astrophysics Data System (ADS)
Cao, Gang; Sun, Jian
2018-01-01
With the rapid development of China’s military career, land, sea and air force all services and equipment of modern equipment need to be in the rain test, and verify its might suffer during transportation, storage or use a different environment temperature lower water or use underwater, the water is derived from the heavy rain, the wind and rain, sprinkler system, splash water, water wheel, a violent shock waves or use underwater, etcTest the product performance and quality, under the condition of rainfall system platform in the process of development, how to control the raindrops pressure loss becomes the key to whether the system can simulate the real rainfall [1], this paper is according to the rainfall intensity, nozzle flow resistance, meet water flow of rain pressure loss calculation and analysis, and system arrangement of the optimal solution of rainfall is obtained [2].
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
Architecture Analysis of Wireless Power Transmission for Lunar Outposts
2015-09-01
through his work on wireless communication using radio wave propagation for both transmitting and receiving high frequency electricity using a focusing...Administration nm nanometers NRC National Research Council PGT platform generic technologies PMAD power management and distribution RF radio frequency xiv...GHz (Marzwell 2008). While the slot antenna can handle frequencies between 70 GHz and 150 GHz, it has been optimized for 94 GHz and has a radio
A Novel Platform for Evaluating the Environmental Impacts on Bacterial Cellulose Production.
Basu, Anindya; Vadanan, Sundaravadanam Vishnu; Lim, Sierin
2018-04-10
Bacterial cellulose (BC) is a biocompatible material with versatile applications. However, its large-scale production is challenged by the limited biological knowledge of the bacteria. The advent of synthetic biology has lead the way to the development of BC producing microbes as a novel chassis. Hence, investigation on optimal growth conditions for BC production and understanding of the fundamental biological processes are imperative. In this study, we report a novel analytical platform that can be used for studying the biology and optimizing growth conditions of cellulose producing bacteria. The platform is based on surface growth pattern of the organism and allows us to confirm that cellulose fibrils produced by the bacteria play a pivotal role towards their chemotaxis. The platform efficiently determines the impacts of different growth conditions on cellulose production and is translatable to static culture conditions. The analytical platform provides a means for fundamental biological studies of bacteria chemotaxis as well as systematic approach towards rational design and development of scalable bioprocessing strategies for industrial production of bacterial cellulose.
Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P
2015-05-22
A package of Excel VBA macros have been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms; the fitting and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa's methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
Multi-source Geospatial Data Analysis with Google Earth Engine
NASA Astrophysics Data System (ADS)
Erickson, T.
2014-12-01
The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality.https://earthengine.google.org
Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J
2014-02-01
A Visual Basic simulation software (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on the dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings as indicated in the overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as the individual units. The software first of its kind in its domain and in the well-known Microsoft Excel environment is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.
Ferries, Samantha; Perkins, Simon; Brownridge, Philip J; Campbell, Amy; Eyers, Patrick A; Jones, Andrew R; Eyers, Claire E
2017-09-01
Confident identification of sites of protein phosphorylation by mass spectrometry (MS) is essential to advance understanding of phosphorylation-mediated signaling events. However, the development of novel instrumentation requires that methods for MS data acquisition and its interrogation be evaluated and optimized for high-throughput phosphoproteomics. Here we compare and contrast eight MS acquisition methods on the novel tribrid Orbitrap Fusion MS platform using both a synthetic phosphopeptide library and a complex phosphopeptide-enriched cell lysate. In addition to evaluating multiple fragmentation regimes (HCD, EThcD, and neutral-loss-triggered ET(ca/hc)D) and analyzers for MS/MS (orbitrap (OT) versus ion trap (IT)), we also compare two commonly used bioinformatics platforms, Andromeda with PTM-score, and MASCOT with ptmRS for confident phosphopeptide identification and, crucially, phosphosite localization. Our findings demonstrate that optimal phosphosite identification is achieved using HCD fragmentation and high-resolution orbitrap-based MS/MS analysis, employing MASCOT/ptmRS for data interrogation. Although EThcD is optimal for confident site localization for a given PSM, the increased duty cycle compared with HCD compromises the numbers of phosphosites identified. Finally, our data highlight that a charge-state-dependent fragmentation regime and a multiple algorithm search strategy are likely to be of benefit for confident large-scale phosphosite localization.
Steam distribution and energy delivery optimization using wireless sensors
NASA Astrophysics Data System (ADS)
Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.
2011-05-01
The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework in the ORNL campus. With particular focus on the 12-mile long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy-saving in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages an integrated wireless sensor and real-time monitoring capabilities. We make assessments on the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observe the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.
Solving optimization problems on computational grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S. J.; Mathematics and Computer Science
2001-05-01
Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms havemore » become popular in many settings. A somewhat different type of parallel computing platform know as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, were among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure need to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to development of the runtime support library MW for implementing algorithms with master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programmming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.« less
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.
Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian
2011-01-13
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.
Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm
2010-01-01
Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure−activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods. PMID:24900251
Flow optimization study of a batch microfluidics PET tracer synthesizing device
Elizarov, Arkadij M.; Meinhart, Carl; van Dam, R. Michael; Huang, Jiang; Daridon, Antoine; Heath, James R.; Kolb, Hartmuth C.
2010-01-01
We present numerical modeling and experimental studies of flow optimization inside a batch microfluidic micro-reactor used for synthesis of human-scale doses of Positron Emission Tomography (PET) tracers. Novel techniques are used for mixing within, and eluting liquid out of, the coin-shaped reaction chamber. Numerical solutions of the general incompressible Navier Stokes equations along with time-dependent elution scalar field equation for the three dimensional coin-shaped geometry were obtained and validated using fluorescence imaging analysis techniques. Utilizing the approach presented in this work, we were able to identify optimized geometrical and operational conditions for the micro-reactor in the absence of radioactive material commonly used in PET related tracer production platforms as well as evaluate the designed and fabricated micro-reactor using numerical and experimental validations. PMID:21072595
Structural analysis for preliminary design of High Speed Civil Transport (HSCT)
NASA Technical Reports Server (NTRS)
Bhatia, Kumar G.
1992-01-01
In the preliminary design environment, there is a need for quick evaluation of configuration and material concepts. The simplified beam representations used in the subsonic, high aspect ratio wing platform are not applicable for low aspect ratio configurations typical of supersonic transports. There is a requirement to develop methods for efficient generation of structural arrangement and finite element representation to support multidisciplinary analysis and optimization. In addition, empirical data bases required to validate prediction methods need to be improved for high speed civil transport (HSCT) type configurations.
Li, Hongsheng
2018-01-01
This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers in two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus by using keywords “ankle∗,” and “robot∗,” and (“rehabilitat∗” or “treat∗”). The search is limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have aligned rotation center as the ankle joint, appropriate workspace, and actuation torque, no matter how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors including posture adjustability and sensing functions should also be considered to promote related clinical applications. An ankle rehabilitation robot with reconfigurability to maximize its functions will be a new research point towards optimal design, especially on parallel mechanisms. PMID:29736230
Miao, Qing; Zhang, Mingming; Wang, Congzhe; Li, Hongsheng
2018-01-01
This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers in two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus by using keywords "ankle ∗ ," and "robot ∗ ," and ("rehabilitat ∗ " or "treat ∗ "). The search is limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have aligned rotation center as the ankle joint, appropriate workspace, and actuation torque, no matter how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors including posture adjustability and sensing functions should also be considered to promote related clinical applications. An ankle rehabilitation robot with reconfigurability to maximize its functions will be a new research point towards optimal design, especially on parallel mechanisms.
Network-based drug discovery by integrating systems biology and computational technologies
Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua
2013-01-01
Network-based intervention has been a trend of curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as valuable resources for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, robustness of multiple systems biology platforms shows powerful to uncover molecular mechanisms and connections between the drugs and their targeting dynamic network. However, optimization methods of drug combination are insufficient, owning to lacking of tighter integration across multiple ‘-omics’ databases. The newly developed algorithm- or network-based computational models can tightly integrate ‘-omics’ databases and optimize combinational regimens of drug development, which encourage using medicinal herbs to develop into new wave of network-based multi-target drugs. However, challenges on further integration across the databases of medicinal herbs with multiple system biology platforms for multi-target drug optimization remain to the uncertain reliability of individual data sets, width and depth and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple system biology and herbal database would facilitate the integration. Enhance public accessible databases and the number of research using system biology platform on herbal medicine would be helpful. Further integration across various ‘-omics’ platforms and computational tools would accelerate development of network-based drug discovery and network medicine. PMID:22877768
Metallo-Graphene Nanocomposite Electrocatalytic Platform for the Determination of Toxic Metal Ions
Willemse, Chandre M.; Tlhomelang, Khotso; Jahed, Nazeem; Baker, Priscilla G.; Iwuoha, Emmanuel I.
2011-01-01
A Nafion-Graphene (Nafion-G) nanocomposite solution in combination with an in situ plated mercury film electrode was used as a highly sensitive electrochemical platform for the determination of Zn2+, Cd2+, Pb2+ and Cu2+ in 0.1 M acetate buffer (pH 4.6) by square-wave anodic stripping voltammetry (SWASV). Various operational parameters such as deposition potential, deposition time and electrode rotation speed were optimized. The Nafion-G nanocomposite sensing platform exhibited improved sensitivity for metal ion detection, in addition to well defined, reproducible and sharp stripping signals. The linear calibration curves ranged from 1 μg L−1 to 7 μg L−1 for individual analysis. The detection limits (3σ blank/slope) obtained were 0.07 μg L−1 for Pb2+, Zn2+ and Cu2+ and 0.08 μg L−1 for Cd2+ at a deposition time of 120 s. For practical applications recovery studies was done by spiking test samples with known concentrations and comparing the results with inductively coupled plasma mass spectrometry (ICP-MS) analyses. This was followed by real sample analysis. PMID:22163831
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with themore » fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.« less
Beal, Jacob; Lu, Ting; Weiss, Ron
2011-01-01
Background The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. Methodology/Principal Findings To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes () and latency of the optimized engineered gene networks. Conclusions/Significance Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems. PMID:21850228
Beal, Jacob; Lu, Ting; Weiss, Ron
2011-01-01
The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.
A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells
Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.
2017-01-01
Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
Urban Rain Gauge Siting Selection Based on Gis-Multicriteria Analysis
NASA Astrophysics Data System (ADS)
Fu, Yanli; Jing, Changfeng; Du, Mingyi
2016-06-01
With the increasingly rapid growth of urbanization and climate change, urban rainfall monitoring as well as urban waterlogging has widely been paid attention. In the light of conventional siting selection methods do not take into consideration of geographic surroundings and spatial-temporal scale for the urban rain gauge site selection, this paper primarily aims at finding the appropriate siting selection rules and methods for rain gauge in urban area. Additionally, for optimization gauge location, a spatial decision support system (DSS) aided by geographical information system (GIS) has been developed. In terms of a series of criteria, the rain gauge optimal site-search problem can be addressed by a multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. With the platform of GIS, using spatial kernel density analysis can reflect the population density; GIS buffer analysis is used to optimize the location with the rain gauge signal transmission character. Experiment results show that the rules and the proposed method are proper for the rain gauge site selection in urban areas, which is significant for the siting selection of urban hydrological facilities and infrastructure, such as water gauge.
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize the economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides an initial insight on economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing economic impacts of alternative policies or operating criteria, avoiding the perfect foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Xi-cam: Flexible High Throughput Data Processing for GISAXS
NASA Astrophysics Data System (ADS)
Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander
With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel cpu and gpu processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.
NASA Astrophysics Data System (ADS)
Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary
2015-10-01
Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
Design of a portable electronic nose for real-fake detection of liquors
NASA Astrophysics Data System (ADS)
Qi, Pei-Feng; Zeng, Ming; Li, Zhi-Hua; Sun, Biao; Meng, Qing-Hao
2017-09-01
Portability is a major issue that influences the practical application of electronic noses (e-noses). For liquors detection, an e-nose must preprocess the liquid samples (e.g., using evaporation and thermal desorption), which makes the portable design even more difficult. To realize convenient and rapid detection of liquors, we designed a portable e-nose platform that consists of hardware and software systems. The hardware system contains an evaporation/sampling module, a reaction module, a control/data acquisition and analysis module, and a power module. The software system provides a user-friendly interface and can achieve automatic sampling and data processing. This e-nose platform has been applied to the real-fake recognition of Chinese liquors. Through parameter optimization of a one-class support vector machine classifier, the error rate of the negative samples is greatly reduced, and the overall recognition accuracy is improved. The results validated the feasibility of the designed portable e-nose platform.
Analysis of Aurora's Performance Simulation Engine for Three Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine; Simon, Joseph
2015-07-07
Aurora Solar Inc. is building a cloud-based optimization platform to automate the design, engineering, and permit generation process of solar photovoltaic (PV) installations. They requested that the National Renewable Energy Laboratory (NREL) validate the performance of the PV system performance simulation engine of Aurora Solar’s solar design platform, Aurora. In previous work, NREL performed a validation of multiple other PV modeling tools 1, so this study builds upon that work by examining all of the same fixed-tilt systems with available module datasheets that NREL selected and used in the aforementioned study. Aurora Solar set up these three operating PV systemsmore » in their modeling platform using NREL-provided system specifications and concurrent weather data. NREL then verified the setup of these systems, ran the simulations, and compared the Aurora-predicted performance data to measured performance data for those three systems, as well as to performance data predicted by other PV modeling tools.« less
S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry
NASA Astrophysics Data System (ADS)
Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime
2013-05-01
S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding the rigorous electromagnetic computation and several inverse problem solver solutions. This software platform is built to be a userfriendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any types of pattern. It aims to combine a set of inverse problem solver capabilities — via adapted Levenberg- Marquard optimization, Kriging, Neural Network solutions — that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly vulnerable to materials optical properties, S-Genius may be coupled with an innovative material refractive indices determination. This paper will a little bit more focuses on the modified Levenberg-Marquardt optimization, one of the indirect method solver built up in parallel with the total SGenius software coding by yours truly. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with an adapted damping parameter regarding the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration, python-powered, multi-platform (windows/linux/macOS), multi-core, ready for 2D- (infinite features along the direction perpendicular to the incident plane), conical, and 3D-features computation, compatible with all kinds of input data from any possible ellipsometers (angle or wavelength resolved) or reflectometers, and widely used in our laboratory for resist trimming studies, etching features characterization (such as complex stack) or nano-imprint lithography measurements for instance. The work about kriging solver, neural network solver and material refractive indices determination is done (or about to) by other LTM members and about to be integrated on S-Genius platform.
Targeting multiple heterogeneous hardware platforms with OpenCL
NASA Astrophysics Data System (ADS)
Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.
2014-06-01
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as is traditionally done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results frommore » experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20 reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.« less
Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis †.
Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia
2018-05-03
Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Ugoletti, Roberto; Sulla, Jeff
1992-01-01
The evolution and optimization of a real-time digital control system is presented. The control system is part of a testbed used to perform focused technology research on the interactions of spacecraft platform and instrument controllers with the flexible-body dynamics of the platform and platform appendages. The control system consists of Computer Automated Measurement and Control (CAMAC) standard data acquisition equipment interfaced to a workstation computer. The goal of this work is to optimize the control system's performance to support controls research using controllers with up to 50 states and frame rates above 200 Hz. The original system could support a 16-state controller operating at a rate of 150 Hz. By using simple yet effective software improvements, Input/Output (I/O) latencies and contention problems are reduced or eliminated in the control system. The final configuration can support a 16-state controller operating at 475 Hz. Effectively the control system's performance was increased by a factor of 3.
Novel application of digital microfluidics for the detection of biotinidase deficiency in newborns.
Graham, Carrie; Sista, Ramakrishna S; Kleinert, Jairus; Wu, Ning; Eckhardt, Allen; Bali, Deeksha; Millington, David S; Pamula, Vamsee K
2013-12-01
Newborn screening for biotinidase deficiency can be performed using a fluorometric enzyme assay on dried blood spot specimens. As a pre-requisite to the consolidation of different enzymatic assays onto a single platform, we describe here a novel analytical method for detecting biotinidase deficiency using the same digital microfluidic cartridge that has already been demonstrated to screen for five lysosomal storage diseases (Pompe, Fabry, Gaucher, Hurler and Hunter) in a multiplex format. A novel assay to quantify biotinidase concentration in dried blood spots (DBS) was developed and optimized on the digital microfluidic platform using proficiency testing samples from the Centers for Disease Control and Prevention. The enzymatic assay uses 4-methylumbelliferyl biotin as the fluorogenic substrate. Biotinidase deficiency assays were performed on normal (n=200) and deficient (n=7) newborn DBS specimens. Enzymatic activity analysis of biotinidase deficiency revealed distinct separation between normal and affected DBS specimens using digital microfluidics and these results matched the expected activity. This study has demonstrated performance of biotinidase deficiency assays by measurement of 4-methylumbelliferyl product on a digital microfluidic platform. Due to the inherent ease in multiplexing on such a platform, consolidation of other fluorometric assays onto a single cartridge may be realized. © 2013.
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We prosed a hybrid energy systems (HES) which is an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES, and develops a methodology for operations optimization for maximizing economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulationmore » results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under proposed operations optimizer, suggesting the diversion of energy for alternative energy output while participating in the ancillary service market. Economic advantages of such operations optimizer and associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analysis with respect to market variability and prediction error, are also performed.« less
Economic optimization of operations for hybrid energy systems under variable markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jen; Garcia, Humberto E.
We prosed a hybrid energy systems (HES) which is an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES, and develops a methodology for operations optimization for maximizing economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulationmore » results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under proposed operations optimizer, suggesting the diversion of energy for alternative energy output while participating in the ancillary service market. Economic advantages of such operations optimizer and associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analysis with respect to market variability and prediction error, are also performed.« less
Market-Based and System-Wide Fuel Cycle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew
This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.
PR-PR: Cross-Platform Laboratory Automation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linshiz, G; Stawski, N; Goyal, G
To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Goldenmore » Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.« less
PR-PR: cross-platform laboratory automation system.
Linshiz, Gregory; Stawski, Nina; Goyal, Garima; Bi, Changhao; Poust, Sean; Sharma, Monica; Mutalik, Vivek; Keasling, Jay D; Hillson, Nathan J
2014-08-15
To enable protocol standardization, sharing, and efficient implementation across laboratory automation platforms, we have further developed the PR-PR open-source high-level biology-friendly robot programming language as a cross-platform laboratory automation system. Beyond liquid-handling robotics, PR-PR now supports microfluidic and microscopy platforms, as well as protocol translation into human languages, such as English. While the same set of basic PR-PR commands and features are available for each supported platform, the underlying optimization and translation modules vary from platform to platform. Here, we describe these further developments to PR-PR, and demonstrate the experimental implementation and validation of PR-PR protocols for combinatorial modified Golden Gate DNA assembly across liquid-handling robotic, microfluidic, and manual platforms. To further test PR-PR cross-platform performance, we then implement and assess PR-PR protocols for Kunkel DNA mutagenesis and hierarchical Gibson DNA assembly for microfluidic and manual platforms.
The pitfalls of platform comparison: DNA copy number array technologies assessed
2009-01-01
Background The accurate and high resolution mapping of DNA copy number aberrations has become an important tool by which to gain insight into the mechanisms of tumourigenesis. There are various commercially available platforms for such studies, but there remains no general consensus as to the optimal platform. There have been several previous platform comparison studies, but they have either described older technologies, used less-complex samples, or have not addressed the issue of the inherent biases in such comparisons. Here we describe a systematic comparison of data from four leading microarray technologies (the Affymetrix Genome-wide SNP 5.0 array, Agilent High-Density CGH Human 244A array, Illumina HumanCNV370-Duo DNA Analysis BeadChip, and the Nimblegen 385 K oligonucleotide array). We compare samples derived from primary breast tumours and their corresponding matched normals, well-established cancer cell lines, and HapMap individuals. By careful consideration and avoidance of potential sources of bias, we aim to provide a fair assessment of platform performance. Results By performing a theoretical assessment of the reproducibility, noise, and sensitivity of each platform, notable differences were revealed. Nimblegen exhibited between-replicate array variances an order of magnitude greater than the other three platforms, with Agilent slightly outperforming the others, and a comparison of self-self hybridizations revealed similar patterns. An assessment of the single probe power revealed that Agilent exhibits the highest sensitivity. Additionally, we performed an in-depth visual assessment of the ability of each platform to detect aberrations of varying sizes. As expected, all platforms were able to identify large aberrations in a robust manner. However, some focal amplifications and deletions were only detected in a subset of the platforms. Conclusion Although there are substantial differences in the design, density, and number of replicate probes, the comparison indicates a generally high level of concordance between platforms, despite differences in the reproducibility, noise, and sensitivity. In general, Agilent tended to be the best aCGH platform and Affymetrix, the superior SNP-CGH platform, but for specific decisions the results described herein provide a guide for platform selection and study design, and the dataset a resource for more tailored comparisons. PMID:19995423
Stec, James; Wang, Jing; Coombes, Kevin; Ayers, Mark; Hoersch, Sebastian; Gold, David L.; Ross, Jeffrey S; Hess, Kenneth R.; Tirrell, Stephen; Linette, Gerald; Hortobagyi, Gabriel N.; Symmans, W. Fraser; Pusztai, Lajos
2005-01-01
We examined how well differentially expressed genes and multigene outcome classifiers retain their class-discriminating values when tested on data generated by different transcriptional profiling platforms. RNA from 33 stage I-III breast cancers was hybridized to both Affymetrix GeneChip and Millennium Pharmaceuticals cDNA arrays. Only 30% of all corresponding gene expression measurements on the two platforms had Pearson correlation coefficient r ≥ 0.7 when UniGene was used to match probes. There was substantial variation in correlation between different Affymetrix probe sets matched to the same cDNA probe. When cDNA and Affymetrix probes were matched by basic local alignment tool (BLAST) sequence identity, the correlation increased substantially. We identified 182 genes in the Affymetrix and 45 in the cDNA data (including 17 common genes) that accurately separated 91% of cases in supervised hierarchical clustering in each data set. Cross-platform testing of these informative genes resulted in lower clustering accuracy of 45 and 79%, respectively. Several sets of accurate five-gene classifiers were developed on each platform using linear discriminant analysis. The best 100 classifiers showed average misclassification error rate of 2% on the original data that rose to 19.5% when tested on data from the other platform. Random five-gene classifiers showed misclassification error rate of 33%. We conclude that multigene predictors optimized for one platform lose accuracy when applied to data from another platform due to missing genes and sequence differences in probes that result in differing measurements for the same gene. PMID:16049308
Correlation between y-type ions observed in ion trap and triple quadrupole mass spectrometers.
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Vitek, Olga; Martin, Daniel B
2009-09-01
Multiple reaction monitoring mass spectrometry (MRM-MS) is a technique for high-sensitivity targeted analysis. In proteomics, MRM-MS can be used to monitor and quantify a peptide based on the production of expected fragment peaks from the selected peptide precursor ion. The choice of which fragment ions to monitor in order to achieve maximum sensitivity in MRM-MS can potentially be guided by existing MS/MS spectra. However, because the majority of discovery experiments are performed on ion trap platforms, there is concern in the field regarding the generalizability of these spectra to MRM-MS on a triple quadrupole instrument. In light of this concern, many operators perform an optimization step to determine the most intense fragments for a target peptide on a triple quadrupole mass spectrometer. We have addressed this issue by targeting, on a triple quadrupole, the top six y-ion peaks from ion trap-derived consensus library spectra for 258 doubly charged peptides from three different sample sets and quantifying the observed elution curves. This analysis revealed a strong correlation between the y-ion peak rank order and relative intensity across platforms. This suggests that y-type ions obtained from ion trap-based library spectra are well-suited for generating MRM-MS assays for triple quadrupoles and that optimization is not required for each target peptide.
The Role of Bed Roughness in Wave Transformation Across Sloping Rock Shore Platforms
NASA Astrophysics Data System (ADS)
Poate, Tim; Masselink, Gerd; Austin, Martin J.; Dickson, Mark; McCall, Robert
2018-01-01
We present for the first time observations and model simulations of wave transformation across sloping (Type A) rock shore platforms. Pressure measurements of the water surface elevation using up to 15 sensors across five rock platforms with contrasting roughness, gradient, and wave climate represent the most extensive collected, both in terms of the range of environmental conditions, and the temporal and spatial resolution. Platforms are shown to dissipate both incident and infragravity wave energy as skewness and asymmetry develop and, in line with previous studies, surf zone wave heights are saturated and strongly tidally modulated. Overall, the observed properties of the waves and formulations derived from sandy beaches do not highlight any systematic interplatform variation, in spite of significant differences in platform roughness, suggesting that friction can be neglected when studying short wave transformation. Optimization of a numerical wave transformation model shows that the wave breaker criterion falls between the range of values reported for flat sandy beaches and those of steep coral fore reefs. However, the optimized drag coefficient shows significant scatter for the roughest sites and an alternative empirical drag model, based on the platform roughness, does not improve model performance. Thus, model results indicate that the parameterization of frictional drag using the bottom roughness length-scale may be inappropriate for the roughest platforms. Based on these results, we examine the balance of wave breaking to frictional dissipation for rock platforms and find that friction is only significant for very rough, flat platforms during small wave conditions outside the surf zone.
Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems
NASA Astrophysics Data System (ADS)
Mullen, Douglas Gurnett
Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic acid targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, active targeting was not found in vivo using a mouse tumor model. A major flaw of this platform design was that it did not provide for characterization or control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions were examined using a model dendrimer-ligand system. High Pressure Liquid Chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak fitting analysis enabled the quantification of the component distribution. Quantified distributions were found to be significantly more heterogeneous than commonly expected and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimer with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%. This system has the potential to serve as a platform to which a precise number of functional molecules can be attached and has the potential to dramatically improve platform efficacy. An additional investigation of reproducibility challenges for current dendrimer-based platform designs is also described. The mass transport quality during the partial acetylation reaction of the dendrimer was found to have a major impact on subsequent dendrimer-ligand distributions that cannot be detected by standard analytical techniques. Consequently, this reaction should be eliminated from the platform design. Finally, optimized protocols for purification and characterization of PAMAM dendrimer were detailed.
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes thermoelectric materials have become less costly and more efficient As next generation thermoelectric materials become available there is a need for industries to quickly and cost effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing on only module design and not an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, Thermoelectric Power System Simulator (TEPSS), is developed for system level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLABRTM. A modular, shell based architecture is developed to carry out concept generation, system simulation and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections are verified and provided as well.
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
2017-04-17
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and airmore » conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.« less
An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.
We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and airmore » conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.« less
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problematic for sample preparation in biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure an optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing, when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which perform at the same time the encapsulation of DLD sorted particles and the balance of output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490
Na, Y; Suh, T; Xing, L
2012-06-01
Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using emerging cloud computing platform. As a backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to scale seamlessly a number of working group instances, called workers, based on the user-defined setting account for MO functions in clinical setting. Each worker solved the objective function with an efficient sparse decomposition method. The workers are automatically terminated if there are finished tasks. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of plans are comparable obtained using the cloud computing platform are identical to that obtained using desktop PC (Intel Xeon® CPU 2.33GHz, 8GB memory). It is found that the MO planning speeds up the processing of obtaining the Pareto set substantially for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced by the fitting model, 0.2+2.3/N, with r̂2>0.99, on average of the cases making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
Hwang, Sang Mee; Lee, Ki Chan; Lee, Min Seob; Park, Kyoung Un
2018-01-01
Transition to next generation sequencing (NGS) for BRCA1 / BRCA2 analysis in clinical laboratories is ongoing but different platforms and/or data analysis pipelines give different results resulting in difficulties in implementation. We have evaluated the Ion Personal Genome Machine (PGM) Platforms (Ion PGM, Ion PGM Dx, Thermo Fisher Scientific) for the analysis of BRCA1 /2. The results of Ion PGM with OTG-snpcaller, a pipeline based on Torrent mapping alignment program and Genome Analysis Toolkit, from 75 clinical samples and 14 reference DNA samples were compared with Sanger sequencing for BRCA1 / BRCA2 . Ten clinical samples and 14 reference DNA samples were additionally sequenced by Ion PGM Dx with Torrent Suite. Fifty types of variants including 18 pathogenic or variants of unknown significance were identified from 75 clinical samples and known variants of the reference samples were confirmed by Sanger sequencing and/or NGS. One false-negative results were present for Ion PGM/OTG-snpcaller for an indel variant misidentified as a single nucleotide variant. However, eight discordant results were present for Ion PGM Dx/Torrent Suite with both false-positive and -negative results. A 40-bp deletion, a 4-bp deletion and a 1-bp deletion variant was not called and a false-positive deletion was identified. Four other variants were misidentified as another variant. Ion PGM/OTG-snpcaller showed acceptable performance with good concordance with Sanger sequencing. However, Ion PGM Dx/Torrent Suite showed many discrepant results not suitable for use in a clinical laboratory, requiring further optimization of the data analysis for calling variants.
A Bioimpedance Analysis Platform for Amputee Residual Limb Assessment.
Sanders, Joan E; Moehring, Mark A; Rothlisberger, Travis M; Phillips, Reid H; Hartley, Tyler; Dietrich, Colin R; Redd, Christian B; Gardner, David W; Cagle, John C
2016-08-01
The objective of this research was to develop a bioimpedance platform for monitoring fluid volume in residual limbs of people with trans-tibial limb loss using prostheses. A customized multifrequency current stimulus profile was sent to thin flat electrodes positioned on the thigh and distal residual limb. The applied current signal and sensed voltage signals from four pairs of electrodes located on the anterior and posterior surfaces were demodulated into resistive and reactive components. An established electrical model (Cole) and segmental limb geometry model were used to convert results to extracellular and intracellular fluid volumes. Bench tests and testing on amputee participants were conducted to optimize the stimulus profile and electrode design and layout. The proximal current injection electrode needed to be at least 25 cm from the proximal voltage sensing electrode. A thin layer of hydrogel needed to be present during testing to ensure good electrical coupling. Using a burst duration of 2.0 ms, intermission interval of 100 μs, and sampling delay of 10 μs at each of 24 frequencies except 5 kHz, which required a 200-μs sampling delay, the system achieved a sampling rate of 19.7 Hz. The designed bioimpedance platform allowed system settings and electrode layouts and positions to be optimized for amputee limb fluid volume measurement. The system will be useful toward identifying and ranking prosthetic design features and participant characteristics that impact residual limb fluid volume.
Design and formulation of functional pluripotent stem cell-derived cardiac microtissues
Thavandiran, Nimalan; Dubois, Nicole; Mikryukov, Alexander; Massé, Stéphane; Beca, Bogdan; Simmons, Craig A.; Deshpande, Vikram S.; McGarry, J. Patrick; Chen, Christopher S.; Nanthakumar, Kumaraswamy; Keller, Gordon M.; Radisic, Milica; Zandstra, Peter W.
2013-01-01
Access to robust and information-rich human cardiac tissue models would accelerate drug-based strategies for treating heart disease. Despite significant effort, the generation of high-fidelity adult-like human cardiac tissue analogs remains challenging. We used computational modeling of tissue contraction and assembly mechanics in conjunction with microfabricated constraints to guide the design of aligned and functional 3D human pluripotent stem cell (hPSC)-derived cardiac microtissues that we term cardiac microwires (CMWs). Miniaturization of the platform circumvented the need for tissue vascularization and enabled higher-throughput image-based analysis of CMW drug responsiveness. CMW tissue properties could be tuned using electromechanical stimuli and cell composition. Specifically, controlling self-assembly of 3D tissues in aligned collagen, and pacing with point stimulation electrodes, were found to promote cardiac maturation-associated gene expression and in vivo-like electrical signal propagation. Furthermore, screening a range of hPSC-derived cardiac cell ratios identified that 75% NKX2 Homeobox 5 (NKX2-5)+ cardiomyocytes and 25% Cluster of Differentiation 90 OR (CD90)+ nonmyocytes optimized tissue remodeling dynamics and yielded enhanced structural and functional properties. Finally, we demonstrate the utility of the optimized platform in a tachycardic model of arrhythmogenesis, an aspect of cardiac electrophysiology not previously recapitulated in 3D in vitro hPSC-derived cardiac microtissue models. The design criteria identified with our CMW platform should accelerate the development of predictive in vitro assays of human heart tissue function. PMID:24255110
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1990-01-01
A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
Brown, Roger B; Madrid, Nathaniel J; Suzuki, Hideaki; Ness, Scott A
2017-01-01
RNA-sequencing (RNA-seq) has become the standard method for unbiased analysis of gene expression but also provides access to more complex transcriptome features, including alternative RNA splicing, RNA editing, and even detection of fusion transcripts formed through chromosomal translocations. However, differences in library methods can adversely affect the ability to recover these different types of transcriptome data. For example, some methods have bias for one end of transcripts or rely on low-efficiency steps that limit the complexity of the resulting library, making detection of rare transcripts less likely. We tested several commonly used methods of RNA-seq library preparation and found vast differences in the detection of advanced transcriptome features, such as alternatively spliced isoforms and RNA editing sites. By comparing several different protocols available for the Ion Proton sequencer and by utilizing detailed bioinformatics analysis tools, we were able to develop an optimized random primer based RNA-seq technique that is reliable at uncovering rare transcript isoforms and RNA editing features, as well as fusion reads from oncogenic chromosome rearrangements. The combination of optimized libraries and rapid Ion Proton sequencing provides a powerful platform for the transcriptome analysis of research and clinical samples.
Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)
NASA Astrophysics Data System (ADS)
Hancher, M.
2013-12-01
Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exiting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping
2016-02-11
Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains as a time and labor-intensive task particularly in multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART requires firstly a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor was also employed for comparison, and found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applications of the SMART approach among different instrumental platforms. This approach was further validated by applying to simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components independent of standards, the SMART approach are expected to find its wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
a Real-Time GIS Platform for High Sour Gas Leakage Simulation, Evaluation and Visualization
NASA Astrophysics Data System (ADS)
Li, M.; Liu, H.; Yang, C.
2015-07-01
The development of high-sulfur gas fields, also known as sour gas field, is faced with a series of safety control and emergency management problems. The GIS-based emergency response system is placed high expectations under the consideration of high pressure, high content, complex terrain and highly density population in Sichuan Basin, southwest China. The most researches on high hydrogen sulphide gas dispersion simulation and evaluation are used for environmental impact assessment (EIA) or emergency preparedness planning. This paper introduces a real-time GIS platform for high-sulfur gas emergency response. Combining with real-time data from the leak detection systems and the meteorological monitoring stations, GIS platform provides the functions of simulating, evaluating and displaying of the different spatial-temporal toxic gas distribution patterns and evaluation results. This paper firstly proposes the architecture of Emergency Response/Management System, secondly explains EPA's Gaussian dispersion model CALPUFF simulation workflow under high complex terrain and real-time data, thirdly explains the emergency workflow and spatial analysis functions of computing the accident influencing areas, population and the optimal evacuation routes. Finally, a well blow scenarios is used for verify the system. The study shows that GIS platform which integrates the real-time data and CALPUFF models will be one of the essential operational platforms for high-sulfur gas fields emergency management.
Integrated Microfluidic Lectin Barcode Platform for High-Performance Focused Glycomic Profiling
NASA Astrophysics Data System (ADS)
Shang, Yuqin; Zeng, Yun; Zeng, Yong
2016-02-01
Protein glycosylation is one of the key processes that play essential roles in biological functions and dysfunctions. However, progress in glycomics has considerably lagged behind genomics and proteomics, due in part to the enormous challenges in analysis of glycans. Here we present a new integrated and automated microfluidic lectin barcode platform to substantially improve the performance of lectin array for focused glycomic profiling. The chip design and flow control were optimized to promote the lectin-glycan binding kinetics and speed of lectin microarray. Moreover, we established an on-chip lectin assay which employs a very simple blocking method to effectively suppress the undesired background due to lectin binding of antibodies. Using this technology, we demonstrated focused differential profiling of tissue-specific glycosylation changes of a biomarker, CA125 protein purified from ovarian cancer cell line and different tissues from ovarian cancer patients in a fast, reproducible, and high-throughput fashion. Highly sensitive CA125 detection was also demonstrated with a detection limit much lower than the clinical cutoff value for cancer diagnosis. This microfluidic platform holds the potential to integrate with sample preparation functions to construct a fully integrated “sample-to-answer” microsystem for focused differential glycomic analysis. Thus, our technology should present a powerful tool in support of rapid advance in glycobiology and glyco-biomarker development.
Integrated Microfluidic Lectin Barcode Platform for High-Performance Focused Glycomic Profiling
Shang, Yuqin; Zeng, Yun; Zeng, Yong
2016-01-01
Protein glycosylation is one of the key processes that play essential roles in biological functions and dysfunctions. However, progress in glycomics has considerably lagged behind genomics and proteomics, due in part to the enormous challenges in analysis of glycans. Here we present a new integrated and automated microfluidic lectin barcode platform to substantially improve the performance of lectin array for focused glycomic profiling. The chip design and flow control were optimized to promote the lectin-glycan binding kinetics and speed of lectin microarray. Moreover, we established an on-chip lectin assay which employs a very simple blocking method to effectively suppress the undesired background due to lectin binding of antibodies. Using this technology, we demonstrated focused differential profiling of tissue-specific glycosylation changes of a biomarker, CA125 protein purified from ovarian cancer cell line and different tissues from ovarian cancer patients in a fast, reproducible, and high-throughput fashion. Highly sensitive CA125 detection was also demonstrated with a detection limit much lower than the clinical cutoff value for cancer diagnosis. This microfluidic platform holds the potential to integrate with sample preparation functions to construct a fully integrated “sample-to-answer” microsystem for focused differential glycomic analysis. Thus, our technology should present a powerful tool in support of rapid advance in glycobiology and glyco-biomarker development. PMID:26831207
Xyce parallel electronic simulator users guide, version 6.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas; Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers; A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models; Device models that are specifically tailored to meet Sandia's needs, including some radiationaware devices (for Sandia users only); and Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase-a message passing parallel implementation-which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Xyce parallel electronic simulator users' guide, Version 6.0.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Xyce parallel electronic simulator users guide, version 6.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R; Mei, Ting; Russo, Thomas V.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandias needs, including some radiationaware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase a message passing parallel implementation which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Establish a Data Transmission Platform of the Rig Based on the Distributed Network
NASA Astrophysics Data System (ADS)
Bao, Zefu; Li, Tao
In order to control in real-time ,closed-loop feedback the information, saving the money and labor,we distribute a platform of network data. It through the establishment of the platform in the oil drilling to achieve the easiest route of each device of the rig that conveying timely. The design proposed the platform to transfer networking data by PA which allows the rig control for optimal use. Against the idea,achieving first through on-site cabling and the establishment of data transmission module in the rig monitoring system. The results of standard field application show that the platform solve the problem of rig control.
Zhang, Yun-jian; Li, Qiang; Zhang, Yu-xiu; Wang, Dan; Xing, Jian-min
2012-01-01
Succinic acid is considered as an important platform chemical. Succinic acid fermentation with Actinobacillus succinogenes strain BE-1 was optimized by central composite design (CCD) using a response surface methodology (RSM). The optimized production of succinic acid was predicted and the interactive effects between glucose, yeast extract, and magnesium carbonate were investigated. As a result, a model for predicting the concentration of succinic acid production was developed. The accuracy of the model was confirmed by the analysis of variance (ANOVA), and the validity was further proved by verification experiments showing that percentage errors between actual and predicted values varied from 3.02% to 6.38%. In addition, it was observed that the interactive effect between yeast extract and magnesium carbonate was statistically significant. In conclusion, RSM is an effective and useful method for optimizing the medium components and investigating the interactive effects, and can provide valuable information for succinic acid scale-up fermentation using A. succinogenes strain BE-1. PMID:22302423
Relationship Between Optimal Gain and Coherence Zone in Flight Simulation
NASA Technical Reports Server (NTRS)
Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.
2011-01-01
In motion simulation the inertial information generated by the motion platform is most of the times different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. Such result seems related with the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related with the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also try to infer if the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain are different than the ones obtained with the coherence zone measurements. The optimal gain is within the coherence zone.The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration in both the coherence zone and optimal gain measurements.
Papadakis, G; Friedt, J M; Eck, M; Rabus, D; Jobst, G; Gizeli, E
2017-09-01
The development of integrated platforms incorporating an acoustic device as the detection element requires addressing simultaneously several challenges of technological and scientific nature. The present work was focused on the design of a microfluidic module, which, combined with a dual or array type Love wave acoustic chip could be applied to biomedical applications and molecular diagnostics. Based on a systematic study we optimized the mechanics of the flow cell attachment and the sealing material so that fluidic interfacing/encapsulation would impose minimal losses to the acoustic wave. We have also investigated combinations of operating frequencies with waveguide materials and thicknesses for maximum sensitivity during the detection of protein and DNA biomarkers. Within our investigations neutravidin was used as a model protein biomarker and unpurified PCR amplified Salmonella DNA as the model genetic target. Our results clearly indicate the need for experimental verification of the optimum engineering and analytical parameters, in order to develop commercially viable systems for integrated analysis. The good reproducibility of the signal together with the ability of the array biochip to detect multiple samples hold promise for the future use of the integrated system in a Lab-on-a-Chip platform for application to molecular diagnostics.
Microfluidic Platform for High-throughput Screening of Leach Chemistry.
Yang, Die; Priest, Craig
2018-06-20
We demonstrate an optofluidic screening platform for studying thiosulfate leaching of Au in a transparent microchannel. The approach permits in situ (optical) monitoring of Au thickness, reduced reagent use, rapid optimization of reagent chem-istry, screening of temperature, and determination of the activation energy. The results demonstrate the critical importance of the (1) preparation and storage of the leach solution, (2) deposition and annealing of the Au film, and (3) lixiviant chem-istry. The density of sputter deposited Au films decreased with depth resulting in accelerating leach rates during experiments. Atomic leach rates were determined and were constant throughout each experiment. Annealing above 270 °C was found to prevent leaching, which can be attributed to diffusion of the chromium adhesion layer into the Au film. The optofluidic analysis revealed leach rates that are sensitive to the stoichiometric ratio of thiosulphate, ammonia and copper in the leach solution, and optimized for 10 mM CuSO 4 , 1 M Na 2 S 2 O 3 and 1 M NH 4 OH. The temperature dependence of the leach rate gave an apparent activation energy of ~ 40 kJ.mol -1 , based on Arrhenius' relationship.
Natural orifice translumenal endoscopic surgery: Progress in humans since white paper
Santos, Byron F; Hungness, Eric S
2011-01-01
Since the first description of the concept of natural orifice translumenal endoscopic surgery (NOTES), a substantial number of clinical NOTES reports have appeared in the literature. This editorial reviews the available human data addressing research questions originally proposed by the white paper, including determining the optimal method of access for NOTES, developing safe methods of lumenal closure, suturing and anastomotic devices, advanced multitasking platforms, addressing the risk of infection, managing complications, addressing challenges with visualization, and training for NOTES procedures. An analysis of the literature reveals that so far transvaginal access and closure appear to be the most feasible techniques for NOTES, with a limited, but growing transgastric, transrectal, and transesophageal NOTES experience in humans. The theoretically increased risk of infection as a result of NOTES procedures has not been substantiated in transvaginal and transgastric procedures so far. Development of suturing and anastomotic devices and advanced platforms for NOTES has progressed slowly, with limited clinical data on their use so far. Data on the optimal management and incidence of intraoperative complications remain sparse, although possible factors contributing to complications are discussed. Finally, this editorial discusses the likely direction of future NOTES development and its possible role in clinical practice. PMID:21483624
Makssoud, Hassan El; Richards, Carol L; Comeau, François
2009-01-01
Virtual reality (VR) technology offers the opportunity to expose patients to complex physical environments without physical danger and thus provides a wide range of opportunities for locomotor training or the study of human postural and walking behavior. A VR-based locomotor training system has been developed for gait rehabilitation post-stroke. A clinical study has shown that persons after stroke are able to adapt and benefit from this novel system wherein they walk into virtual environments (VEs) on a self-paced treadmill mounted on a platform with 6 degrees of freedom. This platform is programmed to mimic changes in the terrain encountered in the VEs. While engaging in these VEs, excessive trunk movements and speed alterations have been observed, especially during the pitch perturbations accompanying uphill or downhill terrain changes. An in-depth study of the subject's behavior in relation to the platform movements revealed that the platform rotational axes need to be modified, as previously shown by Barton et al, and in addition did not consider the subject's position on the treadmill. The aim of this study was to determine an optimal solution to simulate walking in real life when engaging in VEs.
Knowledge discovery through games and game theory
NASA Astrophysics Data System (ADS)
Smith, James F., III; Rhyne, Robert D.
2001-03-01
A fuzzy logic based expert system has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar platforms. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. The initial version of the algorithm was optimized using a genetic algorithm employing fitness functions constructed based on expertise. A new approach is being explored that involves embedding the resource manager in a electronic game environment. The game allows a human expert to play against the resource manager in a simulated battlespace with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge, it calls a data mining function, a genetic algorithm, for data mining of the database as required. The game allows easy evaluation of the information mined in the second step. The measure of effectiveness (MOE) for re-optimization is discussed. The mined information is extremely valuable as shown through demanding scenarios.
Design, Development and Validation of the Eurostar 3000 Large Propellant Tank
NASA Astrophysics Data System (ADS)
Autric, J.-M.; Catherall, D.; Figues, C.; Brockhoff, T.; Lafranconi, R.
2004-10-01
EADS Astrium has undertaken the design and development of an enlarged propellant tank for its high modular Eurostar 3000 telecom satellites platform. The design and development activities included fracture, stress and functional analysis, the manufacturing of development models for the propellant management device, the qualification of new manufacturing processes and the optimization of the design with respect to the main requirements. The successful design and development-testing phase has allowed starting the manufacturing of the qualification model.
Liu, Zhanyu
2017-09-01
By analyzing the current hospital anti hepatitis drug use, dosage, indications and drug resistance, this article studied the drug inventory management and cost optimization. The author used drug utilization evaluation method, analyzed the amount and kind distribution of anti hepatitis drugs and made dynamic monitoring of inventory. At the same time, the author puts forward an effective scheme of drug classification management, uses the ABC classification method to classify the drugs according to the average daily dose of drugs, and implements the automatic replenishment plan. The design of pharmaceutical services supply chain includes drug procurement platform, warehouse management system and connect to the hospital system through data exchange. Through the statistical analysis of drug inventory, we put forward the countermeasures of drug logistics optimization. The results showed that drug replenishment plan can effectively improve drugs inventory efficiency.
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
The application of AES algorithm in the digital cinema system avoids video data to be illegal theft or malicious tampering, and solves its security problems. At the same time, in order to meet the requirements of the real-time, scene and transparent encryption of high-speed data streams of audio and video in the information security field, through the in-depth analysis of AES algorithm principle, based on the hardware platform of TMS320DM6446, with the software framework structure of DaVinci, this paper proposes the specific realization methods of AES algorithm in digital video system and its optimization solutions. The test results show digital movies encrypted by AES128 can not play normally, which ensures the security of digital movies. Through the comparison of the performance of AES128 algorithm before optimization and after, the correctness and validity of improved algorithm is verified.
Exploring quantum computing application to satellite data assimilation
NASA Astrophysics Data System (ADS)
Cheung, S.; Zhang, S. Q.
2015-12-01
This is an exploring work on potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large number of variables and data. The new quantum computer opens a very different approach both in conceptual programming and in hardware architecture for solving optimization problem. In order to explore if we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of quadratic programming optimization problem. We find a transformation of the problem to map it into Quadratic Unconstrained Binary Optimization (QUBO) framework. Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition and all calculations in BWT are performed by Boolean operations. The transformed problem will be experimented as to solve for a solution of QUBO instances defined on Chimera graphs of the quantum computer.
NASA Astrophysics Data System (ADS)
Li, Ke Sherry; Chu, Phillip Y.; Fourie-O'Donohue, Aimee; Srikumar, Neha; Kozak, Katherine R.; Liu, Yichin; Tran, John C.
2018-05-01
Antibody-drug conjugates (ADCs) present unique challenges for ligand-binding assays primarily due to the dynamic changes of the drug-to-antibody ratio (DAR) distribution in vivo and in vitro. Here, an automated on-tip affinity capture platform with subsequent mass spectrometry analysis was developed to accurately characterize the DAR distribution of ADCs from biological matrices. A variety of elution buffers were tested to offer optimal recovery, with trastuzumab serving as a surrogate to the ADCs. High assay repeatability (CV 3%) was achieved for trastuzumab antibody when captured below the maximal binding capacity of 7.5 μg. Efficient on-tip deglycosylation was also demonstrated in 1 h followed by affinity capture. Moreover, this tip-based platform affords higher throughput for DAR characterization when compared with a well-characterized bead-based method.
An Intrinsically Digital Amplification Scheme for Hearing Aids
NASA Astrophysics Data System (ADS)
Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.
2005-12-01
Results for linear and wide-dynamic range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP processor also provided the opportunity for blind trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update . In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
Optimization Model for Web Based Multimodal Interactive Simulations
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-01-01
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
NASA Astrophysics Data System (ADS)
Liu, Qi; Zhang, Cheng; Ding, Xianting; Deng, Hui; Zhang, Daming; Cui, Wei; Xu, Hongwei; Wang, Yingwei; Xu, Wanhai; Lv, Lei; Zhang, Hongyu; He, Yinghua; Wu, Qiong; Szyf, Moshe; Ho, Chih-Ming; Zhu, Jingde
2015-06-01
Therapeutic outcomes of combination chemotherapy have not significantly advanced during the past decades. This has been attributed to the formidable challenges of optimizing drug combinations. Testing a matrix of all possible combinations of doses and agents in a single cell line is unfeasible due to the virtually infinite number of possibilities. We utilized the Feedback System Control (FSC) platform, a phenotype oriented approach to test 100 options among 15,625 possible combinations in four rounds of assaying to identify an optimal tri-drug combination in eight distinct chemoresistant bladder cancer cell lines. This combination killed between 82.86% and 99.52% of BCa cells, but only 47.47% of the immortalized benign bladder epithelial cells. Preclinical in vivo verification revealed its markedly enhanced anti-tumor efficacy as compared to its bi- or mono-drug components in cell line-derived tumor xenografts. The collective response of these pathways to component drugs was both cell type- and drug type specific. However, the entire spectrum of pathways triggered by the tri-drug regimen was similar in all four cancer cell lines, explaining its broad spectrum killing of BCa lines, which did not occur with its component drugs. Our findings here suggest that the FSC platform holdspromise for optimization of anti-cancer combination chemotherapy.
NASA Astrophysics Data System (ADS)
Ozbasaran, Hakan
Trusses have an important place amongst engineering structures due to many advantages such as high structural efficiency, fast assembly and easy maintenance. Iterative truss design procedures, which require analysis of a large number of candidate structural systems such as size, shape and topology optimization with stochastic methods, mostly lead the engineer to establish a link between the development platform and external structural analysis software. By increasing number of structural analyses, this (probably slow-response) link may climb to the top of the list of performance issues. This paper introduces a software for static, global member buckling and frequency analysis of 2D and 3D trusses to overcome this problem for Mathematica users.
Large-scale prediction of ADAR-mediated effective human A-to-I RNA editing.
Yao, Li; Wang, Heming; Song, Yuanyuan; Dai, Zhen; Yu, Hao; Yin, Ming; Wang, Dongxu; Yang, Xin; Wang, Jinlin; Wang, Tiedong; Cao, Nan; Zhu, Jimin; Shen, Xizhong; Song, Guangqi; Zhao, Yicheng
2017-08-10
Adenosine-to-inosine (A-to-I) editing by adenosine deaminase acting on the RNA (ADAR) proteins is one of the most frequent modifications during post- and co-transcription. To facilitate the assignment of biological functions to specific editing sites, we designed an automatic online platform to annotate A-to-I RNA editing sites in pre-mRNA splicing signals, microRNAs (miRNAs) and miRNA target untranslated regions (3' UTRs) from human (Homo sapiens) high-throughput sequencing data and predict their effects based on large-scale bioinformatic analysis. After analysing plenty of previously reported RNA editing events and human normal tissues RNA high-seq data, >60 000 potentially effective RNA editing events on functional genes were found. The RNA Editing Plus platform is available for free at https://www.rnaeditplus.org/, and we believe our platform governing multiple optimized methods will improve further studies of A-to-I-induced editing post-transcriptional regulation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.
Jung, Sang-Kyu; McDonald, Karen
2011-08-16
Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization
2011-01-01
Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353
GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.
Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A
2017-03-01
We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, and build and execute an array of image analysis routines, and provides a mechanism to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
Sharma, Atul; Hayat, Akhtar; Mishra, Rupesh K; Catanante, Gaëlle; Bhand, Sunil; Marty, Jean Louis
2015-09-22
We demonstrate for the first time, the development of titanium dioxide nanoparticles (TiO₂) quenching based aptasensing platform for detection of target molecules. TiO₂ quench the fluorescence of FAM-labeled aptamer (fluorescein labeled aptamer) upon the non-covalent adsorption of fluorescent labeled aptamer on TiO₂ surface. When OTA interacts with the aptamer, it induced aptamer G-quadruplex complex formation, weakens the interaction between FAM-labeled aptamer and TiO₂, resulting in fluorescence recovery. As a proof of concept, an assay was employed for detection of Ochratoxin A (OTA). At optimized experimental condition, the obtained limit of detection (LOD) was 1.5 nM with a good linearity in the range 1.5 nM to 1.0 µM for OTA. The obtained results showed the high selectivity of assay towards OTA without interference to structurally similar analogue Ochratoxin B (OTB). The developed aptamer assay was evaluated for detection of OTA in beer sample and recoveries were recorded in the range from 94.30%-99.20%. Analytical figures of the merits of the developed aptasensing platform confirmed its applicability to real samples analysis. However, this is a generic aptasensing platform and can be extended for detection of other toxins or target analyte.
Capaldi, Stefano
2014-01-01
In recent years, the production of recombinant pharmaceutical proteins in heterologous systems has increased significantly. Most applications involve complex proteins and glycoproteins that are difficult to produce, thus promoting the development and improvement of a wide range of production platforms. No individual system is optimal for the production of all recombinant proteins, so the diversity of platforms based on plants offers a significant advantage. Here, we discuss the production of four recombinant pharmaceutical proteins using different platforms, highlighting from these examples the unique advantages of plant-based systems over traditional fermenter-based expression platforms. PMID:24745008
Wade, James H; Jones, Joshua D; Lenov, Ivan L; Riordan, Colleen M; Sligar, Stephen G; Bailey, Ryan C
2017-08-22
The characterization of integral membrane proteins presents numerous analytical challenges on account of their poor activity under non-native conditions, limited solubility in aqueous solutions, and low expression in most cell culture systems. Nanodiscs are synthetic model membrane constructs that offer many advantages for studying membrane protein function by offering a native-like phospholipid bilayer environment. The successful incorporation of membrane proteins within Nanodiscs requires experimental optimization of conditions. Standard protocols for Nanodisc formation can require large amounts of time and input material, limiting the facile screening of formation conditions. Capitalizing on the miniaturization and efficient mass transport inherent to microfluidics, we have developed a microfluidic platform for efficient Nanodisc assembly and purification, and demonstrated the ability to incorporate functional membrane proteins into the resulting Nanodiscs. In addition to working with reduced sample volumes, this platform simplifies membrane protein incorporation from a multi-stage protocol requiring several hours or days into a single platform that outputs purified Nanodiscs in less than one hour. To demonstrate the utility of this platform, we incorporated Cytochrome P450 into Nanodiscs of variable size and lipid composition, and present spectroscopic evidence for the functional active site of the membrane protein. This platform is a promising new tool for membrane protein biology and biochemistry that enables tremendous versatility for optimizing the incorporation of membrane proteins using microfluidic gradients to screen across diverse formation conditions.
An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.
Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir
2013-01-01
DNA sequence alignment is a cardinal process in computational biology but also is much expensive computationally when performing through traditional computational platforms like CPU. Of many off the shelf platforms explored for speeding up the computation process, FPGA stands as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make FPGA as the most appropriate choice for realizing the aim of personal genomics. The previous implementation of DNA sequence alignment did not take into consideration the price of the device on which optimization was performed. This paper presents optimization over previous FPGA implementation that increases the overall speed-up achieved as well as the price incurred by the platform that was optimized. The optimizations are (1) The array of processing elements is made to run on change in input value and not on clock, so eliminating the need for tight clock synchronization, (2) the implementation is unrestrained by the size of the sequences to be aligned, (3) the waiting time required for the sequences to load to FPGA is reduced to the minimum possible and (4) an efficient method is devised to store the output matrix that make possible to save the diagonal elements to be used in next pass, in parallel with the computation of output matrix. Implemented on Spartan3 FPGA, this implementation achieved 20 times performance improvement in terms of CUPS over GPP implementation.
Telemonitoring of patients with Parkinson's disease using inertia sensors.
Piro, N E; Baumann, L; Tengler, M; Piro, L; Blechschmidt-Trapp, R
2014-01-01
Medical treatment in patients suffering from Parkinson's disease is very difficult as dose-finding is mainly based on selective and subjective impressions by the physician. To allow for the objective evaluation of patients' symptoms required for optimal dosefinding, a telemonitoring system tracks the motion of patients in their surroundings. The system focuses on providing interoperability and usability in order to ensure high acceptance. Patients wear inertia sensors and perform standardized motor tasks. Data are recorded, processed and then presented to the physician in a 3D animated form. In addition, the same data is rated based on the UPDRS score. Interoperability is realized by developing the system in compliance with the recommendations of the Continua Health Alliance. Detailed requirements analysis and continuous collaboration with respective user groups help to achieve high usability. A sensor platform was developed that is capable of measuring acceleration and angular rate of motions as well as the absolute orientation of the device itself through an included compass sensor. The system architecture was designed and required infrastructure, and essential parts of the communication between the system components were implemented following Continua guidelines. Moreover, preliminary data analysis based on three-dimensional acceleration and angular rate data could be established. A prototype system for the telemonitoring of Parkinson's disease patients was successfully developed. The developed sensor platform fully satisfies the needs of monitoring patients of Parkinson's disease and is comparable to other sensor platforms, although these sensor platforms have yet to be tested rigorously against each other. Suitable approaches to provide interoperability and usability were identified and realized and remain to be tested in the field.
Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita
2018-04-01
We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of building a platform that can securely host heterogeneous types of data and provide an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Wei; Shabbir, Faizan; Gong, Chao
2015-04-13
We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processingmore » units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.« less
Post, Harm; Penning, Renske; Fitzpatrick, Martin A; Garrigues, Luc B; Wu, W; MacGillavry, Harold D; Hoogenraad, Casper C; Heck, Albert J R; Altelaar, A F Maarten
2017-02-03
Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC-MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts, placing new demands on enrichment protocols to make them less labor-intensive, more sensitive, and less prone to variability. Here we assessed an automated enrichment protocol using Fe(III)-IMAC cartridges on an AssayMAP Bravo platform to meet these demands. The automated Fe(III)-IMAC-based enrichment workflow proved to be more effective when compared to a TiO 2 -based enrichment using the same platform and a manual Ti(IV)-IMAC-based enrichment workflow. As initial samples, a dilution series of both human HeLa cell and primary rat hippocampal neuron lysates was used, going down to 0.1 μg of peptide starting material. The optimized workflow proved to be efficient, sensitive, and reproducible, identifying, localizing, and quantifying thousands of phosphosites from just micrograms of starting material. To further test the automated workflow in genuine biological applications, we monitored EGF-induced signaling in hippocampal neurons, starting with only 200 000 primary cells, resulting in ∼50 μg of protein material. This revealed a comprehensive phosphoproteome, showing regulation of multiple members of the MAPK pathway and reduced phosphorylation status of two glutamate receptors involved in synaptic plasticity.
A microfluidic chaotic mixer platform for cancer stem cell immunocapture and release
NASA Astrophysics Data System (ADS)
Shaner, Sebastian Wesley
Isolation of exceedingly rare and ambiguous cells, like cancer stem cells (CSCs), from a pool of other abundant cells is a daunting task primarily due to the inadequately defined properties of such cells. With phenotypes of different CSCs fairly well-defined, immunocapturing of CSCs is a desirable cell-specific capture technique. A microfluidic device is a proven candidate that offers the platform for user-constrained microenvironments that can be optimized for small-scale volumetric flow experimentation. In this study, we show how a well-known passive micromixer design (staggered herringbone mixer - SHM) can be optimized to induce maximum chaotic mixing within antibody-laced microchannels and, ultimately, promote CSC capture. The device's (Cancer Stem Cell Capture Chip - CSC3 (TM)) principle design configuration is called: Single-Walled Staggered Herringbone (SWaSH). The CSC3 (TM) was constructed of a polydimethylsiloxane (PDMS) foundation and thinly coated with an alginate hydrogel derivatized with streptavidin. The results of our work showed that the non-stickiness of alginate and antigen-specific antibodies allowed for superb target-specific cell isolation and negligible non-specific cell binding. Future engineering design directions include developing new configurations (e.g. Staggered High-Low Herringbone (SHiLoH) and offset SHiLoH) to optimize microvortex generation within the microchannels. This study's qualitative and quantitative results can help stimulate progress into refinements in device design and prospective advancements in cancer stem cell isolation and more comprehensive single-cell and cluster analysis.
Foltz, Ian N; Gunasekaran, Kannan; King, Chadwick T
2016-03-01
Since the late 1990s, the use of transgenic animal platforms has transformed the discovery of fully human therapeutic monoclonal antibodies. The first approved therapy derived from a transgenic platform--the epidermal growth factor receptor antagonist panitumumab to treat advanced colorectal cancer--was developed using XenoMouse(®) technology. Since its approval in 2006, the science of discovering and developing therapeutic monoclonal antibodies derived from the XenoMouse(®) platform has advanced considerably. The emerging array of antibody therapeutics developed using transgenic technologies is expected to include antibodies and antibody fragments with novel mechanisms of action and extreme potencies. In addition to these impressive functional properties, these antibodies will be designed to have superior biophysical properties that enable highly efficient large-scale manufacturing methods. Achieving these new heights in antibody drug discovery will ultimately bring better medicines to patients. Here, we review best practices for the discovery and bio-optimization of monoclonal antibodies that fit functional design goals and meet high manufacturing standards. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NMR-based platform for fragment-based lead discovery used in screening BRD4-targeted compounds
Yu, Jun-lan; Chen, Tian-tian; Zhou, Chen; Lian, Fu-lin; Tang, Xu-long; Wen, Yi; Shen, Jing-kang; Xu, Ye-chun; Xiong, Bing; Zhang, Nai-xia
2016-01-01
Aim: Fragment-based lead discovery (FBLD) is a complementary approach in drug research and development. In this study, we established an NMR-based FBLD platform that was used to screen novel scaffolds targeting human bromodomain of BRD4, and investigated the binding interactions between hit compounds and the target protein. Methods: 1D NMR techniques were primarily used to generate the fragment library and to screen compounds. The inhibitory activity of hits on the first bromodomain of BRD4 [BRD4(I)] was examined using fluorescence anisotropy binding assay. 2D NMR and X-ray crystallography were applied to characterize the binding interactions between hit compounds and the target protein. Results: An NMR-based fragment library containing 539 compounds was established, which were clustered into 56 groups (8–10 compounds in each group). Eight hits with new scaffolds were found to inhibit BRD4(I). Four out of the 8 hits (compounds 1, 2, 8 and 9) had IC50 values of 100–260 μmol/L, demonstrating their potential for further BRD4-targeted hit-to-lead optimization. Analysis of the binding interactions revealed that compounds 1 and 2 shared a common quinazolin core structure and bound to BRD4(I) in a non-acetylated lysine mimetic mode. Conclusion: An NMR-based platform for FBLD was established and used in discovery of BRD4-targeted compounds. Four potential hit-to-lead optimization candidates have been found, two of them bound to BRD4(I) in a non-acetylated lysine mimetic mode, being selective BRD4(I) inhibitors. PMID:27238211
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.
2016-12-01
The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled Hg transport module to investigate the mercury pollution. In this study, we present our work of transplanting the GNAQPMS model on Intel Xeon Phi processor, Knights Landing (KNL) to accelerate the model. KNL is the second-generation product adopting Many Integrated Core Architecture (MIC) architecture. Compared with the first generation Knight Corner (KNC), KNL has more new hardware features, that it can be used as unique processor as well as coprocessor with other CPU. According to the Vtune tool, the high overhead modules in GNAQPMS model have been addressed, including CBMZ gas chemistry, advection and convection module, and wet deposition module. These high overhead modules were accelerated by optimizing code and using new techniques of KNL. The following optimized measures was done: 1) Changing the pure MPI parallel mode to hybrid parallel mode with MPI and OpenMP; 2.Vectorizing the code to using the 512-bit wide vector computation unit. 3. Reducing unnecessary memory access and calculation. 4. Reducing Thread Local Storage (TLS) for common variables with each OpenMP thread in CBMZ. 5. Changing the way of global communication from files writing and reading to MPI functions. After optimization, the performance of GNAQPMS is greatly increased both on CPU and KNL platform, the single-node test showed that optimized version has 2.6x speedup on two sockets CPU platform and 3.3x speedup on one socket KNL platform compared with the baseline version code, which means the KNL has 1.29x speedup when compared with 2 sockets CPU platform.
Fortuna, Lorena M; Diyamandoglu, Vasil
2017-08-01
Product reuse in the solid waste management sector is promoted as one of the key strategies for waste prevention. This practice is considered to have favorable impact on the environment, but its benefits have yet to be established. Existing research describes the perspective of "avoided production" only, but has failed to examine the interdependent nature of reuse practices within an entire solid waste management system. This study proposes a new framework that uses optimization to minimize the greenhouse gas emissions of an integrated solid waste management system that includes reuse strategies and practices such as reuse enterprises, online platforms, and materials exchanges along with traditional solid waste management practices such as recycling, landfilling, and incineration. The proposed framework uses material flow analysis in combination with an optimization model to provide the best outcome in terms of GHG emissions by redistributing product flows in the integrated solid waste management system to the least impacting routes and processes. The optimization results provide a basis for understanding the contributions of reuse to the environmental benefits of the integrated solid waste management system and the exploration of the effects of reuse activities on waste prevention. A case study involving second-hand clothing is presented to illustrate the implementation of the proposed framework as applied to the material flow. Results of the case study showed the considerable impact of reuse on GHG emissions even for small replacement rates, and helped illustrate the interdependency of the reuse sector with other waste management practices. One major contribution of this study is the development of a framework centered on product reuse that can be applied to identify the best management strategies to reduce the environmental impact of product disposal and to increase recovery of reusable products. Copyright © 2017 Elsevier Ltd. All rights reserved.
Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho
2017-11-01
High-throughput sequencing methods have become a routine analysis tool in environmental sciences as well as in public and private sector. These methods provide vast amount of data, which need to be analysed in several steps. Although the bioinformatics may be applied using several public tools, many analytical pipelines allow too few options for the optimal analysis for more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We described the design and options of PipeCraft and evaluated its performance by analysing the data sets from three different sequencing platforms. We demonstrated that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
Barata, David; Spennati, Giulia; Correia, Cristina; Ribeiro, Nelson; Harink, Björn; van Blitterswijk, Clemens; Habibovic, Pamela; van Rijt, Sabine
2017-09-07
Microfluidics, the science of engineering fluid streams at the micrometer scale, offers unique tools for creating and controlling gradients of soluble compounds. Gradient generation can be used to recreate complex physiological microenvironments, but is also useful for screening purposes. For example, in a single experiment, adherent cells can be exposed to a range of concentrations of the compound of interest, enabling high-content analysis of cell behaviour and enhancing throughput. In this study, we present the development of a microfluidic screening platform where, by means of diffusion, gradients of soluble compounds can be generated and sustained. This platform enables the culture of adherent cells under shear stress-free conditions, and their exposure to a soluble compound in a concentration gradient-wise manner. The platform consists of five serial cell culture chambers, all coupled to two lateral fluid supply channels that are used for gradient generation through a source-sink mechanism. Furthermore, an additional inlet and outlet are used for cell seeding inside the chambers. Finite element modeling was used for the optimization of the design of the platform and for validation of the dynamics of gradient generation. Then, as a proof-of-concept, human osteosarcoma MG-63 cells were cultured inside the platform and exposed to a gradient of Cytochalasin D, an actin polymerization inhibitor. This set-up allowed us to analyze cell morphological changes over time, including cell area and eccentricity measurements, as a function of Cytochalasin D concentration by using fluorescence image-based cytometry.
The development of optimal control laws for orbiting tethered platform systems
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Woodard, S.; Juang, J.-N.
1986-01-01
A mathematical model of the open and closed loop in-orbit plane dynamics of a space platform-tethered-subsatellite system is developed. The system consists of a rigid platform from which an (assumed massless) tether is deploying (retrieving) a subsatellite from an attachment point which is, in general, offset from the platform's mass center. A Lagrangian formulation yields equations describing platform pitch, subsatellite tether-line swing, and varying tether length motions. These equations are linearized about the nominal station keeping motion. Control can be provided by both modulation of the tether tension level and by a momentum type platform-mounted device; system controllability depends on the presence of both control inputs. Stability criteria are developed in terms of the control law gains, the platform inertia ratio, and tether offset parameter. Control law gains are obtained based on linear quadratic regulator techniques. Typical transient responses of both the state and required control effort are presented.
The development of optimal control laws for orbiting tethered platform systems
NASA Technical Reports Server (NTRS)
Bainum, P. M.
1986-01-01
A mathematical model of the open and closed loop in orbit plane dynamics of a space platform-tethered-subsatellite system is developed. The system consists of a rigid platform from which an (assumed massless) tether is deploying (retrieving) a subsatellite from an attachment point which is, in general, offset from the platform's mass center. A Langrangian formulation yields equations describing platform pitch, subsatellite tetherline swing, and varying tether length motions. These equations are linearized about the nominal station keeping motion. Control can be provided by both modulation of the tether tension level and by a momentum type platform-mounted device; system controllability depends on the presence of both control inputs. Stability criteria are developed in terms of the control law gains, the platform inertia ratio, and tether offset parameter. Control law gains are obtained based on linear quadratic regulator techniques. Typical transient responses of both the state and required control effort are presented.
U.S. Space Station platform - Configuration technology for customer servicing
NASA Technical Reports Server (NTRS)
Dezio, Joseph A.; Walton, Barbara A.
1987-01-01
Features of the Space Station coorbiting and polar orbiting platforms (COP and POP, respectively) are described that will allow them to be configured optimally to meet mission requirements and to be assembled, serviced, and modified on-orbit. Both of these platforms were designed to permit servicing at the Shuttle using the remote manipulator system with teleoperated end effectors; EVA was planned as a backup and for unplanned payload failure modes. Station-based servicing is discussed as well as expendable launch vehicle-based servicing concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, its long computation time still hinders wide application in routine clinic. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from GPUs were subjectmore » to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with a NCAT phantom and an experimental case with a Catphan phantom. Result: Compared with the CBCT images reconstructed using conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third projection numbers. The computation time on the multi-GPU platform was ∼55 s and ∼ 35 s in the two cases respectively, achieving a speedup factor of ∼ 3.0 compared with single GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enabled performing reconstruction on a multi-GPU platform. The achieved efficiency made this method clinically attractive.« less
A polymeric micro total analysis system for single-cell analysis
NASA Astrophysics Data System (ADS)
Lai, Hsuan-Hong
The advancement of microengineering has enabled the manipulation and analysis of single cells, which is critical in understanding the molecular mechanisms underlying the basic physiological functions from the point of view of modern biologists. Unfortunately, analysis of single cells remains challenging from a technical perspective, mainly because of the miniature nature of the cell and the high throughput requirements of the analysis. Lab-on-a-chip (LOC) emerges as a research field that shows great promise in this perspective. We have demonstrated a micro total analysis system (mu-TAS) combining chip-based electrophoretic separation, fluorescence detection, and a pulsed Nd:YAG laser cell lysis system, in a Poly(dimethylsiloxane) (PDMS) microfluidic analytical platform for the implementation of single-cell analysis. To accomplish the task, a polymeric microfluidic device was fabricated and UV graft polymerization surface modification techniques were used. To optimize the conditions for the surface treatment techniques, the modified surfaces of PDMS were characterized using AIR-IR spectrum and sessile water drop contact angle measurements, and in-channel surfaces were characterized by their electroosmotic flow mobility. Accurate single-cell analysis relies on rapid cell lysis and therefore an optical measure of fast cell lysis was implemented and optimized in a microscopic station. The influences of pulse energy and the location of the laser beam with respect to the cell in the microchannel were explored. The observation from the cell disruption experiments suggested that the cell lysis was enabled mainly via a thermo-mechanical instead of a plasma-mediated mechanism. Finally, after chip-based electrophoresis and a laser-induced fluorescence (LIF) detection system were incorporated with the laser lysis system in a microfluidic analytical station, a feasibility demonstration of single-cell analysis was implemented. The analytical platform exhibited the capability of fluidic transportation, optical lysis of single cells, separation, and analysis of the lysates by electrophoresis and LIF detection. In comparison with the control experiment, the migration times of the fluorescent signals for the cytosolic fluorophores were in good agreement with those for the standard fluorophores, which confirmed the feasibility of the analytical processes.
Floating Offshore WTG Integrated Load Analysis & Optimization Employing a Tuned Mass Damper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez Tsouroukdissian, Arturo; Lackner, Matt; Cross-Whiter, John
2015-09-25
Floating offshore wind turbines (FOWTs) present complex design challenges due to the coupled dynamics of the platform motion, mooring system, and turbine control systems, in response to wind and wave loading. This can lead to higher extreme and fatigue loads than a comparable fixed bottom or onshore system. Previous research[1] has shown the potential to reduced extreme and fatigue loads on FOWT using tuned mass dampers (TMD) for structural control. This project aims to reduce maximum loads using passive TMDs located at the tower top during extreme storm events, when grid supplied power for other controls systems may not bemore » available. The Alstom Haliade 6MW wind turbine is modelled on the Glosten Pelastar tension-leg platform (TLP). The primary objectives of this project are to provide a preliminary assessment of the load reduction potential of passive TMDs on real wind turbine and TLP designs.« less
Shibuta, Mayu; Tamura, Masato; Kanie, Kei; Yanagisawa, Masumi; Matsui, Hirofumi; Satoh, Taku; Takagi, Toshiyuki; Kanamori, Toshiyuki; Sugiura, Shinji; Kato, Ryuji
2018-06-09
Cellular morphology on and in a scaffold composed of extracellular matrix generally represents the cellular phenotype. Therefore, morphology-based cell separation should be interesting method that is applicable to cell separation without staining surface markers in contrast to conventional cell separation methods (e.g., fluorescence activated cell sorting and magnetic activated cell sorting). In our previous study, we have proposed a cloning technology using a photodegradable gelatin hydrogel to separate the individual cells on and in hydrogels. To further expand the applicability of this photodegradable hydrogel culture platform, we here report an image-based cell separation system imaging cell picker for the morphology-based cell separation on a photodegradable hydrogel. We have developed the platform which enables the automated workflow of image acquisition, image processing and morphology analysis, and collection of a target cells. We have shown the performance of the morphology-based cell separation through the optimization of the critical parameters that determine the system's performance, such as (i) culture conditions, (ii) imaging conditions, and (iii) the image analysis scheme, to actually clone the cells of interest. Furthermore, we demonstrated the morphology-based cloning performance of cancer cells in the mixture of cells by automated hydrogel degradation by light irradiation and pipetting. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Bastian, Nathaniel D; Brown, David; Fulton, Lawrence V; Mitchell, Robert; Pollard, Wayne; Robinson, Mark; Wilson, Ronald
2013-03-01
We utilize a mixed methods approach to provide three new, separate analyses as part of the development of the next aeromedical evacuation (MEDEVAC) platform of the Future of Vertical Lift (FVL) program. The research questions follow: RQ1) What are the optimal capabilities of a FVL MEDEVAC platform given an Afghanistan-like scenario and parameters associated with the treatment/ground evacuation capabilities in that theater?; RQ2) What are the MEDEVAC trade-off considerations associated with different aircraft engines operating under variable conditions?; RQ3) How does the additional weight of weaponizing the current MEDEVAC fleet affect range, coverage radius, and response time? We address RQ1 using discrete-event simulation based partially on qualitative assessments from the field, while RQ2 and RQ3 are based on deterministic analysis. Our results confirm previous findings that travel speeds in excess of 250 knots and ranges in excess of 300 nautical miles are advisable for the FVL platform design, thereby reducing the medical footprint in stability operations. We recommend a specific course of action regarding a potential engine bridging strategy based on deterministic analysis of endurance and altitude, and we suggest that the weaponization of the FVL MEDEVAC aircraft will have an adverse effect on coverage capability. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Xyce Parallel Electronic Simulator Users' Guide Version 6.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been de- signed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel com- puting platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows onemore » to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase$-$ a message passing parallel implementation $-$ which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.« less
Task decomposition for a multilimbed robot to work in reachable but unorientable space
NASA Technical Reports Server (NTRS)
Su, Chau; Zheng, Yuan F.
1991-01-01
Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
Satz, Alexander L; Hochstrasser, Remo; Petersen, Ann C
2017-04-10
To optimize future DNA-encoded library design, we have attempted to quantify the library size at which the signal becomes undetectable. To accomplish this we (i) have calculated that percent yields of individual library members following a screen range from 0.002 to 1%, (ii) extrapolated that ∼1 million copies per library member are required at the outset of a screen, and (iii) from this extrapolation predict that false negative rates will begin to outweigh the benefit of increased diversity at library sizes >10 8 . The above analysis is based upon a large internal data set comprising multiple screens, targets, and libraries; we also augmented our internal data with all currently available literature data. In theory, high false negative rates may be overcome by employing larger amounts of library; however, we argue that using more than currently reported amounts of library (≫10 nmoles) is impractical. The above conclusions may be generally applicable to other DNA encoded library platforms, particularly those platforms that do not allow for library amplification.
Task driven optimal leg trajectories in insect-scale legged microrobots
NASA Astrophysics Data System (ADS)
Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert
Origami inspired layered manufacturing techniques and 3D-printing have enabled the development of highly articulated legged robots at the insect-scale, including the 1.43g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks. Consequently, the choice of gait selection, body morphology, leg trajectory, foot design, etc. have become areas of active research. HAMR has two controlled degrees-of-freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level ground running (5-10 BL/s). These tasks demonstrate the ability of single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with un-controlled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and choice of objective function have contributed to the shape of these optimal leg trajectories.
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud
2018-03-01
Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimum design of a set of components for employing in a product family can cause significant reduction in costs. Increasing commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods are reported to solve these MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using non-dominated sorting genetic algorithm II (NSGA-II) and proposes the employment of technique for order of preference by similarity to ideal solution (TOPSIS) method to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method to obtain the trade-off points with the best possible performance while maximizing the common parts.
Design optimization of highly asymmetrical layouts by 2D contour metrology
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2018-03-01
As design pitch shrinks to the resolution limit of up-to-date optical lithography technology, the Critical Dimension (CD) variation tolerance has been dramatically decreased for ensuring the functionality of device. One of critical challenges associates with the narrower CD tolerance for whole chip area is the proximity effect control on asymmetrical layout environments. To fulfill the tight CD control of complex features, the Critical Dimension Scanning Electron Microscope (CD-SEM) based measurement results for qualifying process window and establishing the Optical Proximity Correction (OPC) model become insufficient, thus 2D contour extraction technique [1-5] has been an increasingly important approach for complementing the insufficiencies of traditional CD measurement algorithm. To alleviate the long cycle time and high cost penalties for product verification, manufacturing requirements are better to be well handled at design stage to improve the quality and yield of ICs. In this work, in-house 2D contour extraction platform was established for layout design optimization of 39nm half-pitch Self-Aligned Double Patterning (SADP) process layer. Combining with the adoption of Process Variation Band Index (PVBI), the contour extraction platform enables layout optimization speedup as comparing to traditional methods. The capabilities of identifying and handling lithography hotspots in complex layout environments of 2D contour extraction platform allow process window aware layout optimization to meet the manufacturing requirements.
Thermal Analysis of a Disposable, Instrument-Free DNA Amplification Lab-on-a-Chip Platform.
Pardy, Tamás; Rang, Toomas; Tulp, Indrek
2018-06-04
Novel second-generation rapid diagnostics based on nucleic acid amplification tests (NAAT) offer performance metrics on par with clinical laboratories in detecting infectious diseases at the point of care. The diagnostic assay is typically performed within a Lab-on-a-Chip (LoC) component with integrated temperature regulation. However, constraints on device dimensions, cost and power supply inherent with the device format apply to temperature regulation as well. Thermal analysis on simplified thermal models for the device can help overcome these barriers by speeding up thermal optimization. In this work, we perform experimental thermal analysis on the simplified thermal model for our instrument-free, single-use LoC NAAT platform. The system is evaluated further by finite element modelling. Steady-state as well as transient thermal analysis are performed to evaluate the performance of a self-regulating polymer resin heating element in the proposed device geometry. Reaction volumes in the target temperature range of the amplification reaction are estimated in the simulated model to assess compliance with assay requirements. Using the proposed methodology, we demonstrated our NAAT device concept capable of performing loop-mediated isothermal amplification in the 20⁻25 °C ambient temperature range with 32 min total assay time.
Micromagnetics on high-performance workstation and mobile computational platforms
NASA Astrophysics Data System (ADS)
Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.
2015-05-01
The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing unit, Nvidia desktop graphics processing units, and Nvidia Jetson TK1 Platform. FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.
Open-WiSe: a solar powered wireless sensor network platform.
González, Apolinar; Aquino, Raúl; Mata, Walter; Ochoa, Alberto; Saldaña, Pedro; Edwards, Arthur
2012-01-01
Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternate energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar powered wireless platform is described including the hardware architecture, firmware, and a POSIX Real-Time Kernel. A sleep and wake up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating Wireless sensor network or system integrators.
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1998-01-01
In this grant, we have proposed a three-year research effort focused on developing High Performance Computation and Communication (HPCC) methodologies for structural analysis on parallel processors and clusters of workstations, with emphasis on reducing the structural design cycle time. Besides consolidating and further improving the FETI solver technology to address plate and shell structures, we have proposed to tackle the following design related issues: (a) parallel coupling and assembly of independently designed and analyzed three-dimensional substructures with non-matching interfaces, (b) fast and smart parallel re-analysis of a given structure after it has undergone design modifications, (c) parallel evaluation of sensitivity operators (derivatives) for design optimization, and (d) fast parallel analysis of mildly nonlinear structures. While our proposal was accepted, support was provided only for one year.
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
Graphene-bimetal plasmonic platform for ultra-sensitive biosensing
NASA Astrophysics Data System (ADS)
Tong, Jinguang; Jiang, Li; Chen, Huifang; Wang, Yiqin; Yong, Ken-Tye; Forsberg, Erik; He, Sailing
2018-03-01
A graphene-bimetal plasmonic platform for surface plasmon resonance biosensing with ultra-high sensitivity was proposed and optimized. In this hybrid configuration, graphene nanosheets was employed to effectively absorb the excitation light and serve as biomolecular recognition elements for increased adsorption of analytes. Coating of an additional Au film prevents oxidation of the Ag substrate during manufacturing process and enhances the sensitivity at the same time. Thus, a bimetal Au-Ag substrate enables improved sensing performance and promotes stability of this plasmonic sensor. In this work we optimized the number of graphene layers as well as the thickness of the Au film and the Ag substrate based on the phase-interrogation sensitivity. We found an optimized configuration consisting of 6 layers of graphene coated on a bimetal surface consisting of a 5 nm Au film and a 30 nm Ag film. The calculation results showed the configuration could achieve a phase sensitivity as high as 1 . 71 × 106 deg/RIU, which was more than 2 orders of magnitude higher than that of bimetal structure and graphene-silver structure. Due to this enhanced sensing performance, the graphene-bimetal plasmonic platform proposed in this paper is potential for ultra-sensitive plasmonic sensing.
Integrated platform for optimized solar PV system design and engineering plan set generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adeyemo, Samuel
2015-12-30
The Aurora team has developed software that allows users to quickly generate a three-dimensional model for a building, with a corresponding irradiance map, from any two-dimensional image with associated geo-coordinates. The purpose of this project is to build upon that technology by developing and distributing to solar installers a software platform that automatically retrieves engineering, financial and geographic data for a specific site, and quickly generates an optimal customer proposal and corresponding engineering plans for that site. At the end of the project, Aurora’s optimization platform would have been used to make at least one thousand proposals from at leastmore » ten unique solar installation companies, two of whom would sign economically viable contracts to use the software. Furthermore, Aurora’s algorithms would be tested to show that in at least seventy percent of cases, Aurora automatically generated a design equivalent to or better than what a human could have done manually. A ‘better’ design is one that generates more energy for the same cost, or that generates a higher return on investment, while complying with all site-specific aesthetic, electrical and spatial requirements.« less
The rationale for microcirculatory guided fluid therapy.
Ince, Can
2014-06-01
The ultimate purpose of fluid administration in states of hypovolemia is to correct cardiac output to improve microcirculatory perfusion and tissue oxygenation. Observation of the microcirculation using handheld microscopes gives insight into the nature of convective and diffusive defect in hypovolemia. The purpose of this article is to introduce a new platform for hemodynamic-targeted fluid therapy based on the correction of tissue and microcirculatory perfusion assumed to be at risk during hypovolemia. Targeting systemic hemodynamic targets and/or clinical surrogates of hypovolemia gives inadequate guarantee for the correction of tissue perfusion by fluid therapy especially in conditions of distributive shock as occur in inflammation and sepsis. Findings are presented, which support the idea that only clinical signs of hypovolemia associated with low microcirculatory flow can be expected to benefit from fluid therapy and that fluid overload causes a defect in the diffusion of oxygen transport. We hypothesized that the optimal amount of fluid needed for correction of hypovolemia is defined by a physiologically based functional microcirculatory hemodynamic platform where convection and diffusion need to be optimized. Future clinical trials using handheld microscopes able to automatically evaluate the microcirculation at the bedside will show whether such a platform will indeed optimize fluid therapy.
Immersion and dry scanner extensions for sub-10nm production nodes
NASA Astrophysics Data System (ADS)
Weichselbaum, Stefan; Bornebroek, Frank; de Kort, Toine; Droste, Richard; de Graaf, Roelof F.; van Ballegoij, Rob; Botter, Herman; McLaren, Matthew G.; de Boeij, Wim P.
2015-03-01
Progressing towards the 10nm and 7nm imaging node, pattern-placement and layer-to-layer overlay requirements keep on scaling down and drives system improvements in immersion (ArFi) and dry (ArF/KrF) scanners. A series of module enhancements in the NXT platform have been introduced; among others, the scanner is equipped with exposure stages with better dynamics and thermal control. Grid accuracy improvements with respect to calibration, setup, stability, and layout dependency tighten MMO performance and enable mix and match scanner operation. The same platform improvements also benefit focus control. Improvements in detectability and reproducibility of low contrast alignment marks enhance the alignment solution window for 10nm logic processes and beyond. The system's architecture allows dynamic use of high-order scanner optimization based on advanced actuators of projection lens and scanning stages. This enables a holistic optimization approach for the scanner, the mask, and the patterning process. Productivity scanner design modifications esp. stage speeds and optimization in metrology schemes provide lower layer costs for customers using immersion lithography as well as conventional dry technology. Imaging, overlay, focus, and productivity data is presented, that demonstrates 10nm and 7nm node litho-capability for both (immersion & dry) platforms.
Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil
NASA Technical Reports Server (NTRS)
Kaul, Upender K.; Nguyen, Nhan T.
2017-01-01
This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises of a full Navier-Stokes solution capability (NASA OVERFLOW solver with Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal design for high-lift flight during take-off and landing. To determine globally optimal design of such a system, an extremely large set of CFD simulations is needed. This is not feasible to achieve in practice. To alleviate this problem, a recourse was taken to a semi-supervised learning (SSL) methodology, which is based on manifold regularization techniques. A reasonable space of CFD solutions was populated and then the SSL methodology was used to fit this manifold in its entirety, including the gaps in the manifold where there were no CFD solutions available. The SSL methodology in conjunction with an elastodynamic solver (FiDDLE) was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal design of wings. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. Top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space. The SSL component was trained on the design space, and was then used in a predictive mode to populate a selected set of test points outside of the given design space. The new design test space thus populated was evaluated by using the CFD component by determining the error between the SSL predictions and the true (CFD) solutions, which was found to be small. This demonstrates the proposed CFD-SSL methodologies for isolating the best design of the VCK-VCCTEF system, and it holds promise for quantitatively identifying best designs of flight systems, in general.
High performance GPU processing for inversion using uniform grid searches
NASA Astrophysics Data System (ADS)
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by systems of redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical, optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the Rn space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community, which provides a much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 1012 gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example problems with 1012 grid-points require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, as a GPU, in cases where nearly real time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
Integrated approach for automatic target recognition using a network of collaborative sensors.
Mahalanobis, Abhijit; Van Nevel, Alan
2006-10-01
We introduce what is believed to be a novel concept by which several sensors with automatic target recognition (ATR) capability collaborate to recognize objects. Such an approach would be suitable for netted systems in which the sensors and platforms can coordinate to optimize end-to-end performance. We use correlation filtering techniques to facilitate the development of the concept, although other ATR algorithms may be easily substituted. Essentially, a self-configuring geometry of netted platforms is proposed that positions the sensors optimally with respect to each other, and takes into account the interactions among the sensor, the recognition algorithms, and the classes of the objects to be recognized. We show how such a paradigm optimizes overall performance, and illustrate the collaborative ATR scheme for recognizing targets in synthetic aperture radar imagery by using viewing position as a sensor parameter.
Ahmed, Sameh; Alqurshi, Abdulmalik; Mohamed, Abdel-Maaboud Ismail
2018-07-01
A new robust and reliable high-performance liquid chromatography (HPLC) method with multi-criteria decision making (MCDM) approach was developed to allow simultaneous quantification of atenolol (ATN) and nifedipine (NFD) in content uniformity testing. Felodipine (FLD) was used as an internal standard (I.S.) in this study. A novel marriage between a new interactive response optimizer and a HPLC method was suggested for multiple response optimizations of target responses. An interactive response optimizer was used as a decision and prediction tool for the optimal settings of target responses, according to specified criteria, based on Derringer's desirability. Four independent variables were considered in this study: Acetonitrile%, buffer pH and concentration along with column temperature. Eight responses were optimized: retention times of ATN, NFD, and FLD, resolutions between ATN/NFD and NFD/FLD, and plate numbers for ATN, NFD, and FLD. Multiple regression analysis was applied in order to scan the influences of the most significant variables for the regression models. The experimental design was set to give minimum retention times, maximum resolution and plate numbers. The interactive response optimizer allowed prediction of optimum conditions according to these criteria with a good composite desirability value of 0.98156. The developed method was validated according to the International Conference on Harmonization (ICH) guidelines with the aid of the experimental design. The developed MCDM-HPLC method showed superior robustness and resolution in short analysis time allowing successful simultaneous content uniformity testing of ATN and NFD in marketed capsules. The current work presents an interactive response optimizer as an efficient platform to optimize, predict responses, and validate HPLC methodology with tolerable design space for assay in quality control laboratories. Copyright © 2018 Elsevier B.V. All rights reserved.
The Shock and Vibration Bulletin. Part 3: Structure Medium Interaction, Case Studies in Dynamics
NASA Technical Reports Server (NTRS)
1979-01-01
Structure and medium interactions topics are addressed. Topics include: a failure analysis of underground concrete structures subjected to blast loadings, an optimization design procedure for concrete slabs, and a discussion of the transient response of a cylindrical shell submerged in a fluid. Case studies in dynamics are presented which include an examination of a shock isolation platform for a seasparrow launcher, a discussion of hydrofoil fatigue load environments, and an investigation of the dynamic characteristics of turbine generators and low tuned foundations.
[Application of iodine metabolism analysis methods in thyroid diseases].
Han, Jian-hua; Qiu, Ling
2013-08-01
The main physiological role of iodine in the body is to synthesize thyroid hormone. Both iodine deficiency and iodine excess can lead to severe thyroid diseases. While its role in thyroid diseases has increasingly been recognized, few relevant platforms and techniques for iodine detection have been available in China. This paper summarizes the advantages and disadvantages of currently iodine detection methods including direct titration, arsenic cerium catalytic spectrophotometry, chromatography with pulsed amperometry, colorimetry based on automatic biochemistry, inductively coupled plasma mass spectrometry, so as to optimize the iodine nutrition for patients with thyroid diseases.
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine the Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based, demultiplexing pipeline Error-Aware Demultiplexer that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10(6) copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Simulations of ultrafast x-ray laser experiments
NASA Astrophysics Data System (ADS)
Fortmann-Grote, C.; Andreev, A. A.; Appel, K.; Branco, J.; Briggs, R.; Bussmann, M.; Buzmakov, A.; Garten, M.; Grund, A.; Huebl, A.; Jurek, Z.; Loh, N. D.; Nakatsutsumi, M.; Samoylova, L.; Santra, R.; Schneidmiller, E. A.; Sharma, A.; Steiniger, K.; Yakubov, S.; Yoon, C. H.; Yurkov, M. V.; Zastrau, U.; Ziaja-Motyka, B.; Mancuso, A. P.
2017-06-01
Simulations of experiments at modern light sources, such as optical laser laboratories, synchrotrons, and free electron lasers, become increasingly important for the successful preparation, execution, and analysis of these experiments investigating ever more complex physical systems, e.g. biomolecules, complex materials, and ultra-short lived states of matter at extreme conditions. We have implemented a platform for complete start-to-end simulations of various types of photon science experiments, tracking the radiation from the source through the beam transport optics to the sample or target under investigation, its interaction with and scattering from the sample, and registration in a photon detector. This tool allows researchers and facility operators to simulate their experiments and instruments under real life conditions, identify promising and unattainable regions of the parameter space and ultimately make better use of valuable beamtime. In this paper, we present an overview about status and future development of the simulation platform and discuss three applications: 1.) Single-particle imaging of biomolecules using x-ray free electron lasers and optimization of x-ray pulse properties, 2.) x-ray scattering diagnostics of hot dense plasmas in high power laser-matter interaction and identification of plasma instabilities, and 3.) x-ray absorption spectroscopy in warm dense matter created by high energy laser-matter interaction and pulse shape optimization for low-isentrope dynamic compression.
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data which helps in optimization and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to corroborate well with the extensive experimental investigations which were found to be consistent under varying operating conditions like operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of Willmott-d-index (d will = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
Lazaris, Charalampos; Kelly, Stephen; Ntziachristos, Panagiotis; Aifantis, Iannis; Tsirigos, Aristotelis
2017-01-05
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at an unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most of the existing tools do not cover all aspects of the analysis and only offer few quality assessment options. Additionally, availability of a multitude of tools makes scientists wonder how these tools and associated parameters can be optimally used, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be ensured that slight changes in parameters and/or methods do not affect the conclusions of their studies. To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, scoring and annotation of specific interactions using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available. HiC-bench consists an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
Optimization of Actuating Origami Networks
NASA Astrophysics Data System (ADS)
Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Joo, James; Gregory, Reich; Vaia, Richard
2015-03-01
Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. By leveraging design concepts from action origami, a subset of origami art focused on kinematic mechanisms, reversible folding patterns for applications such as solar array packaging, tunable antennae, and deployable sensing platforms may be designed. However, the enormity of the design space and the need to identify the requisite actuation forces within the structure places a severe limitation on design strategies based on intuition and geometry alone. The present work proposes a topology optimization method, using truss and frame element analysis, to distribute foldline mechanical properties within a reference crease pattern. Known actuating patterns are placed within a reference grid and the optimizer adjusts the fold stiffness of the network to optimally connect them. Design objectives may include a target motion, stress level, or mechanical energy distribution. Results include the validation of known action origami structures and their optimal connectivity within a larger network. This design suite offers an important step toward systematic incorporation of origami design concepts into new, novel and reconfigurable engineering devices. This research is supported under the Air Force Office of Scientific Research (AFOSR) funding, LRIR 13RQ02COR.
Benson, Neil; van der Graaf, Piet H; Peletier, Lambertus A
2017-11-15
A key element of the drug discovery process is target selection. Although the topic is subject to much discussion and experimental effort, there are no defined quantitative rules around optimal selection. Often 'rules of thumb', that have not been subject to rigorous exploration, are used. In this paper we explore the 'rule of thumb' notion that the molecule that initiates a pathway signal is the optimal target. Given the multi-factorial and complex nature of this question, we have simplified an example pathway to its logical minimum of two steps and used a mathematical model of this to explore the different options in the context of typical small and large molecule drugs. In this paper, we report the conclusions of our analysis and describe the analysis tool and methods used. These provide a platform to enable a more extensive enquiry into this important topic. Copyright © 2017 Elsevier B.V. All rights reserved.
Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.
2010-01-01
Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Jae-ik; Yoo, SeungHoon; Cho, Sungho
Purpose: The significant issue of particle therapy such as proton and carbon ion was a accurate dose delivery from beam line to patient. For designing the complex delivery system, Monte Carlo simulation can be used for the simulation of various physical interaction in scatters and filters. In this report, we present the development of Monte Carlo simulation platform to help design the prototype of particle therapy nozzle and performed the Monte Carlo simulation using Geant4. Also we show the prototype design of particle therapy beam nozzle for Korea Heavy Ion Medical Accelerator (KHIMA) project in Korea Institute of Radiological andmore » Medical Science(KIRAMS) at Republic of Korea. Methods: We developed a simulation platform for particle therapy beam nozzle using Geant4. In this platform, the prototype nozzle design of Scanning system for carbon was simply designed. For comparison with theoretic beam optics, the beam profile on lateral distribution at isocenter is compared with Mont Carlo simulation result. From the result of this analysis, we can expected the beam spot property of KHIMA system and implement the spot size optimization for our spot scanning system. Results: For characteristics study of scanning system, various combination of the spot size from accerlator with ridge filter and beam monitor was tested as simple design for KHIMA dose delivery system. Conclusion: In this report, we presented the part of simulation platform and the characteristics study. This study is now on-going in order to develop the simulation platform including the beam nozzle and the dose verification tool with treatment planning system. This will be presented as soon as it is become available.« less
Analysis and design of a high power, digitally-controlled spacecraft power system
NASA Technical Reports Server (NTRS)
Lee, F. C.; Cho, B. H.
1990-01-01
The progress to date on the analysis and design of a high power, digitally controlled spacecraft power system is described. Several battery discharger topologies were compared for use in the space platform application. Updated information has been provided on the battery voltage specification. Initially it was thought to be in the 30 to 40 V range. It is now specified to be 53 V to 84 V. This eliminated the tapped-boost and the current-fed auto-transformer converters from consideration. After consultations with NASA, it was decided to trade-off the following topologies: (1) boost converter; (2) multi-module, multi-phase boost converter; and (3) voltage-fed push-pull with auto-transformer. A non-linear design optimization software tool was employed to facilitate an objective comparison. Non-linear design optimization insures that the best design of each topology is compared. The results indicate that a four-module, boost converter with each module operating 90 degrees out of phase is the optimum converter for the space platform. Large-signal and small-signal models were generated for the shunt, charger, discharger, battery, and the mode controller. The models were first tested individually according to the space platform power system specifications supplied by NASA. The effect of battery voltage imbalance on parallel dischargers was investigated with respect to dc and small-signal responses. Similarly, the effects of paralleling dischargers and chargers were also investigated. A solar array and shunt model was included in these simulations. A model for the bus mode controller (power control unit) was also developed to interface the Orbital replacement Unit (ORU) model to the platform power system. Small signal models were used to generate the bus impedance plots in the various operating modes. The large signal models were integrated into a system model, and time domain simulations were performed to verify bus regulation during mode transitions. Some changes have subsequently been incorporated into the models. The changes include the use of a four module boost discharger, and a new model for the mode controller, which includes the effects of saturation. The new simulations for the boost discharger show the improvement in bus ripple that can be achieved by phase-shifted operation of each of the boost modules.
Publications | Integrated Energy Solutions | NREL
Publications 2018 Federal Tax Incentives for Energy Storage Systems Solar Plus: Optimization of Distributed Resiliency REopt: A Platform for Energy System Integration and Optimization Solar Plus: A Holistic Approach Barriers for Residential Solar Photovoltaics with Energy Storage 2016 Quality Assurance Framework for Mini
OptFlux: an open-source software platform for in silico metabolic engineering.
Rocha, Isabel; Maia, Paulo; Evangelista, Pedro; Vilaça, Paulo; Soares, Simão; Pinto, José P; Nielsen, Jens; Patil, Kiran R; Ferreira, Eugénio C; Rocha, Miguel
2010-04-19
Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap from research in strain optimization algorithms and the final users. It is a valuable platform for researchers in the field that have available a number of useful tools. Its open-source nature invites contributions by all those interested in making their methods available for the community. Given its plug-in based architecture it can be extended with new functionalities. Currently, several plug-ins are being developed, including network topology analysis tools and the integration with Boolean network based regulatory models.
OptFlux: an open-source software platform for in silico metabolic engineering
2010-01-01
Background Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. Results OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure that is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. Conclusions The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap from research in strain optimization algorithms and the final users. It is a valuable platform for researchers in the field that have available a number of useful tools. Its open-source nature invites contributions by all those interested in making their methods available for the community. Given its plug-in based architecture it can be extended with new functionalities. Currently, several plug-ins are being developed, including network topology analysis tools and the integration with Boolean network based regulatory models. PMID:20403172
Study on the application of mobile internet cloud computing platform
NASA Astrophysics Data System (ADS)
Gong, Songchun; Fu, Songyin; Chen, Zheng
2012-04-01
The innovative development of computer technology promotes the application of the cloud computing platform, which actually is the substitution and exchange of a sort of resource service models and meets the needs of users on the utilization of different resources after changes and adjustments of multiple aspects. "Cloud computing" owns advantages in many aspects which not merely reduce the difficulties to apply the operating system and also make it easy for users to search, acquire and process the resources. In accordance with this point, the author takes the management of digital libraries as the research focus in this paper, and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization and promotion of computer technology drive people to create the digital library models, and its core idea is to strengthen the optimal management of the library resource information through computers and construct an inquiry and search platform with high performance, allowing the users to access to the necessary information resources at any time. However, the cloud computing is able to promote the computations within the computers to distribute in a large number of distributed computers, and hence implement the connection service of multiple computers. The digital libraries, as a typical representative of the applications of the cloud computing, can be used to carry out an analysis on the key technologies of the cloud computing.
Scalable Rapidly Deployable Convex Optimization for Data Analytics
SOCPs , SDPs, exponential cone programs, and power cone programs. CVXPY supports basic methods for distributed optimization, on...multiple heterogenous platforms. We have also done basic research in various application areas , using CVXPY , to demonstrate its usefulness. See attached report for publication information....Over the period of the contract we have developed the full stack for wide use of convex optimization, in machine learning and many other areas .
Jin, Qing; Jiao, Chunyan; Sun, Shiwei; Song, Cheng; Cai, Yongping; Lin, Yi; Fan, Honghong; Zhu, Yanfang
2016-01-01
Metabolomics technology has enabled an important method for the identification and quality control of Traditional Chinese Medical materials. In this study, we isolated metabolites from cultivated Dendrobium officinale and Dendrobium huoshanense stems of different growth years in the methanol/water phase and identified them using gas chromatography coupled with mass spectrometry (GC-MS). First, a metabolomics technology platform for Dendrobium was constructed. The metabolites in the Dendrobium methanol/water phase were mainly sugars and glycosides, amino acids, organic acids, alcohols. D. officinale and D. huoshanense and their growth years were distinguished by cluster analysis in combination with multivariate statistical analysis, including principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA). Eleven metabolites that contributed significantly to this differentiation were subjected to t-tests (P<0.05) to identify biomarkers that discriminate between D. officinale and D. huoshanense, including sucrose, glucose, galactose, succinate, fructose, hexadecanoate, oleanitrile, myo-inositol, and glycerol. Metabolic profiling of the chemical compositions of Dendrobium species revealed that the polysaccharide content of D. huoshanense was higher than that of D. officinale, indicating that the D. huoshanense was of higher quality. Based on the accumulation of Dendrobium metabolites, the optimal harvest time for Dendrobium was in the third year. This initial metabolic profiling platform for Dendrobium provides an important foundation for the further study of secondary metabolites (pharmaceutical active ingredients) and metabolic pathways. PMID:26752292
Jin, Qing; Jiao, Chunyan; Sun, Shiwei; Song, Cheng; Cai, Yongping; Lin, Yi; Fan, Honghong; Zhu, Yanfang
2016-01-01
Metabolomics technology has enabled an important method for the identification and quality control of Traditional Chinese Medical materials. In this study, we isolated metabolites from cultivated Dendrobium officinale and Dendrobium huoshanense stems of different growth years in the methanol/water phase and identified them using gas chromatography coupled with mass spectrometry (GC-MS). First, a metabolomics technology platform for Dendrobium was constructed. The metabolites in the Dendrobium methanol/water phase were mainly sugars and glycosides, amino acids, organic acids, alcohols. D. officinale and D. huoshanense and their growth years were distinguished by cluster analysis in combination with multivariate statistical analysis, including principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA). Eleven metabolites that contributed significantly to this differentiation were subjected to t-tests (P<0.05) to identify biomarkers that discriminate between D. officinale and D. huoshanense, including sucrose, glucose, galactose, succinate, fructose, hexadecanoate, oleanitrile, myo-inositol, and glycerol. Metabolic profiling of the chemical compositions of Dendrobium species revealed that the polysaccharide content of D. huoshanense was higher than that of D. officinale, indicating that the D. huoshanense was of higher quality. Based on the accumulation of Dendrobium metabolites, the optimal harvest time for Dendrobium was in the third year. This initial metabolic profiling platform for Dendrobium provides an important foundation for the further study of secondary metabolites (pharmaceutical active ingredients) and metabolic pathways.
Cordero, Chiara; Rubiolo, Patrizia; Reichenbach, Stephen E; Carretta, Andrea; Cobelli, Luigi; Giardina, Matthew; Bicchi, Carlo
2017-01-13
The possibility to transfer methods from thermal to differential-flow modulated comprehensive two-dimensional gas chromatographic (GC×GC) platforms is of high interest to improve GC×GC flexibility and increase the compatibility of results from different platforms. The principles of method translation are here applied to an original method, developed for a loop-type thermal modulated GC×GC-MS/FID system, suitable for quali-quantitative screening of suspected fragrance allergens. The analysis conditions were translated to a reverse-injection differential flow modulated platform (GC×2GC-MS/FID) with a dual-parallel secondary column and dual detection. The experimental results, for a model mixture of suspected volatile allergens and for raw fragrance mixtures of different composition, confirmed the feasibility of translating methods by preserving 1 D elution order, as well as the relative alignment of resulting 2D peak patterns. A correct translation produced several benefits including an effective transfer of metadata (compound names, MS fragmentation pattern, response factors) by automatic template transformation and matching from the original/reference method to its translated counterpart. The correct translation provided: (a) 2D pattern repeatability, (b) MS fragmentation pattern reliability for identity confirmation, and (c) comparable response factors and quantitation accuracy within a concentration range of three orders of magnitude. The adoption of a narrow bore (i.e. 0.1mm d c ) first-dimension column to operate under close-to-optimal conditions with the differential-flow modulation GC×GC platform was also advantageous in halving the total analysis under the translated conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
An integrated biotechnology platform for developing sustainable chemical processes.
Barton, Nelson R; Burgard, Anthony P; Burk, Mark J; Crater, Jason S; Osterhout, Robin E; Pharkya, Priti; Steer, Brian A; Sun, Jun; Trawick, John D; Van Dien, Stephen J; Yang, Tae Hoon; Yim, Harry
2015-03-01
Genomatica has established an integrated computational/experimental metabolic engineering platform to design, create, and optimize novel high performance organisms and bioprocesses. Here we present our platform and its use to develop E. coli strains for production of the industrial chemical 1,4-butanediol (BDO) from sugars. A series of examples are given to demonstrate how a rational approach to strain engineering, including carefully designed diagnostic experiments, provided critical insights about pathway bottlenecks, byproducts, expression balancing, and commercial robustness, leading to a superior BDO production strain and process.
Integrated testing system FiTest for diagnosis of PCBA
NASA Astrophysics Data System (ADS)
Bogdan, Arkadiusz; Lesniak, Adam
2016-12-01
This article presents the innovative integrated testing system FiTest for automatic, quick inspection of printed circuit board assemblies (PCBA) manufactured in Surface Mount Technology (SMT). Integration of Automatic Optical Inspection (AOI), In-Circuit Tests (ICT) and Functional Circuit Tests (FCT) resulted in universal hardware platform for testing variety of electronic circuits. The platform provides increased test coverage, decreased level of false calls and optimization of test duration. The platform is equipped with powerful algorithms performing tests in a stable and repetitive way and providing effective management of diagnosis.
Understanding the Cray X1 System
NASA Technical Reports Server (NTRS)
Cheung, Samson
2004-01-01
This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.
Live Cell in Vitro and in Vivo Imaging Applications: Accelerating Drug Discovery
Isherwood, Beverley; Timpson, Paul; McGhee, Ewan J; Anderson, Kurt I; Canel, Marta; Serrels, Alan; Brunton, Valerie G; Carragher, Neil O
2011-01-01
Dynamic regulation of specific molecular processes and cellular phenotypes in live cell systems reveal unique insights into cell fate and drug pharmacology that are not gained from traditional fixed endpoint assays. Recent advances in microscopic imaging platform technology combined with the development of novel optical biosensors and sophisticated image analysis solutions have increased the scope of live cell imaging applications in drug discovery. We highlight recent literature examples where live cell imaging has uncovered novel insight into biological mechanism or drug mode-of-action. We survey distinct types of optical biosensors and associated analytical methods for monitoring molecular dynamics, in vitro and in vivo. We describe the recent expansion of live cell imaging into automated target validation and drug screening activities through the development of dedicated brightfield and fluorescence kinetic imaging platforms. We provide specific examples of how temporal profiling of phenotypic response signatures using such kinetic imaging platforms can increase the value of in vitro high-content screening. Finally, we offer a prospective view of how further application and development of live cell imaging technology and reagents can accelerate preclinical lead optimization cycles and enhance the in vitro to in vivo translation of drug candidates. PMID:24310493
Li, Zhijun; Munro, Kim; Narouz, Mina R; Lau, Andrew; Hao, Hongxia; Crudden, Cathleen M; Horton, J Hugh
2018-05-30
Sensor surfaces play a predominant role in the development of optical biosensor technologies for the analysis of biomolecular interactions. Thiol-based self-assembled monolayers (SAMs) on gold have been widely used as linker layers for sensor surfaces. However, the degradation of the thiol-gold bond can limit the performance and durability of such surfaces, directly impacting their performance and cost-effectiveness. To this end, a new family of materials based on N-heterocyclic carbenes (NHCs) has emerged as an alternative for surface modification, capable of self-assembling onto a gold surface with higher affinity and superior stability as compared to the thiol-based systems. Here we demonstrate three applications of NHC SAMs supporting a dextran layer as a tunable platform for developing various affinity-capture biosensor surfaces. We describe the development and testing of NHC-based dextran biosensor surfaces modified with each of streptavidin, nitrilotriacetic acid, and recombinant Protein A. These affinity-capture sensor surfaces enable oriented binding of ligands for optimal performance in biomolecular assays. Together, the intrinsic high stability and flexible design of the NHC biosensing platforms show great promise and open up exciting possibilities for future biosensing applications.
Low-cost multispectral imaging for remote sensing of lettuce health
NASA Astrophysics Data System (ADS)
Ren, David D. W.; Tripathi, Siddhant; Li, Larry K. B.
2017-01-01
In agricultural remote sensing, unmanned aerial vehicle (UAV) platforms offer many advantages over conventional satellite and full-scale airborne platforms. One of the most important advantages is their ability to capture high spatial resolution images (1-10 cm) on-demand and at different viewing angles. However, UAV platforms typically rely on the use of multiple cameras, which can be costly and difficult to operate. We present the development of a simple low-cost imaging system for remote sensing of crop health and demonstrate it on lettuce (Lactuca sativa) grown in Hong Kong. To identify the optimal vegetation index, we recorded images of both healthy and unhealthy lettuce, and used them as input in an expectation maximization cluster analysis with a Gaussian mixture model. Results from unsupervised and supervised clustering show that, among four widely used vegetation indices, the blue wide-dynamic range vegetation index is the most accurate. This study shows that it is readily possible to design and build a remote sensing system capable of determining the health status of lettuce at a reasonably low cost (
An Extensible Sensing and Control Platform for Building Energy Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Anthony; Berges, Mario; Martin, Christopher
2016-04-03
The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.
ERIC Educational Resources Information Center
Chen, Yixing
2013-01-01
The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. The VDS is intended to assist collaborating architects,…
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
Markovic, Stacey; Belz, Jodi; Kumar, Rajiv; Cormack, Robert A; Sridhar, Srinivas; Niedre, Mark
2016-01-01
Drug loaded implants are a new, versatile technology platform to deliver a localized payload of drugs for various disease models. One example is the implantable nanoplatform for chemo-radiation therapy where inert brachytherapy spacers are replaced by spacers doped with nanoparticles (NPs) loaded with chemotherapeutics and placed directly at the disease site for long-term localized drug delivery. However, it is difficult to directly validate and optimize the diffusion of these doped NPs in in vivo systems. To better study this drug release and diffusion, we developed a custom macroscopic fluorescence imaging system to visualize and quantify fluorescent NP diffusion from spacers in vivo. To validate the platform, we studied the release of free fluorophores, and 30 nm and 200 nm NPs conjugated with the same fluorophores as a model drug, in agar gel phantoms in vitro and in mice in vivo. Our data verified that the diffusion volume was NP size-dependent in all cases. Our near-infrared imaging system provides a method by which NP diffusion from implantable nanoplatform for chemo-radiation therapy spacers can be systematically optimized (eg, particle size or charge) thereby improving treatment efficacy of the platform.
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping thereby reducing mathematical complicity of the problem and in turn analyzing bigger size mapping data using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and its verification using jackknife re-sampling, and consensus mapping analysis based on global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high quality, ultra-dense consensus maps, with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling to obtain a consensus order in reasonable time. The developed algorithm, tested on a wide range of simulated data and real world data (Arabidopsis), outperformed two tested state-of-the-art algorithms by mapping accuracy and computation time. PMID:25867943
Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design
NASA Technical Reports Server (NTRS)
Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian
2012-01-01
We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulation difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.
Open-WiSe: A Solar Powered Wireless Sensor Network Platform
González, Apolinar; Aquino, Raúl; Mata, Walter; Ochoa, Alberto; Saldaña, Pedro; Edwards, Arthur
2012-01-01
Because battery-powered nodes are required in wireless sensor networks and energy consumption represents an important design consideration, alternate energy sources are needed to provide more effective and optimal function. The main goal of this work is to present an energy harvesting wireless sensor network platform, the Open Wireless Sensor node (WiSe). The design and implementation of the solar powered wireless platform is described including the hardware architecture, firmware, and a POSIX Real-Time Kernel. A sleep and wake up strategy was implemented to prolong the lifetime of the wireless sensor network. This platform was developed as a tool for researchers investigating Wireless sensor network or system integrators. PMID:22969396
Creating an Effective Multi-Domain Wide-Area Surveillance Platform to Enhance Border Security
2008-03-01
SWOT ANALYSIS ........................................................................................43 I. ANALYSIS OF PROS AND CONS ...weaknesses and opportunities and SWOT analysis was also used to build pros and cons for the platform. All the interviewees liked unmanned platforms...because of the reduced night hour’s operations and SWOT analysis was also used to build pros and cons for the platform. All the interviewees really
Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.
2014-01-01
Several opensource or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410
Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M
2015-01-01
Several opensource or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated to what extent differences in technical implementations in popular simulation software environments result in differences in kinematic predictions for single and multijoint movements using EMG- and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.
Micro Asteroid Prospector Powered by Energetic Radioisotopes: MAPPER
NASA Astrophysics Data System (ADS)
Howe, Steven D.; Jackson, Gerald P.
2005-02-01
The solar system is an almost limitless store-house of resources. As humanity begins to expand into space, we can greatly reduce the cost and effort of exploration by using the resources from other orbiting bodies. The ability to extract volatile gases or structural materials from moons and other planetesimals will allow smaller ships, faster missions, and lower costs. Part of the problem, however, will be to locate the desired deposits from the billions of square miles of surface area present in the solar system. The asteroid belt between Mars and Jupiter is perhaps the most valuable and most overlooked of resource deposits in the solar system. The total mass of the Belt is estimated to be 1/1000 the mass of the Earth. The ultimate goal of this project is to identify and investigate an exploration architecture that would allow a hundreds of ultra-light-weight instrument packages to be sent to the Asteroid Belt. We have performed a preliminary analysis that has characterized the bodies in the Asteroid Belt, identified subsystems needed on the platform, and completed a preliminary optimization of the flight profile and propulsion characteristics to maximize the number of bodies that could be catalogued. The results showed that the mass and power of the platform is strongly dependent upon the average cruise velocity, the specific impulse of the thruster, and the time to accelerate up to speed. The preliminary optimization indicates that the best cruise velocity is around 0.5 km/s and the best Isp is 1500 s. Our conclusion is that platforms with near 100 kg total mass could be built relatively inexpensively. This many spacecraft would catalogue an area equivalent to 20% the area of the Earth's surface in a 20 year period.
Fogel, Mina; Harari, Ayelet; Müller-Holzner, Elisabeth; Zeimet, Alain G; Moldenhauer, Gerhard; Altevogt, Peter
2014-06-25
The L1 cell adhesion molecule (L1CAM) is overexpressed in many human cancers and can serve as a biomarker for prognosis in most of these cancers (including type I endometrial carcinomas). Here we provide an optimized immunohistochemical staining procedure for a widely used automated platform (VENTANA™), which has recourse to commercially available primary antibody and detection reagents. In parallel, we optimized the staining on a semi-automated BioGenix (i6000) immunostainer. These protocols yield good stainings and should represent the basis for a reliable and standardized immunohistochemical detection of L1CAM in a variety of malignancies in different laboratories.
Lee, Jechan; Jung, Jong-Min; Oh, Jeong-Ik; Sik Ok, Yong; Kwon, Eilhann E
2017-10-01
To establish a green platform for biodiesel production, this study mainly investigates pseudo-catalytic (non-catalytic) transesterification of olive oil. To this end, biochar from agricultural waste (maize residue) and dimethyl carbonate (DMC) as an acyl acceptor were used for pseudo-catalytic transesterification reaction. Reaction parameters (temperature and molar ratio of DMC to olive oil) were also optimized. The biodiesel yield reached up to 95.4% under the optimal operational conditions (380°C and molar ratio of DMC to olive oil (36:1)). The new sustainable environmentally benign biodiesel production introduced in this study is greener and faster than conventional transesterification reactions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Applications of thin-film sandwich crystallization platforms.
Axford, Danny; Aller, Pierre; Sanchez-Weatherby, Juan; Sandy, James
2016-04-01
Examples are shown of protein crystallization in, and data collection from, solutions sandwiched between thin polymer films using vapour-diffusion and batch methods. The crystallization platform is optimal for both visualization and in situ data collection, with the need for traditional harvesting being eliminated. In wells constructed from the thinnest plastic and with a minimum of aqueous liquid, flash-cooling to 100 K is possible without significant ice formation and without any degradation in crystal quality. The approach is simple; it utilizes low-cost consumables but yields high-quality data with minimal sample intervention and, with the very low levels of background X-ray scatter that are observed, is optimal for microcrystals.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
[Modeling and analysis of volume conduction based on field-circuit coupling].
Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming
2012-08-01
Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method, and developed A 3D coupled field-circuit model of volume conduction basing on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross section area, electrode distance and circuit parameters on the performance of volume conduction system were obtained, which provided a basis for optimized design of energy transfer efficiency.
Evolutionary space platform concept study. Volume 2, part B: Manned space platform concepts
NASA Technical Reports Server (NTRS)
1982-01-01
Logical, cost-effective steps in the evolution of manned space platforms are investigated and assessed. Tasks included the analysis of requirements for a manned space platform, identifying alternative concepts, performing system analysis and definition of the concepts, comparing the concepts and performing programmatic analysis for a reference concept.
Constructing graph models for software system development and analysis
NASA Astrophysics Data System (ADS)
Pogrebnoy, Andrey V.
2017-01-01
We propose a concept for creating the instrumentation for functional and structural decisions rationale during the software system (SS) development. We propose to develop SS simultaneously on two models - functional (FM) and structural (SM). FM is a source code of the SS. Adequate representation of the FM in the form of a graph model (GM) is made automatically and called SM. The problem of creating and visualizing GM is considered from the point of applying it as a uniform platform for the adequate representation of the SS source code. We propose three levels of GM detailing: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resources optimization and analysis of connections between SS components, GM3 - for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of GM.
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations plays a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's Jaguar (OLCF) platform, obtaining better performance and scalability than the native MPI implementation. This paper discuss Cheetah's design and implementation, and optimizations to the framework for Cray XT 5 platforms. Our results show that the Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data,more » the Cheetah's Broadcast outperforms the native MPI implementation by 93% for 49,152 processes problem size. For small and large data, it out performs the native MPI implementation by 10% and 9%, respectively, at 24,576 processes problem size. The Cheetah's Barrier performs 10% better than the native MPI implementation for 12,288 processes problem size.« less
Moore, J A; Nemat-Gorgani, M; Madison, A C; Sandahl, M A; Punnamaraju, S; Eckhardt, A E; Pollack, M G; Vigneault, F; Church, G M; Fair, R B; Horowitz, M A; Griffin, P B
2017-01-01
This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dieletric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols.
Moore, J. A.; Nemat-Gorgani, M.; Madison, A. C.; Punnamaraju, S.; Eckhardt, A. E.; Pollack, M. G.; Church, G. M.; Fair, R. B.; Horowitz, M. A.; Griffin, P. B.
2017-01-01
This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dieletric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols. PMID:28191268
Small unmanned aircraft system for remote contour mapping of a nuclear radiation field
NASA Astrophysics Data System (ADS)
Guss, Paul; McCall, Karen; Malchow, Russell; Fischer, Rick; Lukens, Michael; Adan, Mark; Park, Ki; Abbott, Roy; Howard, Michael; Wagner, Eric; Trainham, Clifford P.; Luke, Tanushree; Mukhopadhyay, Sanjoy; Oh, Paul; Brahmbhatt, Pareshkumar; Henderson, Eric; Han, Jinlu; Huang, Justin; Huang, Casey; Daniels, Jon
2017-09-01
For nuclear disasters involving radioactive contamination, small unmanned aircraft systems (sUASs) equipped with nuclear radiation detection and monitoring capability can be very important tools. Among the advantages of a sUAS are quick deployment, low-altitude flying that enhances sensitivity, wide area coverage, no radiation exposure health safety restriction, and the ability to access highly hazardous or radioactive areas. Additionally, the sUAS can be configured with the nuclear detecting sensor optimized to measure the radiation associated with the event. In this investigation, sUAS platforms were obtained for the installation of sensor payloads for radiation detection and electro-optical systems that were specifically developed for sUAS research, development, and operational testing. The sensor payloads were optimized for the contour mapping of a nuclear radiation field, which will result in a formula for low-cost sUAS platform operations with built-in formation flight control. Additional emphases of the investigation were to develop the relevant contouring algorithms; initiate the sUAS comprehensive testing using the Unmanned Systems, Inc. (USI) Sandstorm platforms and other acquired platforms; and both acquire and optimize the sensors for detection and localization. We demonstrated contour mapping through simulation and validated waypoint detection. We mounted a detector on a sUAS and operated it initially in the counts per second (cps) mode to perform field and flight tests to demonstrate that the equipment was functioning as designed. We performed ground truth measurements to determine the response of the detector as a function of source-to-detector distance. Operation of the radiation detector was tested using different unshielded sources.
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
Coordinated control of a space manipulator tested by means of an air bearing free floating platform
NASA Astrophysics Data System (ADS)
Sabatini, Marco; Gasbarri, Paolo; Palmerini, Giovanni B.
2017-10-01
A typical approach studied for the guidance of next generation space manipulators (satellites with robotic arms aimed at autonomously performing on-orbit operations) is to decouple the platform and the arm maneuvers, which are supposed to happen sequentially, mainly because of safety concerns. This control is implemented in this work as a two-stage Sequential control, where a first stage calls for the motion of the platform and the second stage calls for the motion of the manipulator. A second novel strategy is proposed, considering the platform and the manipulator as a single multibody system subject to a Coordinated control, with the goal of approaching and grasping a target spacecraft. At the scope, a region that the end effector can reach by means of the arm motion with limited reactions on the platform is identified (the so called Reaction Null workspace). The Coordinated control algorithm performs a gain modulation (finalized to a balanced contribution of the platform and arm motion) as a function of the target position within this Reaction Null map. The result is a coordinated maneuver in which the end effector moves thanks to the platform motion, predominant in a first phase, and to the arm motion, predominant when the Reaction-Null workspace is reached. In this way the collision avoidance and attitude over-control issues are automatically considered, without the need of splitting the mission in independent (and overall sub-optimal) segments. The guidance and control algorithms are first simulated by means of a multibody code, and successively tested in the lab by means of a free floating platform equipped with a robotic arm, moving frictionless on a flat granite table thanks to air bearings and on-off thrusters; the results will be discussed in terms of optimality of the fuel consumption and final accuracy.
Silva, Aleidy; Lee, Bai-Yu; Clemens, Daniel L; Kee, Theodore; Ding, Xianting; Ho, Chih-Ming; Horwitz, Marcus A
2016-04-12
Tuberculosis (TB) remains a major global public health problem, and improved treatments are needed to shorten duration of therapy, decrease disease burden, improve compliance, and combat emergence of drug resistance. Ideally, the most effective regimen would be identified by a systematic and comprehensive combinatorial search of large numbers of TB drugs. However, optimization of regimens by standard methods is challenging, especially as the number of drugs increases, because of the extremely large number of drug-dose combinations requiring testing. Herein, we used an optimization platform, feedback system control (FSC) methodology, to identify improved drug-dose combinations for TB treatment using a fluorescence-based human macrophage cell culture model of TB, in which macrophages are infected with isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible green fluorescent protein (GFP)-expressing Mycobacterium tuberculosis (Mtb). On the basis of only a single screening test and three iterations, we identified highly efficacious three- and four-drug combinations. To verify the efficacy of these combinations, we further evaluated them using a methodologically independent assay for intramacrophage killing of Mtb; the optimized combinations showed greater efficacy than the current standard TB drug regimen. Surprisingly, all top three- and four-drug optimized regimens included the third-line drug clofazimine, and none included the first-line drugs isoniazid and rifampin, which had insignificant or antagonistic impacts on efficacy. Because top regimens also did not include a fluoroquinolone or aminoglycoside, they are potentially of use for treating many cases of multidrug- and extensively drug-resistant TB. Our study shows the power of an FSC platform to identify promising previously unidentified drug-dose combinations for treatment of TB.
Lenguito, Giovanni; Chaimov, Deborah; Weitz, Jonathan R; Rodriguez-Diaz, Rayner; Rawal, Siddarth A K; Tamayo-Garcia, Alejandro; Caicedo, Alejandro; Stabler, Cherie L; Buchwald, Peter; Agarwal, Ashutosh
2017-02-28
We report the design and fabrication of a robust fluidic platform built out of inert plastic materials and micromachined features that promote optimized convective fluid transport. The platform is tested for perfusion interrogation of rodent and human pancreatic islets, dynamic secretion of hormones, concomitant live-cell imaging, and optogenetic stimulation of genetically engineered islets. A coupled quantitative fluid dynamics computational model of glucose stimulated insulin secretion and fluid dynamics was first utilized to design device geometries that are optimal for complete perfusion of three-dimensional islets, effective collection of secreted insulin, and minimization of system volumes and associated delays. Fluidic devices were then fabricated through rapid prototyping techniques, such as micromilling and laser engraving, as two interlocking parts from materials that are non-absorbent and inert. Finally, the assembly was tested for performance using both rodent and human islets with multiple assays conducted in parallel, such as dynamic perfusion, staining and optogenetics on standard microscopes, as well as for integration with commercial perfusion machines. The optimized design of convective fluid flows, use of bio-inert and non-absorbent materials, reversible assembly, manual access for loading and unloading of islets, and straightforward integration with commercial imaging and fluid handling systems proved to be critical for perfusion assay, and particularly suited for time-resolved optogenetics studies.
Autonomous self-organizing resource manager for multiple networked platforms
NASA Astrophysics Data System (ADS)
Smith, James F., III
2002-08-01
A fuzzy logic based expert system for resource management has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar autonomous naval platforms defending their group against attackers. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. This paper provides an overview of the resource manager including the four fuzzy decision trees that make up the resource manager; the fuzzy EA model; genetic algorithm based optimization; co-evolutionary data mining through gaming; and mathematical, computational and hardware based validation. Methods of automatically designing new multi-platform EA techniques are considered. The expert system runs on each defending platform rendering it an autonomous system requiring no human intervention. There is no commanding platform. Instead the platforms work cooperatively as a function of battlespace geometry; sensor data such as range, bearing, ID, uncertainty measures for sensor output; intelligence reports; etc. Computational experiments will show the defending networked platform's ability to self- organize. The platforms' ability to self-organize is illustrated through the output of the scenario generator, a software package that automates the underlying data mining problem and creates a computer movie of the platforms' interaction for evaluation.
Synthetic Nanovaccines Against Respiratory Pathogens (SYNARP). Addendum
2014-09-01
and c) block ionomer complexes (BIC) for targeted delivery of DNA (or protein) antigen to the antigen presenting cells (APCs) (Platform C). The...immune cells to elicit most efficient immune response. The proposal was focusing on achieving the following specific technical objectives: 1) Develop...muscle in a mouse (Platform B & C). ACCOMPLISHED YEAR 3 Task 1: Determine optimal antigen-containing BPN that activate dendritic cells (DCs
2013-08-01
OF FIGURES Figure 1. Three example systems composed of platforms P1, P2, and P3, and sensors SN1, SN2 , SN3, and SN4...sensors SN1, SN2 , SN3, and SN4. 4 Figure 2. An example configuration consisting of equipment derived from multiple systems. At times, it may be
Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon
2017-01-01
Paper-based diagnostic devices have many advantages as a one of the multiple diagnostic test platforms for point-of-care (POC) testing because they have simplicity, portability, and cost-effectiveness. However, despite high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT based on a paper platform has not progressed as much as the others because various specific conditions for nucleic acid amplification reactions such as pH, buffer components, and temperature, inhibitions from technical differences of paper-based device. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised the simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. This proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 10 2 -10 5 copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices to a simple and affordable paper system approach with great potential for realizing a paper-based NAT system for POC testing.
Seok, Youngung; Joung, Hyou-Arm; Byun, Ju-Young; Jeon, Hyo-Sung; Shin, Su Jeong; Kim, Sanghyo; Shin, Young-Beom; Han, Hyung Soo; Kim, Min-Gon
2017-01-01
Paper-based diagnostic devices have many advantages as a one of the multiple diagnostic test platforms for point-of-care (POC) testing because they have simplicity, portability, and cost-effectiveness. However, despite high sensitivity and specificity of nucleic acid testing (NAT), the development of NAT based on a paper platform has not progressed as much as the others because various specific conditions for nucleic acid amplification reactions such as pH, buffer components, and temperature, inhibitions from technical differences of paper-based device. Here, we propose a paper-based device for performing loop-mediated isothermal amplification (LAMP) with real-time simultaneous detection of multiple DNA targets. We determined the optimal chemical components to enable dry conditions for the LAMP reaction without lyophilization or other techniques. We also devised the simple paper device structure by sequentially stacking functional layers, and employed a newly discovered property of hydroxynaphthol blue fluorescence to analyze real-time LAMP signals in the paper device. This proposed platform allowed analysis of three different meningitis DNA samples in a single device with single-step operation. This LAMP-based multiple diagnostic device has potential for real-time analysis with quantitative detection of 102-105 copies of genomic DNA. Furthermore, we propose the transformation of DNA amplification devices to a simple and affordable paper system approach with great potential for realizing a paper-based NAT system for POC testing. PMID:28740546
Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to developmore » new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation- aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.« less
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
Mills, Margaret G; Gallagher, Evan P
2017-01-01
Chemical-induced oxidative stress and the biochemical pathways that protect against oxidative damage are of particular interest in the field of toxicology. To rapidly identify oxidative stress-responsive gene expression changes in zebrafish, we developed a targeted panel of antioxidant genes using the Affymetrix QuantiGene Plex (QGP) platform. The genes contained in our panel include eight putative Nrf2 (Nfe2l2a)-dependent antioxidant genes (hmox1a, gstp1, gclc, nqo1, prdx1, gpx1a, sod1, sod2), a stress response gene (hsp70), an inducible DNA damage repair gene (gadd45bb), and three reference genes (actb1, gapdh, hprt1). We tested this platform on larval zebrafish exposed to tert-butyl hydroperoxide (tBHP) and cadmium (Cd), two model oxidative stressors with different modes of action, and compared our results with those obtained using the more common quantitative PCR (qPCR) method. Both methods showed that exposure to tBHP and Cd induced expression of prdx1, gstp1, and hmox1a (2- to 12-fold increase via QGP), indicative of an activated Nrf2 response in larval zebrafish. Both compounds also elicited a general stress response as reflected by elevation of hsp70 and gadd45bb, with Cd being the more potent inducer. Transient changes were observed in sod2 and gpx1a expression, whereas nqo1, an Nrf2-responsive gene in mammalian cells, was minimally affected by either tBHP or Cd chemical exposures. Developmental expression analysis of the target genes by QGP revealed marked upregulation of sod2 between 0-96hpf, and to a lesser extent, of sod1 and gstp1. Once optimized, QGP analysis of these experiments was accomplished more rapidly, using far less tissue, and at lower total costs than qPCR analysis. In summary, the QGP platform as applied to higher-throughput zebrafish studies provides a reasonable cost-effective alternative to qPCR or more comprehensive transcriptomics approaches to rapidly assess the potential for chemicals to elicit oxidative stress as a mechanism of chemical toxicity.
Optimal control analysis of Ebola disease with control strategies of quarantine and vaccination.
Ahmad, Muhammad Dure; Usman, Muhammad; Khan, Adnan; Imran, Mudassar
2016-07-13
The 2014 Ebola epidemic is the largest in history, affecting multiple countries in West Africa. Some isolated cases were also observed in other regions of the world. In this paper, we introduce a deterministic SEIR type model with additional hospitalization, quarantine and vaccination components in order to understand the disease dynamics. Optimal control strategies, both in the case of hospitalization (with and without quarantine) and vaccination are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations. Further, with the help of uncertainty and sensitivity analysis we also have identified the most sensitive parameters which effectively contribute to change the disease dynamics. We have performed mathematical analysis with numerical simulations and optimal control strategies on Ebola virus models. We used dynamical system tools with numerical simulations and optimal control strategies on our Ebola virus models. The original model, which allowed transmission of Ebola virus via human contact, was extended to include imperfect vaccination and quarantine. After the qualitative analysis of all three forms of Ebola model, numerical techniques, using MATLAB as a platform, were formulated and analyzed in detail. Our simulation results support the claims made in the qualitative section. Our model incorporates an important component of individuals with high risk level with exposure to disease, such as front line health care workers, family members of EVD patients and Individuals involved in burial of deceased EVD patients, rather than the general population in the affected areas. Our analysis suggests that in order for R 0 (i.e., the basic reproduction number) to be less than one, which is the basic requirement for the disease elimination, the transmission rate of isolated individuals should be less than one-fourth of that for non-isolated ones. Our analysis also predicts, we need high levels of medication and hospitalization at the beginning of an epidemic. Further, optimal control analysis of the model suggests the control strategies that may be adopted by public health authorities in order to reduce the impact of epidemics like Ebola.
A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela
2015-09-01
Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications.Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03119b
NASA Astrophysics Data System (ADS)
Braun, A.; Walter, C. A.; Parvar, K.
2016-12-01
The current platforms for collecting magnetic data include dense coverage, but low resolution traditional airborne surveys, and high resolution, but low coverage terrestrial surveys. Both platforms leave a critical observation gap between the ground surface and approximately 100m above ground elevation, which can be navigated efficiently by new technologies, such as Unmanned Aerial Vehicles (UAVs). Specifically, multi rotor UAV platforms provide the ability to sense the magnetic field in a full 3-D tensor, which increases the quality of data collected over other current platform types. Payload requirements and target requirements must be balanced to fully exploit the 3-D magnetic tensor. This study outlines the integration of a GEM Systems Cesium Vapour UAV Magnetometer, a Lightware SF-11 Laser Altimeter and a uBlox EVK-7P GPS module with a DJI s900 Multi Rotor UAV. The Cesium Magnetometer is suspended beneath the UAV platform by a cable of varying length. A set of surveys was carried out to optimize the sensor orientation, sensor cable length beneath the UAV and data collection methods of the GEM Systems Cesium Vapour UAV Magnetometer when mounted on the DJI s900. The target for these surveys is a 12 inch steam pipeline located approximately 2 feet below the ground surface. A systematic variation of cable length, sensor orientation and inclination was conducted. The data collected from the UAV magnetometer was compared to a terrestrial survey conducted with the GEM GST-19 Proton Procession Magnetometer at the same elevation, which also served a reference station. This allowed for a cross examination between the UAV system and a proven industry standard for magnetic field data collection. The surveys resulted in optimizing the above parameters based on minimizing instrument error and ensuring reliable data acquisition. The results demonstrate that optimizing the UAV magnetometer survey can yield to industry standard measurements.
Lab-on-Chip Cytometry Based on Magnetoresistive Sensors for Bacteria Detection in Milk
Fernandes, Ana C.; Duarte, Carla M.; Cardoso, Filipe A.; Bexiga, Ricardo.; Cardoso, Susana.; Freitas, Paulo P.
2014-01-01
Flow cytometers have been optimized for use in portable platforms, where cell separation, identification and counting can be achieved in a compact and modular format. This feature can be combined with magnetic detection, where magnetoresistive sensors can be integrated within microfluidic channels to detect magnetically labelled cells. This work describes a platform for in-flow detection of magnetically labelled cells with a magneto-resistive based cell cytometer. In particular, we present an example for the validation of the platform as a magnetic counter that identifies and quantifies Streptococcus agalactiae in milk. PMID:25196163
Lab-on-chip cytometry based on magnetoresistive sensors for bacteria detection in milk.
Fernandes, Ana C; Duarte, Carla M; Cardoso, Filipe A; Bexiga, Ricardo; Cardoso, Susana; Freitas, Paulo P
2014-08-21
Flow cytometers have been optimized for use in portable platforms, where cell separation, identification and counting can be achieved in a compact and modular format. This feature can be combined with magnetic detection, where magnetoresistive sensors can be integrated within microfluidic channels to detect magnetically labelled cells. This work describes a platform for in-flow detection of magnetically labelled cells with a magneto-resistive based cell cytometer. In particular, we present an example for the validation of the platform as a magnetic counter that identifies and quantifies Streptococcus agalactiae in milk.
NASA Astrophysics Data System (ADS)
Churkin, Andrey; Bialek, Janusz
2018-01-01
Development of power interconnections in Northeast Asia becomes not only engineering but also a political issue. More research institutes are involved in the Asian Super Grid initiative discussion, as well as more politicians mention power interconnection opportunities. UNESCAP started providing a platform for intragovernmental discussion of the issue. However, there are still a lack of comprehensive modern research of the Asian Super Grid. Moreover, there is no unified data base and no unified power routes concept. Therefore, this article discusses a tool for optimal power routes selection and suggest a concept of the unified data portal.
Multimodal Microchannel and Nanowell-Based Microfluidic Platforms for Bioimaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Tao; Smallwood, Chuck R.; Zhu, Ying
2017-03-30
Modern live-cell imaging approaches permit real-time visualization of biological processes. However, limitations for unicellular organism trapping, culturing and long-term imaging can preclude complete understanding of how such microorganisms respond to perturbations in their local environment or linking single-cell variability to whole population dynamics. We have developed microfluidic platforms to overcome prior technical bottlenecks to allow both chemostat and compartmentalized cellular growth conditions using the same device. Additionally, a nanowell-based platform enables a high throughput approach to scale up compartmentalized imaging optimized within the microfluidic device. These channel and nanowell platforms are complementary, and both provide fine control over the localmore » environment as well as the ability to add/replace media components at any experimental time point.« less
Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.
2015-01-01
Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions or individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632
Bunderson, Nathan E; Bingham, Jeffrey T; Sohn, M Hongchul; Ting, Lena H; Burkholder, Thomas J
2012-10-01
Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions or individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles are often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states and muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization, and stability analysis tools to provide structural insights into the neural control of movement. Copyright © 2012 John Wiley & Sons, Ltd.
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treat plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann C.; Brandt, James M.; Tucker, Thomas
2011-09-01
This report provides documentation for the completion of the Sandia Level II milestone 'Develop feedback system for intelligent dynamic resource allocation to improve application performance'. This milestone demonstrates the use of a scalable data collection analysis and feedback system that enables insight into how an application is utilizing the hardware resources of a high performance computing (HPC) platform in a lightweight fashion. Further we demonstrate utilizing the same mechanisms used for transporting data for remote analysis and visualization to provide low latency run-time feedback to applications. The ultimate goal of this body of work is performance optimization in the facemore » of the ever increasing size and complexity of HPC systems.« less
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-09-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies.
Automated distribution system management for multichannel space power systems
NASA Technical Reports Server (NTRS)
Fleck, G. W.; Decker, D. K.; Graves, J.
1983-01-01
A NASA sponsored study of space power distribution system technology is in progress to develop an autonomously managed power system (AMPS) for large space power platforms. The multichannel, multikilowatt, utility-type power subsystem proposed presents new survivability requirements and increased subsystem complexity. The computer controls under development for the power management system must optimize the power subsystem performance and minimize the life cycle cost of the platform. A distribution system management philosophy has been formulated which incorporates these constraints. Its implementation using a TI9900 microprocessor and FORTH as the programming language is presented. The approach offers a novel solution to the perplexing problem of determining the optimal combination of loads which should be connected to each power channel for a versatile electrical distribution concept.
Applications of thin-film sandwich crystallization platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Axford, Danny, E-mail: danny.axford@diamond.ac.uk; Aller, Pierre; Sanchez-Weatherby, Juan
2016-03-24
Crystallization via sandwiches of thin polymer films is presented and discussed. Examples are shown of protein crystallization in, and data collection from, solutions sandwiched between thin polymer films using vapour-diffusion and batch methods. The crystallization platform is optimal for both visualization and in situ data collection, with the need for traditional harvesting being eliminated. In wells constructed from the thinnest plastic and with a minimum of aqueous liquid, flash-cooling to 100 K is possible without significant ice formation and without any degradation in crystal quality. The approach is simple; it utilizes low-cost consumables but yields high-quality data with minimal samplemore » intervention and, with the very low levels of background X-ray scatter that are observed, is optimal for microcrystals.« less
Desai, Parind M; Hogan, Rachael C; Brancazio, David; Puri, Vibha; Jensen, Keith D; Chun, Jung-Hoon; Myerson, Allan S; Trout, Bernhardt L
2017-10-05
This study provides a framework for robust tablet development using an integrated hot-melt extrusion-injection molding (IM) continuous manufacturing platform. Griseofulvin, maltodextrin, xylitol and lactose were employed as drug, carrier, plasticizer and reinforcing agent respectively. A pre-blended drug-excipient mixture was fed from a loss-in-weight feeder to a twin-screw extruder. The extrudate was subsequently injected directly into the integrated IM unit and molded into tablets. Tablets were stored in different storage conditions up to 20 weeks to monitor physical stability and were evaluated by polarized light microscopy, DSC, SEM, XRD and dissolution analysis. Optimized injection pressure provided robust tablet formulations. Tablets manufactured at low and high injection pressures exhibited the flaws of sink marks and flashing respectively. Higher solidification temperature during IM process reduced the thermal induced residual stress and prevented chipping and cracking issues. Polarized light microscopy revealed a homogeneous dispersion of crystalline griseofulvin in an amorphous matrix. DSC underpinned the effect of high tablet residual moisture on maltodextrin-xylitol phase separation that resulted in dimensional instability. Tablets with low residual moisture demonstrated long term dimensional stability. This study serves as a model for IM tablet formulations for mechanistic understanding of critical process parameters and formulation attributes required for optimal product performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Gatu Johnson, M.; Zylstra, A. B.; Bacher, A.; ...
2017-03-28
Here, this paper describes the development of a platform to study astrophysically relevant nuclear reactions using inertial-confinement fusion implosions on the OMEGA and National Ignition Facility laser facilities, with a particular focus on optimizing the implosions to study charged-particle- producing reactions. Primary requirements on the platform are high yield, for high statistics in the fusion product measurements, combined with low areal density, to allow the charged fusion products to escape. This is optimally achieved with direct-drive exploding pusher implosions using thin-glass-shell capsules. Mitigation strategies to eliminate a possible target sheath potential which would accelerate the emitted ions are discussed. Themore » potential impact of kinetic effects on the implosions is also considered. The platform is initially employed to study the complementary T(t,2n)α, T( 3He,np)α and 3He( 3He,2p)α reactions. Proof-of-principle results from the first experiments demonstrating the ability to accurately measure the energy and yields of charged particles are presented. Lessons learned from these experiments will be used in studies of other reactions. Ultimately, the goals are to explore thermonuclear reaction rates and fundamental nuclear physics in stellarlike plasma environments, and to push this new frontier of nuclear astrophysics into unique regimes not reachable through existing platforms, with thermal ion velocity distributions, plasma screening, and low reactant energies.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gatu Johnson, M.; Zylstra, A. B.; Bacher, A.
Here, this paper describes the development of a platform to study astrophysically relevant nuclear reactions using inertial-confinement fusion implosions on the OMEGA and National Ignition Facility laser facilities, with a particular focus on optimizing the implosions to study charged-particle- producing reactions. Primary requirements on the platform are high yield, for high statistics in the fusion product measurements, combined with low areal density, to allow the charged fusion products to escape. This is optimally achieved with direct-drive exploding pusher implosions using thin-glass-shell capsules. Mitigation strategies to eliminate a possible target sheath potential which would accelerate the emitted ions are discussed. Themore » potential impact of kinetic effects on the implosions is also considered. The platform is initially employed to study the complementary T(t,2n)α, T( 3He,np)α and 3He( 3He,2p)α reactions. Proof-of-principle results from the first experiments demonstrating the ability to accurately measure the energy and yields of charged particles are presented. Lessons learned from these experiments will be used in studies of other reactions. Ultimately, the goals are to explore thermonuclear reaction rates and fundamental nuclear physics in stellarlike plasma environments, and to push this new frontier of nuclear astrophysics into unique regimes not reachable through existing platforms, with thermal ion velocity distributions, plasma screening, and low reactant energies.« less
[Applications of the hospital statistics management system].
Zhai, Hong; Ren, Yong; Liu, Jing; Li, You-Zhang; Ma, Xiao-Long; Jiao, Tao-Tao
2008-01-01
The Hospital Statistics Management System is built on an Office Automation Platform of Shandong provincial hospital system. Its workflow, role and popedom technologies are used to standardize and optimize the management program of statistics in the total quality control of hospital statistics. The system's applications have combined the office automation platform with the statistics management in a hospital and this provides a practical example of a modern hospital statistics management model.
Operational Site Selection for Unmanned Aircraft
2011-06-01
eliminate unsuitable areas, the Op Site Selection process must first consider landcover , terrain, and specifications for one or more UAS platforms...areas, the OSS process must first consider landcover , terrain, and specifi- cations for one or more UAS platforms. To select the most optimal sites, the...by landcover , ERDC/CERL TR-11-16 2 soil type, slope, and aspect. The individual terrain units are pre- determined and delineated by a separate
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle
Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro
2018-01-01
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity to obtain a technological tool that faces this state of the art limitation. As novelty, this work presents a multi-agent architecture based on virtual organizations which allows to deploy a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed, where parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed. PMID:29301310
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.
Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier
2018-01-02
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity to obtain a technological tool that faces this state of the art limitation. As novelty, this work presents a multi-agent architecture based on virtual organizations which allows to deploy a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed, where parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment in which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed.
Design conceptuel d'un avion blended wing body de 200 passagers
NASA Astrophysics Data System (ADS)
Ammar, Sami
The Blended Wing Body is built based on the flying wing concept and performance improvements compared to conventional aircraft. Contrariwise, most studies have focused on large aircraft and it is not sure whether the gains are the same for smaller aircraft. The main of objective is to perform the conceptual design of a BWB of 200 passengers and compare the performance obtained with a conventional aircraft equivalent in terms of payload and range. The design of the Blended Wing Body was carried out under the CEASIOM environment. This platform design suitable for conventional aircraft design has been modified and additional tools have been integrated in order to achieve the aerodynamic analysis, performance and stability of the aircraft fuselage built. A plane model is obtained in the geometric module AcBuilder CEASIOM from the design variables of a wing. Estimates of mass are made from semi- empirical formulas adapted to the geometry of the BWB and calculations centering and inertia are possible through BWB model developed in CATIA. Low fidelity methods, such as TORNADO and semi- empirical formulas are used to analyze the aerodynamic performance and stability of the aircraft. The aerodynamic results are validated using a high-fidelity analysis using FLUENT CFD software. An optimization process is implemented in order to obtain improved while maintaining a feasible design performance. It is an optimization of the plan form of the aircraft fuselage integrated with a number of passengers and equivalent to that of a A320.Les performance wing aircraft merged optimized maximum range are compared to A320 also optimized. Significant gains were observed. An analysis of the dynamics of longitudinal and lateral flight is carried out on the aircraft optimized BWB finesse and mass. This study identified the stable and unstable modes of the aircraft. Thus, this analysis has highlighted the stability problems associated with the oscillation of incidence and the Dutch roll for the absence of stabilizers.
Wind Turbine Controller to Mitigate Structural Loads on a Floating Wind Turbine Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, Paul A.; Peiffer, Antoine; Schlipf, David
This paper summarizes the control design work that was performed to optimize the controller of a wind turbine on the WindFloat structure. The WindFloat is a semi-submersible floating platform designed to be a support structure for a multi-megawatt power-generating wind turbine. A controller developed for a bottom-fixed wind turbine configuration was modified for use when the turbine is mounted on the WindFloat platform. This results in an efficient platform heel resonance mitigation scheme. In addition several control modules, designed with a coupled linear model, were added to the fixed-bottom baseline controller. The approach was tested in a fully coupled nonlinearmore » aero-hydroelastic simulation tool in which wind and wave disturbances were modeled. This testing yielded significant improvements in platform global performance and tower-base-bending loading.« less
Simulation platform of LEO satellite communication system based on OPNET
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhang, Yong; Li, Xiaozhuo; Wang, Chuqiao; Li, Haihao
2018-02-01
For the purpose of verifying communication protocol in the low earth orbit (LEO) satellite communication system, an Optimized Network Engineering Tool (OPNET) based simulation platform is built. Using the three-layer modeling mechanism, the network model, the node model and the process model of the satellite communication system are built respectively from top to bottom, and the protocol will be implemented by finite state machine and Proto-C language. According to satellite orbit parameters, orbit files are generated via Satellite Tool Kit (STK) and imported into OPNET, and the satellite nodes move along their orbits. The simulation platform adopts time-slot-driven mode, divides simulation time into continuous time slots, and allocates slot number for each time slot. A resource allocation strategy is simulated on this platform, and the simulation results such as resource utilization rate, system throughput and packet delay are analyzed, which indicate that this simulation platform has outstanding versatility.
An interactive web-based application for Comprehensive Analysis of RNAi-screen Data.
Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B; Germain, Ronald N; Smith, Jennifer A; Simpson, Kaylene J; Martin, Scott E; Buehler, Eugen; Beuhler, Eugen; Fraser, Iain D C
2016-02-23
RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight to microRNA (miRNA) activity based on siRNA seed enrichment.
An interactive web-based application for Comprehensive Analysis of RNAi-screen Data
Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B.; Germain, Ronald N.; Smith, Jennifer A.; Simpson, Kaylene J.; Martin, Scott E.; Beuhler, Eugen; Fraser, Iain D. C.
2016-01-01
RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight to microRNA (miRNA) activity based on siRNA seed enrichment. PMID:26902267
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treat shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
Aerospace Applications of Integer and Combinatorial Optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed/integer linear programming problem with more than 1500 design variables.
Aerospace applications on integer and combinatorial optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem. for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed/integer linear programming problem with more than 1500 design variables.
Albini, Fabio; Xiaoqiu Liu; Torlasco, Camilla; Soranna, Davide; Faini, Andrea; Ciminaghi, Renata; Celsi, Ada; Benedetti, Matteo; Zambon, Antonella; di Rienzo, Marco; Parati, Gianfranco
2016-08-01
Uncontrolled hypertension is largely attributed to unsatisfactory doctor's engagement in its optimal management and to poor patients' compliance to therapeutic interventions. ICT and mobile Health solutions might improve these conditions, being widely available and providing highly effective communication strategies. To evaluate whether ICT and mobile Health tools are able to improve hypertension control by improving doctors' engagement and by increasing patients' education and involvement, and their compliance to lifestyle modification and prescribed drug therapy. In a pilot study, we have included 690 treated hypertensive patients with uncontrolled office blood pressure (BP), consecutively recruited by 9 general practitioners over 3 months. Patients were alternatively assigned to routine management based on repeated office visits or to an integrated ICT-based Patients Optimal Strategy for Treatment (POST) system including Home BP monitoring teletransmission, a dedicated web-based platform for patients' management by physicians (Misuriamo platform), and a smartphone mobile application (Eurohypertension APP, E-APP), over a follow-up of 6 months. BP values, demographic and clinical data were collected at baseline and at all follow-up visits (at least two). BP control and cardiovascular risk level have been evaluated at the beginning and at the end of the study. 89 patients did not complete the follow-up, thus data analysis was carried out in 601 of them (303 patients in the POST group and 298 in the control group). Office BP control (<;149/90 mmHg) was 40.0% in control group, and 72.3% in POST group at 6 month follow-up. At the same time Home BP control (<;135/85 mmHg average of 6 days) in POST group was 87.5%. this pilot study suggests that ICT based tools might be effective in improving hypertension management, implementing positive patients' involvement with better adherence to treatment prescriptions and providing the physicians with dynamic control of patients' home BP measurements, resulting in lesser clinical inertia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Biao; Yamaguchi, Keiichi; Fukuoka, Mayuko
To accelerate the logical drug design procedure, we created the program “NAGARA,” a plugin for PyMOL, and applied it to the discovery of small compounds called medical chaperones (MCs) that stabilize the cellular form of a prion protein (PrP{sup C}). In NAGARA, we constructed a single platform to unify the docking simulation (DS), free energy calculation by molecular dynamics (MD) simulation, and interfragment interaction energy (IFIE) calculation by quantum chemistry (QC) calculation. NAGARA also enables large-scale parallel computing via a convenient graphical user interface. Here, we demonstrated its performance and its broad applicability from drug discovery to lead optimization withmore » full compatibility with various experimental methods including Western blotting (WB) analysis, surface plasmon resonance (SPR), and nuclear magnetic resonance (NMR) measurements. Combining DS and WB, we discovered anti-prion activities for two compounds and tegobuvir (TGV), a non-nucleoside non-structural protein NS5B polymerase inhibitor showing activity against hepatitis C virus genotype 1. Binding profiles predicted by MD and QC are consistent with those obtained by SPR and NMR. Free energy analyses showed that these compounds stabilize the PrP{sup C} conformation by decreasing the conformational fluctuation of the PrP{sup C}. Because TGV has been already approved as a medicine, its extension to prion diseases is straightforward. Finally, we evaluated the affinities of the fragmented regions of TGV using QC and found a clue for its further optimization. By repeating WB, MD, and QC recursively, we were able to obtain the optimum lead structure. - Highlights: • NAGARA integrates docking simulation, molecular dynamics, and quantum chemistry. • We found many compounds, e.g., tegobuvir (TGV), that exhibit anti-prion activities. • We obtained insights into the action mechanism of TGV as a medical chaperone. • Using QC, we obtained useful information for optimization of the lead compound, TGV. • NAGARA is a convenient platform for drug discovery and lead optimization.« less
2016-05-01
reduction achieved is small due to the starting shape being near optimal. The general arrangement and x-y coordinate system are shown in Figure 23...Optimization, Vol. 28, pp. 55–68, 2004. [3] M Heller, J Calero, S Barter , RJ Wescott, J Choi. Fatigue life extension program for LAU-7 missile launcher
DOE Office of Scientific and Technical Information (OSTI.GOV)
NREL developed a free, publicly available web version of the REopt (TM) renewable energy integration and optimization platform called REopt Lite. REopt Lite recommends the optimal size and dispatch strategy for grid-connected photovoltaics (PV) and battery storage at a site. It also allows users to explore how PV and storage can increase a site's resiliency during a grid outage.
Korte, Erik A; Pozzi, Nicole; Wardrip, Nina; Ayyoubi, M Tayyeb; Jortani, Saeed A
2018-07-01
There are 13 million blood transfusions each year in the US. Limitations in the donor pool, storage capabilities, mass casualties, access in remote locations and reactivity of donors all limit the availability of transfusable blood products to patients. HBOC-201 (Hemopure®) is a second-generation glutaraldehyde-polymer of bovine hemoglobin, which can serve as an "oxygen bridge" to maintain oxygen carrying capacity while transfusion products are unavailable. Hemopure presents the advantages of extended shelf life, ambient storage, and limited reactive potential, but its extracellular location can also cause significant interference in modern laboratory analyzers similar to severe hemolysis. Observed error in 26 commonly measured analytes was determined on 4 different analytical platforms in plasma from a patient therapeutically transfused Hemopure as well as donor blood spiked with Hemopure at a level equivalent to the therapeutic loading dose (10% v/v). Significant negative error ratios >50% of the total allowable error (>0.5tAE) were reported in 23/104 assays (22.1%), positive bias of >0.5tAE in 26/104 assays (25.0%), and acceptable bias between -0.5tAE and 0.5tAE error ratio was reported in 44/104 (42.3%). Analysis failed in the presence of Hemopure in 11/104 (10.6%). Observed error is further subdivided by platform, wavelength, dilution and reaction method. Administration of Hemopure (or other hemoglobin-based oxygen carriers) presents a challenge to laboratorians tasked with analyzing patient specimens. We provide laboratorians with a reference to evaluate patient samples, select optimal analytical platforms for specific analytes, and predict possible bias beyond the 4 analytical platforms included in this study. Copyright © 2018 Elsevier B.V. All rights reserved.
Tajstra, Mateusz; Sokal, Adam; Gwóźdź, Arkadiusz; Wilczek, Marcin; Gacek, Adam; Wojciechowski, Konrad; Gadula-Gacek, Elżbieta; Adamowicz-Czoch, Elżbieta; Chłosta-Niepiekło, Katarzyna; Milewski, Krzysztof; Rozentryt, Piotr; Kalarus, Zbigniew; Gąsior, Mariusz; Poloński, Lech
2017-07-01
The number of patients with heart failure implantable cardiac electronic devices (CIEDs) is growing. Hospitalization rate in this group is very high and generates enormous costs. To avoid the need for hospital treatment, optimized monitoring and follow-up is crucial. Remote monitoring (RM) has been widely put into practice in the management of CIEDs but it may be difficult due to the presence of differences in systems provided by device manufacturers and loss of gathered data in case of device reimplantation. Additionally, conclusions derived from studies about usefulness of RM in clinical practice apply to devices coming only from a single company. An integrated monitoring platform allows for more comprehensive data analysis and interpretation. Therefore, the primary objective of Remote Supervision to Decrease Hospitalization Rate (RESULT) study is to evaluate the impact of RM on the clinical status of patients with ICDs or CRT-Ds using an integrated platform. Six hundred consecutive patients with ICDs or CRT-Ds implanted will be prospectively randomized to either a traditional or RM-based follow-up model. The primary clinical endpoint will be a composite of all-cause mortality or hospitalization for cardiovascular reasons within 12 months after randomization. The primary technical endpoint will be to construct and evaluate a unified and integrated platform for the data collected from RM devices manufactured by different companies. This manuscript describes the design and methodology of the prospective, randomized trial designed to determine whether remote monitoring using an integrated platform for different companies is safe, feasible, and efficacious (ClinicalTrials.gov Identifier: NCT02409225). © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship withinmore » a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treat plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.« less
MIRATE: MIps RATional dEsign Science Gateway.
Busato, Mirko; Distefano, Rosario; Bates, Ferdia; Karim, Kal; Bossi, Alessandra Maria; López Vilariño, José Manuel; Piletsky, Sergey; Bombieri, Nicola; Giorgetti, Alejandro
2018-06-13
Molecularly imprinted polymers (MIPs) are high affinity robust synthetic receptors, which can be optimally synthesized and manufactured more economically than their biological equivalents (i.e. antibody). In MIPs production, rational design based on molecular modeling is a commonly employed technique. This mostly aids in (i) virtual screening of functional monomers (FMs), (ii) optimization of monomer-template ratio, and (iii) selectivity analysis. We present MIRATE, an integrated science gateway for the intelligent design of MIPs. By combining and adapting multiple state-of-the-art bioinformatics tools into automated and innovative pipelines, MIRATE guides the user through the entire process of MIPs' design. The platform allows the user to fully customize each stage involved in the MIPs' design, with the main goal to support the synthesis in the wet-laboratory. MIRATE is freely accessible with no login requirement at http://mirate.di.univr.it/. All major browsers are supported.
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Patchett, John M; Lo, Li - Ta
2011-01-24
This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. Formore » terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the perfromance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a a factor of 2-10 times on our tests. In addition, we evaluated CPU and CPU-based rendering performance. We encourage production visualization experts to consider using CPU-based rendering solutions when it is appropriate. For example, on remote supercomputers CPU-based rendering can offer a means of viewing data without having to offload the data or geometry onto a CPU-based visualization system. In terms of comparative performance of the CPU and CPU we believe that further optimizations of the performance of both CPU or CPU-based rendering are possible. The simulation community is currently confronting this reality as they work to port their simulations to different hardware architectures. What is interesting about CPU rendering of massive datasets is that for part two decades CPU performance has significantly outperformed CPU-based systems. Based on our advancements, evaluations and explorations we believe that CPU-based rendering has returned as one viable option for the visualization of massive datasets.« less
D3GB: An Interactive Genome Browser for R, Python, and WordPress.
Barrios, David; Prieto, Carlos
2017-05-01
Genome browsers are useful not only for showing final results but also for improving analysis protocols, testing data quality, and generating result drafts. Its integration in analysis pipelines allows the optimization of parameters, which leads to better results. New developments that facilitate the creation and utilization of genome browsers could contribute to improving analysis results and supporting the quick visualization of genomic data. D3 Genome Browser is an interactive genome browser that can be easily integrated in analysis protocols and shared on the Web. It is distributed as an R package, a Python module, and a WordPress plugin to facilitate its integration in pipelines and the utilization of platform capabilities. It is compatible with popular data formats such as GenBank, GFF, BED, FASTA, and VCF, and enables the exploration of genomic data with a Web browser.
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
Mattern, Kai; Beißner, Nicole; Reichl, Stephan; Dietzel, Andreas
2018-05-01
Conventional safety and efficacy test models, such as animal experiments or static in vitro cell culture models, can often not reliably predict the most promising drug candidates. Therefore, a novel microfluidic cell culture platform, called Dynamic Micro Tissue Engineering System (DynaMiTES), was designed to allow online analysis of drugs permeating through barrier forming tissues under dynamic conditions combined with monitoring of the transepithelial electrical resistance (TEER) by electrodes optimized for homogeneous current distribution. A variety of pre-cultivated cell culture inserts can be integrated and exposed to well controlled dynamic micro flow conditions, resulting in a tightly regulated exposure of the cells to tested drugs, drug formulations and shear forces. With these qualities, the new system can provide more relevant information compared to static measurements. As a first in vitro model, a three-dimensional hemicornea construct consisting of human keratocytes (HCK-Ca) and epithelial cells (HCE-T) was successfully tested in the DynaMiTES. Thereby, we were able to demonstrate the functionality and cell compatibility of this new organ on chip test platform. The modular design of the DynaMiTES allows fast adaptation suitable for the investigation of drug permeation through other important cellular barriers. Copyright © 2017. Published by Elsevier B.V.
Teachable, high-content analytics for live-cell, phase contrast movies.
Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J
2010-09-01
CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy demands of massive and heterogeneous tumor clinical data processing and the multi-center collaborative diagnosis and treatment for tumor diseases,a Tumor Data Interacted System(TDIS)was established based on grid platform,so that an implementing virtualization platform of tumor diagnosis service was realized,sharing tumor information in real time and carrying on standardized management.The system adopts Globus Toolkit 4.0tools to build the open grid service framework and encapsulats data resources based on Web Services Resource Framework(WSRF).The system uses the middleware technology to provide unified access interface for heterogeneous data interaction,which could optimize interactive process with virtualized service to query and call tumor information resources flexibly.For massive amounts of heterogeneous tumor data,the federated stored and multiple authorized mode is selected as security services mechanism,real-time monitoring and balancing load.The system can cooperatively manage multi-center heterogeneous tumor data to realize the tumor patient data query,sharing and analysis,and compare and match resources in typical clinical database or clinical information database in other service node,thus it can assist doctors in consulting similar case and making up multidisciplinary treatment plan for tumors.Consequently,the system can improve efficiency of diagnosis and treatment for tumor,and promote the development of collaborative tumor diagnosis model.
Using Distributed Data over HBase in Big Data Analytics Platform for Clinical Services
Zamani, Hamid
2017-01-01
Big data analytics (BDA) is important to reduce healthcare costs. However, there are many challenges of data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective to establish an interactive BDA platform with simulated patient data using open-source software technologies was achieved by construction of a platform framework with Hadoop Distributed File System (HDFS) using HBase (key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles to HBase store files revealed sustained availability over hundreds of iterations; however, to complete MapReduce to HBase required a week (for 10 TB) and a month for three billion (30 TB) indexed patient records, respectively. Found inconsistencies of MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance with high usability for technical support but poor usability for clinical services. Hospital system based on patient-centric data was challenging in using HBase, whereby not all data profiles were fully integrated with the complex patient-to-hospital relationships. However, we recommend using HBase to achieve secured patient data while querying entire hospital volumes in a simplified clinical event model across clinical services. PMID:29375652
Using Distributed Data over HBase in Big Data Analytics Platform for Clinical Services.
Chrimes, Dillon; Zamani, Hamid
2017-01-01
Big data analytics (BDA) is important to reduce healthcare costs. However, there are many challenges of data aggregation, maintenance, integration, translation, analysis, and security/privacy. The study objective to establish an interactive BDA platform with simulated patient data using open-source software technologies was achieved by construction of a platform framework with Hadoop Distributed File System (HDFS) using HBase (key-value NoSQL database). Distributed data structures were generated from benchmarked hospital-specific metadata of nine billion patient records. At optimized iteration, HDFS ingestion of HFiles to HBase store files revealed sustained availability over hundreds of iterations; however, to complete MapReduce to HBase required a week (for 10 TB) and a month for three billion (30 TB) indexed patient records, respectively. Found inconsistencies of MapReduce limited the capacity to generate and replicate data efficiently. Apache Spark and Drill showed high performance with high usability for technical support but poor usability for clinical services. Hospital system based on patient-centric data was challenging in using HBase, whereby not all data profiles were fully integrated with the complex patient-to-hospital relationships. However, we recommend using HBase to achieve secured patient data while querying entire hospital volumes in a simplified clinical event model across clinical services.
Harnly, J.M.; Kane, J.S.
1984-01-01
The effect of the acid matrix, the measurement mode (height or area), the atomizer surface (unpyrolyzed and pyrolyzed graphite), the atomization mode (from the wall or from a platform), and the atomization temperature on the simultaneous electrothermal atomization of Co, Cr, Cu, Fe, Mn, Mo, Ni, V, and Zn was examined. The 5% HNO3 matrix gave rise to severe irreproducibility using a pyrolyzed tube unless the tube was properly "prepared". The 5% HCl matrix did not exhibit this problem, and no problems were observed with either matrix using an unpyrolized tube or a pyrolyzed platform. The 5% HCl matrix gave better sensitivities with a pyrolyzed tube but the two matrices were comparable for atomization from a platform. If Mo and V are to be analyzed with the other seven elements, a high atomization temperature (2700??C or greater) is necessary regardless of the matrix, the measurement mode, the atomization mode, or the atomizer surface. Simultaneous detection limits (peak height with pyrolyzed tube atomization) were comparable to those of conventional atomic absorption spectrometry using electrothermal atomization above 280 nm. Accuracies and precisions of ??10-15% were found in the 10 to 120 ng mL-1 range for the analysis of NBS acidified water standards.
Khalifa, Mounir A.; Alsahn, Mahmoud F.; Shaheen, Mohamed Shafik; Pinero, David P.
2017-01-01
AIM To evaluate and compare the efficacy of the astigmatic correction achieved with laser in situ keratomileusis (LASIK) in eyes with myopic astigmatism using wavefront-guided (WFG) and wavefront-optimized (WFO) ablation profiles. METHODS Prospective study included 221 eyes undergoing LASIK: 99 and 122 eyes with low and moderate myopic astigmatism (low and moderate myopia groups). Two subgroups were differentiated in each group according to the ablation profile: WFG subgroup, 109 eyes (45/64, low/moderate myopia groups) treated using the Advanced CustomVue platform (Abbott Medical Optics Inc.), and WFO subgroup, 112 eyes (54/58, low/moderate myopia groups) treated using the EX-500 platform (Alcon). Clinical outcomes were evaluated during a 6-month follow-up, including a vector analysis of astigmatic changes. RESULTS Significantly better postoperative uncorrected visual acuity and efficacy index was found in the WFG subgroups of each group (P≤0.041). Postoperative spherical equivalent and cylinder were significantly higher in WFO subgroups (P≤0.003). In moderate myopia group, a higher percentage of eyes with a postoperative cylinder ≤0.25 D was found in the WFG subgroup (90.6% vs 65.5%, P=0.002). In low and moderate myopia groups, the difference vector was significantly higher in the WFO subgroup compared to WFG (P<0.001). In moderate myopia group, the magnitude (P=0.008) and angle of error (P<0.001) were also significantly higher in the WFO subgroup. Significantly less induction of high order aberrations were found with WFG treatments in both low and moderate myopia groups (P≤0.006). CONCLUSION A more efficacious correction of myopic astigmatism providing a better visual outcome is achieved with WFG LASIK compared to WFO LASIK. PMID:28251090
NASA Astrophysics Data System (ADS)
Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.
2014-12-01
The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages using spatial queries and calculations in a spatial database resulting in greater accessibility of data for government agencies, industry and private sector individuals. The major objectives of this effort include the 1) Processing and reformulation of current data to be consistent with ESRI and openGIS tools, 2) Develop functions to improve capability and analysis that produce "on-the-fly" data products, extending these past the single location to regional and global scales. 3) Update the current web sites to enable both web-based and mobile application displays for optimization on mobile platforms, 4) Interact with user communities in government and industry to test formats and usage of optimization, and 5) develop a series of metrics that allow for monitoring of progressive performance. Significant project results will include the the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross agency, cross organization, and interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test bed users before making available to the wider public.
Dess, Brian W; Cardarelli, John; Thomas, Mark J; Stapleton, Jeff; Kroutil, Robert T; Miller, David; Curry, Timothy; Small, Gary W
2018-03-08
A generalized methodology was developed for automating the detection of radioisotopes from gamma-ray spectra collected from an aircraft platform using sodium-iodide detectors. Employing data provided by the U.S Environmental Protection Agency Airborne Spectral Photometric Environmental Collection Technology (ASPECT) program, multivariate classification models based on nonparametric linear discriminant analysis were developed for application to spectra that were preprocessed through a combination of altitude-based scaling and digital filtering. Training sets of spectra for use in building classification models were assembled from a combination of background spectra collected in the field and synthesized spectra obtained by superimposing laboratory-collected spectra of target radioisotopes onto field backgrounds. This approach eliminated the need for field experimentation with radioactive sources for use in building classification models. Through a bi-Gaussian modeling procedure, the discriminant scores that served as the outputs from the classification models were related to associated confidence levels. This provided an easily interpreted result regarding the presence or absence of the signature of a specific radioisotope in each collected spectrum. Through the use of this approach, classifiers were built for cesium-137 ( 137 Cs) and cobalt-60 ( 60 Co), two radioisotopes that are of interest in airborne radiological monitoring applications. The optimized classifiers were tested with field data collected from a set of six geographically diverse sites, three of which contained either 137 Cs, 60 Co, or both. When the optimized classification models were applied, the overall percentages of correct classifications for spectra collected at these sites were 99.9 and 97.9% for the 60 Co and 137 Cs classifiers, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rossi, Stefano; Gazzola, Enrico; Capaldo, Pietro; Borile, Giulia; Romanato, Filippo
2018-05-18
Surface Plasmon Resonance (SPR)-based sensors have the advantage of being label-free, enzyme-free and real-time. However, their spreading in multidisciplinary research is still mostly limited to prism-coupled devices. Plasmonic gratings, combined with a simple and cost-effective instrumentation, have been poorly developed compared to prism-coupled system mainly due to their lower sensitivity. Here we describe the optimization and signal enhancement of a sensing platform based on phase-interrogation method, which entails the exploitation of a nanostructured sensor. This technique is particularly suitable for integration of the plasmonic sensor in a lab-on-a-chip platform and can be used in a microfluidic chamber to ease the sensing procedures and limit the injected volume. The careful optimization of most suitable experimental parameters by numerical simulations leads to a 30⁻50% enhancement of SPR response, opening new possibilities for applications in the biomedical research field while maintaining the ease and versatility of the configuration.
Rossi, Stefano; Gazzola, Enrico; Capaldo, Pietro; Borile, Giulia; Romanato, Filippo
2018-01-01
Surface Plasmon Resonance (SPR)-based sensors have the advantage of being label-free, enzyme-free and real-time. However, their spreading in multidisciplinary research is still mostly limited to prism-coupled devices. Plasmonic gratings, combined with a simple and cost-effective instrumentation, have been poorly developed compared to prism-coupled system mainly due to their lower sensitivity. Here we describe the optimization and signal enhancement of a sensing platform based on phase-interrogation method, which entails the exploitation of a nanostructured sensor. This technique is particularly suitable for integration of the plasmonic sensor in a lab-on-a-chip platform and can be used in a microfluidic chamber to ease the sensing procedures and limit the injected volume. The careful optimization of most suitable experimental parameters by numerical simulations leads to a 30–50% enhancement of SPR response, opening new possibilities for applications in the biomedical research field while maintaining the ease and versatility of the configuration. PMID:29783711
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientificmore » study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.« less
NASA Astrophysics Data System (ADS)
Ninos, K.; Georgiadis, P.; Cavouras, D.; Nomicos, C.
2010-05-01
This study presents the design and development of a mobile wireless platform to be used for monitoring and analysis of seismic events and related electromagnetic (EM) signals, employing Personal Digital Assistants (PDAs). A prototype custom-developed application was deployed on a 3G enabled PDA that could connect to the FTP server of the Institute of Geodynamics of the National Observatory of Athens and receive and display EM signals at 4 receiver frequencies (3 KHz (E-W, N-S), 10 KHz (E-W, N-S), 41 MHz and 46 MHz). Signals may originate from any one of the 16 field-stations located around the Greek territory. Employing continuous recordings of EM signals gathered from January 2003 till December 2007, a Support Vector Machines (SVM)-based classification system was designed to distinguish EM precursor signals within noisy background. EM-signals corresponding to recordings preceding major seismic events (Ms≥5R) were segmented, by an experienced scientist, and five features (mean, variance, skewness, kurtosis, and a wavelet based feature), derived from the EM-signals were calculated. These features were used to train the SVM-based classification scheme. The performance of the system was evaluated by the exhaustive search and leave-one-out methods giving 87.2% overall classification accuracy, in correctly identifying EM precursor signals within noisy background employing all calculated features. Due to the insufficient processing power of the PDAs, this task was performed on a typical desktop computer. This optimal trained context of the SVM classifier was then integrated in the PDA based application rendering the platform capable to discriminate between EM precursor signals and noise. System's efficiency was evaluated by an expert who reviewed 1/ multiple EM-signals, up to 18 days prior to corresponding past seismic events, and 2/ the possible EM-activity of a specific region employing the trained SVM classifier. Additionally, the proposed architecture can form a base platform for a future integrated system that will incorporate services such as notifications for field station power failures, disruption of data flow, occurring SEs, and even other types of measurement and analysis processes such as the integration of a special analysis algorithm based on the ratio of short term to long term signal average.
Kuwae, Shinobu; Miyakawa, Ichiko; Doi, Tomohiro
2018-01-11
A chemically defined platform basal medium and feed media were developed using a single Chinese hamster ovary (CHO) cell line that produces a monoclonal antibody (mAb). Cell line A, which showed a peak viable cell density of 5.9 × 10 6 cells/mL and a final mAb titer of 0.5 g/L in batch culture, was selected for the platform media development. Stoichiometrically balanced feed media were developed using glucose as an indicator of cell metabolism to determine the feed rates of all other nutrients. A fed-batch culture of cell line A using the platform fed-batch medium yielded a 6.4 g/L mAb titer, which was 12-fold higher than that of the batch culture. To examine the applicability of the platform basal medium and feed media, three other cell lines (A16, B, and C) that produce mAbs were cultured using the platform fed-batch medium, and they yielded mAb titers of 8.4, 3.3, and 6.2 g/L, respectively. The peak viable cell densities of the three cell lines ranged from 1.3 × 10 7 to 1.8 × 10 7 cells/mL. These results show that the nutritionally balanced fed-batch medium and feeds worked well for other cell lines. During the medium development, we found that choline limitation caused a lower cell viability, a lower mAb titer, a higher mAb aggregate content, and a higher mannose-5 content. The optimal choline chloride to glucose ratio for the CHO cell fed-batch culture was determined. Our platform basal medium and feed media will shorten the medium-development time for mAb-producing cell lines.
Dao, Nhu-Ngoc; Park, Minho; Kim, Joongheon; Cho, Sungrae
2017-01-01
As an important part of IoTization trends, wireless sensing technologies have been involved in many fields of human life. In cellular network evolution, the long term evolution advanced (LTE-A) networks including machine-type communication (MTC) features (named LTE-M) provide a promising infrastructure for a proliferation of Internet of things (IoT) sensing platform. However, LTE-M may not be optimally exploited for directly supporting such low-data-rate devices in terms of energy efficiency since it depends on core technologies of LTE that are originally designed for high-data-rate services. Focusing on this circumstance, we propose a novel adaptive modulation and coding selection (AMCS) algorithm to address the energy consumption problem in the LTE-M based IoT-sensing platform. The proposed algorithm determines the optimal pair of MCS and the number of primary resource blocks (#PRBs), at which the transport block size is sufficient to packetize the sensing data within the minimum transmit power. In addition, a quantity-oriented resource planning (QORP) technique that utilizes these optimal MCS levels as main criteria for spectrum allocation has been proposed for better adapting to the sensing node requirements. The simulation results reveal that the proposed approach significantly reduces the energy consumption of IoT sensing nodes and #PRBs up to 23.09% and 25.98%, respectively.
Dao, Nhu-Ngoc; Park, Minho; Kim, Joongheon
2017-01-01
As an important part of IoTization trends, wireless sensing technologies have been involved in many fields of human life. In cellular network evolution, the long term evolution advanced (LTE-A) networks including machine-type communication (MTC) features (named LTE-M) provide a promising infrastructure for a proliferation of Internet of things (IoT) sensing platform. However, LTE-M may not be optimally exploited for directly supporting such low-data-rate devices in terms of energy efficiency since it depends on core technologies of LTE that are originally designed for high-data-rate services. Focusing on this circumstance, we propose a novel adaptive modulation and coding selection (AMCS) algorithm to address the energy consumption problem in the LTE-M based IoT-sensing platform. The proposed algorithm determines the optimal pair of MCS and the number of primary resource blocks (#PRBs), at which the transport block size is sufficient to packetize the sensing data within the minimum transmit power. In addition, a quantity-oriented resource planning (QORP) technique that utilizes these optimal MCS levels as main criteria for spectrum allocation has been proposed for better adapting to the sensing node requirements. The simulation results reveal that the proposed approach significantly reduces the energy consumption of IoT sensing nodes and #PRBs up to 23.09% and 25.98%, respectively. PMID:28796804
Verification of ANSYS Fluent and OpenFOAM CFD platforms for prediction of impact flow
NASA Astrophysics Data System (ADS)
Tisovská, Petra; Peukert, Pavel; Kolář, Jan
The main goal of the article is a verification of the heat transfer coefficient numerically predicted by two CDF platforms - ANSYS-Fluent and OpenFOAM on the problem of impact flows oncoming from 2D nozzle. Various mesh parameters and solver settings were tested under several boundary conditions and compared to known experimental results. The best solver setting, suitable for further optimization of more complex geometry is evaluated.
NASA Astrophysics Data System (ADS)
Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin
2014-12-01
The paper deals with the main problems of Russian energy system development that proves necessary to provide educational programs in the field of renewable and alternative energy. In the paper the process of curricula development and defining teaching techniques on the basis of expert opinion evaluation is defined, and the competence model for renewable and alternative energy processing master students is suggested. On the basis of a distributed questionnaire and in-depth interviews, the data for statistical analysis was obtained. On the basis of this data, an optimization of curricula structure was performed, and three models of a structure for optimizing teaching techniques were developed. The suggested educational program structure which was adopted by employers is presented in the paper. The findings include quantitatively estimated importance of systemic thinking and professional skills and knowledge as basic competences of a masters' program graduate; statistically estimated necessity of practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings allow the establishment of a platform for the development of educational programs.
Nanomechanical recognition of prognostic biomarker suPAR with DVD-ROM optical technology.
Bache, Michael; Bosco, Filippo G; Brøgger, Anna L; Frøhling, Kasper B; Alstrøm, Tommy Sonne; Hwu, En-Te; Chen, Ching-Hsiu; Eugen-Olsen, Jesper; Hwang, Ing-Shouh; Boisen, Anja
2013-11-08
In this work the use of a high-throughput nanomechanical detection system based on a DVD-ROM optical drive and cantilever sensors is presented for the detection of urokinase plasminogen activator receptor inflammatory biomarker (uPAR). Several large scale studies have linked elevated levels of soluble uPAR (suPAR) to infectious diseases, such as HIV, and certain types of cancer. Using hundreds of cantilevers and a DVD-based platform, cantilever deflection response from antibody-antigen recognition is investigated as a function of suPAR concentration. The goal is to provide a cheap and portable detection platform which can carry valuable prognostic information. In order to optimize the cantilever response the antibody immobilization and unspecific binding are initially characterized using quartz crystal microbalance technology. Also, the choice of antibody is explored in order to generate the largest surface stress on the cantilevers, thus increasing the signal. Using optimized experimental conditions the lowest detectable suPAR concentration is currently around 5 nM. The results reveal promising research strategies for the implementation of specific biochemical assays in a portable and high-throughput microsensor-based detection platform.
He, Jiankang; Du, Yanan; Guo, Yuqi; Hancock, Matthew J.; Wang, Ben; Shin, Hyeongho; Wu, Jinhui; Li, Dichen; Khademhosseini, Ali
2010-01-01
Combinatorial material synthesis is a powerful approach for creating composite material libraries for the high-throughput screening of cell–material interactions. Although current combinatorial screening platforms have been tremendously successful in identifying target (termed “hit”) materials from composite material libraries, new material synthesis approaches are needed to further optimize the concentrations and blending ratios of the component materials. Here we employed a microfluidic platform to rapidly synthesize composite materials containing cross-gradients of gelatin and chitosan for investigating cell–biomaterial interactions. The microfluidic synthesis of the cross-gradient was optimized experimentally and theoretically to produce quantitatively controllable variations in the concentrations and blending ratios of the two components. The anisotropic chemical compositions of the gelatin/chitosan cross-gradients were characterized by Fourier transform infrared spectrometry and X-ray photoelectron spectrometry. The three-dimensional (3D) porous gelatin/chitosan cross-gradient materials were shown to regulate the cellular morphology and proliferation of smooth muscle cells (SMCs) in a gradient-dependent manner. We envision that our microfluidic cross-gradient platform may accelerate the material development processes involved in a wide range of biomedical applications. PMID:20721897
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of Quad/Multi-core micro-processors in mainstream platforms such as desktops and workstations; a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and its service oriented business broker now termed cloud computing could deploy image based virtualization platforms enabling agent based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high performance grid services. Spurious and exponential increases in the size of the datasets are constant concerns in medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not modeled at handing large data sizes under sudden and dynamic changes in the execution environment as previously discussed. This research was undertaken to compare our previous results with running the same test dataset with that of a virtual Grid platform using virtual machines (Virtualization). The implemented architecture, A3pviGrid utilizes game theoretic optimization and agent based team formation (Coalition) algorithms to improve upon scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work) all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform enabling anonymity and better control of the dynamic environmental parameters. We also analyze performance and scalability of Blast in a multiple virtual node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as the virtual box.
Analysis and Optimization of Building Energy Consumption
NASA Astrophysics Data System (ADS)
Chuah, Jun Wei
Energy is one of the most important resources required by modern human society. In 2010, energy expenditures represented 10% of global gross domestic product (GDP). By 2035, global energy consumption is expected to increase by more than 50% from current levels. The increased pace of global energy consumption leads to significant environmental and socioeconomic issues: (i) carbon emissions, from the burning of fossil fuels for energy, contribute to global warming, and (ii) increased energy expenditures lead to reduced standard of living. Efficient use of energy, through energy conservation measures, is an important step toward mitigating these effects. Residential and commercial buildings represent a prime target for energy conservation, comprising 21% of global energy consumption and 40% of the total energy consumption in the United States. This thesis describes techniques for the analysis and optimization of building energy consumption. The thesis focuses on building retrofits and building energy simulation as key areas in building energy optimization and analysis. The thesis first discusses and evaluates building-level renewable energy generation as a solution toward building energy optimization. The thesis next describes a novel heating system, called localized heating. Under localized heating, building occupants are heated individually by directed radiant heaters, resulting in a considerably reduced heated space and significant heating energy savings. To support localized heating, a minimally-intrusive indoor occupant positioning system is described. The thesis then discusses occupant-level sensing (OLS) as the next frontier in building energy optimization. OLS captures the exact environmental conditions faced by each building occupant, using sensors that are carried by all building occupants. The information provided by OLS enables fine-grained optimization for unprecedented levels of energy efficiency and occupant comfort. The thesis also describes a retrofit-oriented building energy simulator, ROBESim, that natively supports building retrofits. ROBESim extends existing building energy simulators by providing a platform for the analysis of novel retrofits, in addition to simulating existing retrofits. Using ROBESim, retrofits can be automatically applied to buildings, obviating the need for users to manually update building characteristics for comparisons between different building retrofits. ROBESim also includes several ease-of-use enhancements to support users of all experience levels.
NASA Astrophysics Data System (ADS)
Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.
2013-12-01
In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed on JUQUEEN with processor counts on the order of 10,000. The instrumentation is used in weak and strong scaling studies with real data cases and hypothetical idealized numerical experiments for detailed profiling and tracing analysis. The profiling is not only useful in identifying wait states that are due to the MPMD execution model, but also in fine-tuning resource allocation to the component models in search of the most suitable load balancing. This is especially necessary, as with numerical experiments that cover multiple (high resolution) spatial scales, the time stepping, coupling frequencies, and communication overheads are constantly shifting, which makes it necessary to re-determine the model setup with each new experimental design.
NASA Astrophysics Data System (ADS)
Chen, Daqiang; Shen, Xiahong; Tong, Bing; Zhu, Xiaoxiao; Feng, Tao
With the increasing competition in logistics industry and promotion of lower logistics costs requirements, the construction of logistics information matching platform for highway transportation plays an important role, and the accuracy of platform design is the key to successful operation or not. Based on survey results of logistics service providers, customers and regulation authorities to access to information and in-depth information demand analysis of logistics information matching platform for highway transportation in Zhejiang province, a survey analysis for framework of logistics information matching platform for highway transportation is provided.
Control and structural optimization for maneuvering large spacecraft
NASA Technical Reports Server (NTRS)
Chun, H. M.; Turner, J. D.; Yu, C. C.
1990-01-01
Presented here are the results of an advanced control design as well as a discussion of the requirements for automating both the structures and control design efforts for maneuvering a large spacecraft. The advanced control application addresses a general three dimensional slewing problem, and is applied to a large geostationary platform. The platform consists of two flexible antennas attached to the ends of a flexible truss. The control strategy involves an open-loop rigid body control profile which is derived from a nonlinear optimal control problem and provides the main control effort. A perturbation feedback control reduces the response due to the flexibility of the structure. Results are shown which demonstrate the usefulness of the approach. Software issues are considered for developing an integrated structures and control design environment.
NASA Astrophysics Data System (ADS)
Wang, Li; Wang, Jun; Bao, Dong; Yang, Rong; Yan, Qing; Gao, Fei; Hua, Dengxin
2018-01-01
All fiber Raman temperature lidar for space borne platform has been proposed for profiling of the temperature with high accuracy. Fiber Bragg grating (FBG) is proposed as the spectroscopic system of Raman lidar because of good wavelength selectivity, high spectral resolution and high out-of-band rejection rate. Two sets of FBGs at visible wavelength 532 nm as Raman spectroscopy system are designed for extracting the rotational Raman spectra of atmospheric molecules, which intensities depend on the atmospheric temperature. The optimization design of the tuning method of an all-fiber rotational Raman spectroscopy system is analyzed and tested for estimating the potential temperature inversion error caused by the instability of FBG. The cantilever structure with temperature control device is designed to realize the tuning and stabilization of the central wavelengths of FBGs. According to numerical calculation of FBG and finite element analysis of the cantilever structure, the center wavelength offset of FBG is 11.03 nm/°C with the temperature change in the spectroscopy system. By experimental observation, the center wavelength offset of surface-bonded FBG is 9.80 nm/°C with temperature changing when subjected to certain strain for the high quantum number channel, while 10.01 nm/°C for the low quantum number channel. The tunable wavelength range of FBG is from 528.707 nm to 529.014 nm for the high quantum number channel and from 530.226 nm to 530.547 nm for the low quantum number channel. The temperature control accuracy of the FBG spectroscopy system is up to 0.03 °C, the corresponding potential atmospheric temperature inversion error is 0.04 K based on the numerical analysis of all-fiber Raman temperature lidar. The fine tuning and stabilization of the FBG wavelength realize the elaborate spectroscope of Raman lidar system. The conclusion is of great significance for the application of FBG spectroscopy system for space-borne platform Raman lidar.
The Convergence of High Performance Computing and Large Scale Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.
2015-12-01
As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
Bacterial cell-free expression technology to in vitro systems engineering and optimization.
Caschera, Filippo
2017-06-01
Cell-free expression system is a technology for the synthesis of proteins in vitro . The system is a platform for several bioengineering projects, e.g. cell-free metabolic engineering, evolutionary design of experiments, and synthetic minimal cell construction. Bacterial cell-free protein synthesis system (CFPS) is a robust tool for synthetic biology. The bacteria lysate, the DNA, and the energy module, which are the three optimized sub-systems for in vitro protein synthesis, compose the integrated system. Currently, an optimized E. coli cell-free expression system can produce up to ∼2.3 mg/mL of a fluorescent reporter protein. Herein, I will describe the features of ATP-regeneration systems for in vitro protein synthesis, and I will present a machine-learning experiment for optimizing the protein yield of E. coli cell-free protein synthesis systems. Moreover, I will introduce experiments on the synthesis of a minimal cell using liposomes as dynamic containers, and E. coli cell-free expression system as biochemical platform for metabolism and gene expression. CFPS can be further integrated with other technologies for novel applications in environmental, medical and material science.
Deployable wavelength optimizer for multi-laser sensing and communication undersea
NASA Astrophysics Data System (ADS)
Neuner, Burton; Hening, Alexandru; Pascoguin, B. Melvin; Dick, Brian; Miller, Martin; Tran, Nghia; Pfetsch, Michael
2017-05-01
This effort develops and tests algorithms and a user-portable optical system designed to autonomously optimize the laser communication wavelength in open and coastal oceans. In situ optical meteorology and oceanography (METOC) data gathered and analyzed as part of the auto-selection process can be stored and forwarded. The system performs closedloop optimization of three visible-band lasers within one minute by probing the water column via passive retroreflector and polarization optics, selecting the ideal wavelength, and enabling high-speed communication. Backscattered and stray light is selectively blocked by employing polarizers and wave plates, thus increasing the signal-to-noise ratio. As an advancement in instrumentation, we present autonomy software and portable hardware, and demonstrate this new system in two environments: ocean bay seawater and outdoor test pool freshwater. The next generation design is also presented. Once fully miniaturized, the optical payload and software will be ready for deployment on manned and unmanned platforms such as buoys and vehicles. Gathering timely and accurate ocean sensing data in situ will dramatically increase the knowledge base and capabilities for environmental sensing, defense, and industrial applications. Furthermore, communicating on the optimal channel increases transfer rates, propagation range, and mission length, all while reducing power consumption in undersea platforms.
Pain-Relieving Interventions for Retinopathy of Prematurity: A Meta-analysis.
Disher, Timothy; Cameron, Chris; Mitra, Souvik; Cathcart, Kelcey; Campbell-Yeo, Marsha
2018-06-01
Retinopathy of prematurity eye examinations conducted in the neonatal intensive care. To combine randomized trials of pain-relieving interventions for retinopathy of prematurity examinations using network meta-analysis. Systematic review and network meta-analysis of Medline, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and the World Health Organization International Clinical Trials Registry Platform. All databases were searched from inception to February 2017. Abstract and title screen and full-text screening were conducted independently by 2 reviewers. Data were extracted by 2 reviewers and pooled with random effect models if the number of trials within a comparison was sufficient. The primary outcome was pain during the examination period; secondary outcomes were pain after the examination, physiologic response, and adverse events. Twenty-nine studies ( N = 1487) were included. Topical anesthetic (TA) combined with sweet taste and an adjunct intervention (eg, nonnutritive sucking) had the highest probability of being the optimal treatment (mean difference [95% credible interval] versus TA alone = -3.67 [-5.86 to -1.47]; surface under the cumulative ranking curve = 0.86). Secondary outcomes were sparsely reported (2-4 studies, N = 90-248) but supported sweet-tasting solutions with or without adjunct interventions as optimal. Limitations included moderate heterogeneity in pain assessment reactivity phase and severe heterogeneity in the regulation phase. Multisensory interventions including sweet taste is likely the optimal treatment for reducing pain resulting from eye examinations in preterm infants. No interventions were effective in absolute terms. Copyright © 2018 by the American Academy of Pediatrics.
NASA Astrophysics Data System (ADS)
Kafle, Amol; Coy, Stephen L.; Wong, Bryan M.; Fornace, Albert J.; Glick, James J.; Vouros, Paul
2014-07-01
A systematic study involving the use and optimization of gas-phase modifiers in quantitative differential mobility-mass spectrometry (DMS-MS) analysis is presented using nucleoside-adduct biomarkers of DNA damage as an important reference point for analysis in complex matrices. Commonly used polar protic and polar aprotic modifiers have been screened for use against two deoxyguanosine adducts of DNA: N-(deoxyguanosin-8-yl)-4-aminobiphenyl (dG-C8-4-ABP) and N-(deoxyguanosin-8-y1)-2-amino-l-methyl-6-phenylimidazo[4,5-b]pyridine (dG-C8-PhIP). Particular attention was paid to compensation voltage (CoV) shifts, peak shapes, and product ion signal intensities while optimizing the DMS-MS conditions. The optimized parameters were then applied to rapid quantitation of the DNA adducts in calf thymus DNA. After a protein precipitation step, adduct levels corresponding to less than one modification in 106 normal DNA bases were detected using the DMS-MS platform. Based on DMS fundamentals and ab initio thermochemical results, we interpret the complexity of DMS modifier responses in terms of thermal activation and the development of solvent shells. At very high bulk gas temperature, modifier dipole moment may be the most important factor in cluster formation and cluster geometry, but at lower temperatures, multi-neutral clusters are important and less predictable. This work provides a useful protocol for targeted DNA adduct quantitation and a basis for future work on DMS modifier effects.
Kafle, Amol; Coy, Stephen L.; Wong, Bryan M.; Fornace, Albert J.; Glick, James J.; Vouros, Paul
2014-01-01
A systematic study involving the use and optimization of gas phase modifiers in quantitative differential mobility- mass spectrometry (DMS-MS) analysis is presented using mucleoside-adduct biomarkers of DNA damage as an important reference point for analysis in complex matrices. Commonly used polar protic and polar aprotic modifiers have been screened for use against two deoxyguanosine adducts of DNA: N-(deoxyguanosin-8-yl)-4-aminobiphenyl (dG-C8-4-ABP) and N-(deoxyguanosin-8-y1)-2-amino-l-methyl-6-phenylimidazo[4,5-b]pyridine (dG-C8-PhIP). Particular attention was paid to compensation voltage (CoV) shifts, peak shapes and product ion signal intensities while optimizing the DMS-MS conditions. The optimized parameters were then applied to rapid quantitation of the DNA adducts in calf thymus DNA. After a protein precipitation step, adduct levels corresponding to less than one modification in 106 normal DNA bases were detected using the DMS-MS platform. Based on DMS fundamentals and ab-initio thermochemical results we interpret the complexity of DMS modifier responses in terms of thermal activation and the development of solvent shells. At very high bulk gas temperature, modifier dipole moment may be the most important factor in cluster formation and cluster geometry in mobility differences, but at lower temperatures multi-neutral clusters are important and less predictable. This work provides a useful protocol for targeted DNA adduct quantitation and a basis for future work on DMS modifier effects. PMID:24452298
Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2015-09-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long term solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rulebased systems tools provide valuable insight for global climate architectures, by comparison/evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECV (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper will focus on recent research exploring joint requirement implications of the high profile NPOESS architecture and extends the research and tools to optimization for a climate centric case study. This reflects work from SPIE RS Conferences 2013 and 2014, abridged for simplification30, 32. First, the heavily securitized NPOESS architecture; inspired the recent research question - was Complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions into a single platform. Now years later a complete reversal; should agencies considering Disaggregation as the answer. We'll discuss what some academic research suggests. Second, using the GCOS requirements of earth climate observations via ECV (essential climate variables) many collected from space-based sensors; and accepting their definitions of global coverages intended to ensure the needs of major global and international organizations (UNFCCC and IPCC) are met as a core objective. Consider how new optimization tools like rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations? What would the trade space of optimized operational climate monitoring architectures of ECV look like? Third, using the RBES tool kit (2014) demonstrate with application to a climate centric rule-based decision engine - optimizing architectural trades of earth observation satellite systems, allowing comparison(s) to existing architectures and gaining insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study - utilizing for example 12 climate based instruments on multiple existing platforms and nominal handful of orbits; for best cost and performance benefits against the collection requirements of representative set of ECV. How much effort and resources would an organization expect to invest to realize these analysis and utility benefits?
CompatPM: enabling energy efficient multimedia workloads for distributed mobile platforms
NASA Astrophysics Data System (ADS)
Nathuji, Ripal; O'Hara, Keith J.; Schwan, Karsten; Balch, Tucker
2007-01-01
The computation and communication abilities of modern platforms are enabling increasingly capable cooperative distributed mobile systems. An example is distributed multimedia processing of sensor data in robots deployed for search and rescue, where a system manager can exploit the application's cooperative nature to optimize the distribution of roles and tasks in order to successfully accomplish the mission. Because of limited battery capacities, a critical task a manager must perform is online energy management. While support for power management has become common for the components that populate mobile platforms, what is lacking is integration and explicit coordination across the different management actions performed in a variety of system layers. This papers develops an integration approach for distributed multimedia applications, where a global manager specifies both a power operating point and a workload for a node to execute. Surprisingly, when jointly considering power and QoS, experimental evaluations show that using a simple deadline-driven approach to assigning frequencies can be non-optimal. These trends are further affected by certain characteristics of underlying power management mechanisms, which in our research, are identified as groupings that classify component power management as "compatible" (VFC) or "incompatible" (VFI) with voltage and frequency scaling. We build on these findings to develop CompatPM, a vertically integrated control strategy for power management in distributed mobile systems. Experimental evaluations of CompatPM indicate average energy improvements of 8% when platform resources are managed jointly rather than independently, demonstrating that previous attempts to maximize battery life by simply minimizing frequency are inappropriate from a platform-level perspective.
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.
An Advanced Platform for Biomolecular Detection and Analysis Systems
2005-02-01
AFRL-IF-RS-TR-2005-54 Final Technical Report February 2005 AN ADVANCED PLATFORM FOR BIOMOLECULAR DETECTION AND ANALYSIS SYSTEMS...SUBTITLE AN ADVANCED PLATFORM FOR BIOMOLECULAR DETECTION AND ANALYSIS SYSTEMS 6. AUTHOR(S) David J. Beebe 5. FUNDING NUMBERS G...detection, analysis and response as well as many non BC warfare applications such as environmental toxicology, clinical detection and diagnosis
A three-dimensional inverse finite element analysis of the heel pad.
Chokhandre, Snehal; Halloran, Jason P; van den Bogert, Antonie J; Erdemir, Ahmet
2012-03-01
Quantification of plantar tissue behavior of the heel pad is essential in developing computational models for predictive analysis of preventive treatment options such as footwear for patients with diabetes. Simulation based studies in the past have generally adopted heel pad properties from the literature, in return using heel-specific geometry with material properties of a different heel. In exceptional cases, patient-specific material characterization was performed with simplified two-dimensional models, without further evaluation of a heel-specific response under different loading conditions. The aim of this study was to conduct an inverse finite element analysis of the heel in order to calculate heel-specific material properties in situ. Multidimensional experimental data available from a previous cadaver study by Erdemir et al. ("An Elaborate Data Set Characterizing the Mechanical Response of the Foot," ASME J. Biomech. Eng., 131(9), pp. 094502) was used for model development, optimization, and evaluation of material properties. A specimen-specific three-dimensional finite element representation was developed. Heel pad material properties were determined using inverse finite element analysis by fitting the model behavior to the experimental data. Compression dominant loading, applied using a spherical indenter, was used for optimization of the material properties. The optimized material properties were evaluated through simulations representative of a combined loading scenario (compression and anterior-posterior shear) with a spherical indenter and also of a compression dominant loading applied using an elevated platform. Optimized heel pad material coefficients were 0.001084 MPa (μ), 9.780 (α) (with an effective Poisson's ratio (ν) of 0.475), for a first-order nearly incompressible Ogden material model. The model predicted structural response of the heel pad was in good agreement for both the optimization (<1.05% maximum tool force, 0.9% maximum tool displacement) and validation cases (6.5% maximum tool force, 15% maximum tool displacement). The inverse analysis successfully predicted the material properties for the given specimen-specific heel pad using the experimental data for the specimen. The modeling framework and results can be used for accurate predictions of the three-dimensional interaction of the heel pad with its surroundings.
Engelmann, Brett W
2017-01-01
The Src Homology 2 (SH2) domain family primarily recognizes phosphorylated tyrosine (pY) containing peptide motifs. The relative affinity preferences among competing SH2 domains for phosphopeptide ligands define "specificity space," and underpins many functional pY mediated interactions within signaling networks. The degree of promiscuity exhibited and the dynamic range of affinities supported by individual domains or phosphopeptides is best resolved by a carefully executed and controlled quantitative high-throughput experiment. Here, I describe the fabrication and application of a cellulose-peptide conjugate microarray (CPCMA) platform to the quantitative analysis of SH2 domain specificity space. Included herein are instructions for optimal experimental design with special attention paid to common sources of systematic error, phosphopeptide SPOT synthesis, microarray fabrication, analyte titrations, data capture, and analysis.
[Problem based learning by distance education and analysis of a training system].
Dury, Cécile
2004-12-01
This article presents and analyses a training system aiming at acquiring skills in nursing cares. The aims followed are the development: --of an active pedagogic method: learning through problems (LTP); --of the interdisciplinary and intercultural approach, the same problems being solves by students from different disciplines and cultures; --of the use of the new technologies of information and communication (NTIC) so as to enable a maximal "distance" cooperation between the various partners of the project. The analysis of the system shows that the pedagogic aims followed by LTP are reached. The pluridisciplinary and pluricultural approach, to be optimal, requires great coordination between the partners, balance between the groups of students from different countries and disciplines, training and support from the tutors in the use of the distance teaching platform.
BnmrOffice: A Free Software for β-nmr Data Analysis
NASA Astrophysics Data System (ADS)
Saadaoui, Hassan
A data-analysis framework with a graphical user interface (GUI) is developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice is written in C++ and employs the QT libraries and tools for designing the GUI, and the CERN's Minuit optimization routines for minimization. The program runs under multiple platforms, and is available for free under the terms of the GNU GPL standards. The GUI is structured in tabs to search, plot and analyze data, along other functionalities. The user can tweak the minimization options; and fit multiple data files (or runs) using single or global fitting routines with pre-defined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.
Design and analysis of control system for VCSEL of atomic interference magnetometer
NASA Astrophysics Data System (ADS)
Zhang, Xiao-nan; Sun, Xiao-jie; Kou, Jun; Yang, Feng; Li, Jie; Ren, Zhang; Wei, Zong-kang
2016-11-01
Magnetic field detection is an important means of deep space environment exploration. Benefit from simple structure and low power consumption, atomic interference magnetometer become one of the most potential detector payloads. Vertical Cavity Surface Emitting Laser (VCSEL) is usually used as a light source in atomic interference magnetometer and its frequency stability directly affects the stability and sensitivity of magnetometer. In this paper, closed-loop control strategy of VCSEL was designed and analysis, the controller parameters were selected and the feedback error algorithm was optimized as well. According to the results of experiments that were performed on the hardware-in-the-loop simulation platform, the designed closed-loop control system is reasonable and it is able to effectively improve the laser frequency stability during the actual work of the magnetometer.
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-01-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies. PMID:19137113
Multi-wavelength differential absorption measurements of chemical species
NASA Astrophysics Data System (ADS)
Brown, David M.
The probability of accurate detection and quantification of airborne species is enhanced when several optical wavelengths are used to measure the differential absorption of molecular spectral features. Characterization of minor atmospheric constituents, biological hazards, and chemical plumes containing multiple species is difficult when using current approaches because of weak signatures and the use of a limited number of wavelengths used for identification. Current broadband systems such as Differential Optical Absorption Spectroscopy (DOAS) have either limitations for long-range propagation, or require transmitter power levels that are unsafe for operation in urban environments. Passive hyperspectral imaging systems that utilize absorption of solar scatter at visible and infrared wavelengths, or use absorption of background thermal emission, have been employed routinely for detection of airborne chemical species. Passive approaches have operational limitations at various ranges, or under adverse atmospheric conditions because the source intensity and spectrum is often an unknown variable. The work presented here describes a measurement approach that uses a known source of a low transmitted power level for an active system, while retaining the benefits of broadband and extremely long-path absorption operations. An optimized passive imaging system also is described that operates in the 3 to 4 mum window of the mid-infrared. Such active and passive instruments can be configured to optimize the detection of several hydrocarbon gases, as well as many other species of interest. Measurements have provided the incentive to develop algorithms for the calculations of atmospheric species concentrations using multiple wavelengths. These algorithms are used to prepare simulations and make comparisons with experimental results from absorption data of a supercontinuum laser source. The MODTRAN model is used in preparing the simulations, and also in developing additional algorithms to select filters for use with a MWIR (midwave infrared) imager for detection of plumes of methane, propane, gasoline vapor, and diesel vapor. These simulations were prepared for system designs operating on a down-looking airborne platform. A data analysis algorithm for use with a hydrocarbon imaging system extracts regions of interest from the field-of-view for further analysis. An error analysis is presented for a scanning DAS (Differential Absorption Spectroscopy) lidar system operating from an airborne platform that uses signals scattered from topographical targets. The analysis is built into a simulation program for testing real-time data processing approaches, and to gauge the effects on measurements of path column concentration due to ground reflectivity variations. An example simulation provides a description of the data expected for methane. Several accomplishments of this research include: (1) A new lidar technique for detection and measurement of concentrations of atmospheric species is demonstrated that uses a low-power supercontinuum source. (2) A new multi-wavelength algorithm, which demonstrates excellent performance, is applied to processing spectroscopic data collected by a longpath supercontinuum laser absorption instrument. (3) A simulation program for topographical scattering of a scanning DAS system is developed, and it is validated with aircraft data from the ITT Industries ANGEL (Airborne Natural Gas Emission Lidar) 3-lambda lidar system. (4) An error analysis procedure for DAS is developed, and is applied to measurements and simulations for an airborne platform. (5) A method for filter selection is developed and tested for use with an infrared imager that optimizes the detection for various hydrocarbons that absorb in the midwave infrared. (6) The development of a Fourier analysis algorithm is described that allows a user to rapidly separate hydrocarbon plumes from the background features in the field of view of an imaging system.
GeNets: a unified web platform for network-based genomic analyses.
Li, Taibo; Kim, April; Rosenbluh, Joseph; Horn, Heiko; Greenfeld, Liraz; An, David; Zimmer, Andrew; Liberzon, Arthur; Bistline, Jon; Natoli, Ted; Li, Yang; Tsherniak, Aviad; Narayan, Rajiv; Subramanian, Aravind; Liefeld, Ted; Wong, Bang; Thompson, Dawn; Calvo, Sarah; Carr, Steve; Boehm, Jesse; Jaffe, Jake; Mesirov, Jill; Hacohen, Nir; Regev, Aviv; Lage, Kasper
2018-06-18
Functional genomics networks are widely used to identify unexpected pathway relationships in large genomic datasets. However, it is challenging to compare the signal-to-noise ratios of different networks and to identify the optimal network with which to interpret a particular genetic dataset. We present GeNets, a platform in which users can train a machine-learning model (Quack) to carry out these comparisons and execute, store, and share analyses of genetic and RNA-sequencing datasets.
Optimizing Automatic Deployment Using Non-functional Requirement Annotations
NASA Astrophysics Data System (ADS)
Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin
Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.
A translational platform for prototyping closed-loop neuromodulation systems
Afshar, Pedram; Khambhati, Ankit; Stanslaski, Scott; Carlson, David; Jensen, Randy; Linde, Dave; Dani, Siddharth; Lazarewicz, Maciej; Cong, Peng; Giftakis, Jon; Stypulkowski, Paul; Denison, Tim
2013-01-01
While modulating neural activity through stimulation is an effective treatment for neurological diseases such as Parkinson's disease and essential tremor, an opportunity for improving neuromodulation therapy remains in automatically adjusting therapy to continuously optimize patient outcomes. Practical issues associated with achieving this include the paucity of human data related to disease states, poorly validated estimators of patient state, and unknown dynamic mappings of optimal stimulation parameters based on estimated states. To overcome these challenges, we present an investigational platform including: an implanted sensing and stimulation device to collect data and run automated closed-loop algorithms; an external tool to prototype classifier and control-policy algorithms; and real-time telemetry to update the implanted device firmware and monitor its state. The prototyping system was demonstrated in a chronic large animal model studying hippocampal dynamics. We used the platform to find biomarkers of the observed states and transfer functions of different stimulation amplitudes. Data showed that moderate levels of stimulation suppress hippocampal beta activity, while high levels of stimulation produce seizure-like after-discharge activity. The biomarker and transfer function observations were mapped into classifier and control-policy algorithms, which were downloaded to the implanted device to continuously titrate stimulation amplitude for the desired network effect. The platform is designed to be a flexible prototyping tool and could be used to develop improved mechanistic models and automated closed-loop systems for a variety of neurological disorders. PMID:23346048
A translational platform for prototyping closed-loop neuromodulation systems.
Afshar, Pedram; Khambhati, Ankit; Stanslaski, Scott; Carlson, David; Jensen, Randy; Linde, Dave; Dani, Siddharth; Lazarewicz, Maciej; Cong, Peng; Giftakis, Jon; Stypulkowski, Paul; Denison, Tim
2012-01-01
While modulating neural activity through stimulation is an effective treatment for neurological diseases such as Parkinson's disease and essential tremor, an opportunity for improving neuromodulation therapy remains in automatically adjusting therapy to continuously optimize patient outcomes. Practical issues associated with achieving this include the paucity of human data related to disease states, poorly validated estimators of patient state, and unknown dynamic mappings of optimal stimulation parameters based on estimated states. To overcome these challenges, we present an investigational platform including: an implanted sensing and stimulation device to collect data and run automated closed-loop algorithms; an external tool to prototype classifier and control-policy algorithms; and real-time telemetry to update the implanted device firmware and monitor its state. The prototyping system was demonstrated in a chronic large animal model studying hippocampal dynamics. We used the platform to find biomarkers of the observed states and transfer functions of different stimulation amplitudes. Data showed that moderate levels of stimulation suppress hippocampal beta activity, while high levels of stimulation produce seizure-like after-discharge activity. The biomarker and transfer function observations were mapped into classifier and control-policy algorithms, which were downloaded to the implanted device to continuously titrate stimulation amplitude for the desired network effect. The platform is designed to be a flexible prototyping tool and could be used to develop improved mechanistic models and automated closed-loop systems for a variety of neurological disorders.
C-SPECT - a Clinical Cardiac SPECT/Tct Platform: Design Concepts and Performance Potential
Chang, Wei; Ordonez, Caesar E.; Liang, Haoning; Li, Yusheng; Liu, Jingai
2013-01-01
Because of scarcity of photons emitted from the heart, clinical cardiac SPECT imaging is mainly limited by photon statistics. The sub-optimal detection efficiency of current SPECT systems not only limits the quality of clinical cardiac SPECT imaging but also makes more advanced potential applications difficult to be realized. We propose a high-performance system platform - C-SPECT, which has its sampling geometry optimized for detection of emitted photons in quality and quantity. The C-SPECT has a stationary C-shaped gantry that surrounds the left-front side of a patient’s thorax. The stationary C-shaped collimator and detector systems in the gantry provide effective and efficient detection and sampling of photon emission. For cardiac imaging, the C-SPECT platform could achieve 2 to 4 times the system geometric efficiency of conventional SPECT systems at the same sampling resolution. This platform also includes an integrated transmission CT for attenuation correction. The ability of C-SPECT systems to perform sequential high-quality emission and transmission imaging could bring cost-effective high-performance to clinical imaging. In addition, a C-SPECT system could provide high detection efficiency to accommodate fast acquisition rate for gated and dynamic cardiac imaging. This paper describes the design concepts and performance potential of C-SPECT, and illustrates how these concepts can be implemented in a basic system. PMID:23885129
TNA4OptFlux – a software tool for the analysis of strain optimization strategies
2013-01-01
Background Rational approaches for Metabolic Engineering (ME) deal with the identification of modifications that improve the microbes’ production capabilities of target compounds. One of the major challenges created by strain optimization algorithms used in these ME problems is the interpretation of the changes that lead to a given overproduction. Often, a single gene knockout induces changes in the fluxes of several reactions, as compared with the wild-type, and it is therefore difficult to evaluate the physiological differences of the in silico mutant. This is aggravated by the fact that genome-scale models per se are difficult to visualize, given the high number of reactions and metabolites involved. Findings We introduce a software tool, the Topological Network Analysis for OptFlux (TNA4OptFlux), a plug-in which adds to the open-source ME platform OptFlux the capability of creating and performing topological analysis over metabolic networks. One of the tool’s major advantages is the possibility of using these tools in the analysis and comparison of simulated phenotypes, namely those coming from the results of strain optimization algorithms. We illustrate the capabilities of the tool by using it to aid the interpretation of two E. coli strains designed in OptFlux for the overproduction of succinate and glycine. Conclusions Besides adding new functionalities to the OptFlux software tool regarding topological analysis, TNA4OptFlux methods greatly facilitate the interpretation of non-intuitive ME strategies by automating the comparison between perturbed and non-perturbed metabolic networks. The plug-in is available on the web site http://www.optflux.org, together with extensive documentation. PMID:23641878
Low-cost bioanalysis on paper-based and its hybrid microfluidic platforms.
Dou, Maowei; Sanjay, Sharma Timilsina; Benhabib, Merwan; Xu, Feng; Li, XiuJun
2015-12-01
Low-cost assays have broad applications ranging from human health diagnostics and food safety inspection to environmental analysis. Hence, low-cost assays are especially attractive for rural areas and developing countries, where financial resources are limited. Recently, paper-based microfluidic devices have emerged as a low-cost platform which greatly accelerates the point of care (POC) analysis in low-resource settings. This paper reviews recent advances of low-cost bioanalysis on paper-based microfluidic platforms, including fully paper-based and paper hybrid microfluidic platforms. In this review paper, we first summarized the fabrication techniques of fully paper-based microfluidic platforms, followed with their applications in human health diagnostics and food safety analysis. Then we highlighted paper hybrid microfluidic platforms and their applications, because hybrid platforms could draw benefits from multiple device substrates. Finally, we discussed the current limitations and perspective trends of paper-based microfluidic platforms for low-cost assays. Copyright © 2015 Elsevier B.V. All rights reserved.
Küppers, Tobias; Steffen, Victoria; Hellmuth, Hendrik; O'Connell, Timothy; Bongaerts, Johannes; Maurer, Karl-Heinz; Wiechert, Wolfgang
2014-03-24
Since volatile and rising cost factors such as energy, raw materials and market competitiveness have a significant impact on the economic efficiency of biotechnological bulk productions, industrial processes need to be steadily improved and optimized. Thereby the current production hosts can undergo various limitations. To overcome those limitations and in addition increase the diversity of available production hosts for future applications, we suggest a Production Strain Blueprinting (PSB) strategy to develop new production systems in a reduced time lapse in contrast to a development from scratch.To demonstrate this approach, Bacillus pumilus has been developed as an alternative expression platform for the production of alkaline enzymes in reference to the established industrial production host Bacillus licheniformis. To develop the selected B. pumilus as an alternative production host the suggested PSB strategy was applied proceeding in the following steps (dedicated product titers are scaled to the protease titer of Henkel's industrial production strain B. licheniformis at lab scale): Introduction of a protease production plasmid, adaptation of a protease production process (44%), process optimization (92%) and expression optimization (114%). To further evaluate the production capability of the developed B. pumilus platform, the target protease was substituted by an α-amylase. The expression performance was tested under the previously optimized protease process conditions and under subsequently adapted process conditions resulting in a maximum product titer of 65% in reference to B. licheniformis protease titer. In this contribution the applied PSB strategy performed very well for the development of B. pumilus as an alternative production strain. Thereby the engineered B. pumilus expression platform even exceeded the protease titer of the industrial production host B. licheniformis by 14%. This result exhibits a remarkable potential of B. pumilus to be the basis for a next generation production host, since the strain has still a large potential for further genetic engineering. The final amylase titer of 65% in reference to B. licheniformis protease titer suggests that the developed B. pumilus expression platform is also suitable for an efficient production of non-proteolytic enzymes reaching a final titer of several grams per liter without complex process modifications.
Optimizing Flight Control Software With an Application Platform
NASA Technical Reports Server (NTRS)
Smith, Irene Skupniewicz; Shi, Nija; Webster, Christopher
2012-01-01
Flight controllers in NASA s mission control centers work day and night to ensure that missions succeed and crews are safe. The IT goals of NASA mission control centers are similar to those of most businesses: to evolve IT infrastructure from basic to dynamic. This paper describes Mission Control Technologies (MCT), an application platform that is powering mission control today and is designed to meet the needs of future NASA control centers. MCT is an extensible platform that provides GUI components and a runtime environment. The platform enables NASA s IT goals through its use of lightweight interfaces and configurable components, which promote standardization and incorporate useful solution patterns. The MCT architecture positions mission control centers to reach the goal of dynamic IT, leading to lower cost of ownership, and treating software as a strategic investment.
Low-loss compact multilayer silicon nitride platform for 3D photonic integrated circuits.
Shang, Kuanping; Pathak, Shibnath; Guan, Binbin; Liu, Guangyao; Yoo, S J B
2015-08-10
We design, fabricate, and demonstrate a silicon nitride (Si(3)N(4)) multilayer platform optimized for low-loss and compact multilayer photonic integrated circuits. The designed platform, with 200 nm thick waveguide core and 700 nm interlayer gap, is compatible for active thermal tuning and applicable to realizing compact photonic devices such as arrayed waveguide gratings (AWGs). We achieve ultra-low loss vertical couplers with 0.01 dB coupling loss, multilayer crossing loss of 0.167 dB at 90° crossing angle, 50 μm bending radius, 100 × 2 μm(2) footprint, lateral misalignment tolerance up to 400 nm, and less than -52 dB interlayer crosstalk at 1550 nm wavelength. Based on the designed platform, we demonstrate a 27 × 32 × 2 multilayer star coupler.
Myzithras, Maria; Li, Hua; Bigwarfe, Tammy; Waltz, Erica; Gupta, Priyanka; Low, Sarah; Hayes, David B; MacDonnell, Scott; Ahlberg, Jennifer; Franti, Michael; Roberts, Simon
2016-03-01
Four bioanalytical platforms were evaluated to optimize sensitivity and enable detection of recombinant human GDF11 in biological matrices; ELISA, Meso Scale Discovery, Gyrolab xP Workstation and Simoa HD-1. Results & methodology: After completion of custom assay development, the single-molecule ELISA (Simoa) achieved the greatest sensitivity with a lower limit of quantitation of 0.1 ng/ml, an improvement of 100-fold over the next sensitive platform (MSD). This improvement was essential to enable detection of GDF11 in biological samples, and without the technology the sensitivity achieved on the other platforms would not have been sufficient. Other factors such as ease of use, cost, assay time and automation capability can also be considered when developing custom immunoassays, based on the requirements of the bioanalyst.
Development of a Web-Enabled Informatics Platform for Manipulation of Gene Expression Data
2004-12-01
genomic platforms such as metabolomics and proteomics , and to federated databases for knowledge management. A successful SBIR Phase I completed...measurements that require sophisticated bioinformatic platforms for data archival, management, integration, and analysis if researchers are to derive...web-enabled bioinformatic platform consisting of a Laboratory Information Management System (LIMS), an Analysis Information Management System (AIMS
Fang, Xiang; Li, Ning-qiu; Fu, Xiao-zhe; Li, Kai-bin; Lin, Qiang; Liu, Li-hui; Shi, Cun-bin; Wu, Shu-qin
2015-07-01
As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on out membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogen has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platform for other subjects.
A novel real time imaging platform to quantify macrophage phagocytosis.
Kapellos, Theodore S; Taylor, Lewis; Lee, Heyne; Cowley, Sally A; James, William S; Iqbal, Asif J; Greaves, David R
2016-09-15
Phagocytosis of pathogens, apoptotic cells and debris is a key feature of macrophage function in host defense and tissue homeostasis. Quantification of macrophage phagocytosis in vitro has traditionally been technically challenging. Here we report the optimization and validation of the IncuCyte ZOOM® real time imaging platform for macrophage phagocytosis based on pHrodo® pathogen bioparticles, which only fluoresce when localized in the acidic environment of the phagolysosome. Image analysis and fluorescence quantification were performed with the automated IncuCyte™ Basic Software. Titration of the bioparticle number showed that the system is more sensitive than a spectrofluorometer, as it can detect phagocytosis when using 20× less E. coli bioparticles. We exemplified the power of this real time imaging platform by studying phagocytosis of murine alveolar, bone marrow and peritoneal macrophages. We further demonstrate the ability of this platform to study modulation of the phagocytic process, as pharmacological inhibitors of phagocytosis suppressed bioparticle uptake in a concentration-dependent manner, whereas opsonins augmented phagocytosis. We also investigated the effects of macrophage polarization on E. coli phagocytosis. Bone marrow-derived macrophage (BMDM) priming with M2 stimuli, such as IL-4 and IL-10 resulted in higher engulfment of bioparticles in comparison with M1 polarization. Moreover, we demonstrated that tolerization of BMDMs with lipopolysaccharide (LPS) results in impaired E. coli bioparticle phagocytosis. This novel real time assay will enable researchers to quantify macrophage phagocytosis with a higher degree of accuracy and sensitivity and will allow investigation of limited populations of primary phagocytes in vitro. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is development of automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after events evaluation in real time. Research work formalizes a notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geo-spatial data under popular usage platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize sensor web, dynamically and in real time for smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms utilizing sensor web and spatial information achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by Open Geospatial Consortium (OGC) keeping entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, that bind most of the functionalities of Internet, sensor web and nowadays Internet of Things superseding Internet of Sensors as well. In a nutshell, the project delivers a generalized real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Bacterial fermentation platform for producing artificial aromatic amines
Masuo, Shunsuke; Zhou, Shengmin; Kaneko, Tatsuo; Takaya, Naoki
2016-01-01
Aromatic amines containing an aminobenzene or an aniline moiety comprise versatile natural and artificial compounds including bioactive molecules and resources for advanced materials. However, a bio-production platform has not been implemented. Here we constructed a bacterial platform for para-substituted aminobenzene relatives of aromatic amines via enzymes in an alternate shikimate pathway predicted in a Pseudomonad bacterium. Optimization of the metabolic pathway in Escherichia coli cells converted biomass glucose to 4-aminophenylalanine with high efficiency (4.4 g L−1 in fed-batch cultivation). We designed and produced artificial pathways that mimicked the fungal Ehrlich pathway in E. coli and converted 4-aminophenylalanine into 4-aminophenylethanol and 4-aminophenylacetate at 90% molar yields. Combining these conversion systems or fungal phenylalanine decarboxylases, the 4-aminophenylalanine-producing platform fermented glucose to 4-aminophenylethanol, 4-aminophenylacetate, and 4-phenylethylamine. This original bacterial platform for producing artificial aromatic amines highlights their potential as heteroatoms containing bio-based materials that can replace those derived from petroleum. PMID:27167511
LXtoo: an integrated live Linux distribution for the bioinformatics community
2012-01-01
Background Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356
LXtoo: an integrated live Linux distribution for the bioinformatics community.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
2012-07-19
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Bidard, Frédérique; Imbeaud, Sandrine; Reymond, Nancie; Lespinet, Olivier; Silar, Philippe; Clavé, Corinne; Delacroix, Hervé; Berteaux-Lecellier, Véronique; Debuchy, Robert
2010-06-18
The development of new microarray technologies makes custom long oligonucleotide arrays affordable for many experimental applications, notably gene expression analyses. Reliable results depend on probe design quality and selection. Probe design strategy should cope with the limited accuracy of de novo gene prediction programs, and annotation up-dating. We present a novel in silico procedure which addresses these issues and includes experimental screening, as an empirical approach is the best strategy to identify optimal probes in the in silico outcome. We used four criteria for in silico probe selection: cross-hybridization, hairpin stability, probe location relative to coding sequence end and intron position. This latter criterion is critical when exon-intron gene structure predictions for intron-rich genes are inaccurate. For each coding sequence (CDS), we selected a sub-set of four probes. These probes were included in a test microarray, which was used to evaluate the hybridization behavior of each probe. The best probe for each CDS was selected according to three experimental criteria: signal-to-noise ratio, signal reproducibility, and representative signal intensities. This procedure was applied for the development of a gene expression Agilent platform for the filamentous fungus Podospora anserina and the selection of a single 60-mer probe for each of the 10,556 P. anserina CDS. A reliable gene expression microarray version based on the Agilent 44K platform was developed with four spot replicates of each probe to increase statistical significance of analysis.
Analysis of long term trends of precipitation estimates acquired using radar network in Turkey
NASA Astrophysics Data System (ADS)
Tugrul Yilmaz, M.; Yucel, Ismail; Kamil Yilmaz, Koray
2016-04-01
Precipitation estimates, a vital input in many hydrological and agricultural studies, can be obtained using many different platforms (ground station-, radar-, model-, satellite-based). Satellite- and model-based estimates are spatially continuous datasets, however they lack the high resolution information many applications often require. Station-based values are actual precipitation observations, however they suffer from their nature that they are point data. These datasets may be interpolated however such end-products may have large errors over remote locations with different climate/topography/etc than the areas stations are installed. Radars have the particular advantage of having high spatial resolution information over land even though accuracy of radar-based precipitation estimates depends on the Z-R relationship, mountain blockage, target distance from the radar, spurious echoes resulting from anomalous propagation of the radar beam, bright band contamination and ground clutter. A viable method to obtain spatially and temporally high resolution consistent precipitation information is merging radar and station data to take advantage of each retrieval platform. An optimally merged product is particularly important in Turkey where complex topography exerts strong controls on the precipitation regime and in turn hampers observation efforts. There are currently 10 (additional 7 are planned) weather radars over Turkey obtaining precipitation information since 2007. This study aims to optimally merge radar precipitation data with station based observations to introduce a station-radar blended precipitation product. This study was supported by TUBITAK fund # 114Y676.
Time response analysis in suspension system design of a high-speed car
NASA Astrophysics Data System (ADS)
Pagwiwoko, Cosmas Pandit
2010-03-01
A land speed record vehicle is designed to run on a flat surface like salt lake where the wheels are normally made from solid metal with a special suspension system. The suspension is designed to provide a stable platform to keep the wheel treads on tract, to insulate the car and the driver from the surface irregularities and to take part of good handling properties. The surface condition of the lake beds is basically flat without undulations but with inconsistent surface textures and ridges. Spring with nonlinear rate is used with the reason that the resistance builds up roughly proportional to the aerodynamic download for keeping the height more nearly constant. The objective of the work is to produce an efficient method for assisting the design of suspension system. At the initial step, the stiffness and the damping constants are determined based on RMS optimization by following the optimization strategy i.e. to minimize the absolute acceleration respect to the relative displacement of the suspension. Power bond graph technique is then used to model the nonlinearity of the components i.e. spring and dashpot of the suspension system. This technique also enables to incorporate the interactions of dynamic response of the vehicle's body with aerodynamic flow as a result of the base excitation of the ground to the wheels. The simulation is conducted on the platform of Simulink-MATLAB and the interactions amongst the components within the system are observed in time domain to evaluate the effectiveness of the suspension.
Atomdroid: a computational chemistry tool for mobile platforms.
Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M
2012-04-23
We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.
Development and Validation of Sandwich ELISA Microarrays with Minimal Assay Interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, Rachel M.; Servoss, Shannon; Crowley, Sheila A.
Sandwich enzyme-linked immunosorbent assay (ELISA) microarrays are emerging as a strong candidate platform for multiplex biomarker analysis because of the ELISA’s ability to quantitatively measure rare proteins in complex biological fluids. Advantages of this platform are high-throughput potential, assay sensitivity and stringency, and the similarity to the standard ELISA test, which facilitates assay transfer from a research setting to a clinical laboratory. However, a major concern with the multiplexing of ELISAs is maintaining high assay specificity. In this study, we systematically determine the amount of assay interference and noise contributed by individual components of the multiplexed 24-assay system. We findmore » that non-specific reagent cross-reactivity problems are relatively rare. We did identify the presence of contaminant antigens in a “purified antigen”. We tested the validated ELISA microarray chip using paired serum samples that had been collected from four women at a 6-month interval. This analysis demonstrated that protein levels typically vary much more between individuals then within an individual over time, a result which suggests that longitudinal studies may be useful in controlling for biomarker variability across a population. Overall, this research demonstrates the importance of a stringent screening protocol and the value of optimizing the antibody and antigen concentrations when designing chips for ELISA microarrays.« less
Telemanipulator design and optimization software
NASA Astrophysics Data System (ADS)
Cote, Jean; Pelletier, Michel
1995-12-01
For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only has to be found once. The optimal configuration or position where often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually found empirically which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted to move the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment on which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
TrackMate: An open and extensible platform for single-particle tracking.
Tinevez, Jean-Yves; Perry, Nick; Schindelin, Johannes; Hoopes, Genevieve M; Reynolds, Gregory D; Laplantine, Emmanuel; Bednarek, Sebastian Y; Shorte, Spencer L; Eliceiri, Kevin W
2017-02-15
We present TrackMate, an open source Fiji plugin for the automated, semi-automated, and manual tracking of single-particles. It offers a versatile and modular solution that works out of the box for end users, through a simple and intuitive user interface. It is also easily scriptable and adaptable, operating equally well on 1D over time, 2D over time, 3D over time, or other single and multi-channel image variants. TrackMate provides several visualization and analysis tools that aid in assessing the relevance of results. The utility of TrackMate is further enhanced through its ability to be readily customized to meet specific tracking problems. TrackMate is an extensible platform where developers can easily write their own detection, particle linking, visualization or analysis algorithms within the TrackMate environment. This evolving framework provides researchers with the opportunity to quickly develop and optimize new algorithms based on existing TrackMate modules without the need of having to write de novo user interfaces, including visualization, analysis and exporting tools. The current capabilities of TrackMate are presented in the context of three different biological problems. First, we perform Caenorhabditis-elegans lineage analysis to assess how light-induced damage during imaging impairs its early development. Our TrackMate-based lineage analysis indicates the lack of a cell-specific light-sensitive mechanism. Second, we investigate the recruitment of NEMO (NF-κB essential modulator) clusters in fibroblasts after stimulation by the cytokine IL-1 and show that photodamage can generate artifacts in the shape of TrackMate characterized movements that confuse motility analysis. Finally, we validate the use of TrackMate for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Eeckhout, Eric; Berger, Alexandre; Roguelov, Christan; Lyon, Xavier; Imsand, Christophe; Fivaz-Arbane, Malika; Girod, Grégoire; De Benedetti, Edoardo
2003-08-01
IVUS is considered as the most accurate tool for the assessment of optimal stent deployment. Direct stenting has shown to be a safe, efficient, and resource-saving procedure in selected patients. In a prospective 1-month feasibility trial, a new combined IVUS-coronary stent delivery platform (Josonics Flex, Jomed, Helsingborn, Sweden) was evaluated during direct stenting in consecutive patients considered eligible for direct stenting. The feasibility endpoint was successful stent deployment without any clinical adverse event, while the efficacy endpoint was strategic adaptation according to standard IVUS criteria for optimal stent deployment at the intermediate phase (after a result considered angiographically optimal) and at the end of the intervention (after optimization according to IVUS standards). A total of 16 patients were successfully treated with this device without any major clinical complication. At the intermediate phase, optimal stent deployment was achieved in four patients only, while at the end only one patient had nonoptimal IVUS stent deployment. In particular, the minimal in-stent cross-section area increased from 6.3 +/- 1.2 to 8.3 +/- 2.5 mm(2). These preliminary data demonstrate the feasibility of direct stenting with a combined IVUS-stent catheter in selected patients and confirm the results from larger randomized trials on the impact of IVUS on strategic adaptations during coronary stent placement. Copyright 2003 Wiley-Liss, Inc.
Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.
2017-01-01
Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945
Nanochannel Electroporation as a Platform for Living Cell Interrogation in Acute Myeloid Leukemia.
Zhao, Xi; Huang, Xiaomeng; Wang, Xinmei; Wu, Yun; Eisfeld, Ann-Kathrin; Schwind, Sebastian; Gallego-Perez, Daniel; Boukany, Pouyan E; Marcucci, Guido I; Lee, Ly James
2015-12-01
A living cell interrogation platform based on nanochannel electroporation is demonstrated with analysis of RNAs in single cells. This minimally invasive process is based on individual cells and allows both multi-target analysis and stimulus-response analysis by sequential deliveries. The unique platform possesses a great potential to the comprehensive and lysis-free nucleic acid analysis on rare or hard-to-transfect cells.
Zhu, Chenggang; Zhu, Xiangdong; Landry, James P; Cui, Zhaomeng; Li, Quanfu; Dang, Yongjun; Mi, Lan; Zheng, Fengyun; Fei, Yiyan
2016-03-16
Small-molecule microarray (SMM) is an effective platform for identifying lead compounds from large collections of small molecules in drug discovery, and efficient immobilization of molecular compounds is a pre-requisite for the success of such a platform. On an isocyanate functionalized surface, we studied the dependence of immobilization efficiency on chemical residues on molecular compounds, terminal residues on isocyanate functionalized surface, lengths of spacer molecules, and post-printing treatment conditions, and we identified a set of optimized conditions that enable us to immobilize small molecules with significantly improved efficiencies, particularly for those molecules with carboxylic acid residues that are known to have low isocyanate reactivity. We fabricated microarrays of 3375 bioactive compounds on isocyanate functionalized glass slides under these optimized conditions and confirmed that immobilization percentage is over 73%.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; George, Thomas; Tarbell, Mark A.
2007-04-01
Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as mine fields, battlefield environments, enemy occupied territories, terrorist infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering and safety constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit atmosphere surface/subsurface) and multi-agent (satellite UAV/blimp surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.
ESTEST: An Open Science Platform for Electronic Structure Research
ERIC Educational Resources Information Center
Yuan, Gary
2012-01-01
Open science platforms in support of data generation, analysis, and dissemination are becoming indispensible tools for conducting research. These platforms use informatics and information technologies to address significant problems in open science data interoperability, verification & validation, comparison, analysis, post-processing,…
The evolving potential of companion diagnostics.
Khoury, Joseph D
2016-01-01
The scope of companion diagnostics in cancer has undergone significant shifts in the past few years, with increased development of targeted therapies and novel testing platforms. This has provided new opportunities to effect unprecedented paradigm shifts in the application of personalized medicine principles for patients with cancer. These shifts involve assay platforms, analytes, regulations, and therapeutic approaches. As opportunities involving each of these facets of companion diagnostics expand, close collaborations between key stakeholders should be enhanced to ensure optimal performance characteristics and patient outcomes.
Traffic Patrol Service Platform Scheduling and Containment Optimization Strategy
NASA Astrophysics Data System (ADS)
Wang, Tiane; Niu, Taiyang; Wan, Baocheng; Li, Jian
This article is based on the traffic and patrol police service platform settings and scheduling, in order to achieve the main purpose of rapid containment for the suspect in an emergency event. Proposing new boundary definition based on graph theory, using 0-1 programming, Dijkstra algorithm, the shortest path tree (SPT) and some of the related knowledge establish a containment model. Finally, making a combination with a city-specific data and using this model obtain the best containment plan.
Study of thermal management for space platform applications
NASA Technical Reports Server (NTRS)
Oren, J. A.
1980-01-01
Techniques for the management of the thermal energy of large space platforms using many hundreds of kilowatts over a 10 year life span were evaluated. Concepts for heat rejection, heat transport within the vehicle, and interfacing were analyzed and compared. The heat rejection systems were parametrically weight optimized over conditions for heat pipe and pumped fluid approaches. Two approaches to achieve reliability were compared for: performance, weight, volume, projected area, reliability, cost, and operational characteristics. Technology needs are assessed and technology advancement recommendations are made.
Pérez-García, Fernando; Vasco-Cárdenas, María F; Barreiro, Carlos
2016-09-02
Production enhancement of industrial microbial products or strains has been traditionally tackled by mutagenesis with chemical methods, irradiation or genetic manipulation. However, the final yield increase must go hand in hand with the resistance increasing against the usual inherent toxicity of the final products. Few studies have been carried out on resistance improvement and even fewer on the initial selection of naturally-generated biotypes, which could decrease the artificial mutagenesis. This fact is vital in the case of GRAS microorganisms as Corynebacterium glutamicum involved in food, feed and cosmetics production. The characteristic wide diversity and plasticity in terms of their genetic material of Actinobacteria eases the biotypes generation. Thus, differences in morphology, glutamate and lysine production and growth in media supplemented with dicarboxylic acids were analysed in four biotypes of C. glutamicum ATCC 13032. A 2D-DIGE analysis of these biotypes growing with itaconic acid allowed us to define their differences. Thus, an optimized central metabolism and better protection against the generated stress conditions present the CgL biotype as a suitable platform for production of itaconic acid, which is used as a building block (e.g.: acrylic plastic). This analysis highlights the preliminary biotypes screening as a way to reach optimal industrial productions.
Page, Tessa; Nguyen, Huong Thi Huynh; Hilts, Lindsey; Ramos, Lorena; Hanrahan, Grady
2012-06-01
This work reveals a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to the use of neural networks employing standard back propagation when examining training error, fitting response, and predictive abilities. Simulation runs were obtained as a result of metaheuristic examination of the global search space with experimental responses in good agreement with predicted values. Full separation of selected analytes was realized after employing optimal model conditions. This framework provides guidance for the application of metaheuristic computational tools to aid in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling framework to implement the PSO algorithm for their desired use.
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one ofmore » the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.« less
PERI - Auto-tuning Memory Intensive Kernels for Multicore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H; Williams, Samuel; Datta, Kaushik
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we developmore » a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.« less
NASA Technical Reports Server (NTRS)
Calhoun, Phillip C.; Hampton, R. David; Whorton, Mark S.
2001-01-01
The acceleration environment on the International Space Station (ISS) will likely exceed the requirements of many micro-gravity experiments. The Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) is being built by the NASA Marshall Space Flight Center to attenuate the nominal acceleration environment and provide some isolation for micro-gravity science experiments. G-LIMIT uses Lorentz (voice-coil) magnetic actuators to isolate a platform for mounting science payloads from the nominal acceleration environment. The system utilizes payload acceleration, relative position, and relative orientation measurements in a feedback controller to accomplish the vibration isolation task. The controller provides current command to six magnetic actuators, producing the required experiment isolation from the ISS acceleration environment. This paper presents the development of a candidate control law to meet the acceleration attenuation requirements for the g-LIMIT experiment platform. The controller design is developed using linear optimal control techniques for both frequency-weighted H(sub 2) and H(sub infinity) norms. Comparison of the performance and robustness to plant uncertainty for these two optimal control design approaches are included in the discussion.
Development of deployable structures for large space platform systems, volume 1
NASA Technical Reports Server (NTRS)
1982-01-01
Generic deployable spacecraft configurations and deployable platform systems concepts were identified. Sizing, building block concepts, orbiter packaging, thermal analysis, cost analysis, and mass properties analysis as related to platform systems integration are considered. Technology needs are examined and the major criteria used in concept selection are delineated. Requirements for deployable habitat modules, tunnels, and OTV hangars are considered.
Thiolene and SIFEL-based Microfluidic Platforms for Liquid-Liquid Extraction
Goyal, Sachit; Desai, Amit V.; Lewis, Robert W.; Ranganathan, David R.; Li, Hairong; Zeng, Dexing; Reichert, David E.; Kenis, Paul J.A.
2014-01-01
Microfluidic platforms provide several advantages for liquid-liquid extraction (LLE) processes over conventional methods, for example with respect to lower consumption of solvents and enhanced extraction efficiencies due to the inherent shorter diffusional distances. Here, we report the development of polymer-based parallel-flow microfluidic platforms for LLE. To date, parallel-flow microfluidic platforms have predominantly been made out of silicon or glass due to their compatibility with most organic solvents used for LLE. Fabrication of silicon and glass-based LLE platforms typically requires extensive use of photolithography, plasma or laser-based etching, high temperature (anodic) bonding, and/or wet etching with KOH or HF solutions. In contrast, polymeric microfluidic platforms can be fabricated using less involved processes, typically photolithography in combination with replica molding, hot embossing, and/or bonding at much lower temperatures. Here we report the fabrication and testing of microfluidic LLE platforms comprised of thiolene or a perfluoropolyether-based material, SIFEL, where the choice of materials was mainly guided by the need for solvent compatibility and fabrication amenability. Suitable designs for polymer-based LLE platforms that maximize extraction efficiencies within the constraints of the fabrication methods and feasible operational conditions were obtained using analytical modeling. To optimize the performance of the polymer-based LLE platforms, we systematically studied the effect of surface functionalization and of microstructures on the stability of the liquid-liquid interface and on the ability to separate the phases. As demonstrative examples, we report (i) a thiolene-based platform to determine the lipophilicity of caffeine, and (ii) a SIFEL-based platform to extract radioactive copper from an acidic aqueous solution. PMID:25246730
Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases
Qutaish, Mohammed Q.; Zhou, Zhuxian; Prabhu, David; Liu, Yiqiao; Busso, Mallory R.; Izadnegahdar, Donna; Gargesha, Madhusudhana; Lu, Hong; Lu, Zheng-Rong
2018-01-01
We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details earlier applied to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm2 had visible signal and that some metastases as small as 0.1 mm2 were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. Tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy. PMID:29805438
Application of seismic interpretation in the development of Jerneh Field, Malay Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusoff, Z.
1994-07-01
Development of the Jerneh gas field has been significantly aided by the use of 3-D and site survey seismic interpretations. The two aspects that have been of particular importance are identification of sea-floor and near-surface safety hazards for safe platform installation/development drilling and mapping of reservoirs/hydrocarbons within gas-productive sands of the Miocene groups B, D, and E. Choice of platform location as well as casing design require detailed analysis of sea-floor and near-surface safety hazards. At Jerneh, sea-floor pockmarks near-surface high amplitudes, distributary channels, and minor faults were recognized as potential operational safety hazards. The integration of conventional 3-D andmore » site survey seismic data enabled comprehensive understanding of the occurrence and distribution of potential hazards to platform installation and development well drilling. Three-dimensional seismic interpretation has been instrumental not only in the field structural definition but also in recognition of reservoir trends and hydrocarbon distribution. Additional gas reservoirs were identified by their DHI characteristics and subsequently confirmed by development wells. The innovative use of seismic attribute mapping techniques has been very important in defining both fluid and reservoir distribution in groups B and D. Integration of 3-D seismic data and well-log interpretations has helped in optimal field development, including the planning of well locations and drilling sequence.« less
BrailleEasy: One-handed Braille Keyboard for Smartphones.
Šepić, Barbara; Ghanem, Abdurrahman; Vogel, Stephan
2015-01-01
The evolution of mobile technology is moving at a very fast pace. Smartphones are currently considered a primary communication platform where people exchange voice calls, text messages and emails. The human-smartphone interaction, however, is generally optimized for sighted people through the use of visual cues on the touchscreen, e.g., typing text by tapping on a visual keyboard. Unfortunately, this interaction scheme renders smartphone technology largely inaccessible to visually impaired people as it results in slow typing and higher error rates. Apple and some third party applications provide solutions specific to blind people which enables them to use Braille on smartphones. These applications usually require both hands for typing. However, Brailling with both hands while holding the phone is not very comfortable. Furthermore, two-handed Brailling is not possible on smartwatches, which will be used more pervasively in the future. Therefore, we develop a platform for one-handed Brailing consisting of a custom keyboard called BrailleEasy to input Arabic or English Braille codes within any application, and a BrailleTutor application for practicing. Our platform currently supports Braille grade 1, and will be extended to support contractions, spelling correction, and more languages. Preliminary analysis of user studies for blind participants showed that after less than two hours of practice, participants were able to type significantly faster with the BrailleEasy keyboard than with the standard QWERTY keyboard.
The EarthServer Federation: State, Role, and Contribution to GEOSS
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Baumann, Peter
2016-04-01
The intercontinental EarthServer initiative has established a European datacube platform with proven scalability: known databases exceed 100 TB, and single queries have been split across more than 1,000 cloud nodes. Its service interface being rigorously based on the OGC "Big Geo Data" standards, Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS), a series of clients can dock into the services, ranging from open-source OpenLayers and QGIS over open-source NASA WorldWind to proprietary ESRI ArcGIS. Datacube fusion in a "mix and match" style is supported by the platform technolgy, the rasdaman Array Database System, which transparently federates queries so that users simply approach any node of the federation to access any data item, internally optimized for minimal data transfer. Notably, rasdaman is part of GEOSS GCI. NASA is contributing its Web WorldWind virtual globe for user-friendly data extraction, navigation, and analysis. Integrated datacube / metadata queries are contributed by CITE. Current federation members include ESA (managed by MEEO sr.l.), Plymouth Marine Laboratory (PML), the European Centre for Medium-Range Weather Forecast (ECMWF), Australia's National Computational Infrastructure, and Jacobs University (adding in Planetary Science). Further data centers have expressed interest in joining. We present the EarthServer approach, discuss its underlying technology, and illustrate the contribution this datacube platform can make to GEOSS.
Eguílaz, Marcos; Villalonga, Reynaldo; Yáñez-Sedeño, Paloma; Pingarrón, José M
2011-10-15
The design of a novel biosensing electrode surface, combining the advantages of magnetic ferrite nanoparticles (MNPs) functionalized with glutaraldehyde (GA) and poly(diallyldimethylammonium chloride) (PDDA)-coated multiwalled carbon nanotubes (MWCNTs) as platforms for the construction of high-performance multienzyme biosensors, is reported in this work. Before the immobilization of enzymes, GA-MNP/PDDA/MWCNT composites were prepared by wrapping of carboxylated MWCNTs with positively charged PDDA and interaction with GA-functionalized MNPs. The nanoconjugates were characterized by scanning electron microscopy (SEM) and electrochemistry. The electrode platform was used to construct a bienzyme biosensor for the determination of cholesterol, which implied coimmobilization of cholesterol oxidase (ChOx) and peroxidase (HRP) and the use of hydroquinone as redox mediator. Optimization of all variables involved in the preparation and analytical performance of the bienzyme electrode was accomplished. At an applied potential of -0.05 V, a linear calibration graph for cholesterol was obtained in the 0.01-0.95 mM concentration range. The detection limit (0.85 μM), the apparent Michaelis-Menten constant (1.57 mM), the stability of the biosensor, and the calculated activation energy can be advantageously compared with the analytical characteristics of other CNT-based cholesterol biosensors reported in the literature. Analysis of human serum spiked with cholesterol at different concentration levels yielded recoveries between 100% and 103% © 2011 American Chemical Society
A Noninvasive Platform for Imaging and Quantifying Oil Storage in Submillimeter Tobacco Seed1[W][OA
Fuchs, Johannes; Neuberger, Thomas; Rolletschek, Hardy; Schiebold, Silke; Nguyen, Thuy Ha; Borisjuk, Nikolai; Börner, Andreas; Melkus, Gerd; Jakob, Peter; Borisjuk, Ljudmilla
2013-01-01
While often thought of as a smoking drug, tobacco (Nicotiana spp.) is now considered as a plant of choice for molecular farming and biofuel production. Here, we describe a noninvasive means of deriving both the distribution of lipid and the microtopology of the submillimeter tobacco seed, founded on nuclear magnetic resonance (NMR) technology. Our platform enables counting of seeds inside the intact tobacco capsule to measure seed sizes, to model the seed interior in three dimensions, to quantify the lipid content, and to visualize lipid gradients. Hundreds of seeds can be simultaneously imaged at an isotropic resolution of 25 µm, sufficient to assess each individual seed. The relative contributions of the embryo and the endosperm to both seed size and total lipid content could be assessed. The extension of the platform to a range of wild and cultivated Nicotiana species demonstrated certain evolutionary trends in both seed topology and pattern of lipid storage. The NMR analysis of transgenic tobacco plants with seed-specific ectopic expression of the plastidial phosphoenolpyruvate/phosphate translocator, displayed a trade off between seed size and oil concentration. The NMR-based assay of seed lipid content and topology has a number of potential applications, in particular providing a means to test and optimize transgenic strategies aimed at the manipulation of seed size, seed number, and lipid content in tobacco and other species with submillimeter seeds. PMID:23232144
Pyrgiotakis, Georgios; Vedantam, Pallavi; Cirenza, Caroline; McDevitt, James; Eleftheriadou, Mary; Leonard, Stephen S.; Demokritou, Philip
2016-01-01
A chemical free, nanotechnology-based, antimicrobial platform using Engineered Water Nanostructures (EWNS) was recently developed. EWNS have high surface charge, are loaded with reactive oxygen species (ROS), and can interact-with, and inactivate an array of microorganisms, including foodborne pathogens. Here, it was demonstrated that their properties during synthesis can be fine tuned and optimized to further enhance their antimicrobial potential. A lab based EWNS platform was developed to enable fine-tuning of EWNS properties by modifying synthesis parameters. Characterization of EWNS properties (charge, size and ROS content) was performed using state-of-the art analytical methods. Further their microbial inactivation potential was evaluated with food related microorganisms such as Escherichia coli, Salmonella enterica, Listeria innocua, Mycobacterium parafortuitum, and Saccharomyces cerevisiae inoculated onto the surface of organic grape tomatoes. The results presented here indicate that EWNS properties can be fine-tuned during synthesis resulting in a multifold increase of the inactivation efficacy. More specifically, the surface charge quadrupled and the ROS content increased. Microbial removal rates were microorganism dependent and ranged between 1.0 to 3.8 logs after 45 mins of exposure to an EWNS aerosol dose of 40,000 #/cm3. PMID:26875817
NASA Astrophysics Data System (ADS)
Pyrgiotakis, Georgios; Vedantam, Pallavi; Cirenza, Caroline; McDevitt, James; Eleftheriadou, Mary; Leonard, Stephen S.; Demokritou, Philip
2016-02-01
A chemical free, nanotechnology-based, antimicrobial platform using Engineered Water Nanostructures (EWNS) was recently developed. EWNS have high surface charge, are loaded with reactive oxygen species (ROS), and can interact-with, and inactivate an array of microorganisms, including foodborne pathogens. Here, it was demonstrated that their properties during synthesis can be fine tuned and optimized to further enhance their antimicrobial potential. A lab based EWNS platform was developed to enable fine-tuning of EWNS properties by modifying synthesis parameters. Characterization of EWNS properties (charge, size and ROS content) was performed using state-of-the art analytical methods. Further their microbial inactivation potential was evaluated with food related microorganisms such as Escherichia coli, Salmonella enterica, Listeria innocua, Mycobacterium parafortuitum, and Saccharomyces cerevisiae inoculated onto the surface of organic grape tomatoes. The results presented here indicate that EWNS properties can be fine-tuned during synthesis resulting in a multifold increase of the inactivation efficacy. More specifically, the surface charge quadrupled and the ROS content increased. Microbial removal rates were microorganism dependent and ranged between 1.0 to 3.8 logs after 45 mins of exposure to an EWNS aerosol dose of 40,000 #/cm3.
Pyrgiotakis, Georgios; Vedantam, Pallavi; Cirenza, Caroline; McDevitt, James; Eleftheriadou, Mary; Leonard, Stephen S; Demokritou, Philip
2016-02-15
A chemical free, nanotechnology-based, antimicrobial platform using Engineered Water Nanostructures (EWNS) was recently developed. EWNS have high surface charge, are loaded with reactive oxygen species (ROS), and can interact-with, and inactivate an array of microorganisms, including foodborne pathogens. Here, it was demonstrated that their properties during synthesis can be fine tuned and optimized to further enhance their antimicrobial potential. A lab based EWNS platform was developed to enable fine-tuning of EWNS properties by modifying synthesis parameters. Characterization of EWNS properties (charge, size and ROS content) was performed using state-of-the art analytical methods. Further their microbial inactivation potential was evaluated with food related microorganisms such as Escherichia coli, Salmonella enterica, Listeria innocua, Mycobacterium parafortuitum, and Saccharomyces cerevisiae inoculated onto the surface of organic grape tomatoes. The results presented here indicate that EWNS properties can be fine-tuned during synthesis resulting in a multifold increase of the inactivation efficacy. More specifically, the surface charge quadrupled and the ROS content increased. Microbial removal rates were microorganism dependent and ranged between 1.0 to 3.8 logs after 45 mins of exposure to an EWNS aerosol dose of 40,000 #/cm(3).
Lorias Espinoza, Daniel; Ordorica Flores, Ricardo; Minor Martínez, Arturo; Gutiérrez Gnecchi, José Antonio
2014-06-01
Various methods for evaluating laparoscopic skill have been reported, but without detailed information on the configuration used they are difficult to reproduce. Here we present a method based on the trigonometric relationships between the instruments used in a laparoscopic training platform in order to provide a tool to aid in the reproducible assessment of surgical laparoscopic technique. The positions of the instruments were represented using triangles. Basic trigonometry was used to objectively establish the distances among the working ports RL, the placement of the optical port h', and the placement of the surgical target OT. The optimal configuration of a training platform depends on the selected working angles, the intracorporeal/extracorporeal lengths of the instrument, and the depth of the surgical target. We demonstrate that some distances, angles, and positions of the instruments are inappropriate for satisfactory laparoscopy. By applying basic trigonometric principles we can determine the ideal placement of the working ports and the optics in a simple, precise, and objective way. In addition, because the method is based on parameters known to be important in both the performance and quantitative quality of laparoscopy, the results are generalizable to different training platforms and types of laparoscopic surgery.
UPC BarcelonaTech Platform. Innovative aerobatic parabolic flights for life sciences experiments.
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni; Gonzalez, Daniel
We present an innovative method of performing parabolic flights with aerobatic single-engine planes. A parabolic platform has been established in Sabadell Airport (Barcelona, Spain) to provide an infraestructure ready to allow Life Sciences reduced gravity experiments to be conducted in parabolic flights. Test flights have demonstrated that up to 8 seconds of reduced gravity can be achieved by using a two-seat CAP10B aircraft, with a gravity range between 0.1 and 0.01g in the three axis. A parabolic flight campaign may be implemented with a significant reduction in budget compared to conventional parabolic flight campaigns, and with a very short time-to-access to the platform. Operational skills and proficiency of the pilot controling the aircraft during the maneuvre, sensitivity to wind gusts, and aircraft balance are the key issues that make a parabola successful. Efforts are focused on improving the total “zero-g” time and the quality of reduced gravity achieved, as well as providing more space for experiments. We report results of test flights that have been conducted in order to optimize the quality and total microgravity time. A computer sofware has been developed and implemented to help the pilot optimize his or her performance. Finally, we summarize the life science experiments that have been conducted in this platform. Specific focus is given to the very successful 'Barcelona ZeroG Challenge', this year in its third edition. This educational contest gives undergraduate and graduate students worldwide the opportunity to design their research within our platform and test it on flight, thus becoming real researchers. We conclude that aerobatic parabolic flights have proven to be a safe, unexpensive and reliable way to conduct life sciences reduced gravity experiments.
Kinematics and dynamics analysis of a quadruped walking robot with parallel leg mechanism
NASA Astrophysics Data System (ADS)
Wang, Hongbo; Sang, Lingfeng; Hu, Xing; Zhang, Dianfan; Yu, Hongnian
2013-09-01
It is desired to require a walking robot for the elderly and the disabled to have large capacity, high stiffness, stability, etc. However, the existing walking robots cannot achieve these requirements because of the weight-payload ratio and simple function. Therefore, Improvement of enhancing capacity and functions of the walking robot is an important research issue. According to walking requirements and combining modularization and reconfigurable ideas, a quadruped/biped reconfigurable walking robot with parallel leg mechanism is proposed. The proposed robot can be used for both a biped and a quadruped walking robot. The kinematics and performance analysis of a 3-UPU parallel mechanism which is the basic leg mechanism of a quadruped walking robot are conducted and the structural parameters are optimized. The results show that performance of the walking robot is optimal when the circumradius R, r of the upper and lower platform of leg mechanism are 161.7 mm, 57.7 mm, respectively. Based on the optimal results, the kinematics and dynamics of the quadruped walking robot in the static walking mode are derived with the application of parallel mechanism and influence coefficient theory, and the optimal coordination distribution of the dynamic load for the quadruped walking robot with over-determinate inputs is analyzed, which solves dynamic load coupling caused by the branches’ constraint of the robot in the walk process. Besides laying a theoretical foundation for development of the prototype, the kinematics and dynamics studies on the quadruped walking robot also boost the theoretical research of the quadruped walking and the practical applications of parallel mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Yehia M.; Garimella, Sandilya V. B.; Prost, Spencer A.
Complex samples benefit from multidimensional measurements where higher resolution enables more complete characterization of biological and environmental systems. To address this challenge, we developed a drift tube-based ion mobility spectrometry-Orbitrap mass spectrometer (IMS-Orbitrap MS) platform. To circumvent the time scale disparity between the fast IMS separation and the much slower Orbitrap MS acquisition, we utilized a dual gate and pseudorandom sequences to multiplexed injection of ions and allowing operation in signal averaging (SA), single multiplexing (SM) and double multiplexing (DM) IMS modes to optimize the signal-to-noise ratio of the measurements. For the SM measurements, a previously developed algorithm was usedmore » to reconstruct the IMS data. A new algorithm was developed for the DM analyses involving a two-step process that first recovers the SM data and then decodes the SM data. The algorithm also performs multiple refining procedures in order to minimize demultiplexing artifacts. The new IMS-Orbitrap MS platform was demonstrated by the analysis of proteomic and petroleum samples, where the integration of IMS and high mass resolution proved essential for accurate assignment of molecular formulae.« less
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-10-20
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System
Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu
2016-01-01
Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
Wray, Lindsay S; Rnjak-Kovacina, Jelena; Mandal, Biman B; Schmidt, Daniel F; Gil, Eun Seok; Kaplan, David L
2012-12-01
In the field of tissue engineering and regenerative medicine there is significant unmet need for critically-sized, fully degradable biomaterial scaffold systems with tunable properties for optimizing tissue formation in vitro and tissue regeneration in vivo. To address this need, we have developed a silk-based scaffold platform that has tunable material properties, including localized and bioactive functionalization, degradation rate, and mechanical properties and that provides arrays of linear hollow channels for delivery of oxygen and nutrients throughout the scaffold bulk. The scaffolds can be assembled with dimensions that range from millimeters to centimeters, addressing the need for a critically-sized platform for tissue formation. We demonstrate that the hollow channel arrays support localized and confluent endothelialization. This new platform offers a unique and versatile tool for engineering 'tailored' scaffolds for a range of tissue engineering and regenerative medicine needs. Copyright © 2012 Elsevier Ltd. All rights reserved.
A high performance scientific cloud computing environment for materials simulations
NASA Astrophysics Data System (ADS)
Jorissen, K.; Vila, F. D.; Rehr, J. J.
2012-09-01
We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
Technology assessment for an integrated PC-based platform for three telemedicine applications
NASA Astrophysics Data System (ADS)
Tohme, Walid G.; Hayes, Wendelin S.; Dai, Hailei L.; Komo, Darmadi; Pahira, John J.; Abernethy, Darrell R.; Rennert, Wolfgang; Kuehl, Karen S.; Hauser, Gabriel J.; Mun, Seong K.
1996-05-01
This paper investigates the design and technical efficacy of an integrated PC based platform for three different medical applications. The technical efficacy of such a telemedicine platform has not been evaluated in the literature and optimal technical requirements have not been developed. The first application, with the Department of Surgery, Division of Urology, tests the utility of a telemedicine platform including radiology images for a surgical stone disease consultation service from an off site location in West Virginia. The second application, with the Department of Internal Medicine, Division of Clinical Pharmacology, investigates the usefulness of telemedicine when used for a clinical pharmacology consultation service from an off-site location. The third application, with the Department of Pediatrics, will test telemedicine for trauma care triage service first within an off-site location in Virginia and then from there to Georgetown University Medical Center.
An ultra-tunable platform for molecular engineering of high-performance crystalline porous materials
Zhai, Quan -Guo; Bu, Xianhui; Mao, Chengyu; ...
2016-12-07
Metal-organic frameworks are a class of crystalline porous materials with potential applications in catalysis, gas separation and storage, and so on. Of great importance is the development of innovative synthetic strategies to optimize porosity, composition and functionality to target specific applications. Here we show a platform for the development of metal-organic materials and control of their gas sorption properties. This platform can accommodate a large variety of organic ligands and homo- or hetero-metallic clusters, which allows for extraordinary tunability in gas sorption properties. Even without any strong binding sites, most members of this platform exhibit high gas uptake capacity. Asmore » a result, the high capacity is accomplished with an isosteric heat of adsorption as low as 20 kJ mol –1 for carbon dioxide, which could bring a distinct economic advantage because of the significantly reduced energy consumption for activation and regeneration of adsorbents.« less
Particle-based platforms for malaria vaccines.
Wu, Yimin; Narum, David L; Fleury, Sylvain; Jennings, Gary; Yadava, Anjali
2015-12-22
Recombinant subunit vaccines in general are poor immunogens likely due to the small size of peptides and proteins, combined with the lack or reduced presentation of repetitive motifs and missing complementary signal(s) for optimal triggering of the immune response. Therefore, recombinant subunit vaccines require enhancement by vaccine delivery vehicles in order to attain adequate protective immunity. Particle-based delivery platforms, including particulate antigens and particulate adjuvants, are promising delivery vehicles for modifying the way in which immunogens are presented to both the innate and adaptive immune systems. These particle delivery platforms can also co-deliver non-specific immunostimodulators as additional adjuvants. This paper reviews efforts and advances of the Particle-based delivery platforms in development of vaccines against malaria, a disease that claims over 600,000 lives per year, most of them are children under 5 years of age in sub-Sahara Africa. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mao, Wenzhe; Yuan, Peng; Zheng, Jian; Ding, Weixing; Li, Hong; Lan, Tao; Liu, Adi; Liu, Wandong; Xie, Jinlin
2016-11-01
A compact and lightweight support platform has been used as a holder for the interferometer system on the Keda Torus eXperiment (KTX), which is a reversed field pinch device. The vibration caused by the interaction between the time-varying magnetic field and the induced current driven in the metal optical components has been measured and, following comparison with the mechanical vibration of the KTX device and the refraction effect of the ambient turbulent air flow, has been identified as the primary vibration source in this case. To eliminate this electromagnetic disturbance, nonmetallic epoxy resin has been selected as the material for the support platform and the commercially available metal optical mounts are replaced. Following these optimization steps and mechanical reinforcements, the stability of the interferometer platform has improved significantly. The phase shift caused by the vibration has been reduced to the level of background noise.
Development of transportation asset management decision support tools : final report.
DOT National Transportation Integrated Search
2017-08-09
This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...
Grelewska-Nowotko, Katarzyna; Żurawska-Zajfert, Magdalena; Żmijewska, Ewelina; Sowa, Sławomir
2018-05-01
In recent years, digital polymerase chain reaction (dPCR), a new molecular biology technique, has been gaining in popularity. Among many other applications, this technique can also be used for the detection and quantification of genetically modified organisms (GMOs) in food and feed. It might replace the currently widely used real-time PCR method (qPCR), by overcoming problems related to the PCR inhibition and the requirement of certified reference materials to be used as a calibrant. In theory, validated qPCR methods can be easily transferred to the dPCR platform. However, optimization of the PCR conditions might be necessary. In this study, we report the transfer of two validated qPCR methods for quantification of maize DAS1507 and NK603 events to the droplet dPCR (ddPCR) platform. After some optimization, both methods have been verified according to the guidance of the European Network of GMO Laboratories (ENGL) on analytical method verification (ENGL working group on "Method Verification." (2011) Verification of Analytical Methods for GMO Testing When Implementing Interlaboratory Validated Methods). Digital PCR methods performed equally or better than the qPCR methods. Optimized ddPCR methods confirm their suitability for GMO determination in food and feed.
OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Greiner, Annette; Cholia, Shreyas
Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data accessmore » (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements and are critical enablers of rapid data I/O. The OpenMSI file format has shown to provide >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web based data sharing, visualization, and analysis.« less
Roy, Emmanuel; Stewart, Gale; Mounier, Maxence; Malic, Lidija; Peytavi, Régis; Clime, Liviu; Madou, Marc; Bossinot, Maurice; Bergeron, Michel G; Veres, Teodor
2015-01-21
We present an all-thermoplastic integrated sample-to-answer centrifugal microfluidic Lab-on-Disc system (LoD) for nucleic acid analysis. The proposed CD system and engineered platform were employed for analysis of Bacillus atrophaeus subsp. globigii spores. The complete assay comprised cellular lysis, polymerase chain reaction (PCR) amplification, amplicon digestion, and microarray hybridization on a plastic support. The fluidic robustness and operating efficiency of the assay were ensured through analytical optimization of microfluidic tools enabling beneficial implementation of capillary valves and accurate control of all flow timing procedures. The assay reliability was further improved through the development of two novel microfluidic strategies for reagents mixing and flow delay on the CD platform. In order to bridge the gap between the proof-of-concept LoD and production prototype demonstration, low-cost thermoplastic elastomer (TPE) was selected as the material for CD fabrication and assembly, allowing the use of both, high quality hot-embossing and injection molding processes. Additionally, the low-temperature and pressure-free assembly and bonding properties of TPE material offer a pertinent solution for simple and efficient loading and storage of reagents and other on-board components. This feature was demonstrated through integration and conditioning of microbeads, magnetic discs, dried DNA buffer reagents and spotted DNA array inserts. Furthermore, all microfluidic functions and plastic parts were designed according to the current injection mold-making knowledge for industrialization purposes. Therefore, the current work highlights a seamless strategy that promotes a feasible path for the transfer from prototype toward realistic industrialization. This work aims to establish the full potential for TPE-based centrifugal system as a mainstream microfluidic diagnostic platform for clinical diagnosis, water and food safety, and other molecular diagnostic applications.
González, Oskar; van Vliet, Michael; Damen, Carola W N; van der Kloet, Frans M; Vreeken, Rob J; Hankemeier, Thomas
2015-06-16
The possible presence of matrix effect is one of the main concerns in liquid chromatography-mass spectrometry (LC-MS)-driven bioanalysis due to its impact on the reliability of the obtained quantitative results. Here we propose an approach to correct for the matrix effect in LC-MS with electrospray ionization using postcolumn infusion of eight internal standards (PCI-IS). We applied this approach to a generic ultraperformance liquid chromatography-time-of-flight (UHPLC-TOF) platform developed for small-molecule profiling with a main focus on drugs. Different urine samples were spiked with 19 drugs with different physicochemical properties and analyzed in order to study matrix effect (in absolute and relative terms). Furthermore, calibration curves for each analyte were constructed and quality control samples at different concentration levels were analyzed to check the applicability of this approach in quantitative analysis. The matrix effect profiles of the PCI-ISs were different: this confirms that the matrix effect is compound-dependent, and therefore the most suitable PCI-IS has to be chosen for each analyte. Chromatograms were reconstructed using analyte and PCI-IS responses, which were used to develop an optimized method which compensates for variation in ionization efficiency. The approach presented here improved the results in terms of matrix effect dramatically. Furthermore, calibration curves of higher quality are obtained, dynamic range is enhanced, and accuracy and precision of QC samples is increased. The use of PCI-ISs is a very promising step toward an analytical platform free of matrix effect, which can make LC-MS analysis even more successful, adding a higher reliability in quantification to its intrinsic high sensitivity and selectivity.
Meyer, Folker; Bagchi, Saurabh; Chaterji, Somali; Gerlach, Wolfgang; Grama, Ananth; Harrison, Travis; Paczian, Tobias; Trimble, William L; Wilke, Andreas
2017-09-26
As technologies change, MG-RAST is adapting. Newly available software is being included to improve accuracy and performance. As a computational service constantly running large volume scientific workflows, MG-RAST is the right location to perform benchmarking and implement algorithmic or platform improvements, in many cases involving trade-offs between specificity, sensitivity and run-time cost. The work in [Glass EM, Dribinsky Y, Yilmaz P, et al. ISME J 2014;8:1-3] is an example; we use existing well-studied data sets as gold standards representing different environments and different technologies to evaluate any changes to the pipeline. Currently, we use well-understood data sets in MG-RAST as platform for benchmarking. The use of artificial data sets for pipeline performance optimization has not added value, as these data sets are not presenting the same challenges as real-world data sets. In addition, the MG-RAST team welcomes suggestions for improvements of the workflow. We are currently working on versions 4.02 and 4.1, both of which contain significant input from the community and our partners that will enable double barcoding, stronger inferences supported by longer-read technologies, and will increase throughput while maintaining sensitivity by using Diamond and SortMeRNA. On the technical platform side, the MG-RAST team intends to support the Common Workflow Language as a standard to specify bioinformatics workflows, both to facilitate development and efficient high-performance implementation of the community's data analysis tasks. Published by Oxford University Press on behalf of Entomological Society of America 2017. This work is written by US Government employees and is in the public domain in the US.
2015-01-19
MS WINDOWS platform, which enables multitasking with simultaneous evaluation and operation 1. REPORT DATE (DD-MM-YYYY) 4. TITLE AND SUBTITLE 13...measurement and analysis software for data acquisition, storage and evaluation with MS WINDOWS platform, which enables multitasking with simultaneous...Proteus measurement and analysis software for data acquisition, storage and evaluation with MS WINDOWS platform, which enables multitasking with
NASA Astrophysics Data System (ADS)
Sugianto, Agus; Indriani, Andi Marini
2017-11-01
Platform construction GTS (Gathering Testing Sattelite) is offshore construction platform with fix pile structure type/fixed platform functioning to support the mining of petroleum exploitation. After construction fabrication process platform was moved to barges, then shipped to the installation site. Moving process is generally done by pull or push based on construction design determined when planning. But at the time of lifting equipment/cranes available in the work area then the moving process can be done by lifting so that moving activity can be implemented more quickly of work. This analysis moving process of GTS platform in a different way that is generally done to GTS platform types by lifting using problem is construction reinforcement required, so the construction can be moved by lifting with analyzing and checking structure working stress that occurs due to construction moving process by lifting AISC code standard and analysis using the SAP2000 structure analysis program. The analysis result showed that existing condition cannot be moved by lifting because stress ratio is above maximum allowable value that is 0.950 (AISC-ASD89). Overstress occurs on the member 295 and 324 with stress ratio value 0.97 and 0.95 so that it is required structural reinforcement. Box plate aplication at both members so that it produces stress ratio values 0.78 at the member 295 and stress ratio of 0.77 at the member 324. These results indicate that the construction have qualified structural reinforcement for being moved by lifting.
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
A platform for efficient genotyping in Musa using microsatellite markers
Christelová, Pavla; Valárik, Miroslav; Hřibová, Eva; Van den houwe, Ines; Channelière, Stéphanie; Roux, Nicolas; Doležel, Jaroslav
2011-01-01
Background and aims Bananas and plantains (Musa spp.) are one of the major fruit crops worldwide with acknowledged importance as a staple food for millions of people. The rich genetic diversity of this crop is, however, endangered by diseases, adverse environmental conditions and changed farming practices, and the need for its characterization and preservation is urgent. With the aim of providing a simple and robust approach for molecular characterization of Musa species, we developed an optimized genotyping platform using 19 published simple sequence repeat markers. Methodology The genotyping system is based on 19 microsatellite loci, which are scored using fluorescently labelled primers and high-throughput capillary electrophoresis separation with high resolution. This genotyping platform was tested and optimized on a set of 70 diploid and 38 triploid banana accessions. Principal results The marker set used in this study provided enough polymorphism to discriminate between individual species, subspecies and subgroups of all accessions of Musa. Likewise, the capability of identifying duplicate samples was confirmed. Based on the results of a blind test, the genotyping system was confirmed to be suitable for characterization of unknown accessions. Conclusions Here we report on the first complex and standardized platform for molecular characterization of Musa germplasm that is ready to use for the wider Musa research and breeding community. We believe that this genotyping system offers a versatile tool that can accommodate all possible requirements for characterizing Musa diversity, and is economical for samples ranging from one to many accessions. PMID:22476494
Gupta, Sarthak; Chan, Diana W; Zaal, Kristien J; Kaplan, Mariana J
2018-01-15
Neutrophils play a key role in host defenses and have recently been implicated in the pathogenesis of autoimmune diseases by various mechanisms, including formation of neutrophil extracellular traps through a recently described distinct form of programmed cell death called NETosis. Techniques to assess and quantitate NETosis in an unbiased, reproducible, and efficient way are lacking, considerably limiting the advancement of research in this field. We optimized and validated, a new method to automatically quantify the percentage of neutrophils undergoing NETosis in real time using the IncuCyte ZOOM imaging platform and the membrane-permeability properties of two DNA dyes. Neutrophils undergoing NETosis induced by various physiological stimuli showed distinct changes, with a loss of multilobulated nuclei, as well as nuclear decondensation followed by membrane compromise, and were accurately counted by applying filters based on fluorescence intensity and nuclear size. Findings were confirmed and validated with the established method of immunofluorescence microscopy. The platform was also validated to rapidly assess and quantify the dose-dependent effect of inhibitors of NETosis. In addition, this method was able to distinguish among neutrophils undergoing NETosis, apoptosis, or necrosis based on distinct changes in nuclear morphology and membrane integrity. The IncuCyte ZOOM platform is a novel real-time assay that quantifies NETosis in a rapid, automated, and reproducible way, significantly optimizing the study of neutrophils. This platform is a powerful tool to assess neutrophil physiology and NETosis, as well as to swiftly develop and test novel neutrophil targets.
Engineering of Data Acquiring Mobile Software and Sustainable End-User Applications
NASA Technical Reports Server (NTRS)
Smith, Benton T.
2013-01-01
The criteria for which data acquiring software and its supporting infrastructure should be designed should take the following two points into account: the reusability and organization of stored online and remote data and content, and an assessment on whether abandoning a platform optimized design in favor for a multi-platform solution significantly reduces the performance of an end-user application. Furthermore, in-house applications that control or process instrument acquired data for end-users should be designed with a communication and control interface such that the application's modules can be reused as plug-in modular components in greater software systems. The application of the above mentioned is applied using two loosely related projects: a mobile application, and a website containing live and simulated data. For the intelligent devices mobile application AIDM, the end-user interface have a platform and data type optimized design, while the database and back-end applications store this information in an organized manner and manage access to that data to only to authorized user end application(s). Finally, the content for the website was derived from a database such that the content can be included and uniform to all applications accessing the content. With these projects being ongoing, I have concluded from my research that the applicable methods presented are feasible for both projects, and that a multi-platform design for the mobile application only marginally drop the performance of the mobile application.
A Quad-Cantilevered Plate micro-sensor for intracranial pressure measurement.
Lalkov, Vasko; Qasaimeh, Mohammad A
2017-07-01
This paper proposes a new design for pressure-sensing micro-plate platform to bring higher sensitivity to a pressure sensor based on piezoresistive MEMS sensing mechanism. The proposed design is composed of a suspended plate having four stepped cantilever beams connected to its corners, and thus defined as Quad-Cantilevered Plate (QCP). Finite element analysis was performed to determine the optimal design for sensitivity and structural stability under a range of applied forces. Furthermore, a piezoresistive analysis was performed to calculate sensor sensitivity. Both the maximum stress and the change in resistance of the piezoresistor associated with the QCP were found to be higher compared to previously published designs, and linearly related to the applied pressure as desired. Therefore, the QCP demonstrates greater sensitivity, and could be potentially used as an efficient pressure sensor for intracranial pressure measurement.
Cleverley, Steve; Chen, Irene; Houle, Jean-François
2010-01-15
Immunoaffinity approaches remain invaluable tools for characterization and quantitation of biopolymers. Their application in separation science is often limited due to the challenges of immunoassay development. Typical end-point immunoassays require time consuming and labor-intensive approaches for optimization. Real-time label-free analysis using diffractive optics technology (dot) helps guide a very effective iterative process for rapid immunoassay development. Both label-free and amplified approaches can be used throughout feasibility testing and ultimately in the final assay, providing a robust platform for biopolymer analysis over a very broad dynamic range. We demonstrate the use of dot in rapidly developing assays for quantitating (1) human IgG in complex media, (2) a fusion protein in production media and (3) protein A contamination in purified immunoglobulin preparations. 2009 Elsevier B.V. All rights reserved.
Loh, Leslie J; Bandara, Gayan C; Weber, Genevieve L; Remcho, Vincent T
2015-08-21
Due to the rapid expansion in hydraulic fracturing (fracking), there is a need for robust, portable and specific water analysis techniques. Early detection of contamination is crucial for the prevention of lasting environmental damage. Bromide can potentially function as an early indicator of water contamination by fracking waste, because there is a high concentration of bromide ions in fracking wastewaters. To facilitate this, a microfluidic paper-based analytical device (μPAD) has been developed and optimized for the quantitative colorimetric detection of bromide in water using a smartphone. A paper microfluidic platform offers the advantages of inexpensive fabrication, elimination of unstable wet reagents, portability and high adaptability for widespread distribution. These features make this assay an attractive option for a new field test for on-site determination of bromide.
Wu, Ching-Sung; Hu, Kuang-Hua; Chen, Fu-Hsiang
2016-01-01
The development of high-tech industry has been prosperous around the world in past decades, while technology and finance have already become the most significant issues in the information era. While high-tech firms are a major force behind a country's economic development, it requires a lot of money for the development process, as well as the financing difficulties for its potential problems, thus, how to evaluate and establish appropriate technology and financial services platforms innovation strategy has become one of the most critical and difficult issues. Moreover, how the chosen intertwined financial environment can be optimized in order that high-tech firms financing problems can be decided has seldom been addressed. Thus, this research aims to establish a technology and financial services platform innovation strategy improvement model, as based on the hybrid MADM model, which addresses the main causal factors and amended priorities in order to strengthen ongoing planning. A DEMATEL technique, as based on Analytic Network Process, as well as modified VIKOR, will be proposed for selecting and re-configuring the aspired technology and financial services platform. An empirical study, as based on China's technology and financial services platform innovation strategy, will be provided for verifying the effectiveness of this proposed methodology. Based on expert interviews, technology and financial services platforms innovation strategy improvement should be made in the following order: credit guarantee platform ( C )_credit rating platform ( B )_investment and finance platform ( A ).
Study of the Polarization Strategy for Electron Cyclotron Heating Systems on HL-2M
NASA Astrophysics Data System (ADS)
Zhang, F.; Huang, M.; Xia, D. H.; Song, S. D.; Wang, J. Q.; Huang, B.; Wang, H.
2016-06-01
As important components integrated in transmission lines of electron cyclotron heating systems, polarizers are mainly used to obtain the desired polarization for highly efficient coupling between electron cyclotron waves and plasma. The polarization strategy for 105-GHz electron cyclotron heating systems of HL-2M tokamak is studied in this paper. Considering the polarizers need high efficiency, stability, and low loss to realize any polarization states, two sinusoidal-grooved polarizers, which include a linear polarizer and an elliptical polarizer, are designed with the coordinate transformation method. The parameters, the period p and the depth d, of two sinusoidal-grooved polarizers are optimized by a phase difference analysis method to achieve an almost arbitrary polarization. Finally, the optimized polarizers are manufactured and their polarization characteristics are tested with a low-power test platform. The experimental results agree well with the numerical calculations, indicating that the designed polarizers can meet the polarization requirements of the electron cyclotron heating systems of HL-2M tokamak.
Space Construction System Analysis. Special Emphasis Studies
NASA Technical Reports Server (NTRS)
1979-01-01
Generic concepts were analyzed to determine: (1) the maximum size of a deployable solar array which might be packaged into a single orbit payload bay; (2) the optimal overall shape of a large erectable structure for large satellite projects; (3) the optimization of electronic communication with emphasis on the number of antennas and their diameters; and (4) the number of beams, traffic growth, and projections and frequencies were found feasible to package a deployable solar array which could generate over 250 kilowatts of electrical power. Also, it was found that the linear-shaped erectable structure is better for ease of construction and installation of systems, and compares favorably on several other counts. The study of electronic communication technology indicated that proliferation of individual satellites will crowd the spectrum by the early 1990's, so that there will be a strong tendency toward a small number of communications platforms over the continental U.S.A. with many antennas and multiple spot beams.
Walter, James S; Posluszny, Joseph; Dieter, Raymond; Dieter, Robert S; Sayers, Scott; Iamsakul, Kiratipath; Staunton, Christine; Thomas, Donald; Rabbat, Mark; Singh, Sanjay
2018-05-01
To optimize maximal respiratory responses with surface stimulation over abdominal and upper thorax muscles and using a 12-Channel Neuroprosthetic Platform. Following instrumentation, six anesthetized adult canines were hyperventilated sufficiently to produce respiratory apnea. Six abdominal tests optimized electrode arrangements and stimulation parameters using bipolar sets of 4.5 cm square electrodes. Tests in the upper thorax optimized electrode locations, and forelimb moment was limited to slight-to-moderate. During combined muscle stimulation tests, the upper thoracic was followed immediately by abdominal stimulation. Finally, a model of glottal closure for cough was conducted with the goal of increased peak expiratory flow. Optimized stimulation of abdominal muscles included three sets of bilateral surface electrodes located 4.5 cm dorsal to the lateral line and from the 8 th intercostal space to caudal to the 13 th rib, 80 or 100 mA current, and 50 Hz stimulation frequency. The maximal expired volume was 343 ± 23 ml (n=3). Optimized upper thorax stimulation included a single bilateral set of electrodes located over the 2 nd interspace, 60 to 80 mA, and 50 Hz. The maximal inspired volume was 304 ± 54 ml (n=4). Sequential stimulation of the two muscles increased the volume to 600 ± 152 ml (n=2), and the glottal closure maneuver increased the flow. Studies in an adult canine model identified optimal surface stimulation methods for upper thorax and abdominal muscles to induce sufficient volumes for ventilation and cough. Further study with this neuroprosthetic platform is warranted.
Immunohistochemistry for predictive biomarkers in non-small cell lung cancer.
Mino-Kenudson, Mari
2017-10-01
In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turn-around-time and is more cost efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays including the performance of multiple antibody clones, pros and cons of IHC platforms and various scoring systems to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC.
JacksonBot - Design, Simulation and Optimal Control of an Action Painting Robot
NASA Astrophysics Data System (ADS)
Raschke, Michael; Mombaur, Katja; Schubert, Alexander
We present the robotics platform JacksonBot which is capable to produce paintings inspired by the Action Painting style of Jackson Pollock. A dynamically moving robot arm splashes color from a container at the end effector on the canvas. The paintings produced by this platform rely on a combination of the algorithmic generation of robot arm motions with random effects of the splashing color. The robot can be considered as a complex and powerful tool to generate art works programmed by a user. Desired end effector motions can be prescribed either by mathematical functions, by point sequences or by data glove motions. We have evaluated the effect of different shapes of input motions on the resulting painting. In order to compute the robot joint trajectories necessary to move along a desired end effector path, we use an optimal control based approach to solve the inverse kinematics problem.
MagLIF Pre-Heat Optimization on the PECOS Surrogacy Platform
NASA Astrophysics Data System (ADS)
Geissel, Matthias; Harvey-Thompson, A. J.; Ampleford, D.; Awe, T. J.; Bliss, D. E.; Glinsky, M. E.; Gomez, M. R.; Harding, E.; Hansen, S. B.; Jennings, C.; Kimmel, M. W.; Knapp, P. F.; Lewis, S. M.; Peterson, K.; Rochau, G. A.; Schollmeier, M.; Schwarz, J.; Shores, J. E.; Slutz, S. A.; Sinars, D. B.; Smith, I. C.; Speas, C. S.; Vesey, R. A.; Weis, M. R.; Porter, J. L.
2017-10-01
Sandia's MagLIF Program is using the PECOS target area as a platform to optimize the coupling of laser energy into the fuel. After developing laser pulse shapes that reduced SBS and improved energy deposition (presented last year), we will report on the effect on integrated experiments with Z. Despite encouraging results, questions remained about the equivalency of He, (PECOS studies), versus D2 (Z). Furthermore, simulations imply that the goal of at least 1 kJ in the fuel will require higher pressures, requiring a re-design of the gas delivery system. We will present recent results for backscatter measurements and energy deposition profiles in 60 psi and 90 psi deuterium fills and compare them to previously studied helium fills. Sandia National Labs is managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a subsidiary of Honeywell International, Inc., for U.S. DoE/NNSA under contract DE-NA0003525.
Immunohistochemistry for predictive biomarkers in non-small cell lung cancer
2017-01-01
In the era of targeted therapy, predictive biomarker testing has become increasingly important for non-small cell lung cancer. Of multiple predictive biomarker testing methods, immunohistochemistry (IHC) is widely available and technically less challenging, can provide clinically meaningful results with a rapid turn-around-time and is more cost efficient than molecular platforms. In fact, several IHC assays for predictive biomarkers have already been implemented in routine pathology practice. In this review, we will discuss: (I) the details of anaplastic lymphoma kinase (ALK) and proto-oncogene tyrosine-protein kinase ROS (ROS1) IHC assays including the performance of multiple antibody clones, pros and cons of IHC platforms and various scoring systems to design an optimal algorithm for predictive biomarker testing; (II) issues associated with programmed death-ligand 1 (PD-L1) IHC assays; (III) appropriate pre-analytical tissue handling and selection of optimal tissue samples for predictive biomarker IHC. PMID:29114473
Wu, Xin; Koslowski, Axel; Thiel, Walter
2012-07-10
In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
Panjikar, Santosh; Parthasarathy, Venkataraman; Lamzin, Victor S; Weiss, Manfred S; Tucker, Paul A
2005-04-01
The EMBL-Hamburg Automated Crystal Structure Determination Platform is a system that combines a number of existing macromolecular crystallographic computer programs and several decision-makers into a software pipeline for automated and efficient crystal structure determination. The pipeline can be invoked as soon as X-ray data from derivatized protein crystals have been collected and processed. It is controlled by a web-based graphical user interface for data and parameter input, and for monitoring the progress of structure determination. A large number of possible structure-solution paths are encoded in the system and the optimal path is selected by the decision-makers as the structure solution evolves. The processes have been optimized for speed so that the pipeline can be used effectively for validating the X-ray experiment at a synchrotron beamline.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
Cho, Hakyung; Lee, Joo Hyeon
2015-09-01
Smart clothing is a sort of wearable device used for ubiquitous health monitoring. It provides comfort and efficiency in vital sign measurements and has been studied and developed in various types of monitoring platforms such as T-shirt and sports bra. However, despite these previous approaches, smart clothing for electrocardiography (ECG) monitoring has encountered a serious shortcoming relevant to motion artifacts caused by wearer movement. In effect, motion artifacts are one of the major problems in practical implementation of most wearable health-monitoring devices. In the ECG measurements collected by a garment, motion artifacts are usually caused by improper location of the electrode, leading to lack of contact between the electrode and skin with body motion. The aim of this study was to suggest a design for ECG-monitoring clothing contributing to reduction of motion artifacts. Based on the clothing science theory, it was assumed in this study that the stability of the electrode in a dynamic state differed depending on the electrode location in an ECG-monitoring garment. Founded on this assumption, effects of 56 electrode positions were determined by sectioning the surface of the garment into grids with 6 cm intervals in the front and back of the bodice. In order to determine the optimal locations of the ECG electrodes from the 56 positions, ECG measurements were collected from 10 participants at every electrode position in the garment while the wearer was in motion. The electrode locations indicating both an ECG measurement rate higher than 80.0 % and a large amplitude during motion were selected as the optimal electrode locations. The results of this analysis show four electrode locations with consistently higher ECG measurement rates and larger amplitudes amongst the 56 locations. These four locations were abstracted to be least affected by wearer movement in this research. Based on this result, a design of the garment-formed ECG monitoring platform reflecting the optimal positions of the electrode was suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Munk, Jeffrey D; Gehl, Anthony C
2015-06-01
A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed onmore » June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study« less
MSIX - A general and user-friendly platform for RAM analysis
NASA Astrophysics Data System (ADS)
Pan, Z. J.; Blemel, Peter
The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independency and portability make them generally applicable to various analysis tasks.
Silveira, Augusta; Gonçalves, Joaquim; Sequeira, Teresa; Ribeiro, Cláudia; Lopes, Carlos; Monteiro, Eurico; Pimentel, Francisco Luís
2011-12-01
Quality of Life is a distinct and important emerging health focus, guiding practice and research. The routine Quality of Life evaluation in clinical, economic, and epidemiological studies and in medical practice promises a better Quality of Life and improved health resources optimization. The use of information technology and a Knowledge Management System related to Quality of Life assessment is essential to routine clinical evaluation and can define a clinical research methodology that is more efficient and better organized. In this paper, a Validation Model using the Quality of Life informatics platform is presented. Portuguese PC-software using European Organization for Research and Treatment of Cancer questionnaires (EORTC-QLQ C30 and EORTC-H&N35), is compared with the original paper-pen approach in the Quality of Life monitoring of head and neck cancer patients. The Quality of Life informatics platform was designed specifically for this study with a simple and intuitive interface that ensures confidentiality while providing Quality of Life evaluation for all cancer patients. For the Validation Model, the sample selection was random. Fifty-four head and neck cancer patients completed 216 questionnaires (108 using the informatics platform and 108 using the original paper-pen approach) with a one-hour interval in between. Patient preferences and computer experience were registered. Quality of Life informatics platform showed high usability as a user-friendly tool. This informatics platform allows data collection by auto-reply, database construction, and statistical data analysis and also facilitates the automatic listing of the questionnaires. When comparing the approaches (Wilcoxon test by item, percentile distribution and Cronbach's alpha), most of the responses were similar. Most of the patients (53.6%) reported a preference for the software version. The Quality of Life informatics platform has revealed to be a powerful and effective tool, allowing a real time analysis of Quality of Life data. Computer-based quality-of-life monitoring in head and neck cancer patients is essential to get clinically meaningful data that can support clinical decisions, identify potential needs, and support a stepped-care model. This represents a fundamental step for routine Quality of Life implementation in the Oncology Portuguese Institute (IPO-Porto), ORL and C&P department services clinical practice. Finally, we propose a diagram of diagnostic performance, considerating the generalized lack of mycological diagnosis in Portugal, which emphasizes the need for a careful history, focused on quantifying the latency period.