Strengthening the revenue cycle: a 4-step method for optimizing payment.
Clark, Jonathan J
2008-10-01
Four steps for enhancing the revenue cycle to ensure optimal payment are:
* Establish key performance indicator dashboards in each department that compare current with targeted performance;
* Create proper organizational structures for each department;
* Ensure that high-performing leaders are hired in all management and supervisory positions;
* Implement efficient processes in underperforming operations.
Optimization design of turbo-expander gas bearing for a 500W helium refrigerator
NASA Astrophysics Data System (ADS)
Li, S. S.; Fu, B.; Zhang, Q. Y.
2017-12-01
The turbo-expander is the core machinery of a helium refrigerator, and the gas bearing that supports its rotor is the key technology governing turbo-expander design. Sound design and performance study of the gas bearing are essential to ensure the stability of the turbo-expander. In this paper, numerical simulation is used to analyze the performance of the gas bearing for a 500 W helium refrigerator turbine, and an optimized design of the gas bearing is obtained. The optimization results also guide the bearing's processing technology. Finally, turbine experiments verify that the gas bearing performs well and ensures stable operation of the turbine.
Remmelink, M; Sokolow, Y; Leduc, D
2015-04-01
Histopathology is key to the diagnosis and staging of lung cancer. This analysis requires tissue sampling from primary and/or metastatic lesions. The choice of sampling technique is intended to optimize diagnostic yield while avoiding unnecessarily invasive procedures. Recent developments in targeted therapy require increasingly precise histological and molecular characterization of the tumor. Therefore, pathologists must be economical with tissue samples to ensure that all the required analyses can be performed. More than ever, good communication among clinician, endoscopist or surgeon, and pathologist is essential, so that everyone involved in lung cancer diagnosis collaborates to obtain the appropriate number and type of biopsies, with the appropriate tissue sampling treatment. This allows all the necessary analyses to be performed, leading to a more precise characterization of the tumor and thus optimal treatment for patients with lung cancer.
Mathew, Ribu; Sankar, A Ravi
2018-05-01
In this paper, we present the design and optimization of a rectangular piezoresistive composite silicon dioxide nanocantilever sensor. Unlike the conventional design approach, we optimize the sensor by considering not only its electro-mechanical response but also the impact of self-heating-induced thermal drift on its terminal characteristics. Through extensive simulations, we first quantify the inaccuracies caused by the self-heating effect as influenced by the geometrical and intrinsic parameters of the piezoresistor. Then, by optimizing the ratio of electrical sensitivity to thermal sensitivity, defined as the sensitivity ratio (υ), we improve the sensor's performance and measurement reliability. Results show that to ensure υ ≥ 1, shorter and wider piezoresistors are better. In addition, contrary to the common belief that a high piezoresistor doping concentration reduces thermal sensitivity in piezoresistive sensors, we find that to ensure υ ≥ 1 the doping concentration (p) should be in the range 1E18 cm-3 ≤ p ≤ 1E19 cm-3. Finally, we provide a set of design guidelines that will help NEMS engineers optimize the performance of such sensors for chemical and biological sensing applications.
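The design criterion above can be stated compactly. A minimal formulation in notation of my own choosing (S_E for the electrical sensitivity, S_T for the thermal sensitivity reported in the abstract):

```latex
\upsilon \;=\; \frac{S_E}{S_T} \;\ge\; 1,
\qquad
1\times10^{18}\,\mathrm{cm}^{-3} \;\le\; p \;\le\; 1\times10^{19}\,\mathrm{cm}^{-3}
```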
Intelligent Optimization of Modulation Indexes in Unified Tracking and Communication System
NASA Astrophysics Data System (ADS)
Yang, Wei-wei; Cong, Bo; Huang, Qiong; Zhu, Li-wei
2016-02-01
In a unified tracking and communication system, the ranging signal and the telemetry and communication signals share the same channel. In the link budget, power must therefore be allocated among them reasonably, so as to ensure system performance while reducing cost. In this paper, this nonlinear optimization problem is studied using an intelligent optimization method. Simulation results show that the proposed method is effective.
Meuter, Renata F I; Lacherez, Philippe F
2016-03-01
We aimed to assess the impact of task demands and individual characteristics on threat detection in baggage screeners. Airport security staff work under time constraints to ensure optimal threat detection. Understanding the impact of individual characteristics and task demands on performance is vital to ensure accurate threat detection. We examined threat detection in baggage screeners as a function of event rate (i.e., number of bags per minute) and time on task across 4 months. We measured performance in terms of the accuracy of detection of Fictitious Threat Items (FTIs) randomly superimposed on X-ray images of real passenger bags. Analyses of the percentage of correct FTI identifications (hits) show that longer shifts with high baggage throughput result in worse threat detection. Importantly, these significant performance decrements emerge within the first 10 min of these busy screening shifts only. Longer shift lengths, especially when combined with high baggage throughput, increase the likelihood that threats go undetected. Shorter shift rotations, although perhaps difficult to implement during busy screening periods, would ensure more consistently high vigilance in baggage screeners and, therefore, optimal threat detection and passenger safety.
Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump
NASA Astrophysics Data System (ADS)
Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.
2017-04-01
In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated with the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.
Current performance of planter technology to support variable-rate seeding in the Southern US
USDA-ARS?s Scientific Manuscript database
Advances in planting technology are expanding opportunities to vary seeding rates on-the-go. Variable-rate seeding can help maximize overall profits by matching optimal planting rates to field production variability. An important aspect of variable-rate seeding is ensuring peak performance of the pl...
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During model building, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then designed to control the fuel utilization ratio, the air excess ratio, and the stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.
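A rough illustration of the Monte Carlo boundary-extraction step described above: sample many realizations of the uncertain parameter (here, load current) and take percentile envelopes as its boundaries. The nominal profile, noise model, and percentiles below are assumptions of mine, not the authors' SOFC model.

```python
import numpy as np

# Sample many realizations of an uncertain load-current profile and take
# percentile envelopes as the uncertainty boundaries (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 201)              # time grid [s]
nominal_load = 30.0 + 5.0 * np.sin(0.1 * t)   # nominal current [A], assumed

n_samples = 5000
realizations = nominal_load + rng.normal(0.0, 1.5, size=(n_samples, t.size))

# A robust optimizer would then seek operating points that maximize
# efficiency for any load current falling inside these envelopes.
lower = np.percentile(realizations, 2.5, axis=0)
upper = np.percentile(realizations, 97.5, axis=0)
print(f"load bounds at t=0: [{lower[0]:.1f}, {upper[0]:.1f}] A")
```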
Advances in traffic data collection and management : white paper.
DOT National Transportation Integrated Search
2003-01-31
This white paper identifies innovative approaches for improving data quality through Quality Control. Quality Control emphasizes good data by ensuring selection of the most accurate detector and then optimizing detector system performance. This is contra...
NASA Technical Reports Server (NTRS)
Welstead, Jason
2014-01-01
This research focused on incorporating stability and control into a multidisciplinary design optimization of a Boeing 737-class advanced concept called the D8.2b. A new method of evaluating aircraft handling performance using quantitative evaluation of the system response to disturbances, including perturbations, continuous turbulence, and discrete gusts, is presented. A multidisciplinary design optimization was performed using the D8.2b transport aircraft concept. The configuration was optimized for minimum fuel burn using a design range of 3,000 nautical miles. Optimization cases were run using fixed tail volume coefficients, static trim constraints, and static trim and dynamic response constraints. A Cessna 182T model was used to test the various dynamic analysis components, ensuring the analysis behaved as expected. Results of the optimizations show that including stability and control in the design process drastically alters the optimal design, indicating that stability and control should be included in conceptual design to avoid system-level penalties later in the design process.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization, and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g., texture size, canvas resolution) and simulation parameters (e.g., simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
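A toy stand-in for the optimization phase, not the authors' mixed integer program: enumerate discrete rendering/simulation settings and keep the highest-quality combination whose estimated frame time fits the budget measured in the identification phase. The cost and quality models here are invented for illustration.

```python
from itertools import product

# Enumerate discrete settings; keep the best-quality feasible combination.
texture_sizes = [256, 512, 1024, 2048]            # px
canvas_resolutions = [(640, 480), (1280, 720), (1920, 1080)]
sim_nodes = [500, 1000, 2000]                     # simulation domain size

frame_budget_ms = 16.7                            # from identification phase

def frame_time_ms(tex, res, nodes):               # hypothetical cost model
    w, h = res
    return 1e-6 * w * h + 2e-3 * tex + 4e-3 * nodes

def quality(tex, res, nodes):                     # hypothetical quality model
    w, h = res
    return 0.4 * tex / 2048 + 0.3 * (w * h) / (1920 * 1080) + 0.3 * nodes / 2000

best = max(
    (c for c in product(texture_sizes, canvas_resolutions, sim_nodes)
     if frame_time_ms(*c) <= frame_budget_ms),
    key=lambda c: quality(*c),
)
print("selected settings:", best)
```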
Advanced CHP Control Algorithms: Scope Specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
NASA Astrophysics Data System (ADS)
Escartin, Terenz R.; Nano, Tomi F.; Cunningham, Ian A.
2016-03-01
The detective quantum efficiency (DQE), expressed as a function of spatial frequency, describes the ability of an x-ray detector to produce high signal-to-noise ratio (SNR) images. While regulatory and scientific communities have used the DQE as a primary metric for optimizing detector design, the DQE is rarely used by end users to ensure high system performance is maintained. Of concern is that image quality varies across different systems for the same exposures, with no current measures available to describe system performance. Therefore, we conducted an initial DQE measurement survey of clinical x-ray systems using a DQE-testing instrument to identify their range of performance. Following laboratory validation, experiments revealed that the DQE of five different systems under the same exposure level (8.0 μGy) ranged from 0.36 to 0.75 at low spatial frequencies, and from 0.02 to 0.4 at high spatial frequencies (3.5 cycles/mm). Furthermore, the DQE dropped substantially with decreasing detector exposure, by a factor of up to 1.5x at the lowest spatial frequency and a factor of 10x at 3.5 cycles/mm, due to the effect of detector readout noise. It is concluded that DQE specifications in purchasing decisions, combined with periodic DQE testing, are important factors to ensure patients receive the health benefits of high-quality images for low x-ray exposures.
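For reference, the DQE is the frequency-dependent squared SNR transfer from input to output; one common measurement form (my notation, with q̄ the incident photon fluence, d̄ the mean detector signal, MTF the modulation transfer function, and NPS the noise power spectrum) is:

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
\;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\;\mathrm{NPS}(f)}
```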
Joint optimization: Merging a new culture with a new physical environment.
Stichler, Jaynelle F; Ecoff, Laurie
2009-04-01
Nearly $200 billion of healthcare construction is expected by the year 2015, and nurse leaders must expand their knowledge and capabilities in healthcare design. This bimonthly department prepares nurse leaders to use the evidence-based design process to ensure that new, expanded, and renovated hospitals facilitate optimal patient outcomes, enhance the work environment for healthcare providers, and improve organizational performance. In this article, the authors discuss the concept of joint optimization of merging organizational culture with a new hospital facility.
ERIC Educational Resources Information Center
Pociask, Sarah; Gross, David; Shih, Mei-Yau
2017-01-01
The literature on team-based learning emphasizes the importance of team composition and team design, and it is recommended that instructors organize teams to ensure diversity of team members and optimal team performance. But does the method of team formation actually impact student performance? The goal of the present study was to examine whether…
Elevating the role of finance at Mary Lanning Healthcare.
Hoffman, Amanda; Spence, Jay
2013-11-01
To effectively partner with hospital operations leaders, healthcare finance leaders should: Streamline and align financial planning and budgeting functions across the organization; Ensure capital planning is regarded as a strategic process; Optimize performance monitoring across management levels.
Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications
NASA Technical Reports Server (NTRS)
Aldrich, Jack B.; Okon, Avi B.
2012-01-01
The need to maintain optimal energy efficiency is critical during the drilling operations performed on current and future planetary rover missions. Specifically, this innovation seeks to solve the following problem: given a spring-loaded percussive drill driven by a voice-coil motor, determine the optimal input voltage waveform (a periodic function) and the optimal hammering period that minimize the dissipated energy, while ensuring that the hammer-to-rock impacts are made with sufficient (user-defined) impact velocity (or impact energy). To solve this problem, it was first observed that when voice-coil-actuated percussive drills are driven at high power, it is of paramount importance to ensure that the electrical current of the device remains in phase with the velocity of the hammer. Otherwise, negative work is performed and the drill experiences a loss of performance (i.e., reduced impact energy) and an increase in Joule heating (i.e., a reduction in energy efficiency). This observation has motivated many drilling products to incorporate the standard bang-bang control approach for driving their percussive drills. However, the bang-bang approach is significantly less efficient than the optimal energy-efficient control approach solved here. To obtain this solution, the standard tools of classical optimal control theory were applied. These tools inherently require the solution of a two-point boundary value problem (TPBVP), i.e., a system of differential equations in which half the equations have unknown boundary conditions. Typically, the TPBVP is impossible to solve analytically for high-dimensional dynamic systems. However, for the spring-loaded vibro-impactor, this approach yields the exact optimal control solution as the sum of four analytic functions whose coefficients are determined using a simple, easy-to-implement algorithm. Once the optimal control waveform is determined, it can be used in the context of both open-loop and closed-loop control modes (using standard real-time control hardware).
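A minimal numerical illustration of the phase argument above (not the authors' optimal waveform): with voice-coil force F = k_f i(t), the work done on the hammer per cycle is the integral of k_f i(t) v(t) dt, so any out-of-phase current component does negative work and ends up as Joule heat. The force constant and sinusoidal velocity are assumed values.

```python
import numpy as np

# Work per cycle versus current/velocity phase offset for a toy voice-coil.
k_f = 10.0                        # motor force constant [N/A], assumed
t = np.linspace(0.0, 1.0, 10001)  # one hammer period [s], assumed
v = np.sin(2 * np.pi * t)         # hammer velocity [m/s], assumed sinusoid

for phase_deg in (0, 45, 90, 135, 180):
    i = np.sin(2 * np.pi * t + np.deg2rad(phase_deg))  # coil current [A]
    work = float(np.sum(k_f * i[:-1] * v[:-1] * np.diff(t)))
    print(f"phase {phase_deg:3d} deg -> work per cycle {work:+.2f} J")
```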
Optimization of a pressure control valve for high power automatic transmission considering stability
NASA Astrophysics Data System (ADS)
Jian, Hongchao; Wei, Wei; Li, Hongcai; Yan, Qingdong
2018-02-01
The pilot-operated electrohydraulic clutch-actuator system is widely used in high power automatic transmissions because of the demand for large flow rates and its excellent pressure-regulating capability. However, inappropriate system parameters can give rise to self-excited vibration, induced by the inherently nonlinear valve-spool motion coupled with the fluid dynamics, which causes sustained instability in the system and leads to unexpected performance deterioration and hardware damage. To ensure a stable and fast response of the clutch actuator system, an optimal design method for the pressure control valve considering stability is proposed in this paper. A nonlinear dynamic model of the clutch actuator system is established based on the motion of the valve spool and the coupled fluid dynamics in the system. The stability boundary in the parameter space is obtained by numerical stability analysis. The sensitivity of the stability boundary and the output pressure response time to the valve parameters is identified using a design of experiments (DOE) approach. The pressure control valve is then optimized using a particle swarm optimization (PSO) algorithm with the stability boundary as a constraint. The simulation and experimental results reveal that the proposed optimization method improves the response characteristics while ensuring the stability of the clutch actuator system during the entire gear shift process.
NASA Astrophysics Data System (ADS)
Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah
2017-04-01
Radio Frequency Identification (RFID) systems offer multiple benefits that can improve the operational efficiency of an organization: the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient database updates. Often, multiple readers are needed in an RFID installation, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, fast simulation times, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and yield poorer optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the number of iterations. In addition, the study recommends the most effective settings for both parameters, namely 200 for the number of iterations and 800 for the swarm size. Finally, the results of this study will enable PSO to operate more efficiently when optimizing RFID network planning.
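A toy experiment in this spirit, not the authors' setup: a plain PSO places readers to maximize tag coverage, with swarm size and iteration count exposed as the two tuning parameters under study. Tag layout, reader range, and PSO coefficients are illustrative assumptions.

```python
import numpy as np

# PSO over reader coordinates; objective = fraction of tags within range.
rng = np.random.default_rng(1)
tags = rng.uniform(0.0, 100.0, size=(60, 2))   # tag positions [m], assumed
n_readers, reader_range = 4, 20.0              # assumed problem setup

def coverage(x):
    readers = x.reshape(n_readers, 2)
    d = np.linalg.norm(tags[:, None, :] - readers[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= reader_range))

def pso(n_swarm, n_iter, w=0.7, c1=1.5, c2=1.5):
    dim = 2 * n_readers
    x = rng.uniform(0.0, 100.0, (n_swarm, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([coverage(p) for p in x])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 100.0)
        f = np.array([coverage(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return coverage(g)

# Small settings versus settings like the recommended ones (swarm 800, 200 iters).
for n_swarm, n_iter in [(20, 50), (800, 200)]:
    print(f"swarm={n_swarm:4d} iters={n_iter:4d} coverage={pso(n_swarm, n_iter):.2f}")
```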
NASA Astrophysics Data System (ADS)
Wang, Fengwen; Jensen, Jakob S.; Sigmund, Ole
2012-10-01
Photonic crystal waveguides are optimized for modal confinement and loss related to slow light with high group index. A detailed comparison between optimized circular-hole based waveguides and optimized waveguides with free topology is performed. Design robustness with respect to manufacturing imperfections is enforced by considering different design realizations generated from under-, standard- and over-etching processes in the optimization procedure. A constraint ensures a certain modal confinement, and loss related to slow light with high group index is indirectly treated by penalizing field energy located in air regions. It is demonstrated that slow light with a group index up to ng = 278 can be achieved by topology optimized waveguides with promising modal confinement and restricted group-velocity-dispersion. All the topology optimized waveguides achieve a normalized group-index bandwidth of 0.48 or above. The comparisons between circular-hole based designs and topology optimized designs illustrate that the former can be efficient for dispersion engineering but that larger improvements are possible if irregular geometries are allowed.
Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.
Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas
2011-06-24
This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, providing the capability to improve transient performance while fulfilling product quality specifications. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
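A toy version of the impulsive burn-parameter optimization, not the paper's dynamics: altitude decays at a constant rate between burns, each burn at a fixed epoch raises altitude in proportion to its ΔV, and total ΔV is minimized subject to a minimum-altitude constraint. All numbers are assumed.

```python
import numpy as np
from scipy.optimize import minimize

decay = 0.5                      # altitude loss [km/day], assumed
gain = 5.0                       # altitude gain per unit delta-v [km per m/s]
days, h0, h_min = 60, 410.0, 400.0
burn_days = np.array([15.0, 30.0, 45.0])      # fixed burn epochs [day]

def altitude_profile(dv):
    t = np.arange(days + 1.0)
    h = h0 - decay * t
    for tb, d in zip(burn_days, dv):
        h[t >= tb] += gain * d                # each burn raises later altitudes
    return h

# Inequality constraint: altitude must stay above the threshold at all times.
cons = {"type": "ineq", "fun": lambda dv: altitude_profile(dv) - h_min}
res = minimize(lambda dv: np.sum(dv), x0=np.full(3, 2.0), method="SLSQP",
               bounds=[(0.0, None)] * 3, constraints=[cons])
print("delta-v per burn [m/s]:", np.round(res.x, 2),
      "| total:", round(float(np.sum(res.x)), 2))
```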
NASA Astrophysics Data System (ADS)
Wallis, Eric; Griffin, Todd M.; Popkie, Norm, Jr.; Eagan, Michael A.; McAtee, Robert F.; Vrazel, Danet; McKinly, Jim
2005-05-01
Ion Mobility Spectrometry (IMS) is the most widespread detection technique in use by the military for the detection of chemical warfare agents, explosives, and other threat agents. Moreover, its role in homeland security and force protection has expanded due, in part, to its good sensitivity, low power, light weight, and reasonable cost. With the increased use of IMS systems as continuous monitors, it becomes necessary to develop tools and methodologies to ensure optimal performance over a wide range of conditions and extended periods of time. Namely, instrument calibration is needed to ensure proper sensitivity and to correct for matrix or environmental effects. We have developed methodologies to deal with the semi-quantitative nature of IMS that allow us to generate response curves providing a gauge of instrument performance and maintenance requirements. This instrumentation communicates with the IMS systems via a software interface that was developed in-house. The software measures system response, logs information to a database, and generates the response curves. This paper discusses the instrumentation, software, data collected, and initial results from fielded systems.
Microgrid Enabled Distributed Energy Solutions (MEDES) Fort Bliss Military Reservation
2014-02-01
...controller, algorithms perform power flow analysis, short-term optimization, and long-term forecasted planning. The power flow analysis ensures...renewable photovoltaic power and energy storage in this microgrid configuration, the available mission operational time of the backup generator can be...
Characterization and multivariate analysis of physical properties of processing peaches
USDA-ARS?s Scientific Manuscript database
Characterization of physical properties of fruits represents the first vital step to ensure optimal performance of fruit processing operations and is also a prerequisite in the development of new processing equipment. In this study, physical properties of engineering significance to processing of th...
Bioreactor performance: a more scientific approach for practice.
Lübbert, A; Bay Jørgensen, S
2001-02-13
In practice, the performance of a biochemical conversion process, i.e. the bioreactor performance, is essentially determined by the benefit/cost ratio. The benefit is generally defined in terms of the amount of the desired product produced and its market price. Cost reduction is the major objective in biochemical engineering. There are two essential engineering approaches to minimizing the cost of creating a particular product in an existing plant. One is to find a control path or operational procedure that optimally uses the dynamics of the process and copes with the many constraints restricting production. The other is to remove or lower the constraints by constructive improvements of the equipment and/or the microorganisms. This paper focuses on the first approach, dealing with optimization of the operational procedure and the measures by which one can ensure that the process adheres to the predetermined path. In practice, feedforward control is the predominant control mode applied. However, as it is frequently inadequate for optimal performance, feedback control may also be employed. Relevant aspects of such performance optimization are discussed.
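A minimal sketch of the operating idea above: a precomputed feedforward feed profile drives the process along the planned path, and a proportional feedback term corrects for plant-model mismatch so the process adheres to that path. The one-state substrate balance and all gains are assumptions of mine, not a validated bioreactor model.

```python
# Feedforward feed profile plus proportional feedback on measured substrate.
dt, n_steps = 0.1, 500                    # step [h], number of steps
s_ref = 2.0                               # substrate setpoint [g/L], assumed
uptake_nominal, uptake_actual = 1.0, 1.2  # model vs plant uptake [g/(L h)]
kp = 2.0                                  # feedback gain, assumed

s = s_ref
for _ in range(n_steps):
    u_ff = uptake_nominal                      # feedforward: nominal balance
    u = max(u_ff + kp * (s_ref - s), 0.0)      # add feedback correction
    s += dt * (u - uptake_actual)              # substrate balance
# Feedforward alone would drift at -0.2 g/(L h); feedback holds the process
# near the setpoint (a small offset remains; integral action would remove it).
print(f"final substrate: {s:.2f} g/L (setpoint {s_ref:.1f})")
```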
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets such as renewable energy resources (e.g., wind, solar) and energy storage can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from the significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to incorporate feedback and look-ahead capability to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy that considers multiple time scales is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example, with realistic scenarios on a simulated isolated microgrid test system, suggests that the proposed approach is generalizable toward designing multi-time scale optimal coordination strategies for isolated power systems.
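A compact moving-horizon dispatch sketch in the spirit of the strategy above: at each step, enumerate dispatchable-generation and battery actions over a short look-ahead against a renewable forecast, apply only the first action, then slide the horizon forward. All profiles, costs, and limits below are illustrative assumptions, not the paper's microgrid model.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
T, H = 24, 4                                   # total steps, look-ahead length
load = 5.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))
renew = np.clip(3.0 + rng.normal(0.0, 1.0, T), 0.0, None)

gen_levels = (0.0, 2.0, 4.0, 6.0)              # dispatchable generation [kW]
bat_levels = (-2.0, 0.0, 2.0)                  # battery power [kW], + = discharge
soc, soc_max = 4.0, 8.0                        # battery state of charge [kWh]
fuel_cost, shed_cost = 1.0, 50.0               # cost weights, assumed

for t in range(T):
    forecast = renew[t:t + H]                  # stand-in for a real forecast
    best_plan, best_cost = None, np.inf
    for plan in product(gen_levels, bat_levels, repeat=len(forecast)):
        g, b = plan[0::2], plan[1::2]
        s, cost = soc, 0.0
        for k in range(len(forecast)):
            s = min(max(s - b[k], 0.0), soc_max)        # clamped energy update
            supply = g[k] + b[k] + forecast[k]
            cost += fuel_cost * g[k] + shed_cost * max(load[t + k] - supply, 0.0)
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    soc = min(max(soc - best_plan[1], 0.0), soc_max)    # apply first action only
print(f"final battery state of charge: {soc:.1f} kWh")
```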
Zimmerman, Janice L; Sprung, Charles L
2010-04-01
To provide recommendations and standard operating procedures for intensive care unit and hospital preparations for an influenza pandemic or mass disaster with a specific focus on ensuring that adequate resources are available and appropriate protocols are developed to safely perform procedures in patients with and without influenza illness. Based on a literature review and expert opinion, a Delphi process was used to define the essential topics including performing medical procedures. Key recommendations include: (1) specify high-risk procedures (aerosol generating-procedures); (2) determine if certain procedures will not be performed during a pandemic; (3) develop protocols for safe performance of high-risk procedures that include appropriateness, qualifications of personnel, site, personal protection equipment, safe technique and equipment needs; (4) ensure adequate training of personnel in high-risk procedures; (5) procedures should be performed at the bedside whenever possible; (6) ensure safe respiratory therapy practices to avoid aerosols; (7) provide safe respiratory equipment; and (8) determine criteria for cancelling and/or altering elective procedures. Judicious planning and adoption of protocols for safe performance of medical procedures are necessary to optimize outcomes during a pandemic.
Neon reduction program on Cymer ArF light sources
NASA Astrophysics Data System (ADS)
Kanawade, Dinesh; Roman, Yzzer; Cacouris, Ted; Thornes, Josh; O'Brien, Kevin
2016-03-01
In response to significant neon supply constraints, Cymer has developed a multi-part plan to support its customers. Cymer's primary objective is to ensure that reliable system performance is maintained while minimizing gas consumption. Gas algorithms were optimized to ensure stable performance across all operating conditions. The Cymer neon support plan contains four elements: (1) a gas reduction program to reduce neon use by >50% while maintaining existing performance levels and availability; (2) short-term containment solutions for immediate relief; (3) qualification of additional gas suppliers; and (4) a long-term recycling/reclaim opportunity. The neon reduction program has shown excellent results, as demonstrated by comparing standard gas use with the new >50%-reduced-neon performance for ArF immersion light sources. Testing included stressful conditions such as repetition rate, duty cycle, and energy target changes. No performance degradation has been observed over typical gas lives.
A Wireless Communications Laboratory on Cellular Network Planning
ERIC Educational Resources Information Center
Dawy, Z.; Husseini, A.; Yaacoub, E.; Al-Kanj, L.
2010-01-01
The field of radio network planning and optimization (RNPO) is central for wireless cellular network design, deployment, and enhancement. Wireless cellular operators invest huge sums of capital on deploying, launching, and maintaining their networks in order to ensure competitive performance and high user satisfaction. This work presents a lab…
Nova Scotia Teachers' ADHD Knowledge, Beliefs, and Classroom Management Practices
ERIC Educational Resources Information Center
Blotnicky-Gallant, Pamela; Martin, Cheron; McGonnell, Melissa; Corkum, Penny
2015-01-01
Attention-deficit/hyperactivity disorder (ADHD) has a significant impact on children's social, emotional, and academic performance in school, and as such, teachers are in a good position to provide evidence-based interventions to help ensure optimal adjustment of their students. The current study examined teachers' knowledge and beliefs about…
Considerations on the Optimal and Efficient Processing of Information-Bearing Signals
ERIC Educational Resources Information Center
Harms, Herbert Andrew
2013-01-01
Noise is a fundamental hurdle that impedes the processing of information-bearing signals, specifically the extraction of salient information. Processing that is both optimal and efficient is desired; optimality ensures the extracted information has the highest fidelity allowed by the noise, while efficiency ensures limited resource usage. Optimal…
Bravo, Teresa; Maury, Cédric; Pinhède, Cédric
2013-11-01
Theoretical and experimental results are presented on the sound absorption and transmission properties of multi-layer structures made up of thin micro-perforated panels (ML-MPPs). The objective is to improve both the absorption and insulation performance of ML-MPPs through impedance boundary optimization. A fully coupled modal formulation is introduced that predicts the effect of the structural resonances on the normal-incidence absorption coefficient and transmission loss of ML-MPPs. This model is assessed against standing wave tube measurements and simulations based on the impedance translation method for two double-layer MPP configurations of relevance in building acoustics and aeronautics. Optimal impedance relationships are proposed that ensure simultaneous maximization of both the absorption and the transmission loss under normal incidence. Exhaustive optimization of the double-layer MPPs is performed to assess the absorption and/or transmission performance with respect to the impedance criterion. It is investigated how the panel volumetric resonances modify the excess dissipation that can be achieved from non-modal optimization of ML-MPPs.
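For reference, under the normal-incidence condition discussed above, the absorption coefficient follows from the surface impedance via the standard plane-wave reflection relation (notation mine, with Z_s the surface impedance and ρ0c0 the characteristic impedance of air):

```latex
\alpha(\omega) \;=\; 1 \;-\; \left|\frac{Z_s(\omega)-\rho_0 c_0}{Z_s(\omega)+\rho_0 c_0}\right|^{2}
```

so α = 1 exactly when Z_s matches ρ0c0, which is the anchor point for optimal impedance relationships of the kind proposed above.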
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited on current imaging tools when standard illumination sources are used. Illumination optimization is a process in which the source shape is varied, in both profile and intensity distribution, to enhance the final image contrast compared with non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" like; they can be too complex to be directly applicable for production and lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet production-ready (manufacturability) requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include the minimum distance between poles, minimum opening pole angles, minimum ring width, and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades are used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal, production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other recent imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
2012-07-01
detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before...
Direct labeling of serum proteins by fluorescent dye for antibody microarray.
Klimushina, M V; Gumanova, N G; Metelskaya, V A
2017-05-06
Analysis of the serum proteome by antibody microarray is used to identify novel biomarkers and to study signaling pathways, including protein phosphorylation and protein-protein interactions. Labeling of serum proteins is important for optimal performance of the antibody microarray. Proper choice of fluorescent label and optimal concentration of protein loaded on the microarray ensure good-quality images that can be reliably scanned and processed by the software. We have optimized direct serum protein labeling with the fluorescent dye Arrayit Green 540 (Arrayit Corporation, USA) for antibody microarray. The optimized procedure produces high-quality images that can be readily scanned and used for statistical analysis of the protein composition of serum.
Initial Ares I Bending Filter Design
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Bedrossian, Nazareth; Hall, Robert; Norris, H. Lee; Hall, Charles; Jackson, Mark
2007-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output will be required to ensure control system stability and adequate performance. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The filter design methodology was based on a numerical constrained optimization approach to maximize stability margins while meeting performance requirements. The resulting bending filter designs achieved stability by adding lag to the first structural frequency and hence phase stabilizing the first Ares-I flex mode. To minimize rigid body performance impacts, a priority was placed via constraints in the optimization algorithm to minimize bandwidth decrease with the addition of the bending filters. The bending filters provided here have been demonstrated to provide a stable first stage control system in both the frequency domain and the MSFC MAVERIC time domain simulation.
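A rough sketch of the trade described above, not the MSFC design code: choose the bending-filter cutoff that most attenuates the first flex mode while limiting loss at the rigid-body control bandwidth, i.e., preserving bandwidth while stabilizing the flex mode. Both frequencies and the droop limit are assumptions.

```python
import numpy as np
from scipy import signal

f_ctrl, f_flex = 0.5, 3.0          # control bandwidth and first flex mode [Hz]

best = None
for fc in np.linspace(0.8, 2.5, 40):                  # candidate cutoffs [Hz]
    b, a = signal.butter(2, 2.0 * np.pi * fc, analog=True)
    w = 2.0 * np.pi * np.array([f_ctrl, f_flex])      # evaluation points [rad/s]
    _, h = signal.freqs(b, a, worN=w)
    droop_db = -20.0 * np.log10(abs(h[0]))            # loss at control band
    atten_db = -20.0 * np.log10(abs(h[1]))            # loss at flex mode
    if droop_db <= 3.0 and (best is None or atten_db > best[1]):
        best = (fc, atten_db, droop_db)
print("cutoff %.2f Hz: %.1f dB at flex mode, %.2f dB droop" % best)
```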
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinational optimization with a user defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
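S4's core idea, scoring candidate sensor suites with a user-defined merit function over a combinatorial search, can be illustrated at toy scale. The fault-signature table, costs, and weights below are invented for illustration and are not from S4.

```python
from itertools import combinations

signatures = {                 # sensor -> set of faults it helps discriminate
    "EGT": {"turbine", "nozzle"},
    "N1":  {"fan", "compressor"},
    "P3":  {"compressor", "burner"},
    "WF":  {"burner", "turbine"},
    "T25": {"fan"},
}
cost = {"EGT": 3.0, "N1": 1.0, "P3": 2.0, "WF": 2.5, "T25": 1.0}

def merit(suite, cost_weight=0.1):
    # Diagnostic coverage minus weighted cost, a stand-in merit function.
    covered = set().union(*(signatures[s] for s in suite))
    return len(covered) - cost_weight * sum(cost[s] for s in suite)

# Exhaustive search over all suites of a fixed size (tractable at toy scale).
best = max(combinations(signatures, 3), key=merit)
print("selected suite:", best, "merit = %.2f" % merit(best))
```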
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
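A minimal illustration of the analytic-versus-finite-difference point above, on a stand-in function rather than the CEA equations: the analytic gradient is exact and costs roughly one extra evaluation's worth of work, while forward differences need one extra function call per input and carry truncation error.

```python
import numpy as np

def f(x):
    return np.sum(np.exp(x) * np.sin(x))

def grad_analytic(x):
    # Exact derivative of f, analogous to hand-coded analytic partials.
    return np.exp(x) * (np.sin(x) + np.cos(x))

def grad_fd(x, h=1e-6):
    # Forward differences: one function call per component of x.
    g = np.zeros_like(x)
    f0 = f(x)
    for i in range(x.size):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

x = np.linspace(0.1, 1.0, 8)
err = np.max(np.abs(grad_fd(x) - grad_analytic(x)))
print(f"max finite-difference error: {err:.2e}")
```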
Energy aware swarm optimization with intercluster search for wireless sensor network.
Thilagavathi, Shanmugasundaram; Geetha, Bhavani Gnanasambandan
2015-01-01
Wireless sensor networks (WSNs) are emerging as a low-cost, popular solution for many real-world challenges. The low cost enables deployment of large sensor arrays to perform military and civilian tasks. Generally, WSNs are power constrained due to their unique deployment method, which makes replacement of the battery source difficult. A key challenge in WSNs is a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovering the optimal number of clusters and cluster heads (CHs). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS.
Combes, Alain; Brodie, Daniel; Bartlett, Robert; Brochard, Laurent; Brower, Roy; Conrad, Steve; De Backer, Daniel; Fan, Eddy; Ferguson, Niall; Fortenberry, James; Fraser, John; Gattinoni, Luciano; Lynch, William; MacLaren, Graeme; Mercat, Alain; Mueller, Thomas; Ogino, Mark; Peek, Giles; Pellegrino, Vince; Pesenti, Antonio; Ranieri, Marco; Slutsky, Arthur; Vuylsteke, Alain
2014-09-01
The use of extracorporeal membrane oxygenation (ECMO) for severe acute respiratory failure (ARF) in adults is growing rapidly given recent advances in technology, even though there is controversy regarding the evidence justifying its use. Because ECMO is a complex, high-risk, and costly modality, at present it should be conducted in centers with sufficient experience, volume, and expertise to ensure it is used safely. This position paper represents the consensus opinion of an international group of physicians and associated health-care workers who have expertise in therapeutic modalities used in the treatment of patients with severe ARF, with a focus on ECMO. The aim of this paper is to provide physicians, ECMO center directors and coordinators, hospital directors, health-care organizations, and regional, national, and international policy makers a description of the optimal approach to organizing ECMO programs for ARF in adult patients. Importantly, this will help ensure that ECMO is delivered safely and proficiently, such that future observational and randomized clinical trials assessing this technique may be performed by experienced centers under homogeneous and optimal conditions. Given the need for further evidence, we encourage restraint in the widespread use of ECMO until we have a better appreciation for both the potential clinical applications and the optimal techniques for performing ECMO.
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) offer great potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety for drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process while optimizing fuel consumption. The whole process is a multi-objective optimal control problem, whose solution is not simply the addition of an energy management strategy (EMS) to adaptive cruise control (ACC), but a deep fusion of the two methods. The problem involves more constraints, objectives, and system states, which may result in a larger computing burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and several search techniques are adopted in the receding horizon optimization to reduce the computing burden. Simulations are carried out, and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
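A sketch of the receding-horizon idea above, not the authors' HEV model: over a short horizon, enumerate discrete acceleration sequences, score fuel use plus a gap-tracking penalty, prune unsafe branches, apply only the first action, and repeat. The fuel surrogate, weights, and limits are assumptions.

```python
import numpy as np
from itertools import product

accels = (-1.0, 0.0, 1.0)            # candidate accelerations [m/s^2]
H, dt = 3, 1.0                       # horizon steps, step length [s]

def fuel_rate(v, a):                 # hypothetical fuel-rate surrogate [g/s]
    return 0.1 + 0.02 * v + 0.3 * max(a, 0.0) ** 2

def rollout_cost(v, gap, v_lead, seq):
    cost = 0.0
    for a in seq:
        v = max(v + a * dt, 0.0)
        gap += (v_lead - v) * dt
        if gap < 5.0:                # prune unsafe branches outright
            return np.inf
        cost += fuel_rate(v, a) * dt + 0.01 * (gap - 20.0) ** 2
    return cost

v, gap, v_lead = 14.0, 30.0, 15.0    # ego speed, gap, lead speed (assumed)
for _ in range(20):                  # receding-horizon loop
    seq = min(product(accels, repeat=H),
              key=lambda s: rollout_cost(v, gap, v_lead, s))
    v = max(v + seq[0] * dt, 0.0)    # apply only the first action
    gap += (v_lead - v) * dt
print(f"final speed {v:.1f} m/s, gap {gap:.1f} m")
```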
ACT Payload Shroud Structural Concept Analysis and Optimization
NASA Technical Reports Server (NTRS)
Zalewski, Bart B.; Bednarcyk, Brett A.
2010-01-01
Aerospace structural applications demand weight-efficient designs that perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts previously determined to be efficient for such a structure are evaluated: a hat-stiffened/corrugated panel and a fiber-reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all-positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis, and the analysis/optimization loop is iterated to ensure a converged design. Sizing results for the hat-stiffened panel concept and the fiber-reinforced foam sandwich concept are presented.
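A toy version of the sizing loop described above (HyperSizer itself is commercial software; this is not its API): evaluate candidate sandwich panel designs against simplified strength and buckling margins for a panel-level running load, and keep the lightest design whose margins are all positive. Material numbers and formulas are deliberately simplified assumptions.

```python
import numpy as np
from itertools import product

Nx = 600e3                      # compressive running load [N/m], assumed
width = 1.0                     # panel width [m]
sigma_allow = 400e6             # facesheet allowable stress [Pa]
E, rho_face, rho_core = 70e9, 1600.0, 100.0

face_ts = [0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3]   # facesheet thickness [m]
core_ts = [10e-3, 20e-3, 30e-3]              # core thickness [m]

best = None
for tf, tc in product(face_ts, core_ts):
    sigma = Nx / (2 * tf)                        # facesheet stress
    D = E * tf * (tc + tf) ** 2 / 2              # sandwich bending stiffness
    N_crit = np.pi ** 2 * D / width ** 2         # wide-column buckling load
    margins = (sigma_allow / sigma - 1.0, N_crit / Nx - 1.0)
    weight = 2 * rho_face * tf + rho_core * tc   # areal mass [kg/m^2]
    if min(margins) > 0 and (best is None or weight < best[0]):
        best = (weight, tf, tc, margins)
print("lightest feasible: %.2f kg/m^2, tf=%.1f mm, tc=%.0f mm" %
      (best[0], best[1] * 1e3, best[2] * 1e3))
```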
[Rapid detection of caffeine in blood by freeze-out extraction].
Bekhterev, V N; Gavrilova, S N; Kozina, E P; Maslakov, I V
2010-01-01
A new method for the detection of caffeine in blood is proposed, based on the combination of extraction and freezing-out to eliminate the influence of the sample matrix. Metrological characteristics of the method are presented. Selectivity of detection is achieved through optimal conditions for high performance liquid chromatography analysis. The method is technically simple and cost-efficient, and it ensures rapid performance of the studies.
ERIC Educational Resources Information Center
Dysart, Joe
2008-01-01
Given Google's growing market share--69% of all searches by the close of 2007--it's absolutely critical for any school on the Web to ensure its site is Google-friendly. A Google-optimized site ensures that students and parents can quickly find one's district on the Web even if they don't know the address. Plus, good search optimization simply…
Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers
NASA Astrophysics Data System (ADS)
Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu
2017-10-01
Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.
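For orientation, OHAM deforms the nonlinear problem through an embedding parameter p from its linear part toward the full nonlinear operator; a schematic of the standard construction, written from memory and in my notation (L and N the linear and nonlinear parts, g the source term, and C_i the convergence-control constants fixed by minimizing the residual of the truncated series, which is what makes the convergence "optimally" controlled):

```latex
(1-p)\bigl[L(\phi(\tau,p)) + g(\tau)\bigr]
 \;=\; H(p)\bigl[L(\phi(\tau,p)) + g(\tau) + N(\phi(\tau,p))\bigr],
\qquad H(p) \;=\; C_1 p + C_2 p^{2} + \cdots
```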
NASA Astrophysics Data System (ADS)
Wang, Ping; Wu, Guangqiang
2013-03-01
Multidisciplinary design optimization (MDO) has gradually been adopted to balance lightweight, noise, vibration and harshness (NVH), and safety performance of instrument panel (IP) structures in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of polypropylene is studied. Impact simulation tests for the head and knee bolster are carried out to meet the regulations FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiments (DOE), response surface modeling (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.
Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan
2015-01-01
Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Ensuring complete coverage in the case of deep-seated tumors is not trivial and is best ensured by patient-specific treatment planning. The treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue, and canine brain) and algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The tool automatically builds a 3D model of the target tissue from medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. It enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and time consumption, from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it, which is crucial for expanding electroporation-based treatments in the clinic and ensuring reliable treatment for patients. An additional value of the tool is the possibility of easy upgrade and integration of modules with new functionalities as they are developed.
High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models
NASA Technical Reports Server (NTRS)
Manning, Valerie Michelle
1999-01-01
The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.
Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms
NASA Technical Reports Server (NTRS)
Colby, Mitchell; Knudson, Matthew D.; Tumer, Kagan
2014-01-01
Dynamic environments in which objectives and environmental features change with respect to time pose a difficult problem with regard to planning optimal paths. Path planning methods are typically computationally expensive and are often difficult to implement in real time if system objectives are changed. This computational problem is compounded when multiple agents are present in the system, as the state and action space grows exponentially with the number of agents. In this work, we use cooperative coevolutionary algorithms to develop policies which control agent motion in a dynamic multiagent unmanned aerial system environment, where goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy which maps the new environmental features to a trajectory for the agent, ensuring safe and reliable operation while providing 92% of the theoretically optimal performance.
MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm
Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.
2014-01-01
The problem of setting transmission rates in an ad hoc network consists of adjusting the power of each node so that the required signal-to-interference ratio (SIR) is ensured while the energy required to transmit from one node to another is obtained at the same time. An optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can therefore be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our proposal achieves a 10% improvement over the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that jointly addresses power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization. We thus show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
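The rate-selection step can be illustrated with a toy genetic algorithm: chromosomes encode per-node rate levels, fitness rewards total throughput, and any assignment that violates the SIR requirement is discarded. This is a minimal sketch with invented gains, rate levels and thresholds, not the authors' protocol.

```python
import random

N, POP, GENS = 8, 30, 100
RATES = [1, 2, 4, 8]                      # hypothetical discrete rate levels
random.seed(1)
gain = [[1.0 if i == j else random.uniform(0.01, 0.1) for j in range(N)]
        for i in range(N)]
NOISE, SIR_MIN = 0.05, 2.0

def fitness(chrom):
    # Transmit power assumed proportional to rate (perfect power control).
    p = [RATES[g] for g in chrom]
    total = 0.0
    for i in range(N):
        interf = sum(p[j] * gain[i][j] for j in range(N) if j != i)
        sir = p[i] * gain[i][i] / (interf + NOISE)
        if sir < SIR_MIN:
            return 0.0                    # infeasible: SIR not ensured
        total += p[i]
    return total

pop = [[random.randrange(len(RATES)) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]                # elitist selection
    children = []
    while len(children) < POP - len(elite):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]         # one-point crossover
        if random.random() < 0.1:         # mutation
            child[random.randrange(N)] = random.randrange(len(RATES))
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("best rates:", [RATES[g] for g in best], "throughput:", fitness(best))
```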
Topology optimization of 3D shell structures with porous infill
NASA Astrophysics Data System (ADS)
Clausen, Anders; Andreassen, Erik; Sigmund, Ole
2017-08-01
This paper presents a 3D topology optimization approach for designing shell structures with a porous or void interior. It is shown that the resulting structures are significantly more robust towards load perturbations than completely solid structures optimized under the same conditions. The study indicates that the potential benefit of using porous structures is higher for lower total volume fractions. Compared to earlier work dealing with 2D topology optimization, we found several new effects in 3D problems. Most notably, the opportunity for designing closed shells significantly improves the performance of porous structures due to the sandwich effect. Furthermore, the paper introduces improved filter boundary conditions to ensure a completely uniform coating thickness at the design domain boundary.
The role of physiology in the development of golf performance.
Smith, Mark F
2010-08-01
The attainment of consistent high performance in golf requires effective physical conditioning that is carefully designed and monitored in accordance with the on-course demands the player will encounter. Appreciating the role that physiology plays in the attainment of consistent performance, and how a player's physicality can inhibit performance progression, supports the notion that the application of physiology is fundamental for any player wishing to excel in golf. With cardiorespiratory, metabolic, hormonal, musculoskeletal and nutritional demands acting on the golfer within and between rounds, effective physical screening of a player will ensure physiological and anatomical deficiencies that may influence performance are highlighted. The application of appropriate golf-specific assessment methods will ensure that physical attributes that have a direct effect on golf performance can be measured reliably and accurately. With the physical development of golf performance being achieved through a process of conditioning with the purpose of inducing changes in structural and metabolic functions, training must focus on foundation whole-body fitness and golf-specific functional strength and flexibility activities. For long-term player improvement to be effective, comprehensive monitoring will ensure the player reaches an optimal physical state at predetermined times in the competitive season. Through continual assessment of a player's physical attributes, training effectiveness and suitability, and the associated adaptive responses, key physical factors that may impact most on performance success can be determined.
Approximation algorithms for scheduling unrelated parallel machines with release dates
NASA Astrophysics Data System (ADS)
Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.
2017-01-01
In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain, which ensures computational effectiveness. We discuss the complexity of exact schedule synthesis and compare it with approximate, near-optimal solutions. We also explain how the algorithm works on an example of two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.
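For intuition, an instance of the size the authors describe (two unrelated machines, five jobs with release dates) is small enough to solve by exhaustive assignment; a minimal sketch with made-up processing times follows. The adaptive narrowing described in the abstract would prune this search space rather than enumerate it fully.

```python
from itertools import product

# Toy instance (illustrative values): p[j][m] = processing time of
# job j on machine m; r[j] = release date of job j.
p = [(3, 5), (4, 2), (2, 6), (6, 3), (5, 4)]
r = [0, 1, 2, 3, 4]

def makespan(assign):
    finish = 0
    for m in (0, 1):
        t = 0
        jobs = [j for j in range(len(p)) if assign[j] == m]
        for j in sorted(jobs, key=lambda j: r[j]):  # sequence by release date
            t = max(t, r[j]) + p[j][m]              # wait for release, then run
        finish = max(finish, t)
    return finish

best = min(product((0, 1), repeat=len(p)), key=makespan)
print("assignment:", best, "makespan:", makespan(best))
```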
NASA Technical Reports Server (NTRS)
Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1990-01-01
A detailed comparison of a boost converter, a voltage-fed, autotransformer converter, and a multimodule boost converter, designed specifically for the space platform battery discharger, is performed. Computer-based nonlinear optimization techniques are used to facilitate an objective comparison. The multimodule boost converter is shown to be the optimum topology at all efficiencies. The margin is greatest at 97 percent efficiency. The multimodule, multiphase boost converter combines the advantages of high efficiency, light weight, and ample margin on the component stresses, thus ensuring high reliability.
Advanced optimal design concepts for composite material aircraft repair
NASA Astrophysics Data System (ADS)
Renaud, Guillaume
The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.
Social Media: Menagerie of Metrics
2010-01-27
Can a book of charts catalyze improvements in quality? Views of a healthcare alchemist.
Watson, Diane E
2012-01-01
This commentary reviews international evidence about the impact of public reporting on better care and outcomes, outlines the conditions under which publicly available performance information can become a potent catalyst for improvements in quality, and describes the conditions in healthcare systems needed to ensure that such a catalyst produces the desired reaction.
A Multidisciplinary Performance Analysis of a Lifting-Body Single-Stage-to-Orbit Vehicle
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Lepsch, Roger A.; Korte, J. J.; Wurster, Kathryn E.
2000-01-01
Lockheed Martin Skunk Works (LMSW) is currently developing a single-stage-to-orbit reusable launch vehicle called VentureStar(TM). A team at NASA Langley Research Center participated with LMSW in the screening and evaluation of a number of early VentureStar(TM) configurations. The performance analyses that supported these initial studies were conducted to assess the effect of a lifting body shape, linear aerospike engine and metallic thermal protection system (TPS) on the weight and performance of the vehicle. These performance studies were performed in a multidisciplinary fashion that indirectly linked the trajectory optimization with weight estimation and aerothermal analysis tools. This approach was necessary to develop optimized ascent and entry trajectories that met all vehicle design constraints. Significant improvements in ascent performance were achieved when the vehicle flew a lifting trajectory and varied the engine mixture ratio during flight. Also, a considerable reduction in empty weight was possible by adjusting the total oxidizer-to-fuel and liftoff thrust-to-weight ratios. However, the optimal ascent flight profile had to be altered to ensure that the vehicle could be trimmed in pitch using only the flow diverting capability of the aerospike engine. Likewise, the optimal entry trajectory had to be tailored to meet TPS heating rate and transition constraints while satisfying a crossrange requirement.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and that it is an effective method for predicting the dynamic measurement errors of sensors.
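The NAPSO mechanics — standard PSO velocity updates plus a simulated-annealing acceptance test and a natural-selection step that respawns the worst particles near the best — can be sketched as below. A benchmark function stands in for the SVM cross-validation error that the paper optimizes, and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for the SVM cross-validation error used in the paper;
    # the Rastrigin function keeps the sketch self-contained.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

D, N, ITERS = 2, 30, 200
pos = rng.uniform(-5, 5, (N, D))
vel = np.zeros((N, D))
pbest = pos.copy()
pval = np.array([objective(x) for x in pos])
T = 1.0                                   # simulated-annealing temperature

for it in range(ITERS):
    g = pbest[pval.argmin()]              # global best
    w = 0.9 - 0.5 * it / ITERS            # decaying inertia weight
    vel = (w * vel + 2.0 * rng.random((N, D)) * (pbest - pos)
                   + 2.0 * rng.random((N, D)) * (g - pos))
    pos = np.clip(pos + vel, -5, 5)
    val = np.array([objective(x) for x in pos])
    # Simulated-annealing acceptance: a worse point may still replace
    # the personal best, helping particles escape local optima.
    worse = val >= pval
    accept = worse & (rng.random(N) < np.exp(-(val - pval) / T))
    update = ~worse | accept
    pbest[update], pval[update] = pos[update], val[update]
    # Natural selection: the worst half is respawned near the best half.
    order = pval.argsort()
    half = N // 2
    pos[order[half:]] = pbest[order[:half]] + 0.1 * rng.standard_normal((half, D))
    T *= 0.98                             # cooling schedule

print("best value:", pval.min(), "at", pbest[pval.argmin()])
```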
The use of general anesthesia to facilitate dental treatment in adult patients with special needs.
Lim, Mathew Albert Wei Ting; Borromeo, Gelsomina Lucia
2017-06-01
General anesthesia is commonly used to facilitate dental treatment in patients with anxiety or challenging behavior, many of whom are children or patients with special needs. When performing procedures under general anesthesia, dental surgeons must perform a thorough pre-operative assessment, as well as ensure that the patients are aware of the potential risks and that informed consent has been obtained. Such precautions ensure optimal patient management and reduce the frequency of morbidities associated with this form of sedation. Most guidelines address the management of pediatric patients under general anesthesia. However, little has been published regarding this method in patients with special needs. This article constitutes a review of the current literature regarding management of patients with special needs under general anesthesia.
Huang, Jun; Kaul, Goldi; Cai, Chunsheng; Chatlapalli, Ramarao; Hernandez-Abad, Pedro; Ghosh, Krishnendu; Nagi, Arwinder
2009-12-01
To facilitate an in-depth process understanding and offer opportunities for developing control strategies to ensure product quality, a combination of experimental design, optimization and multivariate techniques was integrated into the process development of a drug product. A process DOE was used to evaluate effects of the design factors on manufacturability and final product CQAs, and to establish a design space that ensures the desired CQAs. Two types of analyses were performed to extract maximal information: DOE effect and response surface analysis, and multivariate analysis (PCA and PLS). The DOE effect analysis was used to evaluate the interactions and effects of three design factors (water amount, wet massing time and lubrication time) on response variables (blend flow, compressibility and tablet dissolution). The design space was established by the combined use of DOE, optimization and multivariate analysis to ensure the desired CQAs. Multivariate analysis of all variables from the DOE batches was conducted to study relationships between the variables and to evaluate the impact of material attributes/process parameters on manufacturability and final product CQAs. The integrated multivariate approach exemplifies the application of QbD principles and tools to drug product and process development.
Bourgeois, Austin C; Chang, Ted T; Bradley, Yong C; Acuff, Shelley N; Pasciak, Alexander S
2014-02-01
Radioembolization with yttrium-90 ((90)Y) microspheres relies on delivery of appropriate treatment activity to ensure patient safety and optimize treatment efficacy. We report a case in which (90)Y positron emission tomography (PET)/computed tomography (CT) was performed to optimize treatment planning during a same-day, three-part treatment session. This treatment consisted of (i) an initial (90)Y infusion with a dosage determined using an empiric treatment planning model, (ii) quantitative (90)Y PET/CT imaging, and (iii) a secondary infusion with treatment planning based on quantitative imaging data with the goal of delivering a specific total tumor absorbed dose. © 2014 SIR Published by SIR All rights reserved.
Combining Simulation Tools for End-to-End Trajectory Optimization
NASA Technical Reports Server (NTRS)
Whitley, Ryan; Gutkowski, Jeffrey; Craig, Scott; Dawn, Tim; Williams, Jacobs; Stein, William B.; Litton, Daniel; Lugo, Rafael; Qu, Min
2015-01-01
Trajectory simulations with advanced optimization algorithms are invaluable tools in the process of designing spacecraft. Due to the need for complex models, simulations are often highly tailored to the needs of the particular program or mission. NASA's Orion and SLS programs are no exception. While independent analyses are valuable for assessing individual spacecraft capabilities, a complete end-to-end trajectory from launch to splashdown maximizes potential performance and ensures a continuous solution. To obtain this end-to-end capability, Orion's in-space tool (Copernicus) was made to interface directly with the SLS's ascent tool (POST2), and a new tool was created to optimize the full problem by operating both simulations simultaneously.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. A low lunar orbit example demonstrates the Delta V savings from the feasible solution to the optimal solution. The strategy's extensibility to more complex missions is discussed, as well as the limitations of its use.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from systems for monitoring, telemetry, control, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and mitigates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
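A heavily simplified reading of the type-variety principle: pick redundant components of several distinct types, under a budget, so that no single design weakness is shared by all units. The sketch below brute-forces a tiny instance with invented reliability and cost figures; the paper's actual models are large-scale two-level discrete optimization problems.

```python
from itertools import product

# Illustrative component types: (reliability, cost); not from the paper.
types = {"A": (0.90, 3.0), "B": (0.85, 2.0), "C": (0.80, 1.5)}
BUDGET = 10.0

def system_reliability(counts):
    # Parallel redundancy with independent failures: the system fails
    # only if every unit fails. (Common-cause modeling is omitted; the
    # variety constraint below is what enforces diversity.)
    fail = 1.0
    for t, n in counts.items():
        fail *= (1.0 - types[t][0]) ** n
    return 1.0 - fail

best = None
for ns in product(range(5), repeat=len(types)):
    counts = dict(zip(types, ns))
    cost = sum(types[t][1] * n for t, n in counts.items())
    variety = sum(1 for n in ns if n > 0)
    if cost <= BUDGET and variety >= 2:   # type-variety principle
        r = system_reliability(counts)
        if best is None or r > best[0]:
            best = (r, counts, cost)

print("reliability %.6f with %s (cost %.1f)" % best)
```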
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
Performance Evaluation of Resource Management in Cloud Computing Environments.
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
NASA Astrophysics Data System (ADS)
Elbaz, Reouven; Torres, Lionel; Sassatelli, Gilles; Guillemin, Pierre; Bardouillet, Michel; Martinez, Albert
The bus between the System on Chip (SoC) and the external memory is one of the weakest points of computer systems: an adversary can easily probe this bus in order to read private data (data confidentiality concern) or to inject data (data integrity concern). The conventional way to protect data against such attacks and to ensure data confidentiality and integrity is to implement two dedicated engines: one performing data encryption and another data authentication. This approach, while secure, prevents parallelizability of the underlying computations. In this paper, we introduce the concept of Block-Level Added Redundancy Explicit Authentication (BL-AREA) and we describe a Parallelized Encryption and Integrity Checking Engine (PE-ICE) based on this concept. BL-AREA and PE-ICE have been designed to provide an effective solution to ensure both security services while allowing for full parallelization on processor read and write operations and optimizing the hardware resources. Compared to standard encryption which ensures only confidentiality, we show that PE-ICE additionally guarantees code and data integrity for less than 4% of run-time performance overhead.
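The block-level added-redundancy idea can be mimicked in software: pair each payload with a per-block nonce, encrypt the pair as one cipher block, and check the nonce on decryption; because each block is processed independently, reads and writes parallelize. Below is a minimal sketch using the Python cryptography package, with AES in ECB mode purely to isolate single-block operations — an illustration of the concept, not the PE-ICE hardware engine.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
cipher = Cipher(algorithms.AES(key), modes.ECB())  # one independent block per call

def write_block(payload8: bytes, nonce8: bytes) -> bytes:
    # 8-byte payload + 8-byte redundancy fills one 16-byte AES block.
    enc = cipher.encryptor()
    return enc.update(payload8 + nonce8) + enc.finalize()

def read_block(block16: bytes, nonce8: bytes) -> bytes:
    dec = cipher.decryptor()
    plain = dec.update(block16) + dec.finalize()
    if plain[8:] != nonce8:               # redundancy check fails on tampering
        raise ValueError("integrity violation")
    return plain[:8]

nonce = os.urandom(8)                     # kept on-chip, per address
stored = write_block(b"secretAA", nonce)
assert read_block(stored, nonce) == b"secretAA"

tampered = bytes([stored[0] ^ 1]) + stored[1:]
try:
    read_block(tampered, nonce)
except ValueError as e:
    print(e)                              # tampering detected
```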
Development, Validation and Integration of the ATLAS Trigger System Software in Run 2
NASA Astrophysics Data System (ADS)
Keyes, Robert; ATLAS Collaboration
2017-10-01
The trigger system of the ATLAS detector at the LHC is a combination of hardware, firmware, and software, associated with various sub-detectors that must seamlessly cooperate in order to select one collision of interest out of every 40,000 delivered by the LHC every millisecond. These proceedings discuss the challenges, organization and work flow of the ongoing trigger software development, validation, and deployment. The goal of this development is to ensure that the most up-to-date algorithms are used to optimize the performance of the experiment. The goal of the validation is to ensure the reliability and predictability of the software performance. Integration tests are carried out to ensure that the software deployed to the online trigger farm during data-taking runs as desired. Trigger software is validated by emulating online conditions using a benchmark run and mimicking the reconstruction that occurs during normal data-taking. This exercise is computationally demanding and thus runs on the ATLAS high performance computing grid with high priority. Performance metrics ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities are visualized and validated by a range of experts. This is a multifaceted critical task that ties together many aspects of the experimental effort and thus directly influences the overall performance of the ATLAS experiment.
Optimizing immobilized enzyme performance in cell-free environments to produce liquid fuels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sanat
The overall goal of this project was to optimize enzyme performance for the production of bio-diesel fuel. Enzyme immobilization has attracted much attention as a means to increase productivity, and mesoporous silica materials are known to be well suited for immobilizing enzymes. A major challenge is to ensure that the enzymatic activity is retained after immobilization. Two major factors driving enzymatic deactivation are protein-surface and inter-protein interactions. Previously, we studied protein stability inside pores and how to optimize protein-surface interactions to minimize protein denaturation. In this work we studied the effect of surface curvature and chemistry on inter-protein interactions. Our goal was to find suitable immobilization supports which minimize these inter-protein interactions. Our studies, carried out in the framework of the Hydrophobic-Polar (HP) model, showed that enzymes immobilized inside hydrophobic pores of optimal sizes are best suited to minimize these inter-protein interactions. This study is also of biological importance for understanding the role of chaperonins in protein disaggregation. Both of these aspects benefited immensely from collaborations with our experimental colleague, Prof. Georges Belfort (RPI), who performed the experimental analog of our theoretical work.
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring proper operation of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model for control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start times and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load-frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs as compared to CBM and pre-scheduled maintenance.
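The degradation-and-cost machinery used here — a Wiener process sampled at inspections, preventive and failure thresholds, Monte Carlo estimation of expected cost — looks roughly like the sketch below, applied to a generic degradation signal rather than the paper's control-performance measure; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
MU, SIGMA = 0.10, 0.05        # Wiener drift and diffusion (illustrative)
PM, FAIL = 2.0, 3.0           # preventive and failure thresholds
DT = 5.0                      # inspection interval
C_INSPECT, C_PM, C_CM = 1.0, 10.0, 50.0
HORIZON = 200.0

def one_history():
    cost, t, x = 0.0, 0.0, 0.0
    while t < HORIZON:
        # Wiener-process increment over one inspection interval.
        x += MU * DT + SIGMA * np.sqrt(DT) * rng.standard_normal()
        t += DT
        cost += C_INSPECT
        if x >= FAIL:                     # corrective maintenance (as-good-as-new)
            cost, x = cost + C_CM, 0.0
        elif x >= PM:                     # preventive maintenance
            cost, x = cost + C_PM, 0.0
    return cost

costs = [one_history() for _ in range(2000)]
print("expected cost per unit time: %.3f" % (np.mean(costs) / HORIZON))
```

Sweeping PM and DT over a grid and repeating this estimate reproduces, in miniature, the trade-off study between inspection frequency, thresholds and cost described in the abstract.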
Optimizing DER Participation in Inertial and Primary-Frequency Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop
This paper develops an approach to enable the optimal participation of distributed energy resources (DERs) in inertial and primary-frequency response alongside conventional synchronous generators. Leveraging a reduced-order model description of frequency dynamics, DERs' synthetic inertias and droop coefficients are designed to meet time-domain performance objectives of frequency overshoot and steady-state regulation. Furthermore, an optimization-based method centered around classical economic dispatch is developed to ensure that DERs share the power injections for inertial- and primary-frequency response in proportion to their power ratings. Simulations for a modified New England test-case system composed of ten synchronous generators and six instances of the IEEEmore » 37-node test feeder with frequency-responsive DERs validate the design strategy.« less
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
To optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is built and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to maintain both global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving optimal scheduling of cloud computing tasks.
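A minimal differential-evolution scheduler in this spirit: continuous vectors are decoded into task-to-VM assignments, the mutation factor decays with the generation (a simple form of dynamic mutation), and greedy selection keeps the better of parent and trial. Task lengths and VM speeds are invented, and this is a sketch of the general technique, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
TASKS, VMS = 20, 4
length = rng.uniform(1, 10, TASKS)        # task lengths (illustrative)
speed = rng.uniform(1, 3, VMS)            # VM processing speeds

def makespan(x):
    # Decode continuous genes into discrete VM assignments.
    assign = np.clip(x, 0, VMS - 1e-9).astype(int)
    loads = np.zeros(VMS)
    np.add.at(loads, assign, length)
    return (loads / speed).max()

NP, GENS = 40, 300
pop = rng.uniform(0, VMS, (NP, TASKS))
fit = np.array([makespan(x) for x in pop])
for g in range(GENS):
    F = 0.9 - 0.5 * g / GENS              # dynamic mutation factor
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0, VMS)
        cross = rng.random(TASKS) < 0.9   # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        f = makespan(trial)
        if f <= fit[i]:                   # greedy selection
            pop[i], fit[i] = trial, f

print("best makespan: %.3f" % fit.min())
```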
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem, where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
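For intuition, here is the discrete-time, discounted analogue of the occupation-measure linear program on a two-state, two-action example: minimize one expected discounted cost while keeping a second one below a bound. All numbers are arbitrary; the paper treats the substantially harder continuous-time impulsive case.

```python
import numpy as np
from scipy.optimize import linprog

# Occupation-measure LP for a discounted MDP:
#   min  sum_{s,a} c(s,a) y(s,a)
#   s.t. sum_a y(s',a) - gamma * sum_{s,a} P(s'|s,a) y(s,a) = alpha(s'),
#        sum_{s,a} d(s,a) y(s,a) <= D_BOUND,  y >= 0.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # P[s][a] = next-state distribution
              [[0.5, 0.5], [0.3, 0.7]]])
c = np.array([[1.0, 2.0], [4.0, 0.5]])    # primary cost rate (illustrative)
d = np.array([[0.0, 1.0], [1.0, 0.0]])    # secondary cost rate
alpha = np.array([0.5, 0.5])              # initial distribution
D_BOUND = 3.0

n_s, n_a = 2, 2
A_eq = np.zeros((n_s, n_s * n_a))
for sp in range(n_s):
    for s in range(n_s):
        for a in range(n_a):
            A_eq[sp, s * n_a + a] = (sp == s) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[D_BOUND],
              A_eq=A_eq, b_eq=alpha, bounds=(0, None))
y = res.x.reshape(n_s, n_a)
policy = y / y.sum(axis=1, keepdims=True)  # randomized optimal policy
print("optimal cost:", res.fun)
print("policy:\n", policy)
```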
NASA Astrophysics Data System (ADS)
Petit, C.; Le Louarn, M.; Fusco, T.; Madec, P.-Y.
2011-09-01
Various tomographic control solutions have been proposed during the last decades to ensure efficient or even optimal closed-loop correction for tomographic Adaptive Optics (AO) concepts such as Laser Tomographic AO (LTAO) and Multi-Conjugate AO (MCAO). The optimal solution, based on the Linear Quadratic Gaussian (LQG) approach, as well as suboptimal but efficient solutions such as Pseudo-Open Loop Control (POLC), require multiple Matrix Vector Multiplications (MVMs). Regardless of their respective performance, these control solutions thus exhibit a strong increase in on-line complexity, and their implementation may become difficult in demanding cases. Among these, two cases are of particular interest. First, the system Real-Time Computer (RTC) architecture and implementation is derived from past or present solutions and does not support multiple MVMs. This is the case for the Adaptive Optics Facility, whose RTC architecture is derived from the SPARTA platform and inherits its single-MVM structure, which does not accommodate LTAO control solutions, for instance. Second, for future systems such as Extremely Large Telescopes, the number of degrees of freedom is twenty to one hundred times larger than in present systems. Under these conditions, tomographic control solutions can hardly be used in their standard form, and optimized implementations must be considered. Single-MVM tomographic control solutions are a potential answer, and straightforward solutions such as Virtual Deformable Mirrors have already been proposed for LTAO, but with tuning issues. We investigate in this paper the possibility of deriving from tomographic control solutions, such as POLC or LQG, simplified control solutions that preserve a single-MVM architecture and could thus be implemented on today's systems or on future complex systems. We theoretically derive various solutions and analyze their respective performance on various systems through numerical simulation. We discuss the optimization of their performance and stability issues with respect to classic control solutions. We finally discuss off-line computation and implementation constraints.
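A sketch of why a single-MVM real-time path can still accommodate tomographic reconstruction: any fixed cascade of linear steps (e.g., slopes-to-layers estimation followed by projection onto deformable-mirror commands) can be folded offline into one matrix. Matrix sizes and contents below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_slopes, n_layers, n_act = 1000, 3000, 500   # illustrative dimensions
R1 = rng.standard_normal((n_layers, n_slopes))  # tomographic reconstruction
R2 = rng.standard_normal((n_act, n_layers))     # projection onto DM commands

M = R2 @ R1                          # folded offline: n_act x n_slopes

slopes = rng.standard_normal(n_slopes)
u_two_step = R2 @ (R1 @ slopes)      # two MVMs per frame
u_single = M @ slopes                # one MVM per frame
print(np.allclose(u_two_step, u_single))   # True: identical correction
```

The real difficulty, which the abstract alludes to, is that solutions like POLC and LQG include feedback terms and temporal recursions that do not trivially collapse into one static matrix; deriving single-MVM approximations of them is the subject of the paper.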
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.
Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo
2017-12-07
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies.
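A toy version of such a game: each base station repeatedly plays a best response that trades rate against a transmission-power price, converging to a Nash-like fixed point. Gains, prices and limits are invented, and the closed-form best response assumes this particular log-utility, not the paper's utility function.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4                                   # co-located networks
g = rng.uniform(0.05, 0.15, (N, N))     # cross gains between networks
np.fill_diagonal(g, 1.0)                # direct link gains
NOISE, P_MAX, PRICE = 0.1, 2.0, 0.5

p = np.full(N, P_MAX / 2)
for _ in range(100):                    # best-response dynamics
    for i in range(N):
        interf = g[i] @ p - g[i, i] * p[i] + NOISE
        # Utility u_i = log(1 + g_ii p_i / interf) - PRICE * p_i;
        # setting du_i/dp_i = 0 gives the closed-form best response.
        p[i] = np.clip(1.0 / PRICE - interf / g[i, i], 0.0, P_MAX)

print("equilibrium powers:", np.round(p, 3))
```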
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and the speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting a substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, sphere and cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization when fitting a non-linear function to the three geometries. The results show that, with this combination, a higher quality of fit, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an 'incomplete point cloud', a situation where the points do not cover the complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
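The described combination — a chaos optimization pass to seed the Levenberg-Marquardt fit — can be sketched for the circle case: a logistic map generates well-spread candidate parameter vectors, and the best candidate seeds scipy's LM solver. Synthetic half-circle data mimics the incomplete-point-cloud case; bounds and constants are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# Synthetic noisy arc: an "incomplete point cloud" covering half a circle.
theta = rng.uniform(0, np.pi, 200)
pts = np.c_[2 + 3 * np.cos(theta), -1 + 3 * np.sin(theta)]
pts += 0.01 * rng.standard_normal(pts.shape)

def residuals(q):
    xc, yc, r = q
    return np.hypot(pts[:, 0] - xc, pts[:, 1] - yc) - r

# Chaos optimization for the initial guess: a logistic map explores the
# parameter box and the best sample seeds Levenberg-Marquardt.
lo, hi = np.array([-10, -10, 0.1]), np.array([10, 10, 10])
z = np.full(3, 0.31)
best_q, best_cost = None, np.inf
for _ in range(500):
    z = 4.0 * z * (1.0 - z)              # logistic map, chaotic at r = 4
    q = lo + z * (hi - lo)
    cost = np.sum(residuals(q) ** 2)
    if cost < best_cost:
        best_q, best_cost = q, cost

fit = least_squares(residuals, best_q, method="lm")
print("center (%.3f, %.3f), radius %.3f" % tuple(fit.x))
```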
Todd, Christopher A; Greene, Kelli M; Yu, Xuesong; Ozaki, Daniel A; Gao, Hongmei; Huang, Yunda; Wang, Maggie; Li, Gary; Brown, Ronald; Wood, Blake; D'Souza, M Patricia; Gilbert, Peter; Montefiori, David C; Sarzotti-Kelsoe, Marcella
2012-01-31
Recent advances in assay technology have led to major improvements in how HIV-1 neutralizing antibodies are measured. A luciferase reporter gene assay performed in TZM-bl (JC53bl-13) cells has been optimized and validated. Because this assay has been adopted by multiple laboratories worldwide, an external proficiency testing program was developed to ensure data equivalency across laboratories performing this neutralizing antibody assay for HIV/AIDS vaccine clinical trials. The program was optimized by conducting three independent rounds of testing, with an increased level of stringency from the first to third round. Results from the participating domestic and international laboratories improved each round as factors that contributed to inter-assay variability were identified and minimized. Key contributors to increased agreement were experience among laboratories and standardization of reagents. A statistical qualification rule was developed using a simulation procedure based on the three optimization rounds of testing, where a laboratory qualifies if at least 25 of the 30 ID50 values lie within the acceptance ranges. This ensures no more than a 20% risk that a participating laboratory fails to qualify when it should, as defined by the simulation procedure. Five experienced reference laboratories were identified and tested a series of standardized reagents to derive the acceptance ranges for pass-fail criteria. This Standardized Proficiency Testing Program is the first available for the evaluation and documentation of assay equivalency for laboratories performing HIV-1 neutralizing antibody assays and may provide guidance for the development of future proficiency testing programs for other assay platforms. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Connolly, Janis H.; Arch, M.; Elfezouaty, Eileen Schultz; Novak, Jennifer Blume; Bond, Robert L. (Technical Monitor)
1999-01-01
Design and Human Engineering (HE) processes strive to ensure that the human-machine interface is designed for optimal performance throughout the system life cycle. Each component can be tested and assessed independently to assure optimal performance, but it is not until full integration that the system, and the inherent interactions between the system components, can be assessed as a whole. HE processes (which define and apply requirements for human interaction with missions/systems) are included in space flight activities, but also need to be included in ground activities and, specifically, ground facility testbeds such as Bio-Plex. A unique aspect of the Bio-Plex Facility is the integral issue of Habitability, which comprises the qualities of the environment that allow humans to work and live. HE is a process by which Habitability and system performance can be assessed.
Abbara, Suhny; Blanke, Philipp; Maroules, Christopher D; Cheezum, Michael; Choi, Andrew D; Han, B Kelly; Marwan, Mohamed; Naoum, Chris; Norgaard, Bjarne L; Rubinshtein, Ronen; Schoenhagen, Paul; Villines, Todd; Leipsic, Jonathon
In response to recent technological advancements in acquisition techniques as well as a growing body of evidence regarding the optimal performance of coronary computed tomography angiography (coronary CTA), the Society of Cardiovascular Computed Tomography Guidelines Committee has produced this update to its previously established 2009 "Guidelines for the Performance of Coronary CTA" (1). The purpose of this document is to provide standards meant to ensure reliable practice methods and quality outcomes based on the best available data in order to improve the diagnostic care of patients. The Society of Cardiovascular Computed Tomography Guidelines for Interpretation are published separately (2). The Society of Cardiovascular Computed Tomography Guidelines Committee ensures compliance with all existing standards for the declaration of conflict of interest by all authors and reviewers for the purpose of clarity and transparency. Copyright © 2016 Society of Cardiovascular Computed Tomography. All rights reserved.
Grey Wolf based control for speed ripple reduction at low speed operation of PMSM drives.
Djerioui, Ali; Houari, Azeddine; Ait-Ahmed, Mourad; Benkhoris, Mohamed-Fouad; Chouder, Aissa; Machmoum, Mohamed
2018-03-01
Speed ripple during low-speed, high-torque operation of Permanent Magnet Synchronous Machine (PMSM) drives is considered one of the major issues to be addressed. The presented work proposes an efficient PMSM speed controller based on the Grey Wolf (GW) algorithm to ensure high-performance control for speed ripple reduction at low speed. The main idea of the proposed control algorithm is to formulate a specific objective function that exploits the fast optimization process of the GW optimizer. The role of the GW optimizer is to find the optimal control inputs that satisfy the speed tracking requirements. The synthesis methodology of the proposed control algorithm is detailed, and the feasibility and performance of the proposed speed controller are confirmed by simulation and experimental results. The GW algorithm is a model-free controller, and the parameters of its objective function are easy to tune. The GW controller is compared to a PI controller on a real test bench, and the superiority of the former is highlighted. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-03-01
This work aimed to develop a robust virtual sensing design methodology for sensing and active control applications in vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly that dominated by cavity-controlled modes, using a structural sensor. With such a virtual sensing technique, effective active noise control performance was obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
Hu, Rui; Liu, Shutian; Li, Quhao
2017-05-20
For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performance of the mirror system under multiple load conditions, including static gravity and thermal loads as well as dynamic vibration, is considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. A pattern repetition constraint is added to ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all nodes of the mirror system, except for the nodes linked to the mounting flexures, to reduce the computational effort during the optimization iterations. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimensional parameters. The proposed method produces new mounting structures that significantly enhance the optical performance of the mirror system compared to traditional methods, which only tune the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.
Boone, Brian A; Zenati, Mazen; Hogg, Melissa E; Steve, Jennifer; Moser, Arthur James; Bartlett, David L; Zeh, Herbert J; Zureikat, Amer H
2015-05-01
Quality assessment is an important instrument to ensure optimal surgical outcomes, particularly during the adoption of new surgical technology. The use of the robotic platform for complex pancreatic resections, such as the pancreaticoduodenectomy, requires close monitoring of outcomes during its implementation phase to ensure patient safety is maintained and the learning curve identified. To report the results of a quality analysis and learning curve during the implementation of robotic pancreaticoduodenectomy (RPD). A retrospective review of a prospectively maintained database of 200 consecutive patients who underwent RPD in a large academic center from October 3, 2008, through March 1, 2014, was evaluated for important metrics of quality. Patients were analyzed in groups of 20 to minimize demographic differences and optimize the ability to detect statistically meaningful changes in performance. Robotic pancreaticoduodenectomy. Optimization of perioperative outcome parameters. No statistical differences in mortality rates or major morbidity were noted during the study. Statistical improvements in estimated blood loss and conversions to open surgery occurred after 20 cases (600 mL vs 250 mL [P = .002] and 35.0% vs 3.3% [P < .001], respectively), incidence of pancreatic fistula after 40 cases (27.5% vs 14.4%; P = .04), and operative time after 80 cases (581 minutes vs 417 minutes [P < .001]). Complication rates, lengths of stay, and readmission rates showed continuous improvement that did not reach statistical significance. Outcomes for the last 120 cases (representing optimized metrics beyond the learning curve) included a mean operative time of 417 minutes, median estimated blood loss of 250 mL, a conversion rate of 3.3%, 90-day mortality of 3.3%, a clinically significant (grade B/C) pancreatic fistula rate of 6.9%, and a median length of stay of 9 days. Continuous assessment of quality metrics allows for safe implementation of RPD. We identified several inflexion points corresponding to optimization of performance metrics for RPD that can be used as benchmarks for surgeons who are adopting this technology.
The Nike-Black Brant V development program
NASA Technical Reports Server (NTRS)
Sevier, H.; Payne, B.; Ott, R.; Montag, W.
1976-01-01
The Nike-Black Brant V represents a combined U.S.-Canadian program to achieve a 40 percent increase in apogee performance over that of the unboosted BBV, with minimum component modification and no meaningful increase in flight environment levels. The process of achieving these objectives is described, in particular the optimization of sustainer coast period and roll history, and the techniques used to ensure good stage separation. Details of the structural test program and subsequent successful vehicle proving flight are provided. Basic performance data are presented, with an indication of the further potential offered by Terrier boost.
The dynamic model of enterprise revenue management
NASA Astrophysics Data System (ADS)
Mitsel, A. A.; Kataev, M. Yu; Kozlov, S. V.; Korepanov, K. V.
2017-01-01
The article presents a dynamic model of enterprise revenue management. The model is based on a quadratic criterion and a linear control law, and is founded on a multiple regression that links revenue to the financial performance of the enterprise. As a result, optimal management that provides the given enterprise revenue is obtained; that is, the values of the financial indicators that ensure the planned profit of the organization are determined.
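The quadratic-criterion/linear-control-law structure mentioned above is the classic discrete-time LQR setup. The Python sketch below iterates the Riccati equation to obtain a steady-state gain and simulates the resulting linear control law; the two-indicator system matrices are invented for illustration and are not the paper's regression model.

```python
import numpy as np

def lqr_gain(A, B, Q, R, n_iter=500):
    """Solve the discrete-time Riccati equation by fixed-point iteration
    and return the steady-state feedback gain K (u_t = -K x_t)."""
    P = Q.copy()
    for _ in range(n_iter):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical 2-indicator model: x holds deviations of financial
# indicators from their planned values, u the management actions.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0, 0.0], [0.0, 0.5]])
Q = np.eye(2)          # penalize deviation from the revenue plan
R = 0.1 * np.eye(2)    # penalize control effort
K = lqr_gain(A, B, Q, R)

x = np.array([1.0, -0.5])      # initial deviation from plan
for t in range(5):
    u = -K @ x                  # linear control law
    x = A @ x + B @ u           # indicator dynamics
    print(t, x)
```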
Cowger, Jennifer; Romano, Matthew A; Stulak, John; Pagani, Francis D; Aaronson, Keith D
2011-03-01
This review summarizes management strategies to reduce morbidity and mortality in heart failure patients supported chronically with implantable left ventricular assist devices (LVADs). As the population of patients supported with long-term LVADs has grown, patient selection, operative technique, and patient management strategies have been refined, leading to improved outcomes. This review summarizes recent findings on LVAD candidate selection, and discusses outpatient strategies to optimize device performance and heart failure management. It also reviews important device complications that warrant close outpatient monitoring. Managing patients on chronic LVAD support requires regular patient follow-up, multidisciplinary care teams, and frequent laboratory and echocardiographic surveillance to ensure optimal outcomes.
Photovoltaic performance of the dome-shaped Fresnel-Köhler concentrator
NASA Astrophysics Data System (ADS)
Zamora, Pablo; Benítez, Pablo; Yang, Li; Miñano, Juan Carlos; Mendes-Lopes, Joao; Araki, Kenji
2012-10-01
For a cost-effective CPV system, two key issues must be ensured: a high concentration factor and high tolerance. The novel concentrator we present, the dome-shaped Fresnel-Köhler, can fulfill these two and other essential requirements of a CPV module. This concentrator is based on two previous successful CPV designs, the FK concentrator with a flat Fresnel lens and the dome-shaped Fresnel lens system developed by Daido Steel, resulting in a superior concentrator. The concentrator has shown outstanding simulation results, achieving an effective concentration-acceptance product (CAP) value of 0.72 and an optical efficiency of 85% on-axis (no anti-reflective coating has been used). Moreover, Köhler integration provides good irradiance uniformity on the cell surface and low spectral aberration of this irradiance, which ensures optimal performance of the solar cell, maximizing its efficiency. Besides, the dome-shaped FK shows optimal results for very compact designs, especially in the f/0.7-1.0 range. The dome-shaped Fresnel-Köhler concentrator, a natural and enhanced evolution of the flat FK concentrator, is a cost-effective CPV optical design, mainly due to its high tolerances. Daido Steel's advanced technique for demolding injected plastic pieces will allow easy manufacture of the dome-shaped POE of the DFK concentrator.
New adaptive method to optimize the secondary reflector of linear Fresnel collectors
Zhu, Guangdong
2017-01-16
Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures the impacts of both geometric and optical aspects of a linear Fresnel collector on secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form, but rather starts at a single edge point and adaptively constructs the next surface point to maximize the power reflected to the absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber over a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.
Chaos minimization in DC-DC boost converter using circuit parameter optimization
NASA Astrophysics Data System (ADS)
Sudhakar, N.; Natarajan, Rajasekar; Gourav, Kumar; Padmavathi, P.
2017-11-01
DC-DC converters are prone to several types of nonlinear phenomena, including bifurcation, quasi-periodicity, intermittency and chaos. These undesirable effects must be controlled to maintain the periodic operation of the converter and ensure its stability. In this paper, an effective solution for controlling chaos in a solar-fed DC-DC boost converter is proposed. Chaos control is achieved using optimal circuit parameters obtained through the Bacterial Foraging Optimization Algorithm (BFA), which renders suitable parameters in minimal computational time. The obtained results are compared with the operation of a traditional boost converter, and the results with BFA-optimized parameters ensure that the converter operates within the controllable region. Bifurcation analyses with optimized and unoptimized parameters are also presented to elaborate the study.
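Bacterial Foraging Optimization itself is easy to sketch. The minimal Python version below keeps only the chemotaxis (tumble) and reproduction steps, omitting the swarming and elimination-dispersal phases of the full algorithm; the cost function is a smooth stand-in, since the paper's actual objective scores the chaotic behaviour of the converter map for candidate circuit parameters.

```python
import numpy as np
rng = np.random.default_rng(0)

def bfo_minimize(cost, bounds, n_bact=20, n_chem=30, n_repro=4, step=0.05):
    """Minimal Bacterial Foraging Optimization sketch: random-direction
    tumbles accepted only when they improve the cost, followed by a
    reproduction step in which the healthier half of the population
    replaces the weaker half."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_bact, len(lo)))
    for _ in range(n_repro):
        health = np.zeros(n_bact)
        for i in range(n_bact):
            for _ in range(n_chem):
                d = rng.normal(size=len(lo))
                d /= np.linalg.norm(d)            # random tumble direction
                trial = np.clip(pop[i] + step * (hi - lo) * d, lo, hi)
                if cost(trial) < cost(pop[i]):    # keep only improving moves
                    pop[i] = trial
                health[i] += cost(pop[i])         # lower sum = healthier
        order = np.argsort(health)                # healthier half reproduces
        pop = np.concatenate([pop[order[:n_bact // 2]]] * 2)
    return min(pop, key=cost)

# Stand-in cost; in the paper this would score chaos (e.g., subharmonics)
# of the boost-converter map for candidate circuit parameters.
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
print(bfo_minimize(cost, [(0, 1), (0, 1)]))
```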
Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems
NASA Astrophysics Data System (ADS)
Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo
2017-07-01
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their bloated size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. The BACH takes full advantage of explored application behaviors and runtime cache resource demands as the cache allocation bases, so that the cache hierarchy can be optimally configured to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even shows a slight improvement, accounting for hardware overhead.
Enhanced Performance of non-PGM Catalysts in Air Operated PEM-Fuel Cells
Barkholtz, Heather M.; Chong, Lina; Kaiser, Zachary Brian; ...
2016-10-13
Here, a non-platinum group metal (non-PGM) oxygen reduction catalyst was prepared from a “support-free” zeolitic imidazolate framework (ZIF) precursor and tested in a proton exchange membrane fuel cell with air as the cathode feed. The iron-nitrogen-carbon composite (Fe-N-C) based catalyst has a high specific surface area decorated uniformly with active sites, which redefines the triple phase boundary (TPB) and requires re-optimization of the cathodic membrane electrode fabrication to ensure efficient mass and charge transport to the catalyst surface. This study reports an effort in optimizing the catalytic ink formulation for the membrane electrode preparation and its impact on fuel cell performance under air. Through optimization, a fuel cell areal current density as high as 115.2 mA/cm2 at 0.8 V, or 147.6 mA/cm2 at 0.8 V iR-free, has been achieved under one bar of air. We also investigated impacts on the fuel cell internal impedance and water formation.
Partial storage optimization and load control strategy of cloud data centers.
Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela
2015-01-01
We present a novel approach to solve cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
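The partitioning and dual-direction idea can be sketched in a few lines. The Python fragment below uses an in-memory FakeNode stand-in where a real deployment would issue byte-range requests to replica nodes; it fetches the first half of the partitions forward from one replica and the second half backward from another, then reassembles the file. The interface is hypothetical, not the paper's API.

```python
from concurrent.futures import ThreadPoolExecutor

class FakeNode:
    """In-memory stand-in for a cloud node holding a replica of the file;
    a real node would serve partition (byte-range) requests over HTTP."""
    def __init__(self, data, part_size):
        self.data, self.part_size = data, part_size
    def read_partition(self, i):
        return self.data[i * self.part_size:(i + 1) * self.part_size]

def dual_direction_download(nodes, n_parts):
    """Sketch of the dual-direction scheme: one worker set fetches
    partitions forward from the start via the first replica, another
    backward from the end via the second, until the two meet."""
    forward = range(0, n_parts // 2)
    backward = range(n_parts - 1, n_parts // 2 - 1, -1)
    parts = {}
    with ThreadPoolExecutor(max_workers=8) as pool:
        futs = {pool.submit(nodes[0].read_partition, i): i for i in forward}
        futs.update({pool.submit(nodes[1].read_partition, i): i
                     for i in backward})
        for fut, i in futs.items():
            parts[i] = fut.result()
    return b"".join(parts[i] for i in sorted(parts))

data = bytes(range(256)) * 4
nodes = [FakeNode(data, 64), FakeNode(data, 64)]
assert dual_direction_download(nodes, len(data) // 64) == data
```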
An optimal structure for a 34-meter millimeter-wave center-fed BWG antenna: The Cross-Box concept
NASA Technical Reports Server (NTRS)
Chuang, K. L.
1988-01-01
An approach to the design of the planned NASA/JPL 34 m elevation-over-azimuth (Az-El) antenna structure at the Venus site (DSS-13) is presented. The antenna structural configuration accommodates a large (2.44 m) beam waveguide (BWG) tube centrally routed through the reflector-alidade structure, an elevation wheel design, and an optimal structural geometry. The design encompasses a cross-box elevation wheel-reflector base substructure that preserves homology while satisfying many constraints, such as structure weight, surface tolerance, stresses, natural frequency, and various functional constraints. The functional requirements are set to ensure that microwave performance at millimeter wavelengths is adequate. The cross-box configuration was modeled, optimized, and found to satisfy all DSN HEF baseline antenna specifications. In addition, the structure design was conceptualized and analyzed with an emphasis on preserving the structure envelope and keeping modifications relative to the HEF antennas to a minimum, thus enabling the transferability of the BWG technology for future retrofitting. Good performance results were obtained.
Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.
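The CONWIP mechanism itself is simple to illustrate. Below is a toy single-station Python simulation, not the paper's model: a new specimen is released into the line only when the work-in-process count is below the cap, so completions, rather than arrivals, pace the line, and the cycle time grows with the allowed WIP.

```python
import random
from collections import deque

def conwip_sim(n_samples, wip_cap):
    """Toy single-station CONWIP loop: a specimen enters the line only
    when fewer than wip_cap specimens are in process; returns the mean
    cycle time (release to completion)."""
    clock, released = 0.0, deque(range(n_samples))
    wip, cycle_times = deque(), []
    while released or wip:
        if released and len(wip) < wip_cap:
            wip.append((released.popleft(), clock))   # pull next specimen in
        else:
            _, t_in = wip.popleft()
            clock += random.expovariate(1.0)          # processing time
            cycle_times.append(clock - t_in)
    return sum(cycle_times) / len(cycle_times)

random.seed(1)
for cap in (1, 3, 10):
    print(cap, round(conwip_sim(1000, cap), 2))   # cycle time grows with WIP
```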
Method of Optimizing the Construction of Machining, Assembly and Control Devices
NASA Astrophysics Data System (ADS)
Iordache, D. M.; Costea, A.; Niţu, E. L.; Rizea, A. D.; Babă, A.
2017-10-01
Industry dynamics, driven by economic and social requirements, must generate more interest in technological optimization, which is capable of ensuring the steady development of the advanced technical means that equip machining processes. For these reasons, the development of tools, devices, work equipment and control, as well as the modernization of machine tools, is the sure way to modernize production systems, though it requires considerable time and effort. This paper presents our theoretical, experimental and industrial applications of recent years in this area, whose main objectives are the elaboration and use of mathematical models, new calculation methods, optimization algorithms, new processing and control methods, as well as structures for the construction and configuration of technological equipment with a high level of performance and substantially reduced costs.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie
2018-01-01
Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes, and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. The droplets act as pressure controllers, which simultaneously encapsulate the DLD-sorted particles and balance the output pressures. The optimized pressures to apply to the DLD modules and the T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable, since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge.
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R
2014-05-07
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done in close cooperation with expert image evaluators, to ensure appropriate levels of detector air kerma.
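The SNR and CNR metrics are straightforward to compute from uniform-phantom ROIs. A minimal numpy sketch follows; the ROI positions and the synthetic image are illustrative. Building the calibration curve then amounts to finding, at each tube voltage, the detector air kerma that holds the chosen metric at its reference value.

```python
import numpy as np

def snr_cnr(img, roi_signal, roi_background):
    """SNR and CNR from two rectangular ROIs of a uniform-phantom image.
    ROIs are (row_slice, col_slice) tuples: SNR = mean/stddev in the
    signal ROI, CNR = |mean difference| / background stddev."""
    s, b = img[roi_signal], img[roi_background]
    snr = s.mean() / s.std()
    cnr = abs(s.mean() - b.mean()) / b.std()
    return snr, cnr

# Synthetic flat-field image standing in for a phantom exposure.
img = np.random.default_rng(0).normal(100.0, 5.0, (256, 256))
print(snr_cnr(img, (slice(50, 100), slice(50, 100)),
                   (slice(150, 200), slice(150, 200))))
```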
Assessing the Health and Performance Risks of Carbon Dioxide Exposures
NASA Technical Reports Server (NTRS)
James, John T.; Meyers, V. E.; Alexander, D.
2010-01-01
Carbon dioxide (CO2) is an anthropogenic gas that accumulates in spacecraft to much higher levels than earth-normal levels. Controlling concentrations of this gas to acceptable levels to ensure crew health and optimal performance demands major commitment of resources. NASA has many decades of experience monitoring and controlling CO2, yet we are uncertain of the levels at which subtle performance decrements develop. There is limited evidence from ground-based studies that visual disturbances can occur during brief exposures and visual changes have been noted in spaceflight crews. These changes may be due to CO2 alone or in combination with other known spaceflight factors such as increased intracranial pressure due to fluid shifts. Discerning the comparative contribution of each to performance decrements is an urgent issue if we hope to optimize astronaut performance aboard the ISS. Long-term, we must know the appropriate control levels for exploration-class missions to ensure that crewmembers can remain cooperative and productive in a highly stressful environment. Furthermore, we must know the magnitude of interindividual variability in susceptibility to the adverse effects of CO2 so that the most tolerant crewmembers can be identified. Ground-based studies have been conducted for many years to set exposure limits for submariners; however, these studies are typically limited and incompletely reported. Nonetheless, NASA, in cooperation with the National Research Council, has set exposure limits for astronauts using this limited database. These studies do not consider the interactions of spaceflight-induced fluid shifts and CO2 exposures. In an attempt to discern whether CO2 levels affect the incidence of headache and visual disturbances in astronauts we performed a retrospective study comparing average CO2 levels and the prevalence of headache and visual disturbances. Our goal is to narrow gaps in the risk profile for in-flight CO2 exposures. Such studies can provide no more than partial answers to the questions of environmental interactions, interindividual variability, and optimal control levels. Future prospective studies should involve assessment of astronaut well being using sophisticated measures during exposures to levels of CO2 in the range from 2 to 8 mmHg.
Investigation into Improvement for Anti-Rollover Propensity of SUV
NASA Astrophysics Data System (ADS)
Xiong, Fei; Lan, Fengchong; Chen, Jiqing; Yang, Yuedong
2017-05-01
Currently, much domestic and foreign research on improving the anti-rollover performance of vehicles focuses mainly on the electronic control of auxiliary equipment and does not make full use of the suspension layout to optimize a vehicle's anti-rollover performance. This investigation into improving anti-rollover propensity concentrates on the vehicle parameters that greatly influence it. A simulation based on the fishhook procedure is used to perform design trials and evaluations aimed at ensuring an optimal balance between the vehicle's design parameters and various engineering capacities, and the anti-rollover propensity is optimized at the detailed design stage of a new SUV model. First, a four-DOF theoretical kinematic model is established; then a complete multi-body dynamics model, built in ADAMS/Car from the whole-vehicle parameters, is correlated to the objective handling and stability test results of a mule car. Second, in fishhook test simulations, the Design of Experiments method is used to quantify the effect of the vehicle parameters on the anti-rollover performance. According to the simulation, the roll center height of the front suspension should be more than 30 mm, that of the rear suspension less than 150 mm, and the HCG less than 620 mm for the SUV, and the ratio of front to rear suspension roll stiffness should range from 1.4 to 1.6. As a result, at the detailed design stage of the product, the anti-rollover performance of the vehicle can be improved by optimizing the chassis and integrated vehicle parameters.
An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.
Rañó, Iñaki
2012-07-01
Motion camouflage is a stealth behaviour observed both in hover-flies and in dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to solve the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions to motion of the agents, while the performance index ensures camouflage trajectories. An extensive set of simulations support the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms to obtain sensor based motion camouflage, for instance, in mobile robots.
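For readers unfamiliar with the formalization, the camouflage condition is commonly written with respect to a fixed reference point; a standard form (which may differ in detail from the paper's exact performance index) is:

```latex
% Pursuer position r_p stays on the line joining the reference point r_o
% and the target r_t:
\[
  \mathbf{r}_p(t) \;=\; \mathbf{r}_o \;+\; \lambda(t)\,
  \bigl(\mathbf{r}_t(t) - \mathbf{r}_o\bigr), \qquad \lambda(t)\in(0,1),
\]
% so a natural cost penalizes the deviation from this line along the path,
% subject to unicycle (non-holonomic) dynamics
\[
  \dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad
  \dot{\theta} = \omega .
\]
```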
Simulation of the human-telerobot interface on the Space Station
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1993-01-01
Many issues remain unresolved concerning the components of the human-telerobot interface presented in this work. It is critical that these components be optimally designed and arranged to ensure, not only that the overall system's goals are met, but but that the intended end-user has been optimally accommodated. With sufficient testing and evaluation throughout the development cycle, the selection of the components to use in the final telerobotic system can promote efficient, error-free performance. It is recommended that whole-system simulation with full-scale mockups be used to help design the human-telerobot interface. It is contended that the use of simulation can facilitate this design and evaluation process.
Enabling Incremental Query Re-Optimization.
Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau
2016-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
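The plan-enumeration core that the datalog formulation recasts is the textbook bottom-up dynamic program. A minimal Python sketch of the static version follows (left-deep plans, toy cost model); the incremental re-optimizer described above would instead maintain the best-plan table as a materialized view and re-derive only entries whose input costs changed.

```python
from itertools import combinations

def best_plans(relations, scan_cost, join_cost):
    """Bottom-up dynamic programming over relation subsets (System-R
    style): best[S] holds the cheapest (cost, plan) for joining subset S,
    built from cheapest plans of its sub-subsets."""
    best = {frozenset([r]): (scan_cost[r], r) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for r in subset:                     # left-deep: join r last
                left, right = subset - {r}, frozenset([r])
                if left in best:
                    c = best[left][0] + best[right][0] + join_cost(left, right)
                    if subset not in best or c < best[subset][0]:
                        best[subset] = (c, (best[left][1], r))
    return best[frozenset(relations)]

scan = {"A": 10, "B": 20, "C": 5}
jc = lambda l, r: 1 + len(l) * len(r)   # toy cost model
print(best_plans(list(scan), scan, jc))
```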
Full space device optimization for solar cells.
Baloch, Ahmer A B; Aly, Shahzada P; Hossain, Mohammad I; El-Mellouhi, Fedwa; Tabet, Nouar; Alharbi, Fahhad H
2017-09-20
Advances in computational materials have paved a way to design efficient solar cells by identifying the optimal properties of the device layers. Conventionally, device optimization has been governed by single or double descriptors for an individual layer, mostly the absorbing layer. However, the performance of the device depends collectively on all the properties of the material and the geometry of each layer in the cell. To address this issue of multi-property optimization and to avoid the paradigm of recurring materials in the solar cell field, a full-space, material-independent optimization approach is developed and presented in this paper. The method is employed to obtain an optimized material data set for maximum efficiency and for targeted functionality for each layer. To ensure the robustness of the method, two cases are studied: perovskite solar cell device optimization and a cadmium-free CIGS solar cell. The implementation determines the desirable optoelectronic properties of transport mediums and contacts that can maximize the efficiency for both cases. The resulting data sets of material properties can be matched with those in materials databases or realized by further microscopic material design. Moreover, the presented multi-property optimization framework can be extended to design any solid-state device.
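Any global optimizer over a bounded vector of material properties fits the framework described. As an illustration only, the sketch below uses SciPy's differential evolution with a smooth dummy objective standing in for the device model; the bounds and target values are invented, not the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_efficiency(x):
    """Stand-in device model: maps a vector of layer properties to
    -efficiency. The paper couples the optimizer to a full device
    simulation; a smooth dummy function keeps the sketch self-contained."""
    target = np.array([1.5, 4.1, 1e17, 300e-9])
    return float(np.sum(((x - target) / target) ** 2))

bounds = [(1.0, 2.5),       # absorber band gap (eV)
          (3.5, 4.5),       # electron affinity (eV)
          (1e15, 1e19),     # doping density (cm^-3)
          (50e-9, 1e-6)]    # layer thickness (m)

res = differential_evolution(neg_efficiency, bounds, seed=0, tol=1e-8)
print(res.x, res.fun)
```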
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem that is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is effectively used to improve search efficiency by avoiding stagnation at a sub-optimal result. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
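A minimal PSO for the single-area dispatch core is easy to write down. The numpy sketch below handles the power balance with a penalty term; the three-unit cost coefficients are illustrative, and the multi-area tie-line and ramp-rate constraints would enter as additional penalties in the same way.

```python
import numpy as np
rng = np.random.default_rng(0)

def pso_dispatch(a, b, c, pmin, pmax, demand,
                 n_particles=40, n_iter=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO for economic dispatch: minimize the quadratic fuel
    cost sum(a*P^2 + b*P + c) subject to the power balance, handled
    here with a penalty term."""
    n = len(a)
    x = rng.uniform(pmin, pmax, (n_particles, n))
    v = np.zeros_like(x)
    cost = lambda P: (a * P**2 + b * P + c).sum(axis=-1) \
                     + 1e3 * np.abs(P.sum(axis=-1) - demand)
    pbest, pcost = x.copy(), cost(x)
    g = pbest[np.argmin(pcost)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, pmin, pmax)          # respect unit limits
        cx = cost(x)
        better = cx < pcost
        pbest[better], pcost[better] = x[better], cx[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

# Three-unit toy system (coefficients illustrative, not from the paper).
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
print(pso_dispatch(a, b, c, pmin=np.full(3, 10.0), pmax=np.full(3, 125.0),
                   demand=250.0))
```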
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-09-01
This paper presents an event-triggered near optimal control of uncertain nonlinear discrete-time systems. Event-driven neurodynamic programming (NDP) is utilized to design the control policy. A neural network (NN)-based identifier, with event-based state and input vectors, is utilized to learn the system dynamics. An actor-critic framework is used to learn the cost function and the optimal control input. The NN weights of the identifier, the critic, and the actor NNs are tuned aperiodically once every triggered instant. An adaptive event-trigger condition to decide the trigger instants is derived. Thus, a suitable number of events are generated to ensure a desired accuracy of approximation. A near optimal performance is achieved without using value and/or policy iterations. A detailed analysis of nontrivial inter-event times with an explicit formula to show the reduction in computation is also derived. The Lyapunov technique is used in conjunction with the event-trigger condition to guarantee the ultimate boundedness of the closed-loop system. The simulation results are included to verify the performance of the controller. The net result is the development of event-driven NDP.
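The event-triggering idea reduces to a simple test. The Python fragment below uses a fixed relative threshold sigma, whereas the paper derives an adaptive one; it only illustrates that transmission and weight updates are driven by the state error, not by the clock.

```python
import numpy as np

def should_trigger(x, x_last_sent, sigma=0.3):
    """Sketch of a state-error event-trigger: transmit (and update the NN
    weights) only when the gap between the current state and the last
    transmitted state exceeds a threshold scaling with the state norm."""
    e = np.linalg.norm(x - x_last_sent)
    return e > sigma * np.linalg.norm(x)

# Controller loop skeleton: events, not the sampling clock, drive updates.
x_sent = np.array([1.0, 0.0])
for x in [np.array([0.95, 0.02]), np.array([0.6, 0.3]), np.array([0.5, 0.4])]:
    if should_trigger(x, x_sent):
        x_sent = x          # state is sampled; actor/critic weights update
        print("event at", x)
```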
NASA Astrophysics Data System (ADS)
Chase, Patrick; Vondran, Gary
2011-01-01
Tetrahedral interpolation is commonly used to implement continuous color space conversions from sparse 3D and 4D lookup tables. We investigate the implementation and optimization of tetrahedral interpolation algorithms for GPUs, and compare to the best known CPU implementations as well as to a well known GPU-based trilinear implementation. We show that a $500 NVIDIA GTX-580 GPU is 3x faster than a $1000 Intel Core i7 980X CPU for 3D interpolation, and 9x faster for 4D interpolation. Performance-relevant GPU attributes are explored including thread scheduling, local memory characteristics, global memory hierarchy, and cache behaviors. We consider existing tetrahedral interpolation algorithms and tune based on the structure and branching capabilities of current GPUs. Global memory performance is improved by reordering and expanding the lookup table to ensure optimal access behaviors. Per multiprocessor local memory is exploited to implement optimally coalesced global memory accesses, and local memory addressing is optimized to minimize bank conflicts. We explore the impacts of lookup table density upon computation and memory access costs. Also presented are CPU-based 3D and 4D interpolators, using SSE vector operations that are faster than any previously published solution.
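For reference, the tetrahedral interpolation algorithm itself (independent of the GPU tuning discussed above) can be written compactly. The numpy sketch below sorts the fractional coordinates to select one of the six tetrahedra in the enclosing lattice cell and blends four lattice points; the identity LUT at the end is a sanity check.

```python
import numpy as np

def tetra_interp(lut, rgb):
    """Tetrahedral interpolation in an N x N x N x 3 lookup table for one
    RGB point in [0,1]: the unit cube around the sample is split into six
    tetrahedra by sorting the fractional coordinates, and the output is a
    weighted sum of four lattice points (vs. eight for trilinear)."""
    n = lut.shape[0] - 1
    p = np.clip(np.asarray(rgb, float), 0.0, 1.0) * n
    i = np.minimum(p.astype(int), n - 1)        # base lattice index
    f = p - i                                   # fractional part
    order = np.argsort(-f)                      # descending: picks tetrahedron
    idx = i.copy()
    out = (1.0 - f[order[0]]) * lut[tuple(idx)]
    w = [f[order[0]] - f[order[1]], f[order[1]] - f[order[2]], f[order[2]]]
    for k in range(3):
        idx[order[k]] += 1                      # walk one cube edge at a time
        out += w[k] * lut[tuple(idx)]
    return out

# Identity LUT sanity check.
g = np.linspace(0, 1, 17)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(tetra_interp(lut, (0.2, 0.55, 0.9)))   # ~ (0.2, 0.55, 0.9)
```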
NASA Astrophysics Data System (ADS)
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over the past years, many studies have looked at the planning and management of water infrastructure systems as two separate problems, where the dynamic component (i.e., operations) is considered only after the static problem (i.e., planning) has been resolved. More recent works have started to investigate planning and management as two strictly interconnected faces of the same problem, where the former is solved jointly with the latter in an integrated framework. This brings advantages to multi-purpose water reservoir systems, where several optimal operating strategies exist and similar system designs might perform differently in the long term depending on the considered short-term operating tradeoff. An operationally robust design will therefore be one that performs well across multiple feasible tradeoff operating policies. This work aims at studying the interaction between short-term operating strategies and their impacts on long-term structural decisions, when long-lived infrastructures with complex ecological impacts and multi-sectoral demands to satisfy (i.e., reservoirs) are considered. A parametric reinforcement learning approach is adopted for nesting optimization and control, yielding both an optimal reservoir design and optimal operational policies for water reservoir systems. The method is demonstrated on a synthetic reservoir that must be designed and operated to ensure reliable water supply to downstream users. At first, the optimal design capacity derived is compared with the 'no-fail storage' computed through Rippl, a capacity design function that returns the minimum storage needed to satisfy specified water demands without allowing supply shortfall. Then, the optimal reservoir volume is used to simulate the simplified case study under operating objectives other than water supply, in order to assess whether and how the system performance changes. The more robust the infrastructural design, the smaller the difference between the performances of different operating strategies.
Performance seeking control: Program overview and future directions
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1993-01-01
A flight test evaluation of the performance-seeking control (PSC) algorithm on the NASA F-15 highly integrated digital electronic control research aircraft was conducted for single-engine operation at subsonic and supersonic speeds. The model-based PSC system was developed with three optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, and maximum thrust at maximum dry and full afterburner throttle settings. Subsonic and supersonic flight testing were conducted at the NASA Dryden Flight Research Facility covering the three PSC optimization modes and over the full throttle range. Flight results show substantial benefits. In the maximum thrust mode, thrust increased up to 15 percent at subsonic and 10 percent at supersonic flight conditions. The minimum fan turbine inlet temperature mode reduced temperatures by more than 100 F at high altitudes. The minimum fuel flow mode results decreased fuel consumption up to 2 percent in the subsonic regime and almost 10 percent supersonically. These results demonstrate that PSC technology can benefit the next generation of fighter or transport aircraft. NASA Dryden is developing an adaptive aircraft performance technology system that is measurement based and uses feedback to ensure optimality. This program will address the technical weaknesses identified in the PSC program and will increase performance gains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farina, D.; Figini, L.; Henderson, M.
2014-06-15
The design of the ITER Electron Cyclotron Heating and Current Drive (EC H and CD) system has evolved in recent years both in goals and functionalities through consideration of an expanded range of applications. A large effort has been devoted to better integration of the equatorial and upper launchers, both from the point of view of performance and of the design impact on engineering constraints. However, analysis of the ECCD performance in two reference H-mode scenarios at burn (the inductive H-mode and the advanced non-inductive scenario) made clear that the EC power deposition was not optimal for steady-state applications in the plasma region around mid radius. An optimization study of the equatorial launcher is presented here, aiming at removing this limitation of the EC system capabilities. Changing the steering of the equatorial launcher from toroidal to poloidal ensures EC power deposition out to the normalized toroidal radius ρ ≈ 0.6, and nearly doubles the EC driven current around mid radius, without significant performance degradation in the core plasma region. In addition to the improved performance, the proposed design change is able to relax some engineering design constraints on both launchers.
Fiscal Year 2013 Net Zero Energy-Water-Waste Portfolio for Fort Leonard Wood
2014-12-01
rain sensor/evapotranspiration central control system. Witnesses said they have seen the system in use during rains, so it is possible the system settings and sensors need to be reassessed. Building 6100: the building is an administrative trainee company headquarters... ensure that each watering event is optimally performed during the day. A centrally controlled system with rain sensors should also
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
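One common flavour of optimal subset selection is greedy max-min coverage of the climate-variable space. The sketch below is a generic illustration of that idea, not necessarily either of the two methods used in the study.

```python
import numpy as np

def maxmin_subset(X, k):
    """Greedy max-min ('KKZ-style') selection: pick the simulation closest
    to the ensemble mean first, then repeatedly add the simulation farthest
    from all already-selected ones, so the subset spreads across the
    ensemble's range. X is (n_sims, n_vars), ideally standardized."""
    chosen = [int(np.argmin(np.linalg.norm(X - X.mean(0), axis=1)))]
    while len(chosen) < k:
        d = np.min(np.linalg.norm(X[:, None] - X[chosen], axis=2), axis=1)
        d[chosen] = -np.inf                 # never re-select a member
        chosen.append(int(np.argmax(d)))
    return chosen

rng = np.random.default_rng(0)
sims = rng.normal(size=(50, 2))             # e.g., temperature and
print(maxmin_subset(sims, 8))               # precipitation change signals
```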
Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe
NASA Astrophysics Data System (ADS)
Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao
2017-10-01
Microscopic 3-D shape measurement can supply accurate metrology of delicate and complex MEMS components to ensure the proper performance of the final devices. Fringe projection profilometry (FPP) has the advantages of being non-contact and highly accurate, making it widely used in 3-D measurement. Recent advances in electronics have driven 3-D measurement to become more accurate and faster. However, research on real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. A slight defocusing of the proposed binary patterns considerably alleviates the measurement error in phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result on a vibrating earphone is presented.
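The phase-shifting step that the defocused binary patterns feed into is the standard N-step formula. A short numpy sketch with a synthetic check follows; real use would add the number-theoretical unwrapping mentioned above.

```python
import numpy as np

def wrapped_phase(imgs):
    """Wrapped phase from N equally phase-shifted fringe images
    I_k = A + B*cos(phi + 2*pi*k/N), via the standard N-step formula
    phi = atan2(-sum I_k sin(2*pi*k/N), sum I_k cos(2*pi*k/N)). With
    defocused binary patterns the captured fringes are close enough to
    sinusoidal for this to apply."""
    n = len(imgs)
    deltas = 2 * np.pi * np.arange(n) / n
    s = sum(I * np.sin(d) for I, d in zip(imgs, deltas))
    c = sum(I * np.cos(d) for I, d in zip(imgs, deltas))
    return np.arctan2(-s, c)

# Synthetic check against a known phase ramp.
x = np.linspace(0, 4 * np.pi, 512)
imgs = [128 + 100 * np.cos(x + 2 * np.pi * k / 3) for k in range(3)]
err = np.angle(np.exp(1j * (wrapped_phase(imgs) - x)))
print(np.max(np.abs(err)))   # ~0 up to numerical noise
```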
NASA Astrophysics Data System (ADS)
Chen, Dongju; Li, Dandan; Li, Xianfeng
2017-06-01
A hierarchical poly(ether sulfone) (PES) porous membrane is facilely fabricated via a hard-template method for vanadium flow battery (VFB) applications. The hierarchical porous structure is created by removing the templates (phenolphthalein). The pore size can be well controlled by optimizing the template content in the casting solution, ensuring the membrane's conductivity and selectivity. The prepared hierarchical porous membrane combines high ion selectivity with high proton conductivity, which renders good electrochemical performance in a VFB. The optimized hierarchical porous membrane shows a coulombic efficiency of 94.52% and an energy efficiency of 81.66%, along with a superior ability to maintain stable capacity over extended cycling at a current density of 80 mA cm-2. The characteristics of low cost, proven chemical stability and high electrochemical performance give the hierarchical PES porous membrane great prospects in VFB applications.
NASA Astrophysics Data System (ADS)
Rajesh, Chelakkal Sukumaran; Sreeroop, Sasidharan Savithrydevi; Pramitha, Vayalamkuzhi; Joseph, Rani; Sreekumar, Krishnapillai; Kartha, Cheranellore Sudha
2011-12-01
This article reports a study of eosin-doped poly(vinyl alcohol)/acrylamide films for holographic recording using a 488 nm Ar+ laser. Films were fabricated using the gravity settling method at room temperature and were stored under normal laboratory conditions. The Ar+ laser (488 nm) was used for fringe recording. Characterization was done by real-time transmittance measurement, optical absorption studies, and diffraction efficiency measurements. Various holographic parameters, such as exposure energy, recording power, and spatial frequency, were optimized so as to ensure maximum performance. More than 85% diffraction efficiency was obtained at an exposure energy of 50 mJ/cm2 in the optimized film. Efforts were made to study the environmental stability of this self-developing polymeric material by examining its shelf life and storage life. Compatibility for recording transmission holograms was also checked.
NASA Astrophysics Data System (ADS)
Pakpahan, Eka K. A.; Iskandar, Bermawi P.
2015-12-01
The mining industry is characterized by high operational revenue, so high availability of the heavy equipment used in mining is a critical factor in meeting the revenue target. To maintain high availability of the heavy equipment, the equipment's owner hires an agent to perform maintenance actions, and a contract is used to control the relationship between the two parties involved. Traditional contracts such as fixed-price, cost-plus or penalty-based contracts are unable to push the agent's performance beyond the target, and this in turn leads to a sub-optimal result (revenue). This research deals with designing maintenance contract compensation schemes. The scheme should induce the agent to select the highest possible maintenance effort level, thereby pushing the agent's performance and achieving maximum utility for both parties involved. Principal-agent theory is used as a modeling approach due to its ability to simultaneously model the owner's and agent's decision-making processes. The compensation schemes considered in this research include fixed price, cost sharing and revenue sharing, and the optimal decision is obtained using a numerical method. The results show that if both parties are risk neutral, infinitely many combinations of fixed price, cost sharing and revenue sharing produce the same optimal solution. When the agent is risk averse, the optimal scheme combines a fixed price with cost sharing or with revenue sharing, and when both parties are risk averse, the optimal compensation scheme is a combination of fixed price, cost sharing and revenue sharing.
Trigeminal neuralgia--a coherent cross-specialty management program.
Heinskou, Tone; Maarbjerg, Stine; Rochat, Per; Wolfram, Frauke; Jensen, Rigmor Højland; Bendtsen, Lars
2015-01-01
Optimal management of patients with classical trigeminal neuralgia (TN) requires specific treatment programs and close collaboration between medical, radiological and surgical specialties. The organization of such treatment programs has never been described before. With this paper we aim to describe the implementation and feasibility of an accelerated cross-specialty management program, to describe the collaboration between the involved specialties, and to report the patient flow during the first 2 years after implementation. Finally, we aim to stimulate discussion about the optimal management of TN. Based on collaboration between neurologists, neuroradiologists and neurosurgeons, a standardized program for TN was implemented in May 2012 at the Danish Headache Center (DHC). The first out-patient visit and the subsequent 3.0 Tesla MRI scan were booked in an accelerated manner. The MRI scan was performed according to a special TN protocol developed for this program. Patients initially referred to neurosurgery were re-directed to DHC for pre-surgical evaluation of diagnosis and optimization of medical treatment. Follow-up was 2 years, with fixed visits at which medical treatment and the indication for neurosurgery were continuously evaluated. Scientific data were collected in a structured and prospective manner. From May 2012 to April 2014, 130 patients entered the accelerated program. Waiting time for the first out-patient visit was 42 days. Ninety-four percent of the patients had an MRI performed according to the special protocol after a mean of 37 days. Within the 2-year follow-up, 35% of the patients were referred to neurosurgery after a median time of 65 days. Five scientific papers describing demographics, clinical characteristics and neuroanatomical abnormalities were published. The described cross-specialty management program proved to be feasible and to have acceptable waiting times for referral and highly specialized work-up of TN patients in a public tertiary referral centre for headache and facial pain. Early high-quality MRI ensured correct diagnosis and gave the neurosurgeons a standardized basis for decision-making on impending surgery. The program ensured that referral of the subgroup of patients in need of surgery was standardized, ensured continuous evaluation of the need for adjustments in pharmacological management, and formed the basis for scientific research.
Fast 2D FWI on a multi and many-cores workstation.
NASA Astrophysics Data System (ADS)
Thierry, Philippe; Donno, Daniela; Noble, Mark
2014-05-01
Following the introduction of x86 co-processors (Xeon Phi) and the performance increase of standard 2-socket workstations using the latest 12-core E5-v2 x86-64 CPUs, we present here an MPI + OpenMP implementation of an acoustic 2D FWI (full waveform inversion) code which runs simultaneously on the CPUs and on the co-processors installed in a workstation. The main advantage of running a 2D FWI on a workstation is the ability to quickly evaluate new features such as more complicated wave equations, new cost functions, finite-difference stencils or boundary conditions. Since the co-processor is made of 61 in-order x86 cores, each of them having up to 4 threads, this many-core device can be seen as a shared memory SMP (symmetric multiprocessing) machine with its own IP address. Depending on the vendor, a single workstation can handle several co-processors, making the workstation a personal cluster under the desk. The original Fortran 90 CPU version of the 2D FWI code is simply recompiled to get a Xeon Phi x86 binary. This multi- and many-core configuration uses standard compilers and the associated MPI and math libraries under Linux; therefore, the cost of code development remains constant while computation time improves. We choose to implement the code in the so-called symmetric mode to fully use the capacity of the workstation, but we also evaluate the scalability of the code in native mode (i.e., running only on the co-processor) thanks to the Linux ssh and NFS capabilities. The usual care in optimization and SIMD vectorization is applied to ensure optimal performance and to analyze the application's performance and bottlenecks on both platforms. The 2D FWI implementation uses finite-difference time-domain forward modeling and a quasi-Newton (L-BFGS) optimization scheme for the model parameter updates. Parallelization is achieved through standard MPI distribution of shot gathers and OpenMP domain decomposition within the co-processor. Taking advantage of the 16 GB of memory available on the co-processor, we are able to keep wavefields in memory and compute the gradient by cross-correlation of forward and back-propagated wavefields, as needed by our time-domain FWI scheme, without heavy traffic on the I/O subsystem and PCIe bus. In this presentation we also review some simple methodologies for setting performance expectations against measured performance, in order to estimate the optimization effort before starting any major modification or rewrite of research codes. The key message is the ease of use and development of this hybrid configuration, which reaches not the absolute peak performance but the optimal one that ensures the best balance between geophysical and computational development.
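The model-update loop can be sketched around SciPy's L-BFGS driver. In the fragment below the misfit/gradient routine is a runnable quadratic stand-in; in the real code it would perform one forward modeling per shot and build the gradient by cross-correlating forward and back-propagated wavefields, as described above.

```python
import numpy as np
from scipy.optimize import minimize

def misfit_and_grad(m_flat, shape, m_obs):
    """Runnable stand-in for the expensive part: the real code does one
    forward modeling per shot and cross-correlates wavefields for the
    gradient; here a quadratic misfit keeps the sketch self-contained."""
    r = m_flat.reshape(shape) - m_obs
    return 0.5 * float((r * r).sum()), r.ravel()

shape = (60, 120)                               # nz, nx velocity grid
m_true = 2000.0 + 500.0 * np.random.default_rng(0).random(shape)
m0 = np.full(shape, 2000.0).ravel()             # starting model (m/s)

res = minimize(misfit_and_grad, m0, args=(shape, m_true), jac=True,
               method="L-BFGS-B", options={"maxiter": 50})
print(res.fun)                                  # misfit after 50 iterations
```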
2017-01-01
This paper presents a method for formation flight and collision avoidance of multiple UAVs. To address the difficulty of collision avoidance caused by the UAVs' high speed and unstructured environments, this paper proposes a modified tentacle algorithm to ensure high collision-avoidance performance. Unlike the conventional tentacle algorithm, which uses inverse derivation, the modified tentacle algorithm rapidly matches the radius of each tentacle to the corresponding steering command, solving the heavy-computation problem of the conventional tentacle algorithm. Meanwhile, both the speed sets and the tentacles in each speed set are reduced and reconstructed so that the algorithm can be applied to multiple UAVs. Instead of iterative path optimization, the method selects the best tentacle to obtain the UAV collision avoidance path quickly. The simulation results show that the method presented in the paper effectively enhances the performance of formation flight and collision avoidance for multiple high-speed UAVs in unstructured environments. PMID:28763498
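A minimal sketch of the best-tentacle selection idea, assuming constant-curvature arcs and a simple clearance-based cost; the paper's speed sets, tentacle reconstruction and cost terms are richer, so everything below is illustrative.

```python
import numpy as np

def select_best_tentacle(pos, heading, speed, obstacles, n_tentacles=15,
                         horizon=3.0, safety_radius=10.0):
    """Sample candidate arcs ("tentacles") for one speed set, drop those
    that pass too close to an obstacle, and return the curvature of the
    cheapest clear arc. obstacles: array of shape (m, 2)."""
    curvatures = np.linspace(-0.05, 0.05, n_tentacles)  # 1 / turn radius
    ts = np.linspace(0.0, horizon, 30)
    best_cost, best_kappa = np.inf, 0.0
    for kappa in curvatures:
        if abs(kappa) < 1e-9:                 # straight-line tentacle
            xs = pos[0] + speed * ts * np.cos(heading)
            ys = pos[1] + speed * ts * np.sin(heading)
        else:                                 # constant-curvature arc
            theta = heading + speed * kappa * ts
            xs = pos[0] + (np.sin(theta) - np.sin(heading)) / kappa
            ys = pos[1] - (np.cos(theta) - np.cos(heading)) / kappa
        pts = np.stack([xs, ys], axis=1)
        dists = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
        if dists.min() < safety_radius:
            continue                          # tentacle blocked
        cost = abs(kappa) - 0.01 * dists.min()  # prefer straight, clear arcs
        if cost < best_cost:
            best_cost, best_kappa = cost, kappa
    return best_kappa

# usage: one obstacle ahead and slightly to the right
print(select_best_tentacle(np.array([0.0, 0.0]), 0.0, 50.0,
                           np.array([[120.0, 5.0]])))
```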
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-01-01
Data acquisition is the foundation of soft sensor and data fusion. Distributed data acquisition and its synchronization are the important technologies to ensure the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between the network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Mirollo and Strogatz model and the Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low quality links. PMID:28282899
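A minimal pulse-coupled simulation sketch of the firefly principle on an arbitrary topology; the discrete phase counter, linear coupling rule and ring example are illustrative assumptions, not the paper's multiscale discrete phase model.

```python
import numpy as np

def simulate_firefly_sync(adjacency, n_steps=2000, period=100, jump=0.03,
                          seed=0):
    """Each node advances a discrete phase counter and fires when it wraps;
    a node hearing a neighbour fire nudges its own phase forward, so phases
    cluster as the coupling repeats."""
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    phase = rng.uniform(0, period, n)
    for _ in range(n_steps):
        phase += 1.0
        fired = phase >= period
        phase[fired] -= period
        if fired.any():
            # neighbours of firing nodes jump a fraction toward firing
            pull = adjacency[:, fired].sum(axis=1)
            phase += jump * period * (pull > 0) * (~fired)
    return phase  # tightly clustered phases indicate synchronization

# usage: a 10-node ring topology
A = np.zeros((10, 10), dtype=int)
for i in range(10):
    A[i, (i - 1) % 10] = A[i, (i + 1) % 10] = 1
print(np.sort(simulate_firefly_sync(A)))
```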
Optimum design and measurement analysis of 0.34 THz extended interaction klystron
NASA Astrophysics Data System (ADS)
Li, Shuang; Wang, Jianguo; Xi, Hongzhu; Wang, Dongyang; Wang, Bingbing; Wang, Guangqiang; Teng, Yan
2018-02-01
In order to develop an extended interaction klystron (EIK) with high performance in the terahertz range, the staggered-tuned structure is numerically studied, manufactured, and measured. First, the circuit is optimized to obtain high interaction strength and to avoid mode overlapping in the output cavity, ensuring the efficiency and stability of the device. The clustered cavities are then staggered-tuned to improve the bandwidth. A particle-in-cell (PIC) code is employed to study the performance of the device under different conditions, and the practicable and reliable operating conditions are accordingly confirmed. The device can effectively amplify the input terahertz signal, and its gain reaches around 19.6 dB at a working current of 150 mA. The circuit and window are fabricated and tested, and the results demonstrate their usability. An experiment on the beam's transmission is conducted; the results show that about 92% of the emitted current successfully arrives at the collector, ensuring the validity and feasibility of the interaction process.
Robust, Optimal Water Infrastructure Planning Under Deep Uncertainty Using Metamodels
NASA Astrophysics Data System (ADS)
Maier, H. R.; Beh, E. H. Y.; Zheng, F.; Dandy, G. C.; Kapelan, Z.
2015-12-01
Optimal long-term planning plays an important role in many water infrastructure problems. However, this task is complicated by deep uncertainty about future conditions, such as the impact of population dynamics and climate change. One way to deal with this uncertainty is by means of robustness, which aims to ensure that water infrastructure performs adequately under a range of plausible future conditions. However, as robustness calculations require computationally expensive system models to be run for a large number of scenarios, it is generally computationally intractable to include robustness as an objective in the development of optimal long-term infrastructure plans. In order to overcome this shortcoming, an approach is developed that uses metamodels instead of computationally expensive simulation models in robustness calculations. The approach is demonstrated for the optimal sequencing of water supply augmentation options for the southern portion of the water supply for Adelaide, South Australia. A 100-year planning horizon is subdivided into ten equal decision stages for the purpose of sequencing various water supply augmentation options, including desalination, stormwater harvesting and household rainwater tanks. The objectives include the minimization of average present value of supply augmentation costs, the minimization of average present value of greenhouse gas emissions and the maximization of supply robustness. The uncertain variables are rainfall, per capita water consumption and population. Decision variables are the implementation stages of the different water supply augmentation options. Artificial neural networks are used as metamodels to enable all objectives to be calculated in a computationally efficient manner at each of the decision stages. The results illustrate the importance of identifying optimal staged solutions to ensure robustness and sustainability of water supply into an uncertain long-term future.
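A minimal sketch of the metamodel idea, assuming an invented stand-in simulator: train a small neural network on a batch of expensive simulator runs, then evaluate robustness (here, the fraction of sampled futures meeting a reliability threshold) cheaply over many scenarios. The feature ranges, threshold, stand-in model and sklearn choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulator(x):
    # stand-in for the real system model; columns: rainfall, demand, population
    rain, demand, pop = x[:, 0], x[:, 1], x[:, 2]
    return 100.0 * rain / (rain + 0.5 * demand * pop)

rng = np.random.default_rng(1)
lo, hi = [200, 150, 0.8], [900, 400, 2.5]
X_train = rng.uniform(lo, hi, size=(500, 3))     # 500 "expensive" runs
y_train = expensive_simulator(X_train)
metamodel = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=1).fit(X_train, y_train)

# robustness = fraction of sampled plausible futures in which the predicted
# supply reliability stays above a threshold -- now cheap to compute
scenarios = rng.uniform(lo, hi, size=(10000, 3))
robustness = np.mean(metamodel.predict(scenarios) >= 60.0)
print(f"robustness over {len(scenarios)} scenarios: {robustness:.2f}")
```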
Lithium-Ion Batteries for Aerospace Applications
NASA Technical Reports Server (NTRS)
Surampudi, S.; Halpert, G.; Marsh, R. A.; James, R.
1999-01-01
This presentation reviews: (1) the goals and objectives, (2) the NASA and Air Force requirements, (3) the potential near term missions, (4) the management approach, (5) the technical approach and (6) the program road map. The objectives of the program include: (1) develop high specific energy and long life lithium ion cells and smart batteries for aerospace and defense applications, (2) establish domestic production sources, and (3) demonstrate technological readiness for various missions. The management approach is to encourage the teaming of universities, R&D organizations, and battery manufacturing companies, to build on existing commercial and government technology, and to develop two sources for manufacturing cells and batteries. The technical approach includes: (1) develop advanced electrode materials and electrolytes to achieve improved low temperature performance and long cycle life, (2) optimize cell design to improve specific energy, cycle life and safety, (3) establish manufacturing processes to ensure predictable performance, (4) develop aerospace lithium ion cells in various AH sizes and voltages, (5) develop electronics for smart battery management, (6) develop a performance database required for various applications, and (7) demonstrate technology readiness for the various missions. Charts which review the requirements for the Li-ion battery development program are presented.
Han, Min; Fan, Jianchao; Wang, Jun
2011-09-01
A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.
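Below is a minimal sketch in the spirit of GPSO, assuming a plain Gaussian draw in place of the paper's Gaussian-function chaotic map: Gaussian-shaped random coefficients and clipped velocities stand in for the exploration/exploitation balancing and adaptive velocity ranges described above. The hyperparameters and test function are illustrative.

```python
import numpy as np

def gpso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Particle swarm with Gaussian-perturbed update coefficients."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1 = np.abs(rng.normal(0.5, 0.2, (n_particles, dim)))  # Gaussian draws
        r2 = np.abs(rng.normal(0.5, 0.2, (n_particles, dim)))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        v = np.clip(v, -1.0, 1.0)        # stand-in for adaptive velocity range
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# usage: minimize the 2-D Rosenbrock function
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
print(gpso_minimize(rosen, dim=2))
```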
Ares-I Bending Filter Design using a Constrained Optimization Approach
NASA Technical Reports Server (NTRS)
Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth
2008-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes the performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag at the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.
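A toy sketch of the constrained-optimization idea, not the Ares-I design: tune a simple lag filter to attenuate a notional flex frequency while a step-response constraint protects command tracking. The first-order filter form, the 12 rad/s mode and every limit below are invented for illustration.

```python
import numpy as np
from scipy import optimize, signal

W_FLEX = 12.0  # rad/s, notional first bending-mode frequency (assumption)

def lag_filter(p):
    tau, beta = p
    return signal.TransferFunction([tau, 1.0], [beta * tau, 1.0])

def flex_gain(p):
    _, h = signal.freqresp(lag_filter(p), w=[W_FLEX])
    return float(np.abs(h[0]))           # objective: attenuation at the mode

def tracking_error(p):
    t, y = signal.step(lag_filter(p), T=np.linspace(0.0, 2.0, 400))
    return float(np.max(np.abs(1.0 - y[t > 1.0])))  # late step-response error

res = optimize.minimize(
    flex_gain, x0=[0.1, 5.0], method="SLSQP",
    bounds=[(1e-3, 10.0), (1.0, 50.0)],
    constraints=[{"type": "ineq", "fun": lambda p: 0.05 - tracking_error(p)}])
print(res.x, flex_gain(res.x))
```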
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, which are similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
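The heart of choosing "the best blocks and threads configuration" can be sketched as a resource-constrained search; the per-SM limits below are illustrative numbers for a single GPU generation, and a real TLPOM run would combine such a search with ILP tuning and measured timings.

```python
# Illustrative per-SM resource limits (assumptions, not measured values)
MAX_THREADS_PER_SM = 2048
MAX_THREADS_PER_BLOCK = 1024
REGS_PER_SM = 65536
REGS_PER_THREAD = 40          # would come from compiler output in a real run
SHMEM_PER_SM = 49152
SHMEM_PER_BLOCK = 4096        # bytes the kernel requests

def occupancy(threads_per_block):
    """Estimated fraction of SM thread slots kept busy by this config."""
    by_threads = MAX_THREADS_PER_SM // threads_per_block
    by_regs = REGS_PER_SM // (REGS_PER_THREAD * threads_per_block)
    by_shmem = SHMEM_PER_SM // SHMEM_PER_BLOCK
    resident_blocks = min(by_threads, by_regs, by_shmem)
    return resident_blocks * threads_per_block / MAX_THREADS_PER_SM

# search all warp-multiple block sizes and keep the best estimate
candidates = [32 * k for k in range(1, MAX_THREADS_PER_BLOCK // 32 + 1)]
best = max(candidates, key=occupancy)
print(f"best threads/block: {best}, occupancy: {occupancy(best):.2f}")
```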
Testing and Optimizing a Stove-Powered Thermoelectric Generator with Fan Cooling.
Zheng, Youqu; Hu, Jiangen; Li, Guoneng; Zhu, Lingyun; Guo, Wenwen
2018-06-07
In order to provide heat and electricity under emergency conditions in off-grid areas, a stove-powered thermoelectric generator (STEG) was designed and optimized. No battery was incorporated, ensuring it would work anytime, anywhere, as long as combustible materials were provided. The startup performance, power load feature and thermoelectric (TE) efficiency were investigated in detail. Furthermore, the heat-conducting plate thickness, cooling fan selection, heat sink dimension and TE module configuration were optimized. The heat flow method was employed to determine the TE efficiency, which was compared to the predicted data. Results showed that the STEG can supply clean-and-warm air (625 W) and electricity (8.25 W at 5 V) continuously at a temperature difference of 148 °C, and the corresponding TE efficiency was measured to be 2.31%. Optimization showed that the choice of heat-conducting plate thickness, heat sink dimensions and cooling fan were inter-dependent, and the TE module configuration affected both the startup process and the power output.
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
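A minimal sketch of optimal input design by Fisher information on a far simpler system than LISA Pathfinder: for a first-order lag with unknown gain driven by a sinusoid, choose the injection frequency that maximizes the information about the gain (equivalently, minimizes its estimation variance). The system, noise level and signal family are illustrative assumptions.

```python
import numpy as np

def fisher_information(freq, tau=0.5, t_end=100.0, n=10000, sigma=0.1):
    """FIM for estimating the gain of a first-order lag from its
    steady-state response to sin(2*pi*freq*t) in white noise."""
    t = np.linspace(0.0, t_end, n)
    w = 2.0 * np.pi * freq
    # sensitivity dy/dgain: attenuated, phase-shifted sinusoid
    dy_dg = np.sin(w * t - np.arctan(w * tau)) / np.sqrt(1.0 + (w * tau) ** 2)
    return np.sum(dy_dg ** 2) / sigma ** 2

freqs = np.logspace(-3, 1, 200)
best = freqs[np.argmax([fisher_information(f) for f in freqs])]
print(f"most informative injection frequency: {best:.4f} Hz")
# the gain-estimate variance scales as 1/FIM, so frequencies below the
# 1/tau corner are preferred for this low-pass system
```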
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup, adequate selection of the external fiducial configuration is required to compensate for inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients and compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration when TRE is minimized for organs at risk (OARs) were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
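A minimal sketch of the genetic-plus-taboo idea: evolve marker subsets of size k while a taboo list blocks revisiting configurations. Here `candidates` would be feasible skin positions and `tre(subset)` a model-based estimator of registration error; all operators are simplified illustrations, not the paper's exact GETS operators.

```python
import numpy as np

def gets_marker_placement(candidates, k, tre, pop=20, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    n = len(candidates)
    popn = [tuple(sorted(rng.choice(n, k, replace=False))) for _ in range(pop)]
    taboo, best, best_f = set(popn), None, np.inf
    for _ in range(gens):
        scored = sorted(popn, key=tre)
        if tre(scored[0]) < best_f:
            best, best_f = scored[0], tre(scored[0])
        parents = scored[: pop // 2]
        children, attempts = [], 0
        while len(children) < pop and attempts < 10 * pop:
            attempts += 1
            i, j = rng.choice(len(parents), 2, replace=False)
            pool = list(set(parents[i]) | set(parents[j]))      # crossover
            child = set(rng.choice(pool, k, replace=False))
            if rng.random() < 0.3:                              # mutation
                child.discard(rng.choice(list(child)))
                while len(child) < k:
                    child.add(int(rng.integers(n)))
            child = tuple(sorted(child))
            if child not in taboo:          # taboo-based decision making
                taboo.add(child)
                children.append(child)
        popn = children or popn
    return [candidates[i] for i in best], best_f
```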
Aziz, Najib; Margolick, Joseph B; Detels, Roger; Rinaldo, Charles R; Phair, John; Jamieson, Beth D; Butch, Anthony W
2013-04-01
Cryopreservation of peripheral blood mononuclear cells (PBMC) allows assays of cellular function and phenotype to be performed in batches at a later time on PBMC at a central laboratory to minimize assay variability. The Multicenter AIDS Cohort Study (MACS) is an ongoing prospective study of the natural and treated history of human immunodeficiency virus (HIV) infection that stores cryopreserved PBMC from participants two times a year at four study sites. In order to ensure consistent recovery of viable PBMC after cryopreservation, a quality assessment program was implemented and conducted in the MACS over a 6-year period. Every 4 months, recently cryopreserved PBMC from HIV-1-infected and HIV-1-uninfected participants at each MACS site were thawed and evaluated. The median recoveries of viable PBMC for HIV-1-infected and -uninfected participants were 80% and 83%, respectively. Thawed PBMC from both HIV-1-infected and -uninfected participants mounted a strong proliferative response to phytohemagglutinin, with median stimulation indices of 84 and 120, respectively. Expression of the lymphocyte surface markers CD3, CD4, and CD8 by thawed PBMC was virtually identical to what was observed on cells measured in real time using whole blood from the same participants. Furthermore, despite overall excellent performance of the four participating laboratories, problems were identified that intermittently compromised the quality of cryopreserved PBMC, which could be corrected and monitored for improvement over time. Ongoing quality assessment helps laboratories improve protocols and performance on a real-time basis to ensure optimal cryopreservation of PBMC for future studies.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the different computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
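A minimal sketch of the dynamic-staleness idea behind a strategy like DSP, with an invented adaptation rule (the paper's performance monitoring model is more detailed): the server tracks each worker's iteration count and widens or tightens the allowed staleness so slow sensors do not stall fast ones, yet no worker drifts too far.

```python
import numpy as np

class DynamicSyncServer:
    def __init__(self, n_workers, base_staleness=2):
        self.clock = np.zeros(n_workers, dtype=int)  # iterations completed
        self.staleness = base_staleness

    def report(self, worker_id):
        """Worker finished one iteration; returns True if it may proceed."""
        self.clock[worker_id] += 1
        spread = int(self.clock.max() - self.clock.min())
        # widen the bound when the spread keeps hitting it, tighten when
        # workers are well matched -- a simple performance-monitoring rule
        if spread >= self.staleness:
            self.staleness = min(self.staleness + 1, 8)
        elif spread == 0:
            self.staleness = max(self.staleness - 1, 1)
        return self.clock[worker_id] - self.clock.min() <= self.staleness

server = DynamicSyncServer(n_workers=4)
print(server.report(0))  # True: still within the adaptive staleness bound
```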
AS Migration and Optimization of the Power Integrated Data Network
NASA Astrophysics Data System (ADS)
Zhou, Junjie; Ke, Yue
2018-03-01
In the transformation of an integrated data network, the impact on the business has always been the most important factor in measuring the quality of the network transformation. Given the importance of the business carried by the data network, specific design proposals must be put forward during the transformation, and a large amount of demonstration and practice is needed to ensure that the transformation program meets the requirements of the enterprise data network. This paper mainly demonstrates a scheme for migrating point-to-point access equipment in the reconstruction project of the power integrated data network: migrating the BGP autonomous system number into the range specified by industry standards, and smoothly migrating the intranet routing protocol from OSPF to IS-IS. Through this optimization design, the power data network ultimately achieved improved traffic-forwarding performance, optimized forwarding paths, greater extensibility, a lower risk of potential routing loops, improved stability, and operational cost savings.
Coordinated distribution network control of tap changer transformers, capacitors and PV inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceylan, Oğuzhan; Liu, Guodong; Tomsovic, Kevin
A power distribution system operates most efficiently with voltage deviations along a feeder kept to a minimum and must ensure all voltages remain within specified limits. Recently, with the increased integration of photovoltaics, the variable power output has led to increased voltage fluctuations and violation of operating limits. This study proposes an optimization model based on a recently developed heuristic search method, grey wolf optimization, to coordinate the various distribution controllers. Several case studies on IEEE 33 and 69 bus test systems, modified by including tap changing transformers, capacitors and photovoltaic solar panels, are performed. Simulation results are compared to two other heuristic-based optimization methods: harmony search and differential evolution. Finally, the simulation results show the effectiveness of the method and indicate that using the reactive power outputs of PVs facilitates a better voltage magnitude profile.
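A minimal grey wolf optimization sketch: candidate solutions move toward the three best wolves (alpha, beta, delta). The toy objective below merely stands in for a distribution-network cost such as total voltage deviation; the study's actual power-flow evaluation is not reproduced.

```python
import numpy as np

def grey_wolf_minimize(f, dim, bounds, n_wolves=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 - 2.0 * t / iters            # exploration factor decays to 0
        X_new = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            X_new += leader - A * np.abs(C * leader - X)
        X = np.clip(X_new / 3.0, lo, hi)     # average pull of the three leaders
    fitness = np.apply_along_axis(f, 1, X)
    return X[fitness.argmin()], float(fitness.min())

# usage: decision vector = tap ratio, capacitor steps, PV reactive setpoints
voltage_deviation = lambda x: float(np.sum((x - 1.0) ** 2))  # toy stand-in
print(grey_wolf_minimize(voltage_deviation, dim=5, bounds=(0.9, 1.1)))
```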
Chocolate milk: a post-exercise recovery beverage for endurance sports.
Pritchett, Kelly; Pritchett, Robert
2012-01-01
An optimal post-exercise nutrition regimen is fundamental for ensuring recovery. Therefore, research has aimed to examine post-exercise nutritional strategies for enhanced training stimuli. Chocolate milk has become an affordable recovery beverage for many athletes, taking the place of more expensive commercially available recovery beverages. Low-fat chocolate milk consists of a 4:1 carbohydrate:protein ratio (similar to many commercial recovery beverages) and provides fluids and sodium to aid in post-workout recovery. Consuming chocolate milk (1.0-1.5 g·kg⁻¹·h⁻¹) immediately after exercise and again at 2 h post-exercise appears to be optimal for exercise recovery and may attenuate indices of muscle damage. Future research should examine the optimal amount, timing, and frequency of ingestion of chocolate milk on post-exercise recovery measures including performance, indices of muscle damage, and muscle glycogen resynthesis. Copyright © 2012 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Liang, Ya-Wei; Zhang, Hong-Mei; Dong, Jin-Zhi; Shi, Zhen-Hua
2016-05-01
Building Integrated Photovoltaics (BIPV) is a way to save energy and reduce the heat gain of buildings, utilize new and renewable energy, solve environmental problems and alleviate electricity shortages in large cities. The area needed to generate power makes facade-integrated photovoltaic panels a superb choice, especially in high-rise buildings. Numerous scholars have explored Building Facade Integrated Photovoltaics, but they have focused mainly on thermal performance, which fails to ensure the seismic safety of high-rise buildings with integrated photovoltaics. Based on the connecting forms of the glass curtain wall, a connector joining the photovoltaic panel and the facade was designed, and its loading position and size were optimized. Static loading scenarios were simulated in HyperWorks to test and verify the connector's mechanical properties under gravity and wind loading. Compared to the unoptimized design, the optimized one saved material and reduced the maximum deflection by 74.64%.
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
NASA Astrophysics Data System (ADS)
Jia, Wen-bin; Xiao, Fu-hai
2013-03-01
The application of the AES algorithm in digital cinema systems prevents video data from being illegally stolen or maliciously tampered with, and solves its security problems. At the same time, in order to meet the requirements of real-time, on-the-fly and transparent encryption of high-speed audio and video data streams in the information security field, through an in-depth analysis of the AES algorithm principle, based on the hardware platform of the TMS320DM6446 with the DaVinci software framework, this paper proposes specific methods for realizing the AES algorithm in a digital video system and solutions for optimizing it. The test results show that digital movies encrypted by AES-128 cannot be played normally without decryption, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
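As a minimal illustration of the encryption layer only (the paper's DSP-optimized C implementation is not reproduced), assuming the third-party PyCryptodome package is available:

```python
# AES-128 encrypt/decrypt round trip for a video frame, using PyCryptodome
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)              # 128-bit key
nonce = get_random_bytes(8)

# CTR mode suits streaming video: blocks are independent, so high-speed
# audio/video streams can be encrypted transparently on the fly
frame = b"\x00\x01\x02" * 1000          # stand-in for one video frame
enc = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(frame)
dec = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(enc)
assert dec == frame and enc != frame    # unreadable until decrypted
```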
Software for Optimizing Quality Assurance of Other Software
NASA Technical Reports Server (NTRS)
Feather, Martin; Cornford, Steven; Menzies, Tim
2004-01-01
Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software.
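As a toy illustration of the optimization problem described above, the sketch below exhaustively searches subsets of assurance activities for the lowest residual risk within a fixed budget; the activities, costs, risk-reduction fractions and independence assumption are all invented, and the actual tool searches much larger activity sets with more sophisticated methods.

```python
import itertools

activities = {            # cost (person-days), fraction of risk removed
    "code inspection":   (10, 0.30),
    "unit tests":        (15, 0.35),
    "design review":     (8,  0.20),
    "performance tests": (12, 0.25),
    "traceability":      (5,  0.10),
}
BUDGET = 30

best_subset, best_risk = None, 1.0
for r in range(len(activities) + 1):
    for subset in itertools.combinations(activities, r):
        cost = sum(activities[a][0] for a in subset)
        if cost > BUDGET:
            continue                      # over the resource limit
        risk = 1.0
        for a in subset:                  # assume independent risk reductions
            risk *= 1.0 - activities[a][1]
        if risk < best_risk:
            best_subset, best_risk = subset, risk
print(best_subset, round(best_risk, 3))
```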
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-11-16
In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease the signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation. After that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range.
Clinical performance of a prototype flat-panel digital detector for general radiography
NASA Astrophysics Data System (ADS)
Huda, Walter; Scalzetti, Ernest M.; Roskopf, Marsha L.; Geiger, Robert
2001-08-01
Digital radiographs obtained using a prototype Digital Radiography System (Stingray) were compared with those obtained using conventional screen-film. Forty adult volunteers each had two identical radiographs taken at the same level of radiation exposure, one using screen-film and the other the digital detector. Each digital image was processed by hand to ensure that the printed quality was optimal. Ten radiologists compared the diagnostic image quality of the digital images with the corresponding film radiographs using a seven point ranking scheme.
Manufacture of conical springs with elastic medium technology improvement
NASA Astrophysics Data System (ADS)
Kurguzov, S. A.; Mikhailova, U. V.; Kalugina, O. B.
2018-01-01
This article considers improving the manufacturing technology by using an elastic medium in the forming space of the stamping tool, in order to improve the performance characteristics of conical springs and reduce the costs of their production. An estimation technique for the operational properties of the disk spring is developed by mathematically modeling the compression process during the operation of the spring. A technique for optimizing the design parameters of a conical spring is developed, which ensures a minimum stress value at the edge of the spring opening.
The evolving potential of companion diagnostics.
Khoury, Joseph D
2016-01-01
The scope of companion diagnostics in cancer has undergone significant shifts in the past few years, with increased development of targeted therapies and novel testing platforms. This has provided new opportunities to effect unprecedented paradigm shifts in the application of personalized medicine principles for patients with cancer. These shifts involve assay platforms, analytes, regulations, and therapeutic approaches. As opportunities involving each of these facets of companion diagnostics expand, close collaborations between key stakeholders should be enhanced to ensure optimal performance characteristics and patient outcomes.
NASA Astrophysics Data System (ADS)
Zhang, J. Y.; Jiang, Y.
2017-10-01
To ensure satisfactory dynamic performance of controllers in time-delayed power systems, a WAMS-based control strategy is investigated in the presence of output feedback delay. An integrated approach based on Pade approximation and particle swarm optimization (PSO) is employed for the parameter configuration of the PSS. The coordinated configuration of the power system controllers is achieved through a series of stability constraints, with the aim of maximizing the minimum damping ratio of the inter-area modes of the power system. The validity of the derived PSS is verified on a prototype power system. The findings demonstrate that the proposed control design approach can damp the inter-area oscillation and enhance the small-signal stability.
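A minimal sketch of the Pade step assumed above: approximate a feedback delay exp(-tau*s) by a rational function so the delayed system becomes a finite-dimensional model that PSO can tune against. The delay value and the [2/2] order are illustrative assumptions.

```python
import math
import numpy as np
from scipy.interpolate import pade

tau = 0.1  # seconds of WAMS output-feedback delay (assumption)

# Taylor coefficients of exp(-tau*s): sum_k (-tau)^k / k! * s^k
taylor = [(-tau) ** k / math.factorial(k) for k in range(5)]
num, den = pade(taylor, 2)               # [2/2] Pade approximant

s = 1j * 2.0 * np.pi * 0.5               # check at a 0.5 Hz inter-area mode
exact = np.exp(-tau * s)
approx = num(s) / den(s)
print(abs(exact - approx))               # small error in the band of interest
```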
Design optimization studies for nonimaging concentrating solar collector tubes
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J. J.
1983-09-01
The Integrated Stationary Evacuated Concentrator (ISEC) solar collector panel, which achieved the best high temperature performance ever measured with a stationary collector, was examined. A development effort to review and optimize the initial proof-of-concept design was completed. Changes in the optical design to improve the angular response function and increase the optical efficiency were determined. A recommended profile design with a concentration ratio of 1.55x and an acceptance angle of ±35° was identified. Two alternative panel/module configurations are recommended based on the preferred double-ended flow-through design. Parasitic thermal and pumping losses are shown to be reducible to acceptable levels, and two passive approaches to the problem of ensuring stagnation survival are identified.
Mache, Stefanie; Vitzthum, Karin; Wanke, Eileen; Klapp, Burghard F; Danzer, Gerhard
2014-01-01
The German health care system has undergone radical changes in the last decades. These days health care professionals have to face economic demands and high performance pressure as well as high expectations from patients. To ensure high quality medicine and care, highly intrinsically motivated and work-engaged health care professionals are strongly needed. The aim of this study was to examine the relations between personal and organizational resources as essential predictors of work engagement among German health care professionals. The investigation had a cross-sectional questionnaire design, with a sample of hospital doctors as participants. Personal strengths, working conditions and work engagement were measured using the SWOPE-K9, the COPE Brief Questionnaire, the Perceived Stress Questionnaire, the COPSOQ and the Utrecht Work Engagement Scale. Significant relations between physicians' personal strengths (e.g. resilience, optimism) and work engagement were found. Work-related factors were shown to have a significant influence on work engagement. Differences in work engagement were also found with regard to socio-demographic variables. The results demonstrate important relationships between personal and organizational resources and work engagement. Health care management needs to use this information to maintain or develop work-engaging job conditions in hospitals as one key factor in ensuring quality health care service.
Optimization of Blended Wing Body Composite Panels Using Both NASTRAN and Genetic Algorithm
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2006-01-01
The blended wing body (BWB) is a concept that has been investigated for improving the performance of transport aircraft. A trade study was conducted by evaluating four regions from a BWB design characterized by three fuselage bays and a 400,000 lb. gross take-off weight (GTW). This report describes the structural optimization of these regions via computational analysis and compares them to the baseline designs of the same construction. The identified regions were simplified for use in the optimization. The regions were represented by flat panels having appropriate classical boundary conditions and uniform force resultants along the panel edges. Panel-edge tractions and internal pressure values applied during the study were those determined by nonlinear NASTRAN analyses. Only one load case was considered in the optimization analysis for each panel region. Optimization was accomplished using both NASTRAN solution 200 and Genetic Algorithm (GA), with constraints imposed on stress, buckling, and minimum thicknesses. The NASTRAN optimization analyses often resulted in infeasible solutions due to violation of the constraints, whereas the GA enforced satisfaction of the constraints and, therefore, always ensured a feasible solution. However, both optimization methods encountered difficulties when the number of design variables was increased. In general, the optimized panels weighed less than the comparable baseline panels.
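As a toy sketch of the feasibility-enforcing behaviour noted above, the GA below optimizes discrete panel gauge choices with a heavy penalty on any stress/buckling violation, so infeasible designs never win. The gene encoding, the `evaluate` interface and all operators are illustrative assumptions, not the NASTRAN/GA setup used in the study.

```python
import numpy as np

def ga_min_weight(thicknesses, evaluate, pop=40, gens=80, seed=0):
    """thicknesses: array of allowed gauge values.
    evaluate(gauges) -> (weight, max_constraint_violation)."""
    rng = np.random.default_rng(seed)
    n_genes = 4                               # e.g. skin/stiffener gauges
    P = rng.integers(0, len(thicknesses), (pop, n_genes))

    def fitness(ind):
        weight, violation = evaluate(thicknesses[ind])
        return weight + 1e6 * max(0.0, violation)   # feasibility enforced

    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        parents = P[np.argsort(scores)[: pop // 2]]
        pairs = parents[rng.integers(0, len(parents), (pop - len(parents), 2))]
        cross = rng.integers(1, n_genes, len(pairs))
        children = np.array([np.r_[a[:c], b[c:]]
                             for (a, b), c in zip(pairs, cross)])
        mut = rng.random(children.shape) < 0.05
        children[mut] = rng.integers(0, len(thicknesses), mut.sum())
        P = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in P])
    return thicknesses[P[scores.argmin()]]
```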
Using SCOR as a Supply Chain Management Framework for Government Agency Contract Requirements
NASA Technical Reports Server (NTRS)
Paxton, Joe
2010-01-01
Enterprise Supply Chain Management consists of: Specifying suppliers to support inter-program and inter-agency efforts. Optimizing inventory levels and locations throughout the supply chain. Executing corrective actions to improve quality and lead time issues throughout the supply chain. Processing reported data to calculate and make visible supply chain performance (provide information for decisions and actions). Ensuring the right hardware and information is provided at the right time and in the right place. Monitoring the industrial base while developing, producing, operating and retiring a system. Seeing performance deep in the supply chain that could indicate issues affecting system availability and readiness.
Optimal radiotherapy dose schedules under parametric uncertainty
NASA Astrophysics Data System (ADS)
Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin
2016-01-01
We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
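As a small illustration of the probabilistic formulation, the sketch below checks a candidate fractionation schedule against a chance constraint on the organ-at-risk biologically effective dose under a random sparing factor; all numbers are illustrative, not clinical.

```python
import numpy as np

rng = np.random.default_rng(0)

def oar_bed(n, d, sparing, alpha_beta=3.0):
    """Linear-quadratic BED to the organ-at-risk for n fractions of dose d,
    where `sparing` is the fraction of the tumor dose the OAR receives."""
    return n * sparing * d * (1.0 + sparing * d / alpha_beta)

def schedule_ok(n, d, bed_max=100.0, confidence=0.95, samples=100000):
    """Chance constraint: BED below threshold with given probability."""
    sparing = rng.normal(0.7, 0.05, samples)     # inter-patient variability
    return np.mean(oar_bed(n, d, sparing) <= bed_max) >= confidence

print(schedule_ok(n=35, d=2.0))   # conventional fractionation: feasible
print(schedule_ok(n=5, d=10.0))   # extreme hypofractionation: infeasible
```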
NASA Astrophysics Data System (ADS)
Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong
2016-09-01
Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha-regulon, with unknown time-delays, in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function minimizes the biological robustness measure. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm based on the filled function method is proposed to solve these nonlinear programming sub-problems. Finally, numerical simulations show that the obtained optimal estimates for the time-delays are highly satisfactory.
The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management
NASA Technical Reports Server (NTRS)
Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param
2016-01-01
Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.
Degree of bioresorbable vascular scaffold expansion modulates loss of essential function.
Ferdous, Jahid; Kolachalama, Vijaya B; Kolandaivelu, Kumaran; Shazly, Tarek
2015-10-01
Drug-eluting bioresorbable vascular scaffolds (BVSs) have the potential to restore lumen patency, enable recovery of the native vascular environment, and circumvent late complications associated with permanent endovascular devices. To ensure therapeutic effects persist for sufficient times prior to scaffold resorption and resultant functional loss, many factors dictating BVS performance must be identified, characterized and optimized. While some factors relate to BVS design and manufacturing, others depend on device deployment and intrinsic vascular properties. Importantly, these factors interact and cannot be considered in isolation. The objective of this study is to quantify the extent to which degree of radial expansion modulates BVS performance, specifically in the context of modifying device erosion kinetics and evolution of structural mechanics and local drug elution. We systematically varied degree of radial expansion in model BVS constructs composed of poly dl-lactide-glycolide and generated in vitro metrics of device microstructure, degradation, erosion, mechanics and drug release. Experimental data permitted development of computational models that predicted transient concentrations of scaffold-derived soluble species and drug in the arterial wall, thus enabling speculation on the short- and long-term effects of differential expansion. We demonstrate that degree of expansion significantly affects scaffold properties critical to functionality, underscoring its relevance in BVS design and optimization. Bioresorbable vascular scaffold (BVS) therapy is beginning to transform the treatment of obstructive artery disease, owing to effective treatment of short term vessel closure while avoiding long term consequences such as in situ, late stent thrombosis - a fatal event associated with permanent implants such as drug-eluting stents. As device scaffolding and drug elution are temporary for BVS, the notion of using this therapy in lieu of existing, clinically approved devices seems attractive. However, there is still a limited understanding regarding the optimal lifetime and performance characteristics of erodible endovascular implants. Several engineering criteria must be met and clinical endpoints confirmed to ensure these devices are both safe and effective. In this manuscript, we sought to establish general principles for the design and deployment of erodible, drug-eluting endovascular scaffolds, with focus on how differential expansion can modulate device performance. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.
2016-12-01
Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.
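A minimal sketch of the EMODPS building block assumed here: a radial-basis-function operating policy mapping normalized system state (e.g. storage, season, recent inflow) to a release decision, whose parameters an evolutionary algorithm would search against the simulated objectives; the shapes and state variables are illustrative.

```python
import numpy as np

def rbf_policy(theta, state, n_rbf=4):
    """Map a normalized state vector to a release fraction in [0, 1].
    theta packs, per RBF: center (d values), radius (d values), weight."""
    state = np.asarray(state, dtype=float)
    d = state.size
    theta = np.asarray(theta).reshape(n_rbf, 2 * d + 1)
    centers = theta[:, :d]
    radii = np.abs(theta[:, d:2 * d]) + 1e-6
    weights = theta[:, -1]
    phi = np.exp(-np.sum(((state - centers) / radii) ** 2, axis=1))
    return float(np.clip(np.dot(weights, phi), 0.0, 1.0))

# usage: 3-dimensional state -> 4 RBFs -> 28 searchable parameters
theta = np.random.default_rng(2).normal(size=4 * (2 * 3 + 1))
print(rbf_policy(theta, state=[0.6, 0.25, 0.4]))  # storage, season, inflow
```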
NASA Astrophysics Data System (ADS)
Hinze, J. F.; Klein, S. A.; Nellis, G. F.
2015-12-01
Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.
The TMT instrumentation program
NASA Astrophysics Data System (ADS)
Simard, Luc; Crampton, David; Ellerbroek, Brent; Boyer, Corinne
2010-07-01
An overview of the current status of the Thirty Meter Telescope (TMT) instrumentation program is presented. Conceptual designs for the three first light instruments (IRIS, WFOS and IRMS) are in progress, as well as feasibility studies of MIRES. Considerable effort is underway to understand the end-to-end performance of the complete telescope-adaptive optics-instrument system under realistic conditions on Mauna Kea. Highly efficient operation is being designed into the TMT system, based on a detailed investigation of the observation workflow to ensure very fast target acquisition and set up of all subsystems. Future TMT instruments will almost certainly involve contributions from institutions in many different locations in North America and partner nations. Coordinating and optimizing the design and construction of the instruments to ensure delivery of the best possible scientific capabilities is an interesting challenge. TMT welcomes involvement from all interested instrument teams.
NASA Astrophysics Data System (ADS)
Ferhati, H.; Djeffal, F.
2017-12-01
In this paper, a new MSM-UV-photodetector (PD) based on a dual wide band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor photocurrent and the device properties are developed, incorporating the impact of the DM design on the device's photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which makes it possible to drive the photo-generated carriers without an externally applied voltage. This provides the dual benefit of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, which can yield a favorable self-powered MSM-based UV-PD. The proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior I_ON/I_OFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber, and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed competitive performance for the robust regularizers, close to that of the baseline diffeomorphic registration methods.
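For context, the Chambolle and Pock algorithm referenced above solves saddle-point problems of the form min_x max_y <Kx, y> + G(x) - F*(y); its generic iteration is sketched below, with step sizes τ, σ satisfying τσ||K||² < 1 and over-relaxation θ = 1. The diagonal preconditioning mentioned in the abstract replaces the scalar steps with per-variable ones; the registration-specific choices of K, G, and F* are in the paper and are not reproduced here.

$$\begin{aligned} y^{k+1} &= \operatorname{prox}_{\sigma F^{*}}\!\left(y^{k} + \sigma K \bar{x}^{k}\right),\\ x^{k+1} &= \operatorname{prox}_{\tau G}\!\left(x^{k} - \tau K^{*} y^{k+1}\right),\\ \bar{x}^{k+1} &= x^{k+1} + \theta \left(x^{k+1} - x^{k}\right). \end{aligned}$$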
Research on the performance evaluation of agricultural products supply chain integrated operation
NASA Astrophysics Data System (ADS)
Jiang, Jiake; Wang, Xifu; Liu, Yang
2017-04-01
The integrated operation of the agricultural product supply chain can ensure the quality and efficiency of agricultural products and achieve the goal of low cost and high service levels. This paper establishes a performance evaluation index system for integrated agricultural product supply chain operation based on the development status of agricultural products and the SCOR, BSC, and KPI models. We then construct a comprehensive evaluation model combining rough set theory and a BP neural network with the aid of the Rosetta and MATLAB tools, and present a case study on the development of the integrated agricultural product supply chain in the Jing-Jin-Ji region. Finally, we obtain the corresponding performance results and offer improvement measures and management recommendations to managers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weerakkody, Sean; Liu, Xiaofei; Sinopoli, Bruno
We consider the design and analysis of robust distributed control systems (DCSs) to ensure the detection of integrity attacks. DCSs are often managed by independent agents and are implemented using a diverse set of sensors and controllers. However, the heterogeneous nature of DCSs along with their scale leaves such systems vulnerable to adversarial behavior. To mitigate this reality, we provide tools that allow operators to prevent zero dynamics attacks when as many as p agents and sensors are corrupted. Such a design ensures attack detectability in deterministic systems while removing the threat of a class of stealthy attacks in stochastic systems. To achieve this goal, we use graph theory to obtain necessary and sufficient conditions for the presence of zero dynamics attacks in terms of the structural interactions between agents and sensors. We then formulate and solve optimization problems which minimize communication networks while also ensuring a resource-limited adversary cannot perform a zero dynamics attack. Polynomial time algorithms for design and analysis are provided.
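The paper's necessary and sufficient conditions are stated graph-theoretically, but the exact criterion is not reproduced in the abstract. As a loose illustration of the kind of structural check involved (an assumption for illustration, not the paper's actual test), the sketch below counts vertex-disjoint paths from an agent to the sensor set on a toy interaction graph using networkx, a Menger-type quantity of the sort that appears in structural detectability arguments.

```python
import networkx as nx

# Toy interaction structure: agents a1..a3 influence states x1..x2, and
# sensors s1..s2 measure them.  Edges are illustrative, not from the paper.
G = nx.DiGraph([("a1", "x1"), ("a2", "x2"), ("a3", "x2"),
                ("x1", "s1"), ("x2", "s1"), ("x2", "s2")])

# Number of vertex-disjoint paths from one agent to the sensor set,
# computed via a super-sink that collects all sensor nodes.
aug = G.copy()
aug.add_node("S")
for s in ("s1", "s2"):
    aug.add_edge(s, "S")
print(nx.node_connectivity(aug, "a2", "S"))
```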
Nutrition and Supplementation in Soccer.
Oliveira, César Chaves; Ferreira, Diogo; Caetano, Carlos; Granja, Diana; Pinto, Ricardo; Mendes, Bruno; Sousa, Mónica
2017-05-12
Contemporary elite soccer features increased physical demands during match-play, as well as a larger number of matches per season. Now more than ever, aspects related to performance optimization are highly regarded by both players and soccer coaches. Here, nutrition takes a special role as most elite teams try to provide an adequate diet to guarantee maximum performance while ensuring a faster recovery from matches and training exertions. It is currently known that manipulation and periodization of macronutrients, as well as sound hydration practices, have the potential to interfere with training adaptation and recovery. A careful monitoring of micronutrient status is also relevant to prevent undue fatigue and immune impairment secondary to a deficiency status. Furthermore, the sensible use of evidence-based dietary supplements may also play a role in soccer performance optimization. In this sense, several nutritional recommendations have been issued. This detailed and comprehensive review addresses the most relevant and up-to-date nutritional recommendations for elite soccer players, covering from macro and micronutrients to hydration and selected supplements in different contexts (daily requirements, pre, peri and post training/match and competition).
Optimized Delivery System Achieves Enhanced Endomyocardial Stem Cell Retention
Behfar, Atta; Latere, Jean-Pierre; Bartunek, Jozef; Homsy, Christian; Daro, Dorothee; Crespo-Diaz, Ruben J.; Stalboerger, Paul G.; Steenwinckel, Valerie; Seron, Aymeric; Redfield, Margaret M.; Terzic, Andre
2014-01-01
Background: Regenerative cell-based therapies are associated with limited myocardial retention of delivered stem cells. The objective of this study is to develop an endocardial delivery system for enhanced cell retention. Methods and Results: Stem cell retention was simulated in silico using one- and three-dimensional models of tissue distortion and compliance associated with delivery. Needle designs, predicted to be optimal, were accordingly engineered using nitinol, a nickel-titanium alloy displaying shape memory and super-elasticity. Biocompatibility was tested with human mesenchymal stem cells. Experimental validation was performed with species-matched cells directly delivered into Langendorff-perfused porcine hearts or administered percutaneously into the endocardium of infarcted pigs. Cell retention was quantified by flow cytometry and real-time quantitative polymerase chain reaction methodology. Models, computing optimal distribution of distortion calibrated to favor tissue compliance, predicted that a 75°-curved needle featuring small-to-large graded side holes would ensure the highest cell retention profile. In isolated hearts, the nitinol curved needle catheter (C-Cath) design ensured 3-fold superior stem cell retention compared to a standard needle. In the setting of chronic infarction, percutaneous delivery of stem cells with C-Cath yielded a 37.7±7.1% versus 10.0±2.8% retention achieved with a traditional needle, without impact on biocompatibility or safety. Conclusions: Modeling guided development of a nitinol-based curved needle delivery system with incremental side holes achieved enhanced myocardial stem cell retention. PMID:24326777
CFD Analysis and Design Optimization Using Parallel Computers
NASA Technical Reports Server (NTRS)
Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James
1997-01-01
A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
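For context, the classical non-monotone (Grippo-type) acceptance rule compares a trial step against the worst of the last M objective values, which forces the choice of memory bound M that the abstract criticizes; memory-free alternatives in the style of Zhang and Hager replace the max with a running weighted average. Both standard forms are sketched below as background; the paper's filter-based acceptance test differs in detail.

$$f(x_k + \alpha d_k) \;\le\; \max_{0 \le j \le \min(k,\,M)} f(x_{k-j}) \;+\; c\,\alpha\,\nabla f(x_k)^{T} d_k,$$

versus the averaged reference value

$$C_{k+1} = \frac{\eta_k Q_k C_k + f(x_{k+1})}{Q_{k+1}}, \qquad Q_{k+1} = \eta_k Q_k + 1, \qquad Q_0 = 1, \; C_0 = f(x_0).$$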
[Development of a medical equipment support information system based on PDF portable document].
Cheng, Jiangbo; Wang, Weidong
2010-07-01
Based on the organizational structure and management system of hospital medical engineering support, the system integrates the medical engineering support workflow to ensure that medical engineering data are collected effectively, accurately, and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analyzed, and all work processes are recorded as portable electronic documents. Using XML middleware technology and a SQL Server database, the system implements process management, data calculation, submission, storage, and other functions. Practical application shows that the medical equipment support information system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly, and controllable. A medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide complete and accurate digital data.
Optimization of airport security process
NASA Astrophysics Data System (ADS)
Wei, Jianan
2017-05-01
To facilitate passenger travel while ensuring public safety, the airport security process and its scheduling are optimized. A stochastic Petri net is used to simulate the single-channel security process; the reachability graph is drawn and a homogeneous Markov chain is constructed to analyze the performance of the security process network and find the bottleneck that limits passenger throughput. The initial state assumes one open security channel, with channel openings driven by changes in passenger flow. When passengers arrive at a rate that exceeds the processing capacity of the open security channels, they are queued. The time at which the passenger queuing time reaches an acceptable threshold is taken as the time to open or close the next channel, and the dynamic scheduling of the number of security channels is simulated to reduce passenger queuing time.
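The open/close threshold rule described above resembles staffing decisions in an M/M/c queue. The Python sketch below uses the Erlang-C formula to find the fewest channels keeping the mean wait under a threshold; the arrival rate, service rate, and threshold values are illustrative, and the paper's Petri-net/Markov-chain model is more detailed than this.

```python
import math

def erlang_c(c, lam, mu):
    """Probability an arriving passenger must queue in an M/M/c system
    (Erlang-C); lam = arrival rate, mu = service rate per channel."""
    a = lam / mu                     # offered load
    rho = a / c                      # utilization; requires rho < 1
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (summation + top)

def mean_wait(c, lam, mu):
    """Mean queueing time W_q for M/M/c."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Open the fewest channels that keep the mean wait under a threshold.
lam, mu, threshold = 3.0, 0.8, 5.0   # passengers/min, service/min, minutes
c = 1
while lam / (c * mu) >= 1 or mean_wait(c, lam, mu) > threshold:
    c += 1
print(f"open {c} channels; mean wait {mean_wait(c, lam, mu):.2f} min")
```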
Meniscus repair: the role of accelerated rehabilitation in return to sport.
Kozlowski, Erick J; Barcia, Anthony M; Tokish, John M
2012-06-01
With increasing understanding of the detrimental effects of the meniscectomized knee on outcomes and long-term durability, there is an ever increasing emphasis on meniscal preservation through repair. Repair in the young athlete is particularly challenging given the goals of returning to high-level sports. A healed meniscus is only the beginning of successful return to activity, and the understanding of "protection with progression" must be emphasized to ensure optimal return to performance. The principles of progression from low to high loads, single to multiplane activity, slow to high speeds, and stable to unstable platforms are cornerstones to this process. Emphasis on the kinetic chain environment that the knee will function within cannot be overemphasized. Communication between the operating surgeon and rehabilitation specialist is critical to optimizing effective return to sports.
Optimized star sensors laboratory calibration method using a regularization neural network.
Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen
2018-02-10
High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping from the star vector to the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirements of large FOV star sensors.
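The abstract describes a multi-layer network mapping the star vector to the star-point coordinate with regularization built into training. A minimal Python/PyTorch sketch of that idea follows; the layer sizes, the use of weight decay as the regularization strategy, and the synthetic toy data are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical network: maps a 3-component star vector to a 2-D star-point
# coordinate; layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),
)
# L2 regularization via weight decay stands in for the paper's regularization
# strategy, which the abstract does not specify.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

def train_step(star_vectors, star_points):
    opt.zero_grad()
    loss = loss_fn(model(star_vectors), star_points)
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-in data: 256 unit star vectors and toy pixel coordinates.
v = torch.nn.functional.normalize(torch.randn(256, 3), dim=1)
p = 512.0 * v[:, :2] + 512.0          # toy linear mapping to pixel coordinates
for epoch in range(200):
    train_step(v, p)
```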
Treatment of chronic myeloid leukemia: assessing risk, monitoring response, and optimizing outcome.
Shanmuganathan, Naranie; Hiwase, Devendra Keshaorao; Ross, David Morrall
2017-12-01
Over the past two decades, tyrosine kinase inhibitors have become the foundation of chronic myeloid leukemia (CML) treatment. The choice between imatinib and newer tyrosine kinase inhibitors (TKIs) needs to be balanced against the known toxicity and efficacy data for each drug, the therapeutic goal being to maximize molecular response assessed by BCR-ABL RQ-PCR assay. There is accumulating evidence that the early achievement of molecular targets is a strong predictor of superior long-term outcomes. Early response assessment provides the opportunity to intervene early with the aim of ensuring an optimal response. Failure to achieve milestones or loss of response can have diverse causes. We describe how clinical and laboratory monitoring can be used to ensure that each patient is achieving an optimal response and, in patients who do not reach optimal response milestones, how the monitoring results can be used to detect resistance and understand its origins.
Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems
NASA Astrophysics Data System (ADS)
Lang, Hans-Dieter; Sarris, Costas D.
2017-11-01
An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures the physical realizability of MISO WPT systems designed via convex optimization, a robust, semi-analytical, and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g., genetic algorithms), where convergence to the true global optimum cannot be ensured or tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.
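The tightness test described above can be illustrated on a generic quadratic problem. The Python sketch below relaxes "maximize x^T C x subject to x^T A x <= 1" into an SDP with cvxpy and checks the rank-1 ratio of the solution; the matrices are random stand-ins, not the WPT port model from the paper.

```python
import cvxpy as cp
import numpy as np

# Illustrative semidefinite relaxation of a small nonconvex quadratic
# program.  C and A are stand-ins for a system model.
n = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = C + C.T   # symmetric objective matrix
A = np.eye(n)                                  # a unit-norm-type constraint

X = cp.Variable((n, n), PSD=True)              # relaxation: X replaces x x^T
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                  [cp.trace(A @ X) <= 1])
prob.solve()

# Tightness test: the relaxation is exact when X is (numerically) rank one,
# in which case the optimal x is recovered from the leading eigenvector.
w, V = np.linalg.eigh(X.value)
print("rank-1 ratio:", w[-1] / w.sum())        # close to 1 => tight relaxation
x_opt = np.sqrt(w[-1]) * V[:, -1]
print("recovered x:", x_opt)
```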
Best practice in wound assessment.
Benbow, Maureen
2016-03-02
Accurate and considered wound assessment is essential to fulfil professional nursing requirements and ensure appropriate patient and wound management. This article describes the main aspects of holistic assessment of the patient and the wound, including identifying patient risk factors and comorbidities, and factors affecting wound healing to ensure optimal outcomes.
Simulation and Shoulder Dystocia.
Shaddeau, Angela K; Deering, Shad
2016-12-01
Shoulder dystocia is an unpredictable obstetric emergency that requires prompt interventions to ensure optimal outcomes. Proper technique is important but difficult to train given the urgent and critical clinical situation. Simulation training for shoulder dystocia allows providers at all levels to practice technical and teamwork skills in a no-risk environment. Programs utilizing simulation training for this emergency have consistently demonstrated improved performance both during practice drills and in actual patients with significantly decreased risks of fetal injury. Given the evidence, simulation training for shoulder dystocia should be conducted at all institutions that provide delivery services.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measurement of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems.
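A compact version of the escalation logic can be written with SciPy. In the sketch below, a local optimizer attempts the Gaussian fit from a data-derived guess and the procedure falls back to globally convergent differential evolution on failure; BFGS stands in for Newton Conjugate Gradient (SciPy's Newton-CG requires an explicit gradient), and the success test, bounds, and synthetic signal are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def chi_square(params, x, y):
    return np.sum((y - gaussian(x, *params)) ** 2)

def fit_wire_scan(x, y):
    """Hybrid fit: fast local method first, global fallback on failure."""
    guess = [y.max() - y.min(), x[np.argmax(y)], np.std(x) / 4, y.min()]
    local = minimize(chi_square, guess, args=(x, y), method="BFGS")
    if local.success:
        return local.x
    bounds = [(0, 2 * np.ptp(y)), (x.min(), x.max()),
              (1e-6, np.ptp(x)), (y.min(), y.max())]
    return differential_evolution(chi_square, bounds, args=(x, y)).x

# Synthetic wire-scanner-like signal.
x = np.linspace(-5, 5, 200)
y = gaussian(x, 3.0, 0.5, 0.8, 0.1) \
    + 0.05 * np.random.default_rng(1).normal(size=x.size)
print(fit_wire_scan(x, y))
```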
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of loss-based empirical risk minimization: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. The numerical experiments on synthetic examples and real data sets also support our theoretical results.
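To make the setting concrete: for a windowed robust loss of the form ℓ(f(x), y) = σ² G((y - f(x))²/σ²) (one common normalization; the paper's exact form may differ), the reproducing property gives the gradient descent update over the sample as

$$f_{t+1} = f_t + \frac{2\eta_t}{n} \sum_{i=1}^{n} G'\!\left(\frac{\big(y_i - f_t(x_i)\big)^2}{\sigma^2}\right) \big(y_i - f_t(x_i)\big)\, K_{x_i},$$

where K_{x_i} = K(x_i, ·). When G' decays for large arguments (as in the Welsch or Cauchy losses), large residuals are downweighted, which is the source of the robustness, and early stopping of t plays the regularization role analyzed in the paper.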
Design and implementation of online automatic judging system
NASA Astrophysics Data System (ADS)
Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng
2017-06-01
To address the low efficiency and poor reliability of manual judging in programming training and competitions, an Online Automatic Judging (OAJ) system is designed. The OAJ system, comprising a sandboxed judging side and a Web side, automatically compiles and runs the submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring system safety. The OAJ system uses thread pools to run tests in parallel and adopts database optimization mechanisms, such as horizontal table partitioning, to improve system performance and resource utilization. The test results show that the system has high performance, high reliability, high stability, and excellent extensibility.
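The compile-run-compare loop at the core of such a judging side, together with the thread-pool parallelism, can be sketched briefly in Python. Everything here is illustrative: real sandboxing needs OS-level isolation (resource limits, namespaces), and the file names and verdict strings are made up.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

TIME_LIMIT = 2.0  # seconds of wall-clock time per run

def judge(src_path, input_data, expected):
    # Compile the submission, then run it against one test case.
    exe = src_path.replace(".c", "")
    build = subprocess.run(["gcc", src_path, "-o", exe], capture_output=True)
    if build.returncode != 0:
        return "Compile Error"
    try:
        run = subprocess.run([exe], input=input_data, capture_output=True,
                             text=True, timeout=TIME_LIMIT)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    if run.returncode != 0:
        return "Runtime Error"
    return "Accepted" if run.stdout.strip() == expected.strip() else "Wrong Answer"

# A thread pool judges many submissions in parallel, as in the OAJ design.
jobs = [("sub1.c", "1 2\n", "3"), ("sub2.c", "5 7\n", "12")]
with ThreadPoolExecutor(max_workers=4) as pool:
    verdicts = list(pool.map(lambda j: judge(*j), jobs))
print(verdicts)
```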
Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network
NASA Astrophysics Data System (ADS)
Xu, Xiao-Feng
2018-03-01
Wireless communication networks have features such as limited bandwidth, changeable channels, and dynamic topology. Ad hoc networks face many difficulties in access control, bandwidth distribution, resource assignment, and congestion control. Therefore, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. Firstly, the parameters affecting network performance are analyzed and studied to obtain an effective network performance function. Secondly, the improved PSO evolutionary algorithm is used to solve the optimization problem, from local to global, in the process of distributing network packets. The simulation results show that the algorithm can ensure the fairness and timeliness of network transmission, as well as improve the integrated utilization efficiency of ad hoc network resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M.K.
1999-05-10
Using ORNL information on the characterization of the tank waste sludges, SRTC performed extensive bench-scale vitrification studies using simulants. Several glass systems were tested to ensure that the optimum glass composition (based on the glass liquidus temperature, viscosity, and durability) was determined. This optimum composition will balance waste loading, melt temperature, waste form performance, and disposal requirements. By optimizing the glass composition, a cost savings can be realized during vitrification of the waste. The preferred glass formulation was selected from the bench-scale studies and recommended to ORNL for further testing with samples of actual OR waste tank sludges.
Zanga, Daniela; Capell, Teresa; Slafer, Gustavo A.; Christou, Paul; Savin, Roxana
2016-01-01
High-carotenoid corn (Carolight®) has been developed as a vehicle to deliver pro-vitamin A in the diet and thus address vitamin A deficiency in at-risk populations in developing countries. Like any other novel crop, the performance of Carolight® must be tested in different environments to ensure that optimal yields and productivity are maintained, particularly in this case to ensure that the engineered metabolic pathway does not attract a yield penalty. Here we compared the performance of Carolight® with its near isogenic white corn inbred parental line under greenhouse and field conditions, and monitored the stability of the introduced trait. We found that Carolight® was indistinguishable from its near isogenic line in terms of agronomic performance, particularly grain yield and its main components. We also established experimentally that the functionality of the introduced trait was indistinguishable when plants were grown in a controlled environment or in the field. Such thorough characterization under different agronomic conditions is rarely performed even for first-generation traits such as herbicide tolerance and pest resistance, and certainly not for complex second-generation traits such as the metabolic remodeling in the Carolight® variety. Our results therefore indicate that Carolight® can now be incorporated into breeding lines to generate hybrids with locally adapted varieties for further product development and assessment. PMID:27922071
Role of effective nurse-patient relationships in enhancing patient safety.
Conroy, Tiffany; Feo, Rebecca; Boucaut, Rose; Alderman, Jan; Kitson, Alison
2017-08-02
Ensuring and maintaining patient safety is an essential aspect of care provision. Safety is a multidimensional concept, which incorporates interrelated elements such as physical and psychosocial safety. An effective nurse-patient relationship should ensure that these elements are considered when planning and providing care. This article discusses the importance of an effective nurse-patient relationship, as well as healthcare environments and working practices that promote safety, thus ensuring optimal patient care.
Randomized controlled trials in dentistry: common pitfalls and how to avoid them.
Fleming, Padhraig S; Lynch, Christopher D; Pandis, Nikolaos
2014-08-01
Clinical trials are used to appraise the effectiveness of clinical interventions throughout medicine and dentistry. Randomized controlled trials (RCTs) are established as the optimal primary design and are published with increasing frequency within the biomedical sciences, including dentistry. This review outlines common pitfalls associated with the conduct of randomized controlled trials in dentistry. Common failings in RCT design leading to various types of bias, including selection, performance, detection, and attrition bias, are discussed in this review. Moreover, methods of minimizing and eliminating bias are presented to ensure that maximal benefit is derived from RCTs within dentistry. Well-designed RCTs have both upstream and downstream uses, acting as a template for development and populating systematic reviews to permit more precise estimates of treatment efficacy and effectiveness. However, there is increasing awareness of waste in clinical research, whereby resource-intensive studies fail to provide a commensurate level of scientific evidence. Waste may stem either from inappropriate design or from inadequate reporting of RCTs; the importance of robust conduct of RCTs within dentistry is clear. Optimal reporting of randomized controlled trials within dentistry is necessary to ensure that trials are reliable and valid. Common shortcomings leading to important forms of bias are discussed and approaches to minimizing these issues are outlined. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Zhan; Yan, Xuefeng
2018-04-01
Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
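For readers unfamiliar with PBIL, its core is a probability model nudged toward the best samples each generation. A minimal binary-coded sketch in Python follows; the paper co-evolves real-valued algorithm parameters for the MODE population, which differs in representation but not in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    return bits.sum()                 # toy objective: maximize number of ones

n_bits, pop, lr, gens = 20, 50, 0.1, 100
p = np.full(n_bits, 0.5)              # PBIL probability vector

for _ in range(gens):
    samples = rng.random((pop, n_bits)) < p          # sample a population
    best = samples[np.argmax([fitness(s) for s in samples])]
    p = (1 - lr) * p + lr * best                     # shift model toward best

print(p.round(2))
```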
PDEMOD: Software for control/structures optimization
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr.; Zimmerman, David
1991-01-01
Because of the possibility of adverse interaction between the control system and the structural dynamics of large, flexible spacecraft, great care must be taken to ensure stability and system performance. Because of the high cost of insertion of mass into low earth orbit, it is prudent to optimize the roles of structure and control systems simultaneously. Because of the difficulty and the computational burden in modeling and analyzing the control structure system dynamics, the total problem is often split and treated iteratively. It would aid design if the control structure system dynamics could be represented in a single system of equations. With the use of the software PDEMOD (Partial Differential Equation Model), it is now possible to optimize structure and control systems simultaneously. The distributed parameter modeling approach enables embedding the control system dynamics into the same equations as the structural dynamics model. By doing this, the current difficulties involved in model order reduction are avoided. The NASA Mini-MAST truss is used as an example for studying integrated control structure design.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the different computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation that the model training is disturbed by any tasks unrelated to the sensors. PMID:28934163
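The synchronization side of such a strategy can be illustrated with a bounded-staleness rule: a worker may run ahead of the slowest worker by at most a fixed number of iterations. The Python sketch below hard-codes the bound, whereas DSP adapts it from its performance-monitoring model; the thread structure and the bound are illustrative assumptions.

```python
import threading

# Bounded-staleness sketch: a worker may advance to iteration t only while
# t - min(all worker clocks) <= STALENESS.  DSP additionally adapts the
# bound from runtime monitoring; here it is fixed for brevity.
STALENESS = 2
clocks = [0, 0, 0, 0]
cond = threading.Condition()

def worker(i, iterations):
    for t in range(iterations):
        with cond:
            while t - min(clocks) > STALENESS:
                cond.wait()           # too far ahead of the slowest worker
            clocks[i] = t
            cond.notify_all()
        # ... compute gradients, push/pull parameters with the PS here ...

threads = [threading.Thread(target=worker, args=(i, 10)) for i in range(4)]
for th in threads: th.start()
for th in threads: th.join()
```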
Nedza, Susan M
2009-12-01
As the government attempts to address the high cost of health care in the United States, the issues being confronted include variations in the quality of care administered and the inconsistent application of scientifically proven treatments. To improve quality, methods of measurement and reporting with rewards or, eventually, penalties based on performance, must be developed. To date, well-intentioned national policy initiatives, such as value-based purchasing, have focused primarily on the measurement of discrete events and on attempts to construct incentives. While important, the current approach alone cannot improve quality, ensure equitability, decrease variability, and optimize value. Additional thought-leadership is required, both theoretical and applied. Academic medical centers' (AMCs') scholarly and practical participation is needed. Although quality cannot be sustainably improved without measurement, the existing measures alone do not ensure quality. There is not enough evidence to support strong measure development and, further, not enough insight regarding whether the existing measures have their intended effect of enhancing health care delivery that results in quality outcomes for patients. Perhaps the only way that the United States health care system will achieve a standard of quality care is through the strong embrace, effective engagement, intellectual insights, educational contributions, and practical applications in AMCs. Quality will never be achieved through public policies or national initiatives alone but instead through the commitment of the academic community to forward the science of performance measurement and to ensure that measurement leads to better health outcomes for our nation.
NASA Astrophysics Data System (ADS)
Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.
2017-12-01
Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate, and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency at which to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Spring Uplift carbon storage site in southwestern Wyoming.
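The ROM idea is that a cheap learned surrogate replaces the full simulator inside the assimilation loop. A minimal Python sketch with scikit-learn's SVR follows; the input parameters, the toy response function, and the train/test split are synthetic stand-ins for full CO2/brine simulations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Reduced-order-model sketch: learn a cheap surrogate mapping reservoir /
# operation parameters to a quantity of interest (e.g., cumulative CO2 leak).
# The training pairs here are synthetic stand-ins for full-physics runs.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))            # e.g., permeability, rate, depth
y = X[:, 0] * np.exp(-X[:, 1]) + 0.1 * X[:, 2] ** 2   # toy response

rom = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
rom.fit(X[:400], y[:400])
print("surrogate R^2:", rom.score(X[400:], y[400:]))
```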
Highly light-weighted ZERODUR mirrors
NASA Astrophysics Data System (ADS)
Behar-Lafenetre, Stéphanie; Lasic, Thierry; Viale, Roger; Mathieu, Jean-Claude; Ruch, Eric; Tarreau, Michel; Etcheto, Pierre
2017-11-01
Due to increasingly stringent requirements for observation missions, the diameter of primary mirrors for space telescopes is increasing. The difficulty is then to have a design stiff enough to withstand launch loads and keep a reasonable mass while providing high opto-mechanical performance. Among the possible solutions, Thales Alenia Space France has investigated the optimization of ZERODUR mirrors. Indeed this material, although fragile, is very well mastered and its characteristics well known. Moreover, its thermo-elastic properties (almost null CTE) are unequalled yet, in particular at ambient temperature. Finally, this material can be polished down to very low roughness without any coating. Light-weighting can be achieved by two different means: either optimizing manufacturing parameters or optimizing design (or both). Manufacturing parameters such as wall and optical face thickness have been improved and tested on representative breadboards defined on the basis of SAGEM-REOSC and Thales Alenia Space France expertise and realized by SAGEM-REOSC. In the frame of CNES Research and Technology activities, specific mass has been decreased down to 36 kg/m2. Moreover, the SNAP study dealt with a 2 m diameter primary mirror. The design was optimized by Thales Alenia Space France while using classical manufacturing parameters, thus ensuring feasibility and controlling costs. Mass was decreased down to 60 kg/m2 for a gravity effect of 52 nm. It is thus demonstrated that high opto-mechanical performance can be guaranteed with large, highly light-weighted ZERODUR mirrors.
NASA Astrophysics Data System (ADS)
Elarusi, Abdulmunaem; Attar, Alaa; Lee, HoSung
2018-02-01
The optimum design of a thermoelectric system for application in car seat climate control has been modeled and its performance evaluated experimentally. The optimum design of the thermoelectric device combining two heat exchangers was obtained by using a newly developed optimization method based on the dimensional technique. Based on the analytical optimum design results, commercial thermoelectric cooler and heat sinks were selected to design and construct the climate control heat pump. This work focuses on testing the system performance in both cooling and heating modes to ensure accurate analytical modeling. Although the analytical performance was calculated using the simple ideal thermoelectric equations with effective thermoelectric material properties, it showed very good agreement with experiment for most operating conditions.
On-orbit Performance and Calibration of the HMI Instrument
NASA Astrophysics Data System (ADS)
Hoeksema, J. Todd; Bush, Rock; HMI Calibration Team
2016-10-01
The Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) has observed the Sun almost continuously since the completion of commissioning in May 2010, returning more than 100,000,000 filtergrams from geosynchronous orbit. Diligent and exhaustive monitoring of the instrument's performance ensures that HMI functions properly and allows proper calibration of the full-disk images and processing of the HMI observables. We constantly monitor trends in temperature, pointing, mechanism behavior, and software errors. Cosmic ray contamination is detected and bad pixels are removed from each image. Routine calibration sequences and occasional special observing programs are used to measure the instrument focus, distortion, scattered light, filter profiles, throughput, and detector characteristics. That information is used to optimize instrument performance and adjust calibration of filtergrams and observables.
Kang, Heesuk; Hollister, Scott J; La Marca, Frank; Park, Paul; Lin, Chia-Ying
2013-10-01
Biodegradable cages have received increasing attention for their use in spinal procedures involving interbody fusion to resolve complications associated with the use of nondegradable cages, such as stress shielding and long-term foreign body reaction. However, the relatively weak initial material strength compared to permanent materials and its subsequent reduction due to degradation may be problematic. To design a porous biodegradable interbody fusion cage for a preclinical large animal study that can withstand physiological loads while possessing sufficient interconnected porosity for bony bridging and fusion, we developed a multiscale topology optimization technique. Topology optimization at the macroscopic scale provides an optimal structural layout that ensures mechanical strength, while optimally designed microstructures, which replace the macroscopic material layout, ensure maximum permeability. Optimally designed cages were fabricated using solid freeform fabrication of poly(ε-caprolactone) mixed with hydroxyapatite. Compression tests revealed that the yield strength of optimized fusion cages was two times that of typical human lumbar spine loads. Computational analysis further confirmed the mechanical integrity within the human lumbar spine, although the pore structure locally underwent higher stress than yield stress. This optimization technique may be utilized to balance the complex requirements of load-bearing, stress shielding, and interconnected porosity when using biodegradable materials for fusion cages.
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
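The abstract does not reproduce the final cost equation, but the two-level structure of CTAT (fixed costs from core processes, variable costs from supporting production processes) suggests an allocation of the familiar form below; the notation is illustrative, not the paper's.

$$C_{\text{product}} = \frac{C_{\text{fixed}}}{N} + \sum_{j} C_{\text{variable},\,j},$$

where C_fixed is the annual cost of the core GMP processes, N the number of products manufactured per year, and C_variable,j the per-product cost of each supporting process.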
Recovery Act: Training Program Development for Commercial Building Equipment Technicians
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leah Glameyer
The overall goal of this project has been to develop curricula, certification requirements, and accreditation standards for training on energy efficient practices and technologies for commercial building technicians. These training products will advance industry expertise towards net-zero energy commercial building goals and will result in a substantial reduction in energy use. The ultimate objective is to develop a workforce that can bring existing commercial buildings up to their energy performance potential and ensure that new commercial buildings do not fall below their expected optimal level of performance. Commercial building equipment technicians participating in this training program will learn how to best operate commercial buildings to ensure they reach their expected energy performance level. The training is a combination of classroom, online and on-site lessons. The Texas Engineering Extension Service (TEEX) developed curricula using subject matter and adult learning experts to ensure the training meets certification requirements and accreditation standards for training these technicians. The training targets a specific climate zone to meet the needs, specialized expertise, and perspectives of the commercial building equipment technicians in that zone. The combination of efficient operations and advanced design will improve the internal built environment of a commercial building by increasing comfort and safety, while reducing energy use and environmental impact. Properly trained technicians will ensure equipment operates at design specifications. A second impact is a more highly trained workforce that is better equipped to obtain employment. Organizations that contributed to the development of the training program include TEEX and the Texas Engineering Experiment Station (TEES) (both members of The Texas A&M University System). TEES is also a member of the Building Commissioning Association. This report includes a description of the project accomplishments, including the course development phases, tasks associated with each phase, and a detailed list of the course materials developed. A summary of each year's activities is also included.
NASA Astrophysics Data System (ADS)
Vitório, Paulo Cezar; Leonel, Edson Denner
2017-12-01
The structural design must ensure suitable working conditions by attending to safety and economic criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, material strength, and structural system configuration. In this regard, topology optimization aims at achieving the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material, respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach, and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing. Because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of these approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the Von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
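For context, in level-set-based topology optimization the boundary is carried implicitly as the zero level of a function φ and advected in its normal direction; a generic form of that evolution is shown below. In this work the normal velocity would be driven by the Von Mises stress intensity, but the precise coupling to the BEM stress solution is in the paper and is not reproduced here.

$$\frac{\partial \phi}{\partial t} + V_n\,\lvert \nabla \phi \rvert = 0, \qquad \partial \Omega(t) = \{\, \mathbf{x} : \phi(\mathbf{x}, t) = 0 \,\}.$$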
HRP Chief Scientist's Office: Conducting Research to Enable Deep Space Exploration
NASA Technical Reports Server (NTRS)
Charles, J. B.; Fogarty, J.; Vega, L.; Cromwell, R. L.; Haven, C. P.; McFather, J. C.; Savelev, I.
2017-01-01
The HRP Chief Scientist's Office sets the scientific agenda for the Human Research Program. As NASA plans for deep space exploration, HRP is conducting research to ensure the health of astronauts and optimize human performance during extended-duration missions. To accomplish this research, HRP solicits proposals within the U.S., collaborates with agencies both domestically and abroad, and makes optimal use of ISS resources in support of human research. This session will expand on these topics and provide an opportunity for questions and discussion with the HRP Chief Scientist. Presentations in this session will include: NRA solicitations - process improvements and focus for future solicitations, Multilateral Human Research Panel for Exploration - future directions (MHRPE 2.0), Extramural liaisons - National Science Foundation (NSF) and Department of Defense (DOD), Standardized Measures for spaceflight, Ground-based Analogs - international collaborations, and International data sharing.
Intracerebral Cell Implantation: Preparation and Characterization of Cell Suspensions.
Rossetti, Tiziana; Nicholls, Francesca; Modo, Michel
2016-01-01
Intracerebral cell transplantation is increasingly finding a clinical translation. However, the number of cells surviving after implantation is low (5-10%) compared to the number of cells injected. Although significant efforts have been made with regard to the investigation of apoptosis of cells after implantation, very little optimization of cell preparation and administration has been undertaken. Moreover, there is a general neglect of the biophysical aspects of cell injection. Cell transplantation can only be an efficient therapeutic approach if an optimal transfer of cells from the dish to the brain can be ensured. We therefore focused on the in vitro aspects of cell preparation of a clinical-grade human neural stem cell (NSC) line for intracerebral cell implantation. NSCs were suspended in five different vehicles: phosphate-buffered saline (PBS), Dulbecco's modified Eagle medium (DMEM), artificial cerebral spinal fluid (aCSF), HypoThermosol, and Pluronic. Suspension accuracy, consistency, and cell settling were determined for different cell volume fractions in addition to cell viability, cell membrane damage, and clumping. Maintenance of cells in suspension was evaluated while being stored for 8 h on ice, at room temperature, or physiological normothermia. Significant differences between suspension vehicles and cellular volume fractions were evident. HypoThermosol and Pluronic performed best, with PBS, aCSF, and DMEM exhibiting less consistency, especially in maintaining a suspension and preserving viability under different storage conditions. These results provide the basis to further investigate these preparation parameters during the intracerebral delivery of NSCs to provide an optimized delivery process that can ensure an efficient clinical translation.
Sabbatini, Amber K; Merck, Lisa H; Froemming, Adam T; Vaughan, William; Brown, Michael D; Hess, Erik P; Applegate, Kimberly E; Comfere, Nneka I
2015-12-01
Patient-centered emergency diagnostic imaging relies on efficient communication and multispecialty care coordination to ensure optimal imaging utilization. The construct of the emergency diagnostic imaging care coordination cycle with three main phases (pretest, test, and posttest) provides a useful framework to evaluate care coordination in patient-centered emergency diagnostic imaging. This article summarizes findings reached during the patient-centered outcomes session of the 2015 Academic Emergency Medicine consensus conference "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The primary objective was to develop a research agenda focused on 1) defining component parts of the emergency diagnostic imaging care coordination process, 2) identifying gaps in communication that affect emergency diagnostic imaging, and 3) defining optimal methods of communication and multidisciplinary care coordination that ensure patient-centered emergency diagnostic imaging. Prioritized research questions provided the framework to define a research agenda for multidisciplinary care coordination in emergency diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Sudhakar, N.; Rajasekar, N.; Akhil, Saya; Jyotheeswara Reddy, K.
2017-11-01
The boost converter is the most desirable DC-DC power converter for renewable energy applications because of its favorable continuous input current characteristics. On the other hand, these DC-DC converters, as practical nonlinear systems, are prone to several types of nonlinear phenomena including bifurcation, quasiperiodicity, intermittency, and chaos. These undesirable effects have to be controlled to maintain normal periodic operation of the converter and to ensure stability. This paper presents an effective solution to control chaos in a solar-fed DC-DC boost converter, since the converter experiences a wide range of input power variation, which leads to chaotic phenomena. Chaos control is achieved using optimal circuit parameters obtained through a Nelder-Mead Enhanced Bacterial Foraging Optimization Algorithm. The optimization yields suitable parameters in minimal computational time. The results are compared with those of traditional methods. The obtained results ensure the operation of the converter within the controllable region.
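The Nelder-Mead component of such a hybrid optimizer is available off the shelf. The Python sketch below tunes two circuit parameters against a stability penalty using SciPy; the penalty function is a toy stand-in (the paper derives its objective from the converter's chaotic dynamics and couples Nelder-Mead with bacterial foraging), and the nominal values are invented.

```python
import numpy as np
from scipy.optimize import minimize

def stability_penalty(params):
    # Toy surrogate: penalize distance from a nominal stable design point.
    # The paper's objective would instead quantify chaotic behavior of the
    # converter (e.g., from its iterated map or simulated waveforms).
    L, C = params
    return (L - 1.5e-3) ** 2 * 1e6 + (C - 47e-6) ** 2 * 1e9

result = minimize(stability_penalty, x0=[1e-3, 10e-6], method="Nelder-Mead")
print("tuned L, C:", result.x)
```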
The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing
NASA Astrophysics Data System (ADS)
Yilmazkaya, Emre; Ozcelik, Yilmaz
2016-02-01
Mono-wire block cutting machines that cut with a diamond wire can be used for squaring natural stone blocks and for the slab-cutting process. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. Because the investment costs of these machines are high, their efficient use reduces production costs by increasing plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study aims to investigate the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), which are the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, which are its performance parameters. Using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of the rocks and the optimum cutting parameters obtained from the optimization were investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benassi, Michaela; Di Murro, Luana; Tolu, Barbara, E-mail: barbara.tolu@gmail.com
This study aims at optimizing treatment planning in young patients affected by lymphoma (Stage II to III) by using an inclined board (IB) that allows reducing doses to the organs at risk. We evaluated 19 young patients affected by stage I to III lymphomas, referred to our Department for consolidation radiotherapy (RT) treatment on the mediastinum. Patients underwent 2 planning computed tomography (CT) scans performed in different positions: flat standard position and inclined position. A direct comparison between the different treatment plans was carried out analyzing dosimetric parameters obtained from dose-volume histograms generated for each plan. Comparison was performed to evaluate the sparing obtained on breast and heart. Dosimetric evaluation was performed for the following organs at risk (OARs): mammary glands, lungs, and heart. A statistically significant advantage was reported for V5, V20, and V30 for the breast when using the inclined board. A similar result was obtained for V5 and V10 on the heart. No advantage was observed in lung doses. The use of a simple device, such as an inclined board, allows the optimization of the treatment plan, especially in young female patients, by ensuring a significant reduction of the dose delivered to the breast and heart.
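For reference, the Vx parameters compared above are simple dose-volume statistics; a minimal sketch (synthetic doses, not patient data):

import numpy as np

def v_metric(dose_gy, threshold_gy):
    """Percentage of organ volume receiving at least `threshold_gy`,
    assuming equal-volume dose voxels for the organ at risk."""
    dose_gy = np.asarray(dose_gy)
    return 100.0 * np.count_nonzero(dose_gy >= threshold_gy) / dose_gy.size

# Hypothetical per-voxel breast doses for the two setups (Gy):
flat     = np.random.default_rng(0).gamma(2.0, 4.0, size=10000)
inclined = flat * 0.7          # stand-in for the inclined-board sparing
for v in (5, 20, 30):
    print(f"V{v}: flat {v_metric(flat, v):5.1f}%  inclined {v_metric(inclined, v):5.1f}%")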
Research on Taxiway Path Optimization Based on Conflict Detection
Zhou, Hang; Jiang, Xinxin
2015-01-01
Taxiway path planning is one of the effective measures for making full use of airport resources, and optimized paths can ensure the safety of aircraft during the taxiing process. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as the constraint to judge whether there is a conflict or not. The intelligent initial path planning model is established based on these results. Finally, an example is given in an airport simulation environment, in which conflicts are detected and relieved to ensure safety. The results indicate that the model established in this paper is effective and feasible. Comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has great advantages: it not only optimizes the taxiway path but also ensures the safety of the taxiing process and improves operational efficiency. PMID:26226485
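A minimal sketch of the core idea, A* search with a conflict-detection step that enforces a time separation at nodes, on a toy taxiway graph; the network, taxi times, and separation value are hypothetical, and the paper's full model (node timetables, conflict relief) is not reproduced.

import heapq

def conflict_free_a_star(graph, h, start, goal, occupied, sep=2):
    """A* over a taxiway graph whose edges carry traversal times.
    `occupied` maps node -> set of times another aircraft holds it;
    a successor is accepted only if arriving there keeps the required
    time separation `sep`, which is the added conflict-detection step."""
    open_set = [(h[start], 0, start, [start])]   # (f, arrival time, node, path)
    best = {}
    while open_set:
        f, t, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, t
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, dt in graph[node]:
            ta = t + dt
            # Conflict detection: keep `sep` time units from other aircraft.
            if any(abs(ta - tb) < sep for tb in occupied.get(nxt, ())):
                continue
            heapq.heappush(open_set, (ta + h[nxt], ta, nxt, path + [nxt]))
    return None, None

# Toy taxiway network: node -> [(neighbor, taxi time)]
graph = {"A": [("B", 3), ("C", 5)], "B": [("D", 4)], "C": [("D", 2)], "D": []}
h = {"A": 5, "B": 4, "C": 2, "D": 0}                 # admissible estimates
occupied = {"B": {3}}                                # another aircraft at B at t=3
print(conflict_free_a_star(graph, h, "A", "D", occupied))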
Robust Multivariable Optimization and Performance Simulation for ASIC Design
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific integrated circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
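A minimal sketch of the corner-sweep idea, evaluating one design parameter against worst-case metrics over all process/voltage/temperature corners; the metrics function is a hypothetical placeholder for the circuit simulator such a framework drives.

import itertools

# Hypothetical corner space; a real flow would invoke the CAD simulator.
processes = ["ss", "tt", "ff"]
voltages  = [1.62, 1.8, 1.98]
temps     = [-55, 25, 125]

def metrics(corner, bias_ua):
    proc, vdd, temp = corner
    # Placeholder model: gain falls with temperature, power rises with VDD.
    gain  = 60 - 0.05 * temp + 0.002 * bias_ua
    power = vdd * bias_ua * 1e-6
    return gain, power

def worst_case(bias_ua):
    """Evaluate every corner and keep the worst metric values, the
    quantities a robust space design must actually guarantee."""
    results = [metrics(c, bias_ua) for c in itertools.product(processes, voltages, temps)]
    return min(g for g, _ in results), max(p for _, p in results)

# Sweep one design parameter; a real framework searches many at once.
for bias in (100, 200, 400):
    g, p = worst_case(bias)
    print(f"bias {bias} uA -> worst-case gain {g:.1f} dB, power {p*1e3:.3f} mW")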
Cold Spraying of Cu-Al-Bronze for Cavitation Protection in Marine Environments
NASA Astrophysics Data System (ADS)
Krebs, S.; Gärtner, F.; Klassen, T.
2015-01-01
Traveling at high speeds, ships face the problem of rudder cavitation erosion. At present, the problem is countered by fluid-dynamically optimized rudders and by synthetic and weld-cladded coatings on a steel basis. Nevertheless, docking and repair are required after certain intervals. Bulk Cu-Al bronzes are used on ship propellers to withstand corrosion and cavitation. Deposited as coatings with bulk-like properties, such bronzes could also enhance rudder lifetimes. The present study investigates coating formation by cold spraying CuAl10Fe5Ni5 bronze powders. By calculating the impact conditions, the range of optimum spray parameters was preselected in terms of the coating quality parameter η on steel substrates at different temperatures. As-atomized and annealed powders were compared to optimize the cavitation resistance of the coatings. The results provide insights into the interplay between the mechanical properties of powder and substrate during coating formation. Single-particle impact morphologies visualize the deformation behavior. Coating performance was assessed by analyzing microstructures, bond strength, and cavitation resistance. These first results demonstrate that cold-sprayed bronze coatings have high potential for ensuring good performance in rudder protection. With further optimization, such coatings could evolve into a competitive alternative to existing anti-cavitation procedures.
High-Performance I/O: HDF5 for Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav
2015-01-01
Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
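Since the abstract flags dataset chunking as the remaining tuning knob, here is a minimal serial h5py sketch (hypothetical file, dataset, and chunk shapes); genuinely parallel writes would use the MPI-IO driver, as noted in the closing comment.

import numpy as np
import h5py

# Serial stand-in for the parallel case: a gauge-field-like array is
# written with an explicit chunk shape chosen to match how ranks would
# partition the lattice volume.
lattice = np.random.default_rng(1).standard_normal((16, 16, 16, 32, 3, 3, 2))

with h5py.File("gauge_field.h5", "w") as f:
    dset = f.create_dataset(
        "configurations/traj_0100",
        data=lattice,
        chunks=(4, 4, 4, 8, 3, 3, 2),   # align chunks with per-rank subvolumes
    )
    dset.attrs["ensemble"] = "example"

# A genuinely parallel writer would open the file via mpi4py with
# h5py.File(..., driver="mpio", comm=MPI.COMM_WORLD).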
Surgical approach to posterior inferior cerebellar artery aneurysms.
La Pira, Biagia; Sturiale, Carmelo Lucio; Della Pepa, Giuseppe Maria; Albanese, Alessio
2018-02-01
The far-lateral approach is a standardised approach for clipping aneurysms of the posterior inferior cerebellar artery (PICA). Different variants can be adopted to manage aneurysms that differ in morphology, topography, rupture status, cerebellar swelling, and surgeon preference. We distinguished five paradigmatic approaches aimed at managing aneurysms that are: proximal unruptured; proximal ruptured requiring posterior fossa decompression (PFD); proximal ruptured not requiring PFD; distal unruptured; and distal ruptured. Preoperative planning in the setting of PICA aneurysm surgery is of paramount importance to perform an effective and safe procedure and to ensure adequate PFD and optimal proximal control before aneurysm manipulation.
Rankin, Kristin M.; Gavin, Loretta; Moran, John W.; Kroelinger, Charlan D.; Vladutiu, Catherine J.; Goodman, David A.; Sappenfield, William M.
2018-01-01
Purpose In recognition of the importance of performance measurement and MCH epidemiology leadership to quality improvement (QI) efforts, a plenary session dedicated to this topic was presented at the 2014 CityMatCH Leadership and MCH Epidemiology Conference. This paper summarizes the session and provides two applications of performance measurement to QI in MCH. Description Performance measures addressing processes of care are ubiquitous in the current health system landscape and the MCH community is increasingly applying QI processes, such as Plan-Do-Study-Act (PDSA) cycles, to improve the effectiveness and efficiency of systems impacting MCH populations. QI is maximally effective when well-defined performance measures are used to monitor change. Assessment MCH epidemiologists provide leadership to QI initiatives by identifying population-based outcomes that would benefit from QI, defining and implementing performance measures, assessing and improving data quality and timeliness, reporting variability in measures throughout PDSA cycles, evaluating QI initiative impact, and translating findings to stakeholders. MCH epidemiologists can also ensure that QI initiatives are aligned with MCH priorities at the local, state and federal levels. Two examples of this work, one highlighting use of a contraceptive service performance measure and another describing QI for peripartum hemorrhage prevention, demonstrate MCH epidemiologists’ contributions throughout. Challenges remain in applying QI to complex community and systems-level interventions, including those aimed at improving access to quality care. Conclusion MCH epidemiologists provide leadership to QI initiatives by ensuring they are data-informed and supportive of a common MCH agenda, thereby optimizing the potential to improve MCH outcomes. PMID:27423235
Costa - Introduction to 2015 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, James E.
Just as Sandia National Laboratories has two major locations (NM and CA), along with a number of smaller facilities across the nation, so too are its scientific, engineering, and computing resources distributed. As part of Sandia's Institutional Computing Program, CA site-based Sandia computer scientists and engineers have been providing mission and research staff with local CA-resident expertise on computing options while also focusing on two growing high performance computing research problems. The first is how to increase system resilience to failure as machines grow larger, more complex, and heterogeneous. The second is how to ensure that computer hardware and configurations are optimized for specialized data-analytical mission needs within the overall Sandia computing environment, including the HPC subenvironment. All of these activities support the larger Sandia effort in accelerating the development and integration of high performance computing into national security missions. Sandia continues both to promote national R&D objectives, including the recent Presidential Executive Order establishing the National Strategic Computing Initiative, and to work to ensure that the full range of computing services and capabilities are available for all mission responsibilities, from national security to energy to homeland defense.
Muscle function in glenohumeral joint stability during lifting task.
Blache, Yoann; Begon, Mickaël; Michaud, Benjamin; Desmoulins, Landry; Allard, Paul; Dal Maso, Fabien
2017-01-01
Ensuring glenohumeral stability during repetitive lifting tasks is a key factor in reducing the risk of shoulder injuries. Nevertheless, the literature reveals a gap concerning the assessment of the muscles that ensure glenohumeral stability during specific lifting tasks. Therefore, the purpose of this study was to assess the stabilization function of shoulder muscles during a lifting task. Kinematics and muscle electromyograms (n = 9) were recorded from 13 healthy adults during a bi-manual lifting task performed from hip to shoulder level. A generic upper-limb OpenSim model was implemented to simulate glenohumeral stability and instability by performing static optimizations with and without glenohumeral stability constraints. This procedure enabled computation of the levels of shoulder muscle activity and forces in the two conditions. Without the stability constraint, the simulated movement was unstable during 74%±16% of the time. The force of the supraspinatus increased significantly, by 107% (p<0.002), when the glenohumeral stability constraint was implemented. The increased supraspinatus force led to a greater compressive force (p<0.001) and a smaller shear force (p<0.001), which contributed to improved glenohumeral stability. It was concluded that the supraspinatus may be the main contributor to glenohumeral stability during lifting tasks.
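A minimal sketch of static optimization with a glenohumeral stability constraint, using a toy two-muscle model and SciPy's SLSQP; the moment arms, force limits, force decompositions, and stability cone are illustrative assumptions, not values from the OpenSim model.

import numpy as np
from scipy.optimize import minimize

# Toy two-muscle shoulder model (illustrative numbers only).
moment_arm = np.array([0.02, 0.035])      # m
f_max      = np.array([600.0, 900.0])     # N
# Each muscle's pull decomposed at the glenoid (N per N of muscle force):
compress   = np.array([0.9, 0.4])
shear      = np.array([0.2, 0.8])
tau_target = 15.0                          # required joint moment, N*m

def cost(f):                               # effort criterion
    return np.sum((f / f_max) ** 2)

cons = [
    {"type": "eq", "fun": lambda f: moment_arm @ f - tau_target},
    # Stability constraint: keep shear below half of compression so the
    # joint reaction force stays inside an assumed cone on the glenoid.
    {"type": "ineq", "fun": lambda f: 0.5 * (compress @ f) - shear @ f},
]
res = minimize(cost, x0=np.array([100.0, 100.0]), method="SLSQP",
               bounds=[(0, fm) for fm in f_max], constraints=cons)
print("muscle forces (N):", res.x)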
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology that incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available or manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success efficiently, at low cost, and on a tight schedule.
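As a simple illustration of the parts-count style of prediction described above, a minimal sketch; the part list, failure rates, and factors are illustrative, not handbook values.

import math

# Parts-count reliability prediction: the system failure rate is the
# sum of part failure rates (failures per 1e6 hours), each scaled by
# quality and environment factors in the style of reliability handbooks.
parts = [
    # (name, base failure rate, quality factor, environment factor, qty)
    ("ceramic capacitor", 0.0026, 1.0, 2.0, 40),
    ("film resistor",     0.0017, 1.0, 2.0, 55),
    ("logic IC",          0.0100, 0.5, 4.0, 12),
]

lam_sys = sum(lb * pq * pe * n for _, lb, pq, pe, n in parts)   # per 1e6 h
mission_h = 8760.0                                              # one year
reliability = math.exp(-lam_sys * mission_h / 1e6)              # exponential model
print(f"system failure rate: {lam_sys:.3f} per 1e6 h, "
      f"1-year reliability: {reliability:.4f}")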
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed
2018-05-01
Efficacious operation of dam and reservoir systems can not only guarantee a defense policy against natural hazards but also identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure the optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been utilized extensively to attain robust modeling of such stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and for the prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of integrating AI simulation methods with optimization methods is reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish a realistic evaluation of whole optimization model performance (reliability, resilience, and vulnerability indices) is recommended.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described, with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches, with which they are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning
NASA Astrophysics Data System (ADS)
Martin, Benjamin Charles
In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high quality treatments. Unfortunately, the achievable conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range; this approach does not ensure high quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals, and the optimization finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered from a given set of beamlet intensities is a random variable, so the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling. Voxel sampling is a randomized-algorithm approach to a steepest descent problem, based on estimating the gradient by calculating the dose to only a fraction of the voxels within the patient. When combined with an automatic sampling rate adaptation technique, voxel sampling produced an order of magnitude speedup in IMRT optimization. We also develop extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams, the stochastic formulation yields visibly different and better plans than normal optimization. The results of our research have been incorporated into a software package, OPT4D, an IMRT and IMPT optimization tool that we developed.
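A minimal sketch of voxel sampling for a quadratic IMRT-style objective: the gradient is estimated from a random fraction of voxels and rescaled to full-volume size; the dose-influence matrix and step size are synthetic stand-ins, and the adaptive sampling-rate control is omitted.

import numpy as np

rng = np.random.default_rng(2)
n_beamlets, n_voxels = 50, 20000
D = rng.random((n_voxels, n_beamlets)) * 0.05   # dose-influence matrix
d_pres = np.ones(n_voxels)                      # prescribed voxel doses

def sampled_gradient(x, frac=0.05):
    """Estimate the gradient of sum((D x - d)^2) from a random
    fraction of the voxels, then rescale to the full volume."""
    idx = rng.choice(n_voxels, size=int(frac * n_voxels), replace=False)
    r = D[idx] @ x - d_pres[idx]
    return (2.0 / frac) * (D[idx].T @ r)

x = np.zeros(n_beamlets)
for it in range(200):                           # plain steepest descent
    x = np.maximum(x - 1e-4 * sampled_gradient(x), 0.0)  # intensities >= 0
print("objective:", float(np.sum((D @ x - d_pres) ** 2)))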
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered an effective approach to improving product quality at relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes signal-to-noise (SN) ratios. The results reveal the advantages of this approach: the optimal parameter design is the same as that of the traditional Taguchi method for a single quality characteristic, and the optimal design maximizes the reduction of total quality loss across multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
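For reference, the standard Taguchi signal-to-noise ratios mentioned above, in a minimal sketch (illustrative response values):

import numpy as np

def sn_larger_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_nominal_the_best(y):
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

# Replicated responses for one row of an orthogonal array (illustrative):
trial = [41.2, 39.8, 40.5]
print("S/N (nominal-the-best):", round(sn_nominal_the_best(trial), 2), "dB")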
The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations
NASA Technical Reports Server (NTRS)
Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan
2003-01-01
Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long-term and transient targets. For best performance, a tool that helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce the requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to respond dynamically to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires), with the objective of maximizing the total science value of observations over time. Options for calculating the science value of observations include: 1) total observation time, 2) number of observations, and 3) the quality of the observations (a function of, e.g., sensor type, range, and slant angle). An integrated approach using integer programming, optimization, and astrodynamics is used to calculate optimized observation and sensor tasking plans.
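A much-simplified single-epoch stand-in for the tasking problem: if each satellite takes one target in the next planning window, the assignment maximizing total science value can be computed with the Hungarian algorithm; the value matrix is random for illustration, and the paper's integer-programming model with time windows is not reproduced.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n_sats, n_targets = 8, 8
# value[i, j]: science value of satellite i observing target j over the
# next window, in practice a function of observation time, sensor type,
# range, and slant angle, as listed above.
value = rng.random((n_sats, n_targets))

# linear_sum_assignment minimizes cost, so negate the values to maximize.
rows, cols = linear_sum_assignment(-value)
for i, j in zip(rows, cols):
    print(f"satellite {i} -> target {j} (value {value[i, j]:.2f})")
print("total science value:", value[rows, cols].sum())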
2017-01-01
Computational scientists have designed many useful algorithms by exploring biological processes or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are greatly influenced by the control parameters. Therefore, the data-driven mechanism is designed to obtain better control parameters and ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
Optimizing a Query by Transformation and Expansion.
Glocker, Katrin; Knurr, Alexander; Dieter, Julia; Dominick, Friederike; Forche, Melanie; Koch, Christian; Pascoe Pérez, Analie; Roth, Benjamin; Ückert, Frank
2017-01-01
In the biomedical sector, not only is the amount of information produced and uploaded to the web enormous, but so is the number of sources where these data can be found. Clinicians and researchers spend huge amounts of time trying to access this information and to filter the most important answers to a given question. As the formulation of these queries is crucial, automated query expansion is an effective tool for optimizing a query and receiving the best possible results. In this paper, we introduce the concept of a workflow for optimizing queries in the medical and biological sector by using a series of tools for expansion and transformation of the query. After the definition of attributes by the user, the query string is compared to previous queries in order to add semantically co-occurring terms to the query. Additionally, the query is enlarged by the inclusion of synonyms. Translation into database-specific ontologies ensures the optimal query formulation for the chosen database(s). As this process can be performed in various databases at once, the results are ranked and normalized in order to achieve a comparable list of answers for a question.
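A minimal sketch of the expansion steps described above; the synonym table, co-occurrence scores, and cutoff are hypothetical stand-ins for the workflow's ontology lookups.

# Hypothetical lookup tables standing in for ontology services.
SYNONYMS = {"myocardial infarction": ["heart attack", "MI"]}
CO_OCCURRING = {"myocardial infarction": [("troponin", 0.8), ("ECG", 0.6)]}

def expand_query(query, co_occurrence_cutoff=0.7):
    terms = [query]
    terms += SYNONYMS.get(query, [])                       # synonym inclusion
    terms += [t for t, score in CO_OCCURRING.get(query, [])
              if score >= co_occurrence_cutoff]            # semantic neighbors
    # OR-join for a typical boolean retrieval backend:
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("myocardial infarction"))
# -> "myocardial infarction" OR "heart attack" OR "MI" OR "troponin"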
Physical and energy requirements of competitive swimming events.
Pyne, David B; Sharp, Rick L
2014-08-01
The aquatic sports competitions held during the summer Olympic Games include diving, open-water swimming, pool swimming, synchronized swimming, and water polo. Elite-level performance in each of these sports requires rigorous training and practice to develop the appropriate physiological, biomechanical, artistic, and strategic capabilities specific to each sport. Consequently, the daily training plans of these athletes are quite varied both between and within the sports. Common to all aquatic athletes, however, is that daily training and preparation consumes several hours and involves frequent periods of high-intensity exertion. Nutritional support for this high-level training is a critical element of the preparation of these athletes to ensure the energy and nutrient demands of the training and competition are met. In this article, we introduce the fundamental physical requirements of these sports and specifically explore the energetics of human locomotion in water. Subsequent articles in this issue explore the specific nutritional requirements of each aquatic sport. We hope that such exploration will provide a foundation for future investigation of the roles of optimal nutrition in optimizing performance in the aquatic sports.
Man as the main component of the closed ecological system of the spacecraft or planetary station.
Parin, V V; Adamovich, B A
1968-01-01
Current spacecraft life-support systems provide for human requirements for food, water, and oxygen only. Advanced life-support systems will involve man as their main component and will ensure all of his material and energy requirements. The design of the individual components of such systems will assure their suitability and mutual control effects. Optimization of the performance of the crew and ecological system, on the basis of information characterizing their function, demands efficient methods of collecting and treating the information obtained through wireless recording of physiological parameters and their automatic processing. The peculiarities of interplanetary missions and planetary stations make it necessary to conform the schedule of physiological recordings to the work-and-rest cycle of the space crew and to the inertia of components of the ecological system, especially those responsible for oxygen regeneration. It is rational to model ecological systems and their components, taking into consideration the corrective effect of information on the health condition and performance of the crewmen. Wide application of physiological data will allow the selection of optimal designs and sharply increase the reliability of ecological systems.
An improved predictive functional control method with application to PMSM systems
NASA Astrophysics Data System (ADS)
Li, Shihua; Liu, Huixian; Fu, Wenshu
2017-01-01
In the common design of prediction-model-based control methods, disturbances are usually considered neither in the prediction model nor in the control design. For control systems with large-amplitude or strong disturbances, it is difficult to precisely predict the future outputs from the conventional prediction model, and thus the desired optimal closed-loop performance is degraded to some extent. To this end, an improved predictive functional control (PFC) method is developed in this paper by embedding disturbance information into the system model. A composite prediction model is obtained by embedding the estimated value of the disturbances, where a disturbance observer (DOB) is employed to estimate the lumped disturbances. The influence of disturbances on the system is thus taken into account in the optimisation procedure. Finally, considering the speed control problem for a permanent magnet synchronous motor (PMSM) servo system, a control scheme based on the improved PFC method is designed to ensure optimal closed-loop performance even in the presence of disturbances. Simulation and experimental results based on a hardware platform are provided to confirm the effectiveness of the proposed algorithm.
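A minimal scalar sketch of the idea: a disturbance observer estimate is embedded in the prediction, and the control is chosen so the composite model reaches the reference; the plant constants, observer gain, and one-step horizon are illustrative simplifications of the paper's PFC scheme.

# First-order discrete plant x+ = a*x + b*u + d with an unknown,
# slowly varying load disturbance d (illustrative speed-loop numbers).
a, b = 0.95, 0.4
d_true = 1.5
x, d_hat = 0.0, 0.0
ref = 10.0
L = 0.5                                  # observer gain, 0 < L <= 1

for k in range(60):
    # One-step predictive functional control: choose u so the composite
    # model (with the disturbance estimate embedded) lands on the
    # reference at the next step.
    u = (ref - a * x - d_hat) / b
    x_pred = a * x + b * u + d_hat       # composite prediction
    x = a * x + b * u + d_true           # actual plant response
    d_hat += L * (x - x_pred)            # disturbance observer update

print(f"speed {x:.3f} (ref {ref}), estimated disturbance {d_hat:.3f}")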
A Robust Kalman Framework with Resampling and Optimal Smoothing
Kautz, Thomas; Eskofier, Bjoern M.
2015-01-01
The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647
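A minimal sketch of one ingredient, innovation gating for outlier robustness in a scalar Kalman filter; the smoothing and resampling parts of the procedure are omitted, and all noise parameters are illustrative.

import numpy as np

def robust_kf(zs, q=1e-3, r=0.1, gate=3.0):
    """Scalar random-walk Kalman filter that rejects outliers by gating
    on the normalized innovation, one of the robustness ingredients the
    abstract combines with smoothing and resampling."""
    x, p = 0.0, 1.0
    out = []
    for z in zs:
        p += q                                 # predict
        s = p + r                              # innovation variance
        nu = z - x                             # innovation
        if abs(nu) <= gate * np.sqrt(s):       # accept only plausible data
            k = p / s
            x += k * nu
            p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
truth = np.linspace(0, 1, 200)
z = truth + rng.normal(0, 0.1, 200)
z[::40] += 5.0                                 # inject gross outliers
est = robust_kf(z)
print("RMSE:", float(np.sqrt(np.mean((est - truth) ** 2))))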
Azad, M A K; Krause, Tobias; Danter, Leon; Baars, Albert; Koch, Kerstin; Barthlott, Wilhelm
2017-06-06
Fog-collecting meshes show great potential for ensuring a sustainable freshwater supply in certain arid regions. In most cases, the meshes are made of hydrophilic smooth fibers. Based on the study of plant surfaces, we analyzed fog collection using various polyethylene terephthalate (PET) fibers with different cross sections and surface structures, with the aim of developing optimized biomimetic fog collectors. Water droplet movement and the onset of dripping from fiber samples were compared. Fibers with round, oval, and rectangular cross sections with rounded edges showed higher fog-collection performance than those with other cross sections. However, other parameters, for example, width, surface structure, and wettability, also influenced the performance. The directional delivery of the collected fog droplets by wavy/v-shaped microgrooves on the surface of the fibers enhances the formation of a water film and thereby their fog collection. A numerical simulation of the water droplet spreading behavior strongly supports these findings. Therefore, our study suggests the use of fibers with a round cross section, a microgrooved surface, and an optimized width for efficient fog collection.
Koné, Mongomaké; Koné, Tchoa; Silué, Nakpalo; Soumahoro, André Brahima; Kouakou, Tanoh Hilaire
2015-01-01
Bambara groundnut (Vigna subterranea (L.) Verdc.) is an indigenous grain legume. It occupies a prominent place in strategies to ensure food security in sub-Saharan Africa. Development of an efficient in vitro regeneration system, a prerequisite for genetic transformation, requires the establishment of optimal conditions for seed germination and plantlet development. Three types of seeds were inoculated on different basal media devoid of growth regulators. Various strengths of the medium of choice and the type and concentration of the carbon source were also investigated. Germination responses varied with the type of seed. Embryonic axes (EA), followed by seeds without coat (SWtC), germinated rapidly and showed a high germination rate. The growth performance of plantlets varied with the basal medium composition and the seed type. The optimal growth performance was obtained on half-strength MS basal medium with SWtC and EA as seed sources. The addition of 3% sucrose to the culture medium was most suitable for maximum growth of plantlets derived from EA.
Boivin-Desrochers, Camille; Alderson, Marie
2014-10-01
The nursing profession is faced with an issue of growing concern: the mental health of its practitioners. The many difficulties that nurses experience in the workplace may prove detrimental to the maintenance of an optimal mental state. With respect to these difficulties, several strategies can be implemented and used by nurses and managers. The present literature review aims to identify the difficulties and suffering experienced by nurses and the strategies employed to preserve their mental health, as well as to maintain the calling of the profession and job performance. It also aims to provide nurses and managers in the health care system with ideas to promote optimal mental health for nurses. In this context, the "psychodynamics of work" framework (psychodynamique du travail) was chosen to structure the analysis of the literature dealing with the suffering and difficulties experienced by nurses. The use of this theoretical framework deepens and supports the relationship between suffering experienced at work and the mental health of nurses.
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
Carozzi, Francesca Maria; Del Mistro, Annarosa; Cuschieri, Kate; Frayle, Helena; Sani, Cristina; Burroni, Elena
2016-03-01
This review aims to highlight the importance of quality assurance for laboratories performing HPV tests for cervical cancer screening. An HPV test to be used as a primary screening test must be validated according to international criteria, based on comparison of its clinical accuracy to the HC2 or GP5+/6+ PCR-EIA tests. The number of validated platforms is increasing, and appropriate Quality Assurance Programs (QAPs) that can interrogate longitudinal robustness and quality are paramount. This document describes the following topics: (1) the characteristics of an HPV laboratory and the personnel training needs, to ensure an elevated quality of the entire process and the optimal use of resources; (2) the quality assurance systems, both internal (IQA) and external (EQA), to be implemented and performed, and a description of the existing EQAs, including their limitations; and (3) general considerations for an optimal EQA program for hrHPV primary screening. Given the importance of quality assurance in this field, international efforts are necessary to improve international QA collaboration. Copyright © 2015 Elsevier B.V. All rights reserved.
An improved reaction path optimization method using a chain of conformations
NASA Astrophysics Data System (ADS)
Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro
2018-05-01
An efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully utilized to define the minimum energy path (MEP) of the isomerization of the glycine molecule in water.
Optimizing latency in Xilinx FPGA implementations of the GBT
NASA Astrophysics Data System (ADS)
Muschter, S.; Baron, S.; Bohm, C.; Cachemiche, J.-P.; Soos, C.
2010-12-01
The GigaBit Transceiver (GBT) [1] system has been developed to replace the Timing, Trigger and Control (TTC) system [2], currently used by the LHC, as well as to provide data transmission between on-detector and off-detector components in future sLHC detectors. A VHDL version of the GBT-SERDES, designed for FPGAs, was released in March 2010 as a GBT-FPGA Starter Kit for future GBT users and for off-detector GBT implementation [3]. This code was optimized for resource utilization [4], as the GBT protocol is very demanding. It was not, however, optimized for latency, which will be a critical parameter when used in the trigger path. The GBT-FPGA Starter Kit firmware was first analyzed in terms of latency by examining the separate components of the VHDL version. Once the parts contributing most to the latency were identified and modified, two possible optimizations were chosen, resulting in a latency reduced by a factor of three. The modifications were also analyzed in terms of logic utilization. The latency optimization results were compared with measurement results from a Virtex 6 ML605 development board [5] equipped with an XC6VLX240T with speed grade -1 and package FF1156. Bit error rate tests were also performed to ensure error-free operation. The two final optimizations were analyzed for utilization and compared with the original code distributed in the Starter Kit.
On the Water-Food Nexus: an Optimization Approach for Water and Food Security
NASA Astrophysics Data System (ADS)
Mortada, Sarah; Abou Najm, Majdi; Yassine, Ali; Alameddine, Ibrahim; El-Fadel, Mutasem
2016-04-01
Water and food security faces increasing challenges from population growth, climate and land use change, and resource depletion coupled with pollution and unsustainable practices. Coordinated and effective management of limited natural resources has become imperative to meet these challenges by optimizing the usage of resources under various constraints. In this study, an optimization model is developed for optimal resource allocation towards sustainable water and food security under nutritional, socio-economic, agricultural, environmental, and natural resources constraints. The core objective of this model is to maximize the composite water-food security status by recommending an optimal water and agricultural strategy. The model balances the healthy nutritional demand side against the constrained supply side while considering the supply chain in between. It equally ensures that the population achieves recommended nutritional guidelines and food preferences by quantifying an optimum agricultural and water policy, transforming optimum food demands into an optimum cropping policy given the water and land footprints of each crop or agricultural product. Through this process, water and food security are optimized considering factors that include crop-food transformation (food processing), water footprints, crop yields, climate, blue and green water resources, irrigation efficiency, arable land resources, soil texture, and economic policies. The model's performance regarding agricultural practices and sustainable food and water security was successfully tested and verified at both hypothetical and pilot scales.
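A minimal sketch of the allocation idea as a linear program: choose crop areas to meet nutritional demands within water and land constraints while minimizing water use; the crops, yields, footprints, and demands are illustrative assumptions, far simpler than the paper's composite model.

import numpy as np
from scipy.optimize import linprog

# Decision variables: cultivated area (ha) of [wheat, lentils, tomato].
kcal_per_ha    = np.array([2.4e7, 1.1e7, 1.5e7])   # illustrative yields
protein_per_ha = np.array([6.0e5, 9.0e5, 2.5e5])   # g protein / ha
water_per_ha   = np.array([4.0e3, 2.5e3, 6.0e3])   # m3 / ha (water footprints)

kcal_demand, protein_demand = 6.0e9, 2.0e8          # population needs
water_cap, land_cap = 1.2e6, 500.0                  # resource constraints

res = linprog(
    c=water_per_ha,                                  # minimize total water use
    A_ub=np.vstack([-kcal_per_ha, -protein_per_ha,   # meet nutrition (>=)
                    water_per_ha, np.ones(3)]),      # respect water and land
    b_ub=[-kcal_demand, -protein_demand, water_cap, land_cap],
    bounds=[(0, None)] * 3, method="highs")
print("areas (ha):", res.x, "feasible:", res.success)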
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly, it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore, efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably, and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The evolutionary method (DE) is first used to solve a relatively difficult problem in extended surface heat transfer wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and the maximization of the trailing edge wedge angle, with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and wear and tear in the presence of other objectives.
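A minimal sketch of robust design via differential evolution: a weighted sum of mean off-design performance and a finite-difference sensitivity term (a stand-in for manufacturing tolerance) is minimized; the surrogate objective and weights are hypothetical, not the lecture's fin or airfoil problems.

import numpy as np
from scipy.optimize import differential_evolution

def performance(shape, flow_condition):
    # Hypothetical surrogate for an aerodynamic objective at one
    # operating condition; a real study would call a flow solver.
    x, y = shape
    return (x - flow_condition) ** 2 + 0.5 * (y - 1.0) ** 2

def robust_objective(shape):
    """Weighted-sum scalarization of the two competing goals: mean
    performance over off-design conditions and sensitivity to small
    shape perturbations (standing in for manufacturing tolerances)."""
    conditions = np.linspace(0.8, 1.2, 5)           # off-design sweep
    mean_perf = np.mean([performance(shape, c) for c in conditions])
    eps = 1e-2
    sens = abs(performance(shape + eps, 1.0) - performance(shape - eps, 1.0))
    return 0.7 * mean_perf + 0.3 * sens

res = differential_evolution(robust_objective, bounds=[(-2, 2), (-2, 2)],
                             seed=0, tol=1e-8)
print("robust design variables:", res.x)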
Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.
2017-01-01
The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems that typically run for days or even months. It is therefore absolutely essential that these long-running applications be able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show an improvement in the user's QoS requirement over the existing ACO algorithm, which has no integrated fault tolerance. The performance of the two algorithms was evaluated in terms of the three main scheduling performance metrics: makespan, throughput, and average turnaround time. PMID:28545075
Preliminary design of axial turbine discs for aircraft engines
NASA Astrophysics Data System (ADS)
Ouellet, Yannick
The preliminary design phase of a turbine rotor has an important impact on the architecture of a new engine definition, as it sets the technical orientation right from the start and provides a good estimate of product performance, weight, and cost. In addition, execution speed at this preliminary phase has become critical to capturing business opportunities. Improving upfront accuracy also alleviates downstream detailed design work and therefore reduces overall product development cycle time. This preliminary phase contains elements slowing down its process, including the low interoperability of currently used systems, incompatibility of software, and ineffective management of data. In order to overcome these barriers, we have developed the first module of a new design and analysis (D&A) platform for the rotor disc. This complete platform ensures the integration of different tools running in batch mode and is driven from a single graphical user interface. The platform has been linked with different optimization methods (algorithms, configurations) in order to automate disc design and propose best practices for rotor structural optimization. This methodology reduced design cycle time and improved performance, and was applied to two reference P&WC axial discs. The platform's architecture was also used in the development of reference charts to better understand disc performance within a given design space. Four high-pressure rotor discs of P&WC turbofan and turboprop engines were used to generate the technical charts and understand the effect of various parameters. The new tools supporting disc D&A, combined with the optimization process and reference charts, have proven profitable in terms of component performance and engineering effort.
NASA Astrophysics Data System (ADS)
Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.
2013-03-01
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machine (SVM), k-nearest neighbor (k-NN), and naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) as the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. A lower standard deviation and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
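A simplified variant of the experiment, sketched with scikit-learn on synthetic features: AUC is measured over ten randomized draws at several training-set sizes (here the total size varies, rather than only the number of controls as in the study).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the retinal-image features (745 cases as in
# the study; class 1 plays the role of DR).
X, y = make_classification(n_samples=745, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

rng = np.random.default_rng(0)
for n in (50, 100, 200, 400):                    # training-set sizes
    aucs = []
    for _ in range(10):                          # 10 randomized draws
        idx = rng.choice(len(X_tr), size=n, replace=False)
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[idx], y_tr[idx])
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    print(f"n={n:4d}  AUC = {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")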
NASA Astrophysics Data System (ADS)
Howard, Steven J.; Burianová, Hana; Calleia, Alysha; Fynes-Clinton, Samuel; Kervin, Lisa; Bokosmaty, Sahar
2017-08-01
Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than to how to optimally assess students' competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggests that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates of performance across a range of current approaches to standardised spelling assessment. Results indicated that children's test performance declined as the cognitive load of the assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of a higher cognitive load, rather than the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure assessment results are an accurate reflection of students' true levels of competency.
Fractional Control of An Active Four-wheel-steering Vehicle
NASA Astrophysics Data System (ADS)
Wang, Tianting; Tong, Jun; Chen, Ning; Tian, Jie
2018-03-01
A four-wheel-steering (4WS) vehicle model and a reference model with a drop filter are constructed, and the decoupling of the 4WS vehicle model is carried out. A fractional PIλDμ controller is introduced into the decoupling strategy to reduce the effects of the uncertainty of the vehicle parameters, as well as unmodelled dynamics, on system performance. Based on optimization techniques, the parameters of the fractional controller are obtained to ensure the robustness of the 4WS vehicle over the specified range of frequencies through proper choice of the constraints. In order to compare with the fractional robust controller, an optimal controller for the same vehicle is also designed. Simulations of the two control systems reveal that the decoupling and fractional robust controller is able to make the vehicle model track the reference model very well, with better robustness.
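As a rough illustration of what a fractional PIλDμ control law involves, the sketch below discretizes the fractional integral and derivative with Grünwald-Letnikov weights; the gains, orders, and class interface are invented for illustration and are not the paper's tuned design.

```python
# Minimal sketch of a fractional PI^λ D^μ controller using the
# Grünwald-Letnikov approximation of fractional-order operators.
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights for fractional order alpha."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

class FractionalPID:
    def __init__(self, kp, ki, kd, lam, mu, dt, memory=500):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.dt = lam, mu, dt
        self.wi = gl_weights(-lam, memory)  # order -λ: fractional integral
        self.wd = gl_weights(mu, memory)    # order  μ: fractional derivative
        self.errors = []

    def update(self, error):
        self.errors.append(error)
        e = np.array(self.errors[::-1])     # newest sample first
        n = min(e.size, self.wi.size)
        integ = self.dt ** self.lam * np.dot(self.wi[:n], e[:n])
        deriv = self.dt ** -self.mu * np.dot(self.wd[:n], e[:n])
        return self.kp * error + self.ki * integ + self.kd * deriv

pid = FractionalPID(kp=2.0, ki=0.8, kd=0.3, lam=0.9, mu=0.7, dt=0.01)
u = pid.update(0.8)  # control output for the current tracking error
```

With λ = μ = 1 the controller reduces to a conventional PID, which is one way to sanity-check such an implementation.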
NASA Technical Reports Server (NTRS)
Aslam, Shahid; Jones, Hollis H.
2011-01-01
Care must always be taken when performing noise measurements on high-Tc superconducting materials to ensure that the results are not from the measurement system itself. One situation likely to occur is with low noise transformers. One of the least understood devices, the low noise transformer provides voltage gain for low impedance inputs (< 100 Ω), e.g., YBaCuO and GdBaCuO thin films, with comparatively lower noise levels than other devices, for instance field effect and bipolar junction transistors. An essential point made in this paper is that, because of the complex relationships between the transformer ports, input impedance variance alters the transformer's transfer function, in particular the low-frequency cutoff shift. The transfer of external and intrinsic transformer noise to the output, along with optimization and precautions, is treated; all the while, we cohesively connect the transfer function shift, the load impedance, and the actual noise at the transformer output.
Design and analysis of sustainable paper bicycle
NASA Astrophysics Data System (ADS)
Roni Sahroni, Taufik; Nasution, Januar
2017-12-01
This paper presents the design of a sustainable paper bicycle, describing stage by stage the production of the paper bicycle. The objective of this project is to design a sustainable paper bicycle to be used by children under five years old. The design analysis emphasizes a screening method to ensure the design fulfils safety requirements. The concept evaluation for the sustainable paper bicycle design is presented to determine the highest-rated concept. A project methodology is proposed for developing a sustainable paper bicycle. Design analysis of the pedals, front and rear wheels, seat, and handlebar was performed using AutoCAD software. Design optimization was performed to fulfil the required safety factors by modifying material sizes and dimensions. Based on the design analysis results, it is found that the optimized design meets the safety factor. As a result, a sustainable paper bicycle is proposed for children under five years old.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and data hiding is considered more challenging to perform in compressed domains, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
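A minimal sketch of the two ingredients named above: absolute-moment BTC encoding of one block, plus naive 3-bit LSB embedding into the quantization levels. The dynamic-programming search for the optimal bijective mapping, which is the paper's contribution, is omitted; function names are illustrative.

```python
# Simplified BTC compression of a 4x4 block, then 3-bit LSB embedding
# into the low/high reconstruction levels (the "mean values").
import numpy as np

def btc_encode(block):
    """Absolute-moment BTC: bitmap plus low/high reconstruction levels."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, int(round(low)), int(round(high))

def embed_3bits(level, bits):
    """Replace the 3 least significant bits of a quantization level."""
    return (level & ~0b111) | (bits & 0b111)

block = np.random.default_rng(1).integers(0, 256, size=(4, 4))
bitmap, low, high = btc_encode(block)
stego_low = embed_3bits(low, 0b101)    # hide secret bits 101
stego_high = embed_3bits(high, 0b011)  # hide secret bits 011
```

In the paper's scheme the raw secret bits would first pass through the optimized bijective mapping, so that the substituted LSBs minimize the expected distortion rather than overwriting levels directly.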
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yan; Gu, Chongjie; Ruan, Xiulin, E-mail: ruan@purdue.edu
2015-02-16
A low lattice thermal conductivity (κ) is desired for thermoelectrics, and a highly anisotropic κ is essential for applications such as magnetic layers for heat-assisted magnetic recording, where a high cross-plane (perpendicular to layer) κ is needed to ensure fast writing while a low in-plane κ is required to avoid interaction between adjacent bits of data. In this work, we conduct molecular dynamics simulations to investigate the κ of superlattice (SL), random multilayer (RML) and alloy structures, and reveal that RML can have 1–2 orders of magnitude higher anisotropy in κ than SL and alloy. We systematically explore how the κ of SL, RML, and alloy changes relative to each other for different bond strength, interface roughness, atomic mass, and structure size, which provides guidance for choosing materials and structural parameters to build RMLs with optimal performance for specific applications.
NASA Astrophysics Data System (ADS)
Cicek, Paul-Vahe; Elsayed, Mohannad; Nabki, Frederic; El-Gamal, Mourad
2017-11-01
An above-IC compatible multi-level MEMS surface microfabrication technology based on a silicon carbide structural layer is presented. The fabrication process flow provides optimal electrostatic transduction by allowing the creation of independently controlled submicron vertical and lateral gaps without the need for high resolution lithography. Adopting silicon carbide as the structural material, the technology ensures material, chemical and thermal compatibility with modern semiconductor nodes, reporting the lowest peak processing temperature (i.e. 200 °C) of all comparable works. This makes this process ideally suited for integrating capacitive-based MEMS directly above standard CMOS substrates. Process flow design and optimization are presented in the context of bulk-mode disk resonators, devices that are shown to exhibit improved performance with respect to previous generation flexural beam resonators, and that represent relatively complex MEMS structures. The impact of impending improvements to the fabrication technology is discussed.
Discriminative region extraction and feature selection based on the combination of SURF and saliency
NASA Astrophysics Data System (ADS)
Deng, Li; Wang, Chunhong; Rao, Changhui
2011-08-01
The objective of this paper is to provide a possible optimization of the salient region algorithm, which is extensively used in recognizing and learning object categories. The salient region algorithm has the advantages of intra-class tolerance, global scoring of features and automatic selection of the most prominent scale within a certain range. However, its major limitation is computational performance, and that is what we attempt to improve. The algorithm can be accelerated by reducing the number of pixels involved in the saliency calculation. We use the interest points detected by fast-Hessian, the detector of SURF, as candidate features for the saliency operation, rather than the whole pixel set of the image. This implementation is thereby called Saliency based Optimization over SURF (SOSU for short). Experiments show that bringing in such a fast detector significantly speeds up the algorithm, while robustness to intra-class diversity ensures object recognition accuracy.
Software for Analyzing Laminar-to-Turbulent Flow Transitions
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2004-01-01
Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software. This is achieved by combining two prior programs in an innovative manner.
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
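A hedged sketch of the maxi-min formulation, using the third-party `cma` package for CMA-ES over hypothetical basis coefficients; `detectability` is a stand-in for the paper's model-based d' predictor, and all dimensions are invented.

```python
# Maximize the minimum detectability d' over sample locations by CMA-ES
# over the coefficients of the fluence basis functions (schematic only).
import numpy as np
import cma

def detectability(coeffs, location):
    # placeholder: the real d' comes from the imaging/reconstruction model
    return -np.sum((coeffs - location) ** 2)

locations = [np.linspace(0, 1, 8) * k for k in range(1, 4)]  # sample points

def maximin_objective(coeffs):
    # CMA-ES minimizes, so return the negated minimum d' over locations
    return -min(detectability(coeffs, loc) for loc in locations)

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.3, {'maxiter': 200})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [maximin_objective(c) for c in candidates])
best_coeffs = es.result.xbest
```

The negated min makes the worst-location detectability the quantity CMA-ES actually improves, which is what homogenizes d' across the image volume.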
Generic comparison of protein inference engines.
Claassen, Manfred; Reiter, Lukas; Hengartner, Michael O; Buhmann, Joachim M; Aebersold, Ruedi
2012-04-01
Protein identifications, instead of peptide-spectrum matches, constitute the biologically relevant result of shotgun proteomics studies. How to appropriately infer and report protein identifications has triggered a still ongoing debate. This debate has so far suffered from the lack of appropriate performance measures that allow us to objectively assess protein inference approaches. This study describes an intuitive, generic and yet formal performance measure and demonstrates how it enables experimentalists to select an optimal protein inference strategy for a given collection of fragment ion spectra. We applied the performance measure to systematically explore the benefit of excluding possibly unreliable protein identifications, such as single-hit wonders. Therefore, we defined a family of protein inference engines by extending a simple inference engine by thousands of pruning variants, each excluding a different specified set of possibly unreliable identifications. We benchmarked these protein inference engines on several data sets representing different proteomes and mass spectrometry platforms. Optimally performing inference engines retained all high confidence spectral evidence, without posterior exclusion of any type of protein identifications. Despite the diversity of studied data sets consistently supporting this rule, other data sets might behave differently. In order to ensure maximal reliable proteome coverage for data sets arising in other studies we advocate abstaining from rigid protein inference rules, such as exclusion of single-hit wonders, and instead consider several protein inference approaches and assess these with respect to the presented performance measure in the specific application context.
Strain gage based determination of mixed mode SIFs
NASA Astrophysics Data System (ADS)
Murthy, K. S. R. K.; Sarangi, H.; Chakraborty, D.
2018-05-01
Accurate determination of mixed mode stress intensity factors (SIFs) is essential in understanding and analyzing mixed mode fracture of engineering components. Only a few strain gage based determinations of mixed mode SIFs are reported in the literature, and those do not provide any prescription for the radial locations of strain gages that ensures accuracy of measurement. The present investigation experimentally demonstrates the efficacy of a proposed methodology for the accurate determination of mixed mode I/II SIFs using strain gages. The proposed approach is based on the modified Dally and Berger mixed mode technique. Using the proposed methodology, appropriate gage locations (optimal locations) for a given configuration have also been suggested, ensuring accurate determination of mixed mode SIFs. Experiments have been conducted by locating the gages at optimal and non-optimal locations to study the efficacy of the proposed approach. The experimental results from the present investigation show that highly accurate SIFs (error as low as 0.064%) can be determined using the proposed approach if the gages are located at the suggested optimal locations. On the other hand, the results also show that very high errors (up to 212.22%) in measured SIFs are possible if the gages are located at non-optimal locations. The present work thus clearly substantiates the importance of knowing the optimal locations of the strain gages a priori for accurate determination of SIFs.
Economically viable large-scale hydrogen liquefaction
NASA Astrophysics Data System (ADS)
Cardella, U.; Decker, L.; Klein, H.
2017-02-01
The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.
Zhu, Yang; Morisato, Kei; Hasegawa, George; Moitra, Nirmalya; Kiyomura, Tsutomu; Kurata, Hiroki; Kanamori, Kazuyoshi; Nakanishi, Kazuki
2015-08-01
The optimization of a porous structure to ensure good separation performance is always a significant issue in high-performance liquid chromatography column design. Recently we reported the homogeneous embedment of Ag nanoparticles in a periodic mesoporous silica monolith and the application of such an Ag nanoparticle embedded silica monolith for the high-performance liquid chromatography separation of polyaromatic hydrocarbons. However, the separation performance remains to be improved, and the retention mechanism as compared with the Ag ion high-performance liquid chromatography technique still needs to be clarified. In this research, Ag nanoparticles were introduced into a macro/mesoporous silica monolith with optimized pore parameters for high-performance liquid chromatography separations. Baseline separation of benzene, naphthalene, anthracene, and pyrene was achieved, with a theoretical plate number for the analyte naphthalene of 36,000 m⁻¹. Its separation function was further extended to cis/trans isomers of aromatic compounds, where cis/trans stilbenes were chosen as a benchmark. Good separation of cis/trans-stilbene with a separation factor of 7 and a theoretical plate number of 76,000 m⁻¹ for cis-stilbene was obtained. The trans isomer, however, is retained more strongly, which contradicts the long-established retention rule of Ag ion chromatography. Such behavior of Ag nanoparticles embedded in a silica column can be attributed to the differences in the molecular geometric configuration of cis/trans stilbenes. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Advanced structural design for precision radial velocity instruments
NASA Astrophysics Data System (ADS)
Baldwin, Dan; Szentgyorgyi, Andrew; Barnes, Stuart; Bean, Jacob; Ben-Ami, Sagi; Brennan, Patricia; Budynkiewicz, Jamie; Chun, Moo-Young; Conroy, Charlie; Crane, Jeffrey D.; Epps, Harland; Evans, Ian; Evans, Janet; Foster, Jeff; Frebel, Anna; Gauron, Thomas; Guzman, Dani; Hare, Tyson; Jang, Bi-Ho; Jang, Jeong-Gyun; Jordan, Andres; Kim, Jihun; Kim, Kang-Min; Mendes de Oliveira, Claudia; Lopez-Morales, Mercedes; McCracken, Kenneth; McMuldroch, Stuart; Miller, Joseph; Mueller, Mark; Oh, Jae Sok; Ordway, Mark; Park, Byeong-Gon; Park, Chan; Park, Sung-Joon; Paxson, Charles; Phillips, David; Plummer, David; Podgorski, William; Seifahrt, Andreas; Stark, Daniel; Steiner, Joao; Uomoto, Alan; Walsworth, Ronald; Yu, Young-Sam
2016-07-01
The GMT-Consortium Large Earth Finder (G-CLEF) is an echelle spectrograph with precision radial velocity (PRV) capability that will be a first light instrument for the Giant Magellan Telescope (GMT). G-CLEF has a PRV precision goal of 40 cm/sec (10 cm/s for multiple measurements) to enable detection of Earth-like exoplanets in the habitable zones of sun-like stars. This precision is a primary driver of G-CLEF's structural design. Extreme stability is necessary to minimize image motions at the CCD detectors. Minute changes in temperature, pressure, and acceleration environments cause structural deformations, inducing image motions which degrade PRV precision. The instrument's structural design will ensure that the PRV goal is achieved under the environments G-CLEF will be subjected to as installed on the GMT azimuth platform, including: millikelvin (0.001 K) thermal soaks and gradients; 10 millibar changes in ambient pressure; and changes in acceleration due to instrument tip/tilt and telescope slewing. Carbon fiber/cyanate composite was selected for the optical bench structure in order to meet performance goals. Low coefficient of thermal expansion (CTE) and high stiffness-to-weight are key features of the composite optical bench design. Manufacturability and serviceability of the instrument are also drivers of the design. In this paper, we discuss analyses leading to technical choices made to minimize G-CLEF's sensitivity to changing environments. Finite element analysis (FEA) and image motion sensitivity studies were conducted to determine PRV performance under operational environments. We discuss the design of the optical bench structure to optimize stiffness-to-weight and minimize deformations due to inertial and pressure effects. We also discuss quasi-kinematic mounting of optical elements and assemblies, and optimization of these to ensure minimal image motion under the thermal, pressure, and inertial loads expected during PRV observations.
On the realization of the bulk modulus bounds for two-phase viscoelastic composites
NASA Astrophysics Data System (ADS)
Andreasen, Casper Schousboe; Andreassen, Erik; Jensen, Jakob Søndergaard; Sigmund, Ole
2014-02-01
Materials with good vibration damping properties and high stiffness are of great industrial interest. In this paper the bounds for viscoelastic composites are investigated, and material microstructures that realize the upper bound are obtained by topology optimization. These viscoelastic composites can be realized by additive manufacturing technologies followed by an infiltration process. Viscoelastic composites consisting of a relatively stiff elastic phase, e.g. steel, and a relatively lossy viscoelastic phase, e.g. silicone rubber, have non-connected stiff regions when optimized for maximum damping. To ensure manufacturability of such composites, the connectivity of the matrix is enforced by imposing a conductivity constraint, and the influence of this constraint on the bounds is discussed.
Methods to ensure optimal off-bottom and drill bit distance under pellet impact drilling
NASA Astrophysics Data System (ADS)
Kovalyov, A. V.; Isaev, Ye D.; Vagapov, A. R.; Urnish, V. V.; Ulyanova, O. S.
2016-09-01
The paper describes pellet impact drilling, which can be used to increase the drilling speed and the rate of penetration when drilling hard rock for various purposes. Pellet impact drilling involves rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The paper presents a survey of methods for ensuring an optimal off-bottom drill bit distance. The analysis of these methods shows that the issue is topical and requires further research.
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been a popular research topic because it is critical for a successful flight. Given that existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
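For readers unfamiliar with PIO, the sketch below shows its two canonical operators (map-and-compass, then landmark) on a toy cost function; the parameter values and flock sizes are illustrative, not those used in the paper.

```python
# Minimal pigeon-inspired optimization (PIO) sketch for minimizing a
# generic cost function over a continuous search space.
import numpy as np

def pio_minimize(cost, dim=4, n_pigeons=30, t1=50, t2=20, R=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n_pigeons, dim))
    V = np.zeros_like(X)
    # map-and-compass operator: velocities decay while drifting to the best
    for t in range(1, t1 + 1):
        best = X[np.argmin([cost(x) for x in X])]
        V = V * np.exp(-R * t) + rng.random((n_pigeons, dim)) * (best - X)
        X = X + V
    # landmark operator: repeatedly halve the flock, move toward its center
    for _ in range(t2):
        order = np.argsort([cost(x) for x in X])
        X = X[order][: max(2, len(X) // 2)]
        center = X.mean(axis=0)
        X = X + rng.random(X.shape) * (center - X)
    return X[np.argmin([cost(x) for x in X])]

sol = pio_minimize(lambda x: np.sum(x ** 2))  # toy quadratic cost
```

In the guidance application the decision variables would parameterize the reentry trajectory and the cost would be the aerodynamic heating rate plus terminal-error penalties.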
Ruan, D; Dong, P; Low, D; Sheng, K
2012-06-01
To develop and investigate a continuous path optimization methodology to traverse prescribed non-coplanar IMRT beams with variant SADs, by orchestrating the couch and gantry movement with zero collision, minimal patient-motion consequence and minimal machine travel time. We convert the given collision zone definition and the prescribed beam locations/angles to a tumor-centric coordinate system, and represent the traversing path as a continuous open curve. We proceed to optimize a composite objective function consisting of (1) a strong attraction energy to ensure all prescribed beams are en route, (2) a penalty for patient-motion-inducing couch motion, and (3) a penalty for travel-time-inducing overall path length. The feasibility manifold is defined as the complement of the collision zone, and the optimization is performed with a level set representation evolved with variational flows. The proposed method has been implemented and tested on clinically derived data. In the absence of any existing solutions for the same problem, we validate by (1) visually inspecting the generated path rendered in the 3D tumor-centric coordinates, and (2) comparing with a traveling-salesman (TSP) solution obtained by relaxing the variant-SAD and continuous collision-avoidance requirements. The proposed method generated delivery paths that are smooth and intuitively appealing. Under relaxed settings, our results outperform the generic TSP solutions and agree with specially tuned versions. We have proposed a novel systematic approach that automatically determines the continuous path to cover non-coplanar, varying-SAD IMRT beams. The proposed approach accommodates patient-specific collision zone definitions and ensures collision avoidance continuously. The differential penalty on couch and gantry motions allows a customizable tradeoff between patient geometry stability and delivery efficiency. This development paves the path to achieving safe, accurate and efficient non-coplanar IMRT delivery with the advanced robotic controls in new-generation C-arm systems, enabling practical harvesting of the dose benefit offered by non-coplanar, variant-SAD IMRT treatment. © 2012 American Association of Physicists in Medicine.
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on tool life for the closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created that contains the maximal scatter of the principal input parameters for the closed-die upsetting process that still ensures the desired dimensional accuracy of the product and optimal tool life. Application of the adjustment of the process input parameters is shown on the example of the mass production of an inner race of a homokinetic joint. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequences of the forming operations in the multistage forming of a starter barrel, the course of the resultant force during the working stroke is optimized.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.
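Since the coder builds on the lifting scheme, a toy one-dimensional predict/update lifting step may help; in the stereo coder the predict step would be replaced by disparity compensation plus luminance correction between the two views. This is a generic Haar-style lifting, not the authors' transform.

```python
# One level of a simple lifting wavelet: predict odd samples from even
# samples, then update the even samples to preserve the signal mean.
import numpy as np

def lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict step
    approx = even + detail / 2   # update step
    return approx, detail

def lifting_inverse(approx, detail):
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([4, 6, 5, 5, 7, 9, 8, 6])
a, d = lifting_forward(signal)
assert np.allclose(lifting_inverse(a, d), signal)  # perfect reconstruction
```

The appeal of lifting for joint stereo coding is exactly this structural invertibility: any predictor, including a disparity-compensated one, still yields a perfectly reversible transform.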
Wind offering in energy and reserve markets
NASA Astrophysics Data System (ADS)
Soares, T.; Pinson, P.; Morais, H.
2016-09-01
The increasing penetration of wind generation in power systems to fulfil the ambitious European targets will make wind power producers play an even more important role in the future power system. Wind power producers are being incentivized to participate in reserve markets to increase their revenue, since current wind turbine/farm technologies allow them to provide ancillary services. Thus, wind power producers need to develop offering strategies for participation in both energy and reserve markets, accounting for market rules while ensuring optimal revenue. We consider a proportional offering strategy to optimally decide upon participation in both markets by maximizing expected revenue from day-ahead decisions while accounting for estimated regulation costs for failing to provide the services. An evaluation of applying the same proportional split of energy and reserve in both the day-ahead and balancing markets is performed. A set of numerical examples illustrates the behavior of this strategy. An important conclusion is that the optimal split of the available wind power between energy and reserve depends strongly upon prices and penalties on both market trading floors.
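An illustrative sketch of a proportional offering strategy under scenario uncertainty; all prices, penalties, offer volumes, and the production distribution are invented for the example and are not the paper's data.

```python
# Choose the fraction of the offer committed as reserve to maximize
# expected revenue over wind-production scenarios, with shortfall penalties.
import numpy as np

rng = np.random.default_rng(42)
wind = rng.beta(4, 2, size=1000) * 100           # MW production scenarios
p_energy, p_reserve, penalty = 30.0, 12.0, 60.0  # EUR/MWh, assumed values

def expected_revenue(split, offer=80.0):
    """split: fraction of the day-ahead offer committed as reserve."""
    e_offer, r_offer = offer * (1 - split), offer * split
    delivered_e = np.minimum(wind * (1 - split), e_offer)
    delivered_r = np.minimum(wind * split, r_offer)
    shortfall = (e_offer - delivered_e) + (r_offer - delivered_r)
    rev = p_energy * delivered_e + p_reserve * delivered_r - penalty * shortfall
    return rev.mean()

splits = np.linspace(0, 0.5, 51)
best_split = splits[np.argmax([expected_revenue(s) for s in splits])]
```

Sweeping the penalty level in such a toy model reproduces the qualitative conclusion above: the optimal energy/reserve split moves with prices and regulation penalties.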
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
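A small sketch of the core decision step described above: probe a neighborhood of the current operating point for a direction that improves all quality criteria at once; if none exists, the point is (locally) Pareto optimal and a tradeoff must be selected. `quality` is a hypothetical stand-in for the measured criteria.

```python
# Random-search test for a simultaneously improving input direction.
import numpy as np

def quality(u):
    # stand-in for measured quality criteria (both to be maximized)
    return np.array([-(u[0] - 1) ** 2, -(u[1] + 0.5) ** 2])

def improving_direction(u, radius=0.1, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    q0 = quality(u)
    for _ in range(n_samples):
        step = rng.normal(size=u.size)
        step *= radius / np.linalg.norm(step)
        if np.all(quality(u + step) > q0):
            return step          # simultaneous improvement is possible
    return None                  # locally Pareto optimal: tradeoff needed

u = np.array([0.5, 0.0])
step = improving_direction(u)
```

When the function returns None, the operator's remaining choice is exactly case (2) in the abstract: pick one criterion to sacrifice and improve the others.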
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
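For reference, the standard union-bound (Boole) surrogate that the paper tightens can be written as follows; the notation is generic, not the paper's exact formulation.

```latex
% Joint chance constraint (left) and its Boole/union-bound surrogate (right):
% enforcing the right-hand condition guarantees the left, but conservatively.
\Pr\!\Big(\bigcap_{i=1}^{m}\{g_i(x,\xi)\le 0\}\Big) \;\ge\; 1-\epsilon
\qquad\Longleftarrow\qquad
\sum_{i=1}^{m}\Pr\big(g_i(x,\xi)>0\big) \;\le\; \epsilon
```

Any decision x satisfying the right-hand condition satisfies the joint chance constraint, but the summed individual violation budgets are conservative, which is what motivates a tighter bound.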
NASA Technical Reports Server (NTRS)
Holden, Kritina L.; Thompson, Shelby G.; Sandor, Aniko; McCann, Robert S.; Kaiser, Mary K.; Adelstein, Barnard D.; Begault, Durand R.; Beutter, Brent R.; Stone, Leland S.; Godfroy, Martine
2009-01-01
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew. In addition to addressing display design issues associated with information formatting, style, layout, and interaction, the Information Presentation DRP is also working toward understanding the effects of extreme environments encountered in space travel on information processing. Work is also in progress to refine human factors-based design tools, such as human performance modeling, that will supplement traditional design techniques and help ensure that optimal information design is accomplished in the most cost-efficient manner. The major areas of work, or subtasks, within the Information Presentation DRP for FY10 are: 1) Displays, 2) Controls, 3) Procedures and Fault Management, and 4) Human Performance Modeling. The poster will highlight completed and planned work for each subtask.
Switching State-Feedback LPV Control with Uncertain Scheduling Parameters
NASA Technical Reports Server (NTRS)
He, Tianyi; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming G.
2017-01-01
This paper presents a new method to design Robust Switching State-Feedback Gain-Scheduling (RSSFGS) controllers for Linear Parameter Varying (LPV) systems with uncertain scheduling parameters. The domain of the scheduling parameters is divided into several overlapped subregions, and hysteresis switching is performed among a family of simultaneously designed LPV controllers over the corresponding subregions with guaranteed H-infinity performance. The synthesis conditions are given in terms of Parameterized Linear Matrix Inequalities that guarantee both stability and performance in each subregion and on the associated switching surfaces. Switching stability is ensured by a descent parameter-dependent Lyapunov function on the switching surfaces. By solving the optimization problem, an RSSFGS controller can be obtained for each subregion. A numerical example is given to illustrate the effectiveness of the proposed approach over non-switching controllers.
Choi, Sae Byeol; You, Jiyoung; Choi, Sang Yong
2012-01-10
Traumatic pancreaticoduodenal injury still remains challenging, with high morbidity and mortality. Optimal management by performing simple and fast damage control surgery ensures better outcomes. A 36-year-old man was admitted with a combined pancreaticoduodenal injury after being assaulted. More than 80% of the duodenal circumference (first portion) was disrupted and the neck of the pancreas was transected. Primary repair of the duodenum and pancreaticogastrostomy were performed. The stump of the proximal pancreatic duct was also sutured. The patient developed an intra-abdominal abscess with pancreatic fistula that eventually resolved with conservative treatment. Pancreaticogastrostomy can be a treatment option for pancreatic transection. Rapid and simple damage control surgery with functional preservation of the organ will be beneficial for trauma patients.
NASA Astrophysics Data System (ADS)
Auluck, S. K. H.
2016-12-01
Recent work on the revised Gratton-Vargas model (Auluck, Phys. Plasmas 20, 112501 (2013); 22, 112509 (2015) and references therein) has demonstrated that there are some aspects of the Dense Plasma Focus (DPF) which are not sensitive to details of plasma dynamics and are well captured in an oversimplified model assumption that contains very little plasma physics. A hyperbolic conservation law formulation of DPF physics reveals the existence of a velocity threshold related to the specific energy of dissociation and ionization, above which the work done during shock propagation is adequate to ensure dissociation and ionization of the gas being ingested. These developments are utilized to formulate an algorithmic definition of DPF optimization that is valid in a wide range of applications, not limited to neutron emission. This involves determination of a set of DPF parameters, without performing iterative model calculations, that lead to transfer of all the energy from the capacitor bank to the plasma at the time of the current derivative singularity and conversion of a preset fraction of this energy into magnetic energy, while ensuring that the electromagnetic work done during propagation of the plasma remains adequate for dissociation and ionization of the neutral gas being ingested. Such a universal optimization criterion is expected to facilitate progress in new areas of DPF research that include production of short-lived radioisotopes of possible use in medical diagnostics, generation of fusion energy from aneutronic fuels, and applications in nanotechnology, radiation biology, and materials science. These phenomena are expected to be optimized for fill gases of different kinds and in different ranges of mass density compared to the devices constructed for neutron production using empirical rules of thumb. A universal scaling theory of DPF design optimization is proposed and illustrated for designing devices working at one or two orders of magnitude higher pressure of deuterium than the current practice of designs optimized at pressures less than 10 mbar of deuterium. These examples show that the upper limit for operating pressure is of technological (and not physical) origin.
Design of planar microcoil-based NMR probe ensuring high SNR
NASA Astrophysics Data System (ADS)
Ali, Zishan; Poenar, D. P.; Aditya, Sheel
2017-09-01
A microNMR probe for ex vivo applications may consist of at least one microcoil, which can be used as both the oscillating magnetic field (MF) generator and the receiver coil, and a sample holder, with a volume in the range of nanoliters to microliters, placed near the microcoil. The signal-to-noise ratio (SNR) of such a probe is, however, dependent not only on its design but also on the measurement setup and the measured sample. This paper introduces a performance factor P that is independent of both the proton spin density in the sample and the external DC magnetic field, and which can thus assess the performance of the probe alone. First, two of the components of the P factor (the inhomogeneity factor K and the filling factor η) are defined, and an approach to calculate their values for different probe variants from electromagnetic simulations is devised. A criterion based on the dominant component of the magnetic field is then formulated to help designers optimize the sample volume, which also affects the performance of the probe, in order to obtain the best SNR for a given planar microcoil. Finally, the P factor values are compared between planar microcoils with different numbers of turns and conductor aspect ratios, and planar microcoils are also compared with conventional solenoids. These comparisons highlight which microcoil geometry and sample volume combination will ensure a high SNR under any external setup.
Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Murthy, P. L. N.; Morel, M.
1990-01-01
A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.
Topology optimization of hyperelastic structures using a level set method
NASA Astrophysics Data System (ADS)
Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.
2017-12-01
Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.
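For concreteness, the Mooney-Rivlin model referenced above is commonly written, in its incompressible two-parameter form, as shown below; the notation follows standard continuum mechanics conventions rather than anything specific to the paper.

```latex
% Two-parameter incompressible Mooney-Rivlin strain energy density, with
% I_1, I_2 the first two invariants of the right Cauchy-Green tensor
% C = F^T F and material constants C_{10}, C_{01}:
W = C_{10}\,(I_1 - 3) + C_{01}\,(I_2 - 3)
```

Setting C01 = 0 recovers the neo-Hookean model, which is why Mooney-Rivlin is a natural default for the large-deformation topology optimization described here.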
Data-centric multiobjective QoS-aware routing protocol for body sensor networks.
Razzaque, Md Abdur; Hong, Choong Seon; Lee, Sungwon
2011-01-01
In this paper, we address the Quality-of-Service (QoS)-aware routing issue for Body Sensor Networks (BSNs) in the delay and reliability domains. We propose a data-centric multiobjective QoS-aware routing protocol, called DMQoS, which enables the system to achieve customized QoS services for each traffic category, differentiated according to the generated data types. It uses a modular design architecture wherein different units operate in coordination to provide multiple QoS services. These units exploit the geographic locations and QoS performance of the neighbor nodes and implement localized hop-by-hop routing. Moreover, the protocol ensures an (almost) homogeneous energy dissipation rate for all routing nodes in the network through multiobjective lexicographic optimization-based geographic forwarding. We have performed extensive simulations of the proposed protocol, and the results show that DMQoS achieves significant performance improvements over several state-of-the-art approaches.
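A hedged sketch of lexicographic next-hop selection of the kind described: candidate neighbors are filtered objective by objective in priority order, with later objectives only breaking ties. The field names and priority order are illustrative, not DMQoS's exact formulation.

```python
# Lexicographic neighbor selection: maximize geographic progress first,
# then minimize delay, then maximize residual energy to break ties.
def pick_next_hop(neighbors, tol=1e-6):
    """neighbors: list of dicts with 'progress', 'delay', 'energy' keys."""
    keys = [
        lambda n: n["progress"],   # primary objective
        lambda n: -n["delay"],     # secondary (negated to maximize)
        lambda n: n["energy"],     # tertiary tie-breaker
    ]
    candidates = list(neighbors)
    for key in keys:
        best = max(key(n) for n in candidates)
        candidates = [n for n in candidates if key(n) >= best - tol]
        if len(candidates) == 1:
            break
    return candidates[0]

hop = pick_next_hop([
    {"progress": 12.0, "delay": 5.0, "energy": 0.7},
    {"progress": 12.0, "delay": 3.0, "energy": 0.4},
])  # ties on progress, so the lower-delay neighbor wins
```

The appeal of the lexicographic ordering is that hard priorities (e.g., reliability before energy balance) are respected exactly, rather than traded off through ad hoc weights.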
Studies of turbulence models in a computational fluid dynamics model of a blood pump.
Song, Xinwei; Wood, Houston G; Day, Steven W; Olsen, Don B
2003-10-01
Computational fluid dynamics (CFD) is used widely in the design of rotary blood pumps. The choice of turbulence model is not obvious and plays an important role in the accuracy of CFD predictions. TASCflow (ANSYS Inc., Canonsburg, PA, U.S.A.) has been used to perform CFD simulations of blood flow in a centrifugal left ventricular assist device; a k-epsilon model with near-wall functions was used in the initial numerical calculation. To improve the simulation, local grids with a spacing distribution suited to the k-omega model were used. Iterations were performed to optimize the grid distribution and turbulence modeling and to predict flow performance more accurately compared to experimental data. A comparison with experimental measurements of the flow field obtained by particle image velocimetry shows that the k-omega model gives better agreement than the k-epsilon model does, especially in the near-wall regions.
Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching.
Kasiiti, Noah; Wawira, Judy; Purkayastha, Saptarshi; Were, Martin C
2017-01-01
Unique patient identification within health services is an operational challenge in healthcare settings. Key identifiers, such as patient names, hospital identification numbers, national ID, and birth date, are often inadequate for ensuring unique patient identification. In addition, approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, candidate scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. Performance analysis of eight different scanners was carried out using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.
EmptyHeaded: A Relational Engine for Graph Processing
Aberger, Christopher R.; Tu, Susan; Olukotun, Kunle; Ré, Christopher
2016-01-01
There are two types of high-performance graph processing engines: low- and high-level engines. Low-level engines (Galois, PowerGraph, Snap) provide optimized data structures and computation models but require users to write low-level imperative code, hence ensuring that efficiency is the burden of the user. In high-level engines, users write in query languages like datalog (SociaLite) or SQL (Grail). High-level engines are easier to use but are orders of magnitude slower than the low-level graph engines. We present EmptyHeaded, a high-level engine that supports a rich datalog-like query language and achieves performance comparable to that of low-level engines. At the core of EmptyHeaded’s design is a new class of join algorithms that satisfy strong theoretical guarantees but have thus far not achieved performance comparable to that of specialized graph processing engines. To achieve high performance, EmptyHeaded introduces a new join engine architecture, including a novel query optimizer and data layouts that leverage single-instruction multiple data (SIMD) parallelism. With this architecture, EmptyHeaded outperforms high-level approaches by up to three orders of magnitude on graph pattern queries, PageRank, and Single-Source Shortest Paths (SSSP) and is an order of magnitude faster than many low-level baselines. We validate that EmptyHeaded competes with the best-of-breed low-level engine (Galois), achieving comparable performance on PageRank and at most 3× worse performance on SSSP. PMID:28077912
Optimisation in the Design of Environmental Sensor Networks with Robustness Consideration
Budi, Setia; de Souza, Paulo; Timms, Greg; Malhotra, Vishv; Turner, Paul
2015-01-01
This work proposes the design of Environmental Sensor Networks (ESN) through balancing robustness and redundancy. An Evolutionary Algorithm (EA) is employed to find the optimal placement of sensor nodes in the Region of Interest (RoI). Data quality issues are introduced to simulate their impact on the performance of the ESN. The Spatial Regression Test (SRT) is also utilised to promote robustness in the data quality of the designed ESN. The proposed method provides high network representativeness (fit for purpose) with minimum sensor redundancy (cost), and ensures robustness by enabling the network to continue to achieve its objectives when some sensors fail. PMID:26633392
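The sketch below shows an EA of the flavor described, evolving node coordinates in a unit-square RoI to balance coverage (representativeness) against clustering (redundancy); the fitness weights, population sizes, and coverage radius are invented for illustration.

```python
# Toy evolutionary algorithm for sensor placement in a unit-square RoI.
import numpy as np

rng = np.random.default_rng(7)
targets = rng.random((200, 2))          # points the ESN should represent

def fitness(nodes, r=0.15):
    d = np.linalg.norm(targets[:, None, :] - nodes[None, :, :], axis=2)
    coverage = (d.min(axis=1) < r).mean()          # representativeness
    pair = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    crowding = (pair[np.triu_indices(len(nodes), 1)] < r).mean()
    return coverage - 0.3 * crowding               # penalize redundancy

def evolve(n_nodes=10, pop=40, gens=100, sigma=0.05):
    population = [rng.random((n_nodes, 2)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop // 4]               # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            p = parents[rng.integers(len(parents))]
            children.append(np.clip(p + rng.normal(0, sigma, p.shape), 0, 1))
        population = parents + children
    return max(population, key=fitness)

best_layout = evolve()
```

A robustness-oriented variant would evaluate fitness with random sensor failures injected, so that layouts which degrade gracefully are preferred, in the spirit of the SRT-based design above.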
Scanner focus metrology and control system for advanced 10nm logic node
NASA Astrophysics Data System (ADS)
Oh, Junghun; Maeng, Kwang-Seok; Shin, Jae-Hyung; Choi, Won-Woong; Won, Sung-Keun; Grouwstra, Cedric; El Kodadi, Mohamed; Heil, Stephan; van der Meijden, Vidar; Hong, Jong Kyun; Kim, Sang-Jin; Kwon, Oh-Sung
2018-03-01
Immersion lithography is being extended beyond the 10-nm node, and the lithography performance requirement needs to be tightened further to ensure good yield. Among other factors, good on-product focus control with accurate and dense metrology measurements is essential to enable this. In this paper, we present new solutions that enable on-product focus monitoring and control (mean and uniformity) suitable for a high volume manufacturing environment. We introduce the concept of pure focus and its role in focus control through the imaging optimizer scanner correction interface. The results show that focus uniformity can be improved by up to 25%.
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitored area, this method deploys many cameras, which leads to overlapping coverage, duplicated monitoring and higher costs, resulting in waste. In order to reduce the monitoring cost and solve the difficult problem of finding, identifying and tracking low altitude, slow speed and small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method has good performance.
Jeong, Inyoung; Park, Yun Hee; Bae, Seunghwan; Park, Minwoo; Jeong, Hansol; Lee, Phillip; Ko, Min Jae
2017-10-25
The electron transport layer (ETL) is a key component of perovskite solar cells (PSCs) and must provide efficient electron extraction and collection while minimizing charge recombination at interfaces in order to ensure high performance. Conventional bilayered TiO2 ETLs fabricated by depositing compact TiO2 (c-TiO2) and mesoporous TiO2 (mp-TiO2) in sequence exhibit resistive losses due to the contact resistance at the c-TiO2/mp-TiO2 interface and the series resistance arising from the intrinsically low conductivity of TiO2. Herein, to minimize such resistive losses, we developed a novel ETL consisting of an ultrathin c-TiO2 layer hybridized with mp-TiO2, which is fabricated by performing one-step spin-coating of a mp-TiO2 solution containing a small amount of titanium diisopropoxide bis(acetylacetonate) (TAA). By using electron microscopies and elemental mapping analysis, we establish that the optimal concentration of TAA produces an ultrathin blocking layer with a thickness of ∼3 nm and ensures that the mp-TiO2 layer has a suitable porosity for efficient perovskite infiltration. We compare PSCs based on mesoscopic ETLs with and without compact layers to determine the role of the hole-blocking layer in their performances. The hybrid ETLs exhibit enhanced electron extraction and reduced charge recombination, resulting in better photovoltaic performances and reduced hysteresis of PSCs compared to those with conventional bilayered ETLs.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of a reference standard during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia Miltiorrhiza and its preparations using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility to obtain, abundance in samples, chemical stability, accuracy, precision and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, with rosmarinic acid, at about 79.8% of its priority score, as the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It is an effective and practical tool for the selection of reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
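A compact sketch of the standard AHP priority computation (the principal eigenvector of the pairwise comparison matrix plus a consistency check) with numpy; the 3x3 matrix below is invented for brevity, whereas the paper compares six criteria.

```python
# AHP priorities: normalized principal eigenvector of the pairwise
# comparison matrix, plus Saaty's consistency ratio check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # pairwise comparisons (Saaty scale)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()             # criterion weights, sum to 1

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1) # consistency index
cr = ci / 0.58                       # Saaty's random index for n=3 is 0.58
assert cr < 0.1, "pairwise judgments are too inconsistent"
```

In the paper's setting, alternatives are scored against each criterion the same way and the criterion weights aggregate those scores into one overall priority per candidate reference standard.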
Measuring Dark Matter With MilkyWay@home
NASA Astrophysics Data System (ADS)
Shelton, Siddhartha; Newberg, Heidi Jo; Arsenault, Matthew; Bauer, Jacob; Desell, Travis; Judd, Roland; Magdon-Ismail, Malik; Newby, Matthew; Rice, Colin; Thompson, Jeffrey; Ulin, Steve; Weiss, Jake; Widrow, Larry
2016-01-01
We perform N-body simulations of two-component dwarf galaxies (dark matter and stars follow separate distributions) falling into the Milky Way and the formation of tidal streams. Using MilkyWay@home, we optimize the parameters of the progenitor dwarf galaxy and the orbital time to fit the simulated distribution of stars along the tidal stream to the observed distribution of stars. Our initial dwarf galaxy models are constructed with two separate Plummer profiles (one for the dark matter and one for the baryonic matter), sampled using a generalized distribution function for spherically symmetric systems. We perform rigorous testing to ensure that our simulated galaxies are in virial equilibrium and stable over the simulation time. The N-body simulations are performed using a Barnes-Hut tree algorithm. Optimization traverses the likelihood surface over our six model parameters using particle swarm and differential evolution methods. We have generated simulated data with known model parameters that are similar to those of the Orphan Stream. We show that we are able to recover the majority of our model parameters, and most importantly the mass-to-light ratio of the now-disrupted progenitor galaxy, using MilkyWay@home. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
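The optimization step can be imitated in miniature with an off-the-shelf differential evolution routine; the sketch below fits a toy two-parameter "stream" likelihood with scipy, standing in for the six-parameter, N-body-backed likelihood evaluated on MilkyWay@home. All data and bounds are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the stream-fitting problem: recover the (mu, sigma) of a
# star distribution by matching histograms. The real MilkyWay@home likelihood
# wraps full N-body runs over six parameters; this is only a shape sketch.
rng = np.random.default_rng(0)
observed = np.histogram(rng.normal(0.4, 1.2, 5000), bins=30, range=(-5, 5))[0]

def neg_log_likelihood(theta):
    mu, sigma = theta
    sim = rng.normal(mu, sigma, 5000)                    # "simulation" output
    model = np.histogram(sim, bins=30, range=(-5, 5))[0]
    p = (model + 1) / (model + 1).sum()                  # smoothed bin probs
    return -np.sum(observed * np.log(p))                 # multinomial NLL

bounds = [(-2.0, 2.0), (0.1, 3.0)]                       # hypothetical bounds
result = differential_evolution(neg_log_likelihood, bounds,
                                seed=1, polish=False)    # noisy objective
print(result.x)   # should land near (0.4, 1.2)
```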
NASA Astrophysics Data System (ADS)
Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal
2015-05-01
When running data-intensive applications on distributed computational resources, long I/O overheads may be observed as access to remotely stored data is performed. Latency and bandwidth can become the major limiting factors for overall computational performance and can reduce the CPU/wall-time ratio through excessive I/O wait. Building on our previous research, we propose a constraint-programming-based planner that schedules computational jobs and data placements (transfers) in a distributed environment in order to optimize resource utilization and reduce the overall processing completion time. The optimization is achieved by ensuring that none of the resources (network links, data storages, and CPUs) is oversaturated at any moment of time and either (a) that the data are pre-placed at the site where the job runs or (b) that the jobs are scheduled where the data are already present. Such an approach eliminates the idle CPU cycles occurring while a job waits for I/O from a remote site and would have wide application in the community. Our planner was evaluated and simulated based on data extracted from log files of the batch and data management systems of the STAR experiment. The results of the evaluation and the estimation of performance improvements are discussed in this paper.
Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions
Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin
2015-01-01
Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964
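A minimal sketch of the split-slice idea, assuming a linear-algebra view of the kernel fit: each slice's kernel is required to reproduce its own single-slice calibration data while mapping every other slice's data to zero, which is what suppresses inter-slice leakage. The matrix shapes and regularization below are illustrative, not the paper's implementation.

```python
import numpy as np

# A_list[j] : calibration source matrix built from single-slice data of slice j
# targets[j]: corresponding target samples for slice j
# Standard SG fits kernels on slice-collapsed data; the split-slice criterion
# instead demands W_s reproduce slice s and annihilate all other slices.
def fit_sp_sg_kernel(A_list, targets, s, lam=1e-6):
    """Least-squares kernel for slice s under the split-slice criterion."""
    A = np.vstack(A_list)
    b = np.concatenate([targets[j] if j == s else np.zeros(A_list[j].shape[0])
                        for j in range(len(A_list))])
    # Tikhonov-regularized normal equations for numerical stability.
    AtA = A.conj().T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.conj().T @ b)

# Toy dimensions (hypothetical): 3 simultaneous slices, 500 calibration rows,
# 64 kernel coefficients.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((500, 64)) for _ in range(3)]
targets = [rng.standard_normal(500) for _ in range(3)]
W0 = fit_sp_sg_kernel(A_list, targets, s=0)
```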
Cavitary Penetration of Levofloxacin among Patients with Multidrug-Resistant Tuberculosis
Barth, Aline B.; Vashakidze, Sergo; Nikolaishvili, Ketino; Sabulua, Irina; Tukvadze, Nestani; Bablishvili, Nino; Gogishvili, Shota; Singh, Ravi Shankar P.; Guarner, Jeannette; Derendorf, Hartmut; Peloquin, Charles A.; Blumberg, Henry M.
2015-01-01
A better understanding of second-line drug (SLD) pharmacokinetics, including cavitary penetration, may help optimize SLD dosing. Patients with pulmonary multidrug-resistant tuberculosis (MDR-TB) undergoing adjunctive surgery were enrolled in Tbilisi, Georgia. Serum was obtained at 0, 1, 4, and 8 h and at the time of cavitary removal to measure levofloxacin concentrations. After surgery, microdialysis was performed using the ex vivo cavity, and levofloxacin concentrations in the collected dialysate fluid were measured. Noncompartmental analysis was performed, and a cavitary-to-serum levofloxacin concentration ratio was calculated. Twelve patients received levofloxacin for a median of 373 days before surgery (median dose, 11.8 mg/kg). The median levofloxacin concentration in serum (Cmax) was 6.5 μg/ml, and it was <2 μg/ml in 3 (25%) patients. Among 11 patients with complete data, the median cavitary concentration of levofloxacin was 4.36 μg/ml (range, 0.46 to 8.82). The median cavitary/serum levofloxacin ratio was 1.33 (range, 0.63 to 2.36), and 7 patients (64%) had a ratio of >1. There was a significant correlation between serum and cavitary concentrations (r = 0.71; P = 0.01). Levofloxacin had excellent penetration into chronic cavitary TB lesions, and there was a good correlation between serum and cavitary concentrations. Optimizing serum concentrations will help ensure optimal cavitary concentrations of levofloxacin, which may enhance treatment outcomes. PMID:25779583
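The pharmacokinetic quantities quoted above (Cmax, the cavitary/serum ratio, the serum-cavitary correlation) reduce to a few lines of arithmetic; the sketch below runs the computations on made-up numbers, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Serum sampling times (h) and invented concentrations (ug/mL):
t = np.array([0.0, 1.0, 4.0, 8.0])
c = np.array([0.3, 6.1, 4.2, 2.0])
cmax, tmax = c.max(), t[c.argmax()]
auc_0_8 = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)   # trapezoidal AUC(0-8h)

# Hypothetical paired serum/cavitary values across patients:
serum = np.array([6.5, 4.1, 7.2, 3.8, 5.5])
cavity = np.array([7.1, 3.2, 8.8, 4.4, 6.9])
ratio = cavity / serum                     # cavitary/serum penetration ratio
r, p = pearsonr(serum, cavity)             # correlation, as reported in the paper
print(cmax, tmax, auc_0_8, np.median(ratio), r, p)
```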
Optimum design of a composite structure with ply-interleaving constraints
NASA Technical Reports Server (NTRS)
Wang, Bo Ping; Costin, Daniel P.
1990-01-01
The application of composite materials to aircraft construction has provided the designer with increased flexibility. The orientation of plies can be tailored to provide additional aeroelastic performance unobtainable with an isotropic material. A tailored laminate is made up of plies of several orientations, usually 0 deg, 45 deg, -45 deg, and 90 deg. The direction of the 0 deg plies does not need to be aligned with the leading edge but can be varied to obtain a wide variety of structural properties. Also, the number of plies of each orientation varies from one zone to another on the planform. Thus, a thick laminate with mainly 0 deg plies may form the root zone, and a thinner laminate with mainly +45 deg plies may form the leading edge zone. Tailored laminates were designed using complicated optimization programs. Unfortunately, many tailored designs must be modified before they are manufactured. The modification adds weight and decreases performance. One type of modification is ply interleaving, an overlap of plies between zones on the laminate. These interleaves are added to ensure that zones with varying ply percentages can be connected without loss of strength. In this paper, the constraints needed to eliminate interleaves in the laminate optimization process are described and implemented in a structural optimization problem. The method used has the potential to prevent changes to composite laminates late in the design cycle.
Optimized Li-Ion Electrolytes Containing Fluorinated Ester Co-Solvents
NASA Technical Reports Server (NTRS)
Prakash, G. K. Surya; Smart, Marshall; Smith, Kiah; Bugga, Ratnakumar
2010-01-01
A number of experimental lithium-ion cells, consisting of MCMB (meso-carbon microbeads) carbon anodes and LiNi(0.8)Co(0.2)O2 cathodes, have been fabricated with increased safety and expanded capability. These cells serve to verify and demonstrate the reversibility, low-temperature performance, and electrochemical aspects of each electrode as determined from a number of electrochemical characterization techniques. A number of Li-ion electrolytes possessing fluorinated ester co-solvents, namely trifluoroethyl butyrate (TFEB) and trifluoroethyl propionate (TFEP), were demonstrated to deliver good performance over a wide temperature range in experimental lithium-ion cells. The general approach taken in the development of these electrolyte formulations is to optimize the type and composition of the co-solvents in ternary and quaternary solutions, focusing upon adequate stability [i.e., EC (ethylene carbonate) content needed for anode passivation, and EMC (ethyl methyl carbonate) content needed for lowering the viscosity and widening the temperature range, while still providing good stability], enhancing the inherent safety characteristics (incorporation of fluorinated esters), and widening the temperature range of operation (the use of both fluorinated and non-fluorinated esters). Furthermore, the use of electrolyte additives, such as VC (vinylene carbonate) [a solid electrolyte interface (SEI) promoter] and DMAc (a thermal stabilizing additive), provides enhanced high-temperature life characteristics. Multi-component electrolyte formulations enhance performance over a temperature range of -60 to +60 C. Given the need for greater safety in the use of these batteries, flammability was a consideration. One of the solvents investigated, TFEB, had the best performance, with improved low-temperature capability and high-temperature resilience. This work optimized the use of TFEB as a co-solvent by developing multi-component electrolytes that also contain non-halogenated esters, film-forming additives, thermal stabilizing additives, and flame retardant additives. Further optimization of these electrolyte formulations is anticipated to yield improved performance. It is also anticipated that much improved performance will be demonstrated once these electrolyte solutions are incorporated into hermetically sealed, large-capacity prototype cells, especially if effort is devoted to ensuring that all electrolyte components are highly pure.
Spatial frequency performance limitations of radiation dose optimization and beam positioning
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.
2018-06-01
The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
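The paper's central observation can be demonstrated in one dimension: a dose formed by superposing shifted copies of a kernel is a convolution, so its spectrum is the product of the kernel and beam-weight spectra and can contain no frequency the kernel lacks. A small numpy sketch with an illustrative Gaussian kernel follows.

```python
import numpy as np

# Dose built by superposing shifted copies of a kernel k equals k * w
# (convolution with the beam-weight impulse train), so spectrally
# |D| = |K||W|: frequencies absent from the kernel cannot be delivered.
n, dx = 1024, 0.1                        # grid points, spacing (cm)
x = (np.arange(n) - n // 2) * dx
k = np.exp(-x**2 / (2 * 0.5**2))         # Gaussian "beamlet" dose kernel
w = np.zeros(n); w[200:824:32] = 1.0     # beam weights (impulse train)
dose = np.convolve(w, k, mode="same")    # superposed delivery

K, D = np.abs(np.fft.rfft(k)), np.abs(np.fft.rfft(dose))
# Wherever the kernel spectrum vanishes, the dose spectrum vanishes too:
print(np.all(D[K < 1e-6] < 1e-3 * D.max()))
```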
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine the optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, and 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving the calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing a systolic blood pressure of 208 mmHg, a critical oxygen delivery of 4.03 mL/min, and an M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
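A minimal sketch of the Derringer-style desirability aggregation described above: each response is mapped to a [0, 1] desirability that peaks at its target, and the overall desirability is the geometric mean. The target and limit values below are illustrative, not the study's settings.

```python
import numpy as np

# Desirability aggregation for simultaneous optimization of SBP, critical
# O2 delivery, and body mass. Targets/limits are invented for illustration.
def d_target(y, lo, target, hi):
    """Desirability peaking at the target and falling to 0 at the limits."""
    if y <= lo or y >= hi:
        return 0.0
    return (y - lo) / (target - lo) if y <= target else (hi - y) / (hi - target)

def overall_desirability(ys, specs):
    ds = [d_target(y, *spec) for y, spec in zip(ys, specs)]
    return float(np.prod(ds) ** (1.0 / len(ds)))   # geometric mean

specs = [(150, 208, 250),   # systolic BP, mmHg
         (2.0, 4.0, 6.0),   # critical O2 delivery, mL/min
         (200, 290, 350)]   # body mass, g
print(overall_desirability([208, 4.03, 290], specs))  # ~1 when all at target
```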
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near-)optimality guarantee obtained by using the KLSTD-Q algorithm for high-precision policy evaluation. The other is the automatic feature selection provided by the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
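The ALD-based sparsification at the heart of KLSPI admits a compact sketch: a sample joins the kernel dictionary only if its feature-space projection error onto the current dictionary exceeds a threshold. The Gaussian kernel and threshold here are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def ald_dictionary(samples, nu=0.1):
    """Add a sample only if it is not approximately linearly dependent
    on the current dictionary in feature space (ALD residual > nu)."""
    D = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(xi, xj) for xj in D] for xi in D])
        k = np.array([rbf(xi, x) for xi in D])
        a = np.linalg.solve(K + 1e-9 * np.eye(len(D)), k)
        delta = rbf(x, x) - k @ a          # squared projection residual
        if delta > nu:
            D.append(x)
    return D

rng = np.random.default_rng(0)
dictionary = ald_dictionary(list(rng.standard_normal((200, 2))))
print(len(dictionary))   # far fewer than 200 samples are retained
```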
A Queueing Approach to Optimal Resource Replication in Wireless Sensor Networks
2009-04-29
...replication strategies in wireless sensor networks. The model can be used to minimize either the total transmission rate of the network (an energy-centric approach) or to ensure the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly...
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming, and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of the two distributions. A simulated annealing algorithm is used to find optimal values of the input variables by minimizing or maximizing the expected cross entropy. The data measured after testing with the optimum input values are used to update the distribution of the experimental output using Bayes' theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides an effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
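A minimal sketch of the design loop under stated assumptions: prior samples of the model parameters induce a prediction distribution at each candidate input, a Monte Carlo estimate of the cross entropy serves as the utility, and a stock annealer (scipy's dual_annealing, standing in for the paper's simulated annealing) searches the input space. The model, prior, and noise level are toy stand-ins.

```python
import numpy as np
from scipy.optimize import dual_annealing
from scipy.stats import norm

def model(x, theta):
    # Toy computational model; NOT the bolted-joint or rotor-hub model.
    return theta[0] * np.sin(x) + theta[1] * x

rng = np.random.default_rng(0)
theta_samples = rng.normal([1.0, 0.5], [0.2, 0.1], size=(2000, 2))  # prior draws

def neg_cross_entropy(x):
    x = float(x[0])
    preds = model(x, theta_samples.T)             # prediction distribution at x
    mu_e = preds.mean()
    sd_e = np.sqrt(preds.var() + 0.05 ** 2)       # + assumed measurement noise
    # H(p_model, p_experiment) ~ -E_p_model[log p_experiment]
    ce = -np.mean(norm.logpdf(preds, mu_e, sd_e))
    return -ce                                    # annealer minimizes

res = dual_annealing(neg_cross_entropy, bounds=[(0.0, 10.0)], seed=1)
print(res.x)   # candidate input for the next validation experiment
```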
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haman, R.L.; Kerry, T.G.; Jarc, C.A.
1996-12-31
A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions on capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX® Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of the CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NOx emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NOx from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning first will be used as a least-cost option to drive NOx to its lowest level with operating adjustments. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.
Zhao, Xiuli; Asante Antwi, Henry; Yiranbon, Ethel
2014-01-01
The idea of aggregating information is clearly recognizable in the daily lives of all entities, whether as individuals or as a group; since time immemorial, corporate organizations, governments, and individuals as economic agents have aggregated information to formulate decisions. Energy planning represents an investment-decision problem where information needs to be aggregated from credible sources to predict both demand and supply of energy. The methods for doing so vary, ranging from the use of portfolio theory to managing risk and maximizing portfolio performance under a variety of unpredictable economic outcomes. The future demand for energy, and the need to use solar energy to avoid a future energy crisis in Jiangsu province, China, require energy planners in the province to abandon their reliance on traditional, "least-cost," stand-alone technology cost estimates and instead evaluate conventional and renewable energy supply on the basis of a hybrid of optimization models in order to ensure effective and reliable supply. Our task in this research is to propose measures towards optimal solar energy forecasting by employing a systematic optimization approach based on a hybrid of weather and energy forecast models. After giving an overview of the sustainable energy issues in China, we review and classify the various models that existing studies have used to predict the influence of weather on the output of solar energy production units. Further, we evaluate the performance of an exemplary ensemble model which combines the forecast output of two popular statistical prediction methods using a dynamic weighting factor. PMID:24511292
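The dynamically weighted ensemble evaluated in the paper can be sketched as follows, assuming inverse-squared-error weights over a sliding window; the forecasts, window length, and weighting rule are illustrative stand-ins for the paper's exemplary model.

```python
import numpy as np

# Combine two solar-output forecasts with weights set by their recent
# inverse squared errors over a sliding window. All data are synthetic.
def dynamic_ensemble(f1, f2, obs, window=24):
    out = np.empty_like(obs)
    for t in range(len(obs)):
        s = slice(max(0, t - window), t)
        e1 = np.mean((f1[s] - obs[s]) ** 2) if t else 1.0
        e2 = np.mean((f2[s] - obs[s]) ** 2) if t else 1.0
        w1 = (1 / e1) / (1 / e1 + 1 / e2)       # dynamic weighting factor
        out[t] = w1 * f1[t] + (1 - w1) * f2[t]
    return out

rng = np.random.default_rng(0)
truth = 100 * np.clip(np.sin(np.linspace(0, 20 * np.pi, 480)), 0, None)
fc1 = truth + rng.normal(0, 8, truth.size)      # e.g. a weather-model forecast
fc2 = truth + rng.normal(0, 15, truth.size)     # e.g. a statistical forecast
blend = dynamic_ensemble(fc1, fc2, truth)
print(np.std(blend - truth) < np.std(fc2 - truth))   # blend beats the worse model
```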
[Microbiological Surveillance of Measles and Rubella in Spain. Laboratory Network].
Echevarría, Juan Emilio; Fernández García, Aurora; de Ory, Fernando
2015-01-01
The laboratory is a fundamental component in the surveillance of measles and rubella. Cases need to be properly confirmed to ensure an accurate estimation of the incidence. Strains should be genetically characterized to establish the transmission pattern of these viruses; frequently, outbreaks and transmission chains can be fully discriminated only after such characterization. Finally, the susceptibility of the population is estimated on the basis of sero-prevalence surveys. Detection of the specific IgM response is the basis of the laboratory diagnosis of these diseases. It should be complemented with genomic detection by RT-PCR to reach optimal efficiency, especially when sampling is performed early in the course of the disease. Genotyping is performed by genomic sequencing according to the reference protocols of the WHO. Laboratory surveillance of measles and rubella in Spain is organized as a network of regional laboratories with different capabilities. The National Center of Microbiology, as the National Reference Laboratory (NRL), supports the regional laboratories, ensuring the availability of all required techniques in the whole country and monitoring the quality of the results. The NRL is currently working on the implementation of new molecular techniques, based on the analysis of genomic hypervariable regions, for strain characterization at the sub-genotypic level and their use in surveillance.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
Design optimization studies using COSMIC NASTRAN
NASA Technical Reports Server (NTRS)
Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.
1993-01-01
The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.
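The kind of problem OPTNAST addresses, sized down to a few lines: minimize structural weight over member areas subject to a stress constraint. The sketch uses scipy's SLSQP rather than the NASTRAN-coupled codes of the paper, and the two-bar geometry, load, and allowable stress are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Two-bar truss sizing: minimize weight subject to a stress limit.
# Geometry, load, material density, and allowable are illustrative.
L, P, rho, sigma_allow = 1.0, 1.0e4, 2700.0, 150.0e6   # m, N, kg/m^3, Pa

def weight(a):                      # a = cross-sectional areas of the 2 bars
    return rho * L * (a[0] + a[1])

def stress_margin(a):               # each 45-deg bar carries P/sqrt(2)
    force = P / np.sqrt(2.0)
    return sigma_allow - force / a  # vector constraint, must stay >= 0

res = minimize(weight, x0=[1e-3, 1e-3],
               constraints=[{"type": "ineq", "fun": stress_margin}],
               bounds=[(1e-6, 1e-2)] * 2, method="SLSQP")
print(res.x)   # areas shrink until the stress hits the allowable
```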
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Hosseini, Seyed Ali; Shah, Nilay
2011-01-01
There is a large body of literature regarding the choice and optimization of different processes for converting feedstock to bioethanol and bio-commodities; moreover, there has been some reasonable technological development in bioconversion methods over the past decade. However, the eventual cost and other important metrics relating to sustainability of biofuel production will be determined not only by the performance of the conversion process, but also by the performance of the entire supply chain from feedstock production to consumption. Moreover, in order to ensure world-class biorefinery performance, both the network and the individual components must be designed appropriately, and allocation of resources over the resulting infrastructure must effectively be performed. The goal of this work is to describe the key challenges in bioenergy supply chain modelling and then to develop a framework and methodology to show how multi-scale modelling can pave the way to answer holistic supply chain questions, such as the prospects for second generation bioenergy crops. PMID:22482032
Numerical and Experimental Study on Hydrodynamic Performance of A Novel Semi-Submersible Concept
NASA Astrophysics Data System (ADS)
Gao, Song; Tao, Long-bin; Kou, Yu-feng; Lu, Chao; Sun, Jiang-long
2018-04-01
Multiple Column Platform (MCP) semi-submersible is a newly proposed concept which differs from conventional semi-submersibles in featuring a centre column and a middle pontoon. It is paramount to ensure its structural reliability and safe operation at sea, and a rigorous investigation is conducted to examine the hydrodynamic and structural performance of the novel structural concept. In this paper, numerical and experimental studies on the hydrodynamic performance of the MCP are performed. Numerical simulations are conducted in both the frequency and time domains based on 3D potential theory. The numerical models are validated by experimental measurements obtained from extensive sets of model tests under both regular wave and irregular wave conditions. Moreover, a comparative study of the MCP and two conventional semi-submersibles is carried out using numerical simulation. Specifically, the hydrodynamic characteristics, including hydrodynamic coefficients, natural periods, motion response amplitude operators (RAOs), and mooring line tension, are fully examined. The present study proves the feasibility of the novel MCP and demonstrates its potential for optimization in future studies.
Reichert, Bárbara; de Kok, André; Pizzutti, Ionara Regina; Scholten, Jos; Cardoso, Carmem Dickow; Spanjer, Martien
2018-04-03
This paper describes the optimization and validation of an acetonitrile-based method for the simultaneous extraction of multiple pesticides and mycotoxins from raw coffee beans, followed by LC-ESI-MS/MS determination. Before extraction, the raw coffee samples were milled and then slurried with water. The slurried samples were spiked with two separate standard solutions, one containing 131 pesticides and a second with 35 mycotoxins, which were divided into three groups of different relative concentration levels. Optimization of the QuEChERS approach included performance tests with acetonitrile acidified with acetic acid or formic acid, with or without buffer, and with or without clean-up of the extracts before LC-ESI-MS/MS analysis. For the clean-up step, seven d-SPE sorbents and various mixtures of them were evaluated. After method optimization, a complete validation study was carried out to ensure adequate performance of the extraction and chromatographic methods. The samples were spiked at three concentration levels with both mycotoxins and pesticides (six replicates at each level, n = 6) and then submitted to the extraction procedure. Before LC-ESI-MS/MS analysis, the acetonitrile extracts were diluted 2-fold with methanol in order to improve the chromatographic performance of the early-eluting polar analytes. Calibration standard solutions were prepared in organic solvent and in blank coffee extract at 7 concentration levels and analyzed 6 times each. The method was assessed for accuracy (recovery %), precision (RSD%), selectivity, linearity (r2), limit of quantification (LOQ) and matrix effects (%). Copyright © 2017 Elsevier B.V. All rights reserved.
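The accuracy and precision figures of merit named above reduce to simple statistics over the spiked replicates; a sketch with invented numbers follows (the acceptance ranges cited in the comment follow common pesticide-residue guidance, not necessarily this paper's criteria).

```python
import numpy as np

# Recovery (%) and RSD (%) from n = 6 spiked replicates at one level.
spiked_level = 10.0                                        # ug/kg, nominal
measured = np.array([9.4, 10.2, 9.8, 10.5, 9.1, 10.0])     # invented replicates

recovery = 100.0 * measured.mean() / spiked_level
rsd = 100.0 * measured.std(ddof=1) / measured.mean()
print(f"recovery = {recovery:.1f}%  RSD = {rsd:.1f}%")
# Typical acceptance (e.g., SANTE guidance): recovery 70-120%, RSD <= 20%.
```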
NASA Astrophysics Data System (ADS)
Mazoochi, M.; Pourmina, M. A.; Bakhshi, H.
2015-03-01
The core aim of this work is the maximization of the achievable data rate of the secondary user pairs (SU pairs), while ensuring the QoS of the primary users (PUs). All users are assumed to be equipped with multiple antennas. It is assumed that when PUs are present, direct communication between SU pairs introduces intolerable interference to the PUs; the SUs therefore transmit using the cooperation of other SUs and avoid the direct channel. In brief, an adaptive cooperative strategy for multiple-input/multiple-output (MIMO) cognitive radio networks is proposed. In the presence of PUs, the issue of joint relay selection and power allocation in Underlay MIMO Cooperative Cognitive Radio Networks (U-MIMO-CCRN) is addressed. The optimal approach for determining the power allocation and the cooperating SU is proposed. Furthermore, the outage probability of the proposed communication protocol is derived. Due to the high complexity of the optimal approach, a low-complexity approach is also proposed and its performance is evaluated using simulations. The simulation results reveal that the performance loss due to the low-complexity approach is only about 14%, while the complexity is greatly reduced.
NASA Astrophysics Data System (ADS)
Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu
2016-04-01
The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity of the Romanian territory, which is dominated by intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and an advanced seismic acquisition system are crucial to achieving this objective. The software package used to perform the automatic real-time locations is Seiscomp3. An accurate choice of the Seiscomp3 setting parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding any false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA ratios, filters applied to the waveforms) on a data set of earthquakes representative of the local seismicity. The results are compared with the locations from the Romanian Catalogue ROMPLUS.
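An STA/LTA trigger of the kind being tuned can be sketched directly in numpy; the window lengths, sampling rate, and trigger threshold below are illustrative values one would sweep against a reference catalogue, not NIEP's operational settings.

```python
import numpy as np

def sta_lta(trace, fs, sta_s=1.0, lta_s=30.0):
    """Short-term / long-term average ratio on signal energy (simplified)."""
    sq = trace.astype(float) ** 2                 # signal energy
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    csum = np.cumsum(sq)
    sta = (csum[nsta:] - csum[:-nsta]) / nsta     # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta     # long-term average
    m = min(len(sta), len(lta))                   # align the tails
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

rng = np.random.default_rng(0)
noise = rng.standard_normal(100 * 100)                 # 100 s of noise at 100 Hz
noise[6000:6200] += 8.0 * rng.standard_normal(200)     # injected "event" at 60 s
ratio = sta_lta(noise, fs=100)
print(ratio.max() > 3.5)    # True -> declare a trigger at threshold 3.5
```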
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates. Yield calculation typically requires a large number of SPICE simulations, and this circuit simulation accounts for the largest proportion of the yield-calculation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design variables and process variables. The model is constructed by running SPICE simulation to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
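A minimal sketch of the surrogate idea, assuming a sparse polynomial model: fit a lasso-regularized expansion of a simulated performance metric over design and process variables, then reuse the cheap surrogate for Monte Carlo failure-rate estimation. The data, feature choice, and failure threshold are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for SPICE: a performance metric over 6 variables.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))                 # design + process variables
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(500)

phi = PolynomialFeatures(degree=2, include_bias=False)
model = Lasso(alpha=0.01).fit(phi.fit_transform(X), y)   # sparse surrogate

# Cheap Monte Carlo failure-rate estimate from the surrogate (threshold
# and sample count are arbitrary for the sketch):
Xmc = rng.standard_normal((200_000, 6))
fail = model.predict(phi.transform(Xmc)) < -2.0
print(fail.mean())
```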
Battery Lifetime Analysis and Simulation Tool (BLAST) Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer, J.
2014-12-01
The deployment and use of lithium-ion (Li-ion) batteries in automotive and stationary energy storage applications must be optimized to justify their high up-front costs. Given that batteries degrade with use and storage, such optimizations must evaluate many years of operation. As the degradation mechanisms are sensitive to temperature, state-of-charge (SOC) histories, current levels, and cycle depth and frequency, it is important to model both the battery and the application to a high level of detail to ensure battery response is accurately predicted. To address these issues, the National Renewable Energy Laboratory (NREL) has developed the Battery Lifetime Analysis and Simulation Tool (BLAST) suite. This suite of tools pairs NREL's high-fidelity battery degradation model with a battery electrical and thermal performance model, application-specific electrical and thermal performance models of the larger system (e.g., an electric vehicle), application-specific system use data (e.g., vehicle travel patterns and driving data), and historic climate data from cities across the United States. This provides highly realistic long-term predictions of battery response and thereby enables quantitative comparisons of varied battery use strategies.
KiwiSpec - an advanced spectrograph for high resolution spectroscopy: optical design and variations
NASA Astrophysics Data System (ADS)
Barnes, Stuart I.; Gibson, Steve; Nield, Kathryn; Cochrane, Dave
2012-09-01
The KiwiSpec R4-100 is an advanced high resolution spectrograph developed by KiwiStar Optics, Industrial Research Ltd, New Zealand. The instrument is based around an R4 echelle grating and a 100 mm collimated beam diameter. The optical design employs a highly asymmetric white pupil design, whereby the transfer collimator has a focal length only 1/3 that of the primary collimator. This allows the cross-dispersers (VPH gratings) and camera optics to be small and low cost while also ensuring a very compact instrument. The KiwiSpec instrument will be fibre-fed and is designed to be contained in thermal and/or vacuum enclosures. The instrument concept is highly flexible in order to ensure that the same basic design can be used for a wide variety of science cases. Options include the possibility of splitting the wavelength coverage into 2 to 4 separate channels, allowing each channel to be highly optimized for maximum efficiency. CCDs ranging from smaller than 2K×2K to larger than 4K×4K can be accommodated. This allows good (3-4 pixel) sampling of resolving powers ranging from below 50,000 to greater than 100,000. Among the specific design options presented here will be a two-channel concept optimized for precision radial velocities, and a four-channel concept developed for the Gemini High-Resolution Optical Spectrograph (GHOST). The design and performance of a single-channel prototype will be presented elsewhere in these proceedings.
How could health information exchange better meet the needs of care practitioners?
Kierkegaard, P; Kaushal, R; Vest, J R
2014-01-01
Health information exchange (HIE) has the potential to improve the quality of healthcare by enabling providers with better access to patient information from multiple sources at the point of care. However, HIE efforts have historically been difficult to establish in the US, and the failure rates of organizations created to foster HIE have been high. We sought to better understand how RHIO-based HIE systems were used in practice and the challenges care practitioners face in using them. The objective of our study was to investigate how HIE can better meet the needs of care practitioners. We performed a multiple-case study using qualitative methods in three communities in New York State. We conducted interviews onsite and by telephone with HIE users and non-users and observed the workflows of healthcare professionals at multiple healthcare organizations participating in a local HIE effort in New York State. The empirical data analysis suggests that challenges still remain in increasing provider usage, optimizing HIE implementations, and connecting HIE systems across geographic regions. Important determinants of system usage and perceived value include the users' experienced level of available information and the fit of use with physician workflows. Challenges still remain in increasing provider adoption, optimizing HIE implementations, and demonstrating value. The inability to find information reduced usage of HIE. Healthcare organizations, HIE facilitating organizations, and states can help support HIE adoption by ensuring patient information is accessible to providers through increasing patient consents, fostering broader participation, and ensuring systems are usable.
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2017-09-01
Wind profilers have been widely adopted to observe wind field information in the atmosphere for different purposes, but the accuracy of their observations is limited by various noises and disturbances and hence needs to be further improved. In this paper, the data measured under strong wind conditions using a 1290-MHz boundary layer profiler (BLP) are quality controlled via a composite quality control (QC) procedure proposed by the authors. Then, through comparison with data measured by radiosonde flights (balloon observations), the critical thresholds in the composite QC procedure, including the consensus average threshold T1 and the vertical shear threshold T3, are systematically discussed, and the performance of the BLP operated during precipitation is evaluated. It is found that, to ensure high accuracy and a high data-collection rate, the optimal range of subsets is 4 m/s. Although the number of data rejected by the combined algorithm of vertical shear examination and small median test is quite limited, the algorithm proves quite useful for recognizing outliers with large discrepancies, and the optimal wind shear threshold T3 is recommended as 5 m/s per 100 m. During patchy precipitation, the quality of the data measured by the four oblique beams (using the DBS measuring technique) can still be ensured. After the BLP data are quality controlled by the composite QC procedure, the output shows good agreement with the balloon observations.
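The vertical shear examination can be sketched in a few lines: compute the vector wind shear between adjacent range gates, normalize to a 100 m separation, and flag gates exceeding T3 (5 m/s per 100 m above). The profile below is synthetic.

```python
import numpy as np

def shear_flags(u, v, z, t3=5.0):
    """Flag levels whose wind vector differs from the gate below by more
    than t3 (m/s) per 100 m of height separation."""
    du, dv, dz = np.diff(u), np.diff(v), np.diff(z)
    shear = np.hypot(du, dv) / dz * 100.0        # (m/s) per 100 m
    return shear > t3                            # True marks a suspect level

z = np.arange(100, 2100, 100.0)                  # range-gate heights, m
u = np.linspace(5, 25, z.size)
v = np.linspace(0, 8, z.size)
u[10] += 12.0                                    # inject an outlier gate
print(np.where(shear_flags(u, v, z))[0])         # neighbours of gate 10 flagged
```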
Petruzziello, Filomena; Grand-Guillaume Perrenoud, Alexandre; Thorimbert, Anita; Fogwill, Michael; Rezzi, Serge
2017-07-18
Analytical solutions enabling the quantification of circulating levels of liposoluble micronutrients such as vitamins and carotenoids are currently limited to either a single analyte or a reduced panel of analytes. The requirement to use multiple approaches hampers the investigation of biological variability on a large number of samples in a time- and cost-efficient manner. With the goal of developing high-throughput and robust quantitative methods for the profiling of micronutrients in human plasma, we introduce a novel, validated workflow for the determination of 14 fat-soluble vitamins and carotenoids in a single run. Automated supported liquid extraction was optimized and implemented to process 48 samples in parallel in 1 h, and the analytes were measured using ultrahigh-performance supercritical fluid chromatography coupled to tandem mass spectrometry in less than 8 min. Improved mass spectrometry interface hardware was built to minimize the post-decompression volume and to allow better control of the chromatographic effluent density on its route toward and into the ion source. In addition, a specific make-up solvent condition was developed to ensure the solubility of both analytes and matrix constituents after mobile phase decompression. The optimized interface resulted in improved spray plume stability and preserved matrix-compound solubility, leading to enhanced hyphenation robustness while ensuring suitable analytical repeatability and improved detection sensitivity. The overall methodology gives recoveries within 85-115%, as well as within- and between-day coefficients of variation of 2% and 14%, respectively.
A Kinematic Calibration Process for Flight Robotic Arms
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
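The "standard differential formulation and linear parameter optimization" step admits a compact sketch: stack pose residuals from metrology, linearize the forward kinematics about the nominal parameters, and solve for corrections by least squares. The two-link arm and link-length parameters below are toy stand-ins for the MSL arm's full kinematic and deflection model.

```python
import numpy as np

def forward_kin(q, params):
    """Toy planar 2-link forward kinematics; params are link lengths."""
    l1, l2 = params
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian_params(q, params, eps=1e-7):
    """Numerical d(pose)/d(params) for the linearized update."""
    J = np.zeros((2, len(params)))
    for j in range(len(params)):
        dp = np.array(params, float); dp[j] += eps
        J[:, j] = (forward_kin(q, dp) - forward_kin(q, params)) / eps
    return J

true_params, nominal = [1.02, 0.57], np.array([1.0, 0.6])
qs = [np.array(a) for a in [(0.1, 0.5), (1.0, -0.4), (2.0, 1.2), (-0.7, 0.9)]]
J = np.vstack([jacobian_params(q, nominal) for q in qs])
r = np.concatenate([forward_kin(q, true_params) - forward_kin(q, nominal)
                    for q in qs])
delta = np.linalg.lstsq(J, r, rcond=None)[0]    # least-squares correction
print(nominal + delta)    # converges toward the true link lengths
```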
What principles should govern the use of managed entry agreements?
Klemp, Marianne; Frønsdal, Katrine B; Facey, Karen
2011-01-01
To ensure rapid access to new potentially beneficial health technologies, obtain best value for money, and ensure affordability, healthcare payers are adopting a range of innovative reimbursement approaches that may be called Managed Entry Agreements (MEAs). The Health Technology Assessment International (HTAi) Policy Forum sought to identify why MEAs might be used, issues associated with their implementation, and principles for their use. A 2-day deliberative workshop discussed key papers and members' experiences and collectively addressed four policy questions, resulting in this study. MEAs are used to give access to new technologies where traditional reimbursement is deemed inappropriate. Three different forms of MEAs have been identified: management of budget impact, management of uncertainty relating to clinical and/or cost-effectiveness, and management of utilization to optimize performance. The rationale for using these approaches and their advantages and disadvantages differ. However, all forms of MEA should take the form of a formal written agreement among stakeholders, clearly identifying the rationale for the agreement, the aspects to be assessed, the methods of data collection and review, and the criteria for ending the agreement. MEAs should only be used when HTA identifies issues or concerns about key outcomes and/or costs and/or organizational/budget impacts that are material to a reimbursement decision. They provide patient access and can be useful to manage technology diffusion and optimize use. However, they are administratively complex, may be difficult to negotiate, and their effectiveness has yet to be evaluated.
Palumbo, Biagio; Del Re, Francesco; Martorelli, Massimo; Lanzotti, Antonio; Corrado, Pasquale
2017-02-08
A statistical approach for the characterization of Additive Manufacturing (AM) processes is presented in this paper. Design of Experiments (DOE) and ANalysis of VAriance (ANOVA), both based on Nested Effects Modeling (NEM) technique, are adopted to assess the effect of different laser exposure strategies on physical and mechanical properties of AlSi10Mg parts produced by Direct Metal Laser Sintering (DMLS). Due to the wide industrial interest in AM technologies in many different fields, it is extremely important to ensure high parts performances and productivity. For this aim, the present paper focuses on the evaluation of tensile properties of specimens built with different laser exposure strategies. Two optimal laser parameters settings, in terms of both process quality (part performances) and productivity (part build rate), are identified.
Numerical studies on the performance of a flow distributor in tank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Soo Jai, E-mail: shinsoojai@kaeri.re.kr; Kim, Young In; Ryu, Seungyeob
2015-03-10
Flow distributors are commonly used in nuclear power plants. During core make-up tank (CMT) injection into the reactor, condensation and thermal stratification are observed in the CMT, and rapid condensation disturbs the injection operation. To reduce the condensation phenomena in the tank, the CMT was equipped with a flow distributor. The optimal design of the flow distributor is very important to ensure the structural integrity of the CMT and its safe operation during certain transient or accident conditions. In the present study, we numerically investigated the performance of a flow distributor in a tank with different shape factors, such as the total number of holes, the pitch-to-hole-diameter ratio, the hole diameter, and the area ratio. These data will contribute to the design of the flow distributor.
A subscale facility for liquid rocket propulsion diagnostics at Stennis Space Center
NASA Technical Reports Server (NTRS)
Raines, N. G.; Bircher, F. E.; Chenevert, D. J.
1991-01-01
The Diagnostics Testbed Facility (DTF) at NASA's John C. Stennis Space Center in Mississippi was designed to provide a testbed for the development of rocket engine exhaust plume diagnostics instrumentation. A 1200-lb thrust liquid oxygen/gaseous hydrogen thruster is used as the plume source for experimentation and instrument development. Theoretical comparative studies have been performed with aerothermodynamic codes to ensure that the DTF thruster (DTFT) has been optimized to produce a plume with pressure and temperature conditions as much like the plume of the Space Shuttle Main Engine as possible. Operation of the DTFT is controlled by an icon-driven software program using a series of soft switches. Data acquisition is performed using the same software program. A number of plume diagnostics experiments have utilized the unique capabilities of the DTF.
NASA Astrophysics Data System (ADS)
Zhu, Wenmin; Jia, Yuanhua
2018-01-01
Based on risk management theory and the PDCA cycle model, the requirements of railway passenger transport safety production are analyzed, and the establishment of a security risk assessment team is proposed to manage risk by fault tree analysis (FTA) combined with the Delphi method, from both qualitative and quantitative perspectives. A safety production committee is also established to carry out performance appraisal, further ensuring the correctness of the risk management results, optimizing the safety management business processes, and improving risk management capabilities. The basic framework and risk information database of a risk management information system for railway passenger transport safety are designed using Ajax, Web Services, and SQL technologies. The system realizes functions for risk management, performance appraisal, and data management, and provides an efficient and convenient information management platform for railway passenger safety managers.
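As a minimal illustration of the FTA step, the snippet below evaluates a two-gate fault tree for independent basic events; the event names and probabilities are hypothetical, not taken from the paper.

```python
def p_or(*ps):
    """OR gate for independent events: P = 1 - prod(1 - p_i)."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

def p_and(*ps):
    """AND gate for independent events: P = prod(p_i)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical top event: door fault OR (platform crowding AND staff shortage)
p_top = p_or(0.003,                 # door fault
             p_and(0.02, 0.1))      # crowding and shortage together
print(f"top-event probability: {p_top:.4%}")
```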
2015-11-01
health readiness by ensuring the Total Force has the required physical, emotional, and cognitive health and fitness to win in environments that are... overall installation score for optimal physical activity as assessed by Body Mass Index (BMI), moderate or vigorous activity levels, resistance... activity and nutrition (SAN) are critical for achieving optimal physical, mental, and emotional health and wellbeing. They are integral to max
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many respects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in control system design. Typically, such a process is decoupled into several input/output pairings (loops), so that a single controller can be assigned to each loop. In this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes that ensures robust control behavior. To this end, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized to minimize the sum of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of two techniques, the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, this hybridized technique ensures a low mathematical burden and high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over the conventional PID controller.
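A minimal sketch of this kind of hybrid swarm search is shown below: a PSO-style velocity update combined with a bacterial-foraging-style random tumble, minimizing the ITSE of a toy PI loop around a first-order plant. The plant, gain bounds, and swarm constants are assumptions for illustration; the paper's BSO and BELBIC structures are more elaborate.

```python
import numpy as np

def itse(gains, T=10.0, dt=0.01):
    """Integral time-weighted squared error of a PI loop around a
    first-order plant (toy stand-in for one decoupled column loop)."""
    kp, ki = gains
    y, integ, J = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        e = 1.0 - y                 # unit setpoint
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)          # plant: dy/dt = -y + u
        J += t * e * e * dt
    return J

def bso(n=12, iters=40, seed=0):
    """Minimal hybrid swarm: PSO velocity update plus a
    bacterial-foraging-style random 'tumble' on each particle."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 5.0, (n, 2))
    v = np.zeros((n, 2))
    pbest, pcost = x.copy(), np.array([itse(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        # PSO move plus a small random tumble, clipped to the gain bounds
        x = np.clip(x + v + 0.05 * rng.standard_normal((n, 2)), 0.01, 10)
        cost = np.array([itse(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

print(bso())  # (kp, ki) minimizing ITSE for the toy loop
```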
Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox
NASA Astrophysics Data System (ADS)
Li, R. N.; Liu, X.; Liu, S. J.
2013-12-01
In order to ensure the high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. As critical parameters, the rotating speed ratios of three planetary gear trains are selected as the research subject. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of the key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established from the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization calculation is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter, and synchronous generator; ensure that the drive train works at high efficiency; and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
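A sketch of the GA workflow is given below in Python rather than MATLAB, with a placeholder efficiency function standing in for the paper's torque- and energy-balance expression; the ratio bounds and GA constants are illustrative assumptions.

```python
import numpy as np

def efficiency(r):
    """Illustrative drive-train efficiency versus the three planetary
    speed ratios r1..r3 (placeholder for the paper's derived expression)."""
    r1, r2, r3 = r
    return (0.95 - 0.02 * (r1 - 2.5) ** 2
                 - 0.03 * (r2 - 1.8) ** 2
                 - 0.01 * (r3 - 3.0) ** 2)

def ga(bounds, pop=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism, echoing the GA-toolbox workflow."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (pop, len(lo)))
    best, best_f = None, -np.inf
    for _ in range(gens):
        fit = np.array([efficiency(ind) for ind in x])
        if fit.max() > best_f:
            best_f, best = fit.max(), x[fit.argmax()].copy()
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((fit[i] > fit[j])[:, None], x[i], x[j])
        a = rng.random((pop, len(lo)))
        kids = a * parents + (1 - a) * parents[::-1]      # blend crossover
        kids += 0.05 * (hi - lo) * rng.standard_normal(kids.shape)
        x = np.clip(kids, lo, hi)
        x[0] = best                                       # elitism
    return best, best_f

# Speed-ratio search windows for the three planetary gear trains (assumed)
print(ga([(1.5, 4.0), (1.0, 3.0), (2.0, 4.5)]))
```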
Kanesa-thasan, Niranjan; Shaw, Alan; Stoddard, Jeffrey J; Vernon, Thomas M
2011-05-01
Vaccine safety is increasingly a focus for the general public, health care providers, and vaccine manufacturers, because the efficacy of licensed vaccines is accepted as a given. Commitment to ensuring safety of all vaccines, including childhood vaccines, is addressed by the federal government, academia, and industry. Safety activities conducted by the vaccine research, development, and manufacturing companies occur at all stages of product development, from selection and formulation of candidate vaccines through postlicensure studies and surveillance of adverse-event reports. The contributions of multiple interacting functional groups are required to execute these tasks through the life cycle of a product. We describe here the safeguards used by vaccine manufacturers, including specific examples drawn from recent experience, and highlight some of the current challenges. Vaccine-risk communication becomes a critical area for partnership of vaccine companies with government, professional associations, and nonprofit advocacy groups to provide information on both benefits and risks of vaccines. The crucial role of the vaccine companies in ensuring the optimal vaccine-safety profile, often overlooked, will continue to grow with this dynamic arena.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
Good Governance Matters: Optimizing U.S. PRTs in Afghanistan to Advance Good Governance
2009-04-01
a new school. Partnering is critical to ensure the synchronized application of aid. In this case ensuring that teachers... traces the origins of the U.S. Afghanistan PRT Model from its inception in 2002 to today. The paper will take a critical look at all three lines... Captain, U.S. Navy. A paper submitted to the Faculty of the Joint Advanced Warfighting School in partial satisfaction of the
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem that must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for the simulation test benches that determine performance metrics and for cost-function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process-specific device models. The system has been used in the design of time-to-digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed-signal, and digital circuits such as charge-sensitive amplifiers, comparators, delay elements, radiation-tolerant dual-interlocked (DICE) flip-flops, and two-of-three voter gates.
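A minimal sketch of such a framework's core loop follows: sweep design parameters, evaluate a cost function over all process/environmental corners, and keep the design with the best worst-corner cost. The simulate function is a hypothetical stand-in for the CAD-tool invocation, and the cost weights are assumptions.

```python
import itertools
import numpy as np

def simulate(w, ibias, corner):
    """Hypothetical performance model standing in for circuit simulation;
    a real flow would invoke the CAD simulator at each (design, corner)
    point. Returns (delay_ns, power_mW) for device width w and bias ibias."""
    speed = {"slow_hot": 0.7, "typ": 1.0, "fast_cold": 1.2}[corner]
    delay = 2.0 / (speed * np.sqrt(w * ibias))
    power = 0.5 * ibias * (1 + 0.1 * w)
    return delay, power

def cost(metrics):
    """Scalar cost: penalize the worst-corner delay, weight in power."""
    worst_delay = max(d for d, _ in metrics)
    worst_power = max(p for _, p in metrics)
    return worst_delay + 0.2 * worst_power

corners = ["slow_hot", "typ", "fast_cold"]
grid = itertools.product(np.linspace(1, 10, 10),      # width sweep
                         np.linspace(0.1, 2.0, 10))   # bias sweep
best = min(grid, key=lambda d: cost([simulate(*d, c) for c in corners]))
print("best (w, ibias):", best)
```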
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fong, Erika J.; Huang, Chao; Hamilton, Julie
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling, which accomplishes closed-loop sample collection, system cleaning, and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate the sample. Although it requires the integration of multiple pieces of equipment, the advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
NASA Astrophysics Data System (ADS)
Meng, Fei; Tao, Gang; Zhang, Tao; Hu, Yihuai; Geng, Peng
2015-08-01
Shift quality is a crucial factor throughout the automobile industry. To ensure an optimal gear-shifting strategy with the best fuel economy for a stepped automatic transmission, the controller must be designed to meet the challenge of lacking a feedback sensor to measure the relevant variables. This paper focuses on a new kind of automatic transmission that uses a proportional solenoid valve to control the clutch pressure; a control strategy based on the clutch speed difference is designed for shift control during the inertia phase. First, the mechanical system is described and the system dynamic model is built. Second, the control strategy is designed based on analysis of models derived from the dynamics of the driveline and the electro-hydraulic actuator. The controller uses conventional Proportional-Integral-Derivative (PID) control theory, and a robust two-degree-of-freedom controller is also developed to determine the optimal control parameters and further improve system performance. Finally, the designed control strategy with the different controllers is implemented in a simulation model. The compared results show that the clutch speed difference can track the desired trajectory well and improve shift quality effectively.
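A toy version of the inertia-phase tracking problem is sketched below: a PID controller drives a first-order solenoid/pressure lag so the clutch slip speed follows a smooth reference to zero. All dynamics and gains are illustrative assumptions, not the paper's model.

```python
def simulate_shift(kp, ki, kd, T=1.0, dt=1e-3):
    """Track a desired clutch slip-speed trajectory through the inertia
    phase with a PID-driven proportional solenoid valve (toy first-order
    pressure dynamics). Returns the integrated squared tracking error."""
    slip, p = 50.0, 0.0            # initial slip speed (rad/s), pressure
    e_prev, integ, err2 = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        slip_ref = 50.0 * max(0.0, 1.0 - t / T) ** 2   # smooth ramp to zero
        e = slip - slip_ref
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        p += dt * (-p + u) / 0.02          # solenoid/pressure lag, tau = 20 ms
        slip += dt * (-0.8 * max(p, 0.0))  # friction torque reduces slip
        slip = max(slip, 0.0)
        err2 += e * e * dt
    return err2

print(simulate_shift(kp=0.4, ki=2.0, kd=0.01))
```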
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.
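The classic LVQ1 update sketched below is in the same spirit: labeled prototypes are nudged by a stochastic rule so that they concentrate near the class boundary. The paper's algorithm minimizes error probability with its own gradient; this sketch only illustrates the adaptive labeled-quantization idea.

```python
import numpy as np

def lvq1(X, y, n_proto=4, epochs=20, lr0=0.05, seed=0):
    """LVQ1-style labeled vector quantization: the nearest prototype is
    pulled toward a sample with a matching label and pushed away
    otherwise, so the small codebook traces the decision boundary."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_proto, replace=False)
    W, labels = X[idx].copy(), y[idx].copy()
    for ep in range(epochs):
        lr = lr0 * (1 - ep / epochs)          # decaying learning rate
        for i in rng.permutation(len(X)):
            k = np.argmin(((W - X[i]) ** 2).sum(axis=1))
            step = lr if labels[k] == y[i] else -lr
            W[k] += step * (X[i] - W[k])
    return W, labels

# Two Gaussian classes; prototypes end up near the midline boundary
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
W, L = lvq1(X, y)
print(np.c_[W, L])
```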
Modeling and control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.
1988-01-01
This monograph presents integrated modeling and controller design methods for flexible structures. The controllers, or compensators, developed are optimal in the linear-quadratic-Gaussian sense. The performance objectives, sensor and actuator locations and external disturbances influence both the construction of the model and the design of the finite dimensional compensator. The modeling and controller design procedures are carried out in parallel to ensure compatibility of these two aspects of the design problem. Model reduction techniques are introduced to keep both the model order and the controller order as small as possible. A linear distributed, or infinite dimensional, model is the theoretical basis for most of the text, but finite dimensional models arising from both lumped-mass and finite element approximations also play an important role. A central purpose of the approach here is to approximate an optimal infinite dimensional controller with an implementable finite dimensional compensator. Both convergence theory and numerical approximation methods are given. Simple examples are used to illustrate the theory.
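A minimal sketch of the LQ-regulator half of such a design is shown below on a toy two-mode modal model, using SciPy's continuous-time Riccati solver; the full LQG compensator would pair this gain with a Kalman filter, and all numbers are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two-mode flexible structure (toy modal model): states are modal
# displacements and velocities; one actuator forces both modes.
w1, w2, zeta = 1.0, 3.5, 0.005
A = np.block([
    [np.zeros((2, 2)), np.eye(2)],
    [-np.diag([w1**2, w2**2]), -2 * zeta * np.diag([w1, w2])],
])
B = np.array([[0.0], [0.0], [1.0], [0.7]])
Q = np.diag([10.0, 10.0, 1.0, 1.0])   # weight modal displacements
R = np.array([[0.1]])                 # control effort weight

# LQ regulator gain: u = -K x, with K = R^-1 B^T P from the Riccati equation
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```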
Wind turbine power tracking using an improved multimodel quadratic approach.
Khezami, Nadhira; Benhadj Braiek, Naceur; Guillaud, Xavier
2010-07-01
In this paper, an improved multimodel optimal quadratic control structure for variable-speed, pitch-regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power and to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations and the electrical power variation, a multimodel linear description is derived for the wind turbine and is used for the synthesis of an optimal control law involving state feedback, an integral action, and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with high-performance behavior of all the other state variables (turbine and generator rotational speeds and mechanical shaft torque) and a smooth, adequate evolution of the control variables. © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
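A minimal sketch of the final estimation step, under the assumption that the group of similar stations has already been chosen, is shown below: fit a GEV to pooled annual maxima and read off a return level. The discharge numbers are synthetic.

```python
import numpy as np
from scipy.stats import genextreme

# Pooled annual-maximum discharges (m^3/s) from the chosen group of
# similar gauged stations; synthetic values for illustration only.
annual_maxima = np.array([310., 285., 402., 351., 298., 377., 415.,
                          334., 289., 360., 391., 305., 428., 342.])

# Fit a GEV and read off the T-year return level as the (1 - 1/T) quantile
c, loc, scale = genextreme.fit(annual_maxima)
T = 100
return_level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"estimated {T}-year return level: {return_level:.0f} m^3/s")
```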
Caldeira, Letícia Gomes Magnago; Santos, Flávio Alves; de Oliveira, Andréa Melo Garcia; Lima, Josefa Abucater; de Souza, Leonardo Francisco; da Silva, Guilherme Resende; de Assis, Débora Cristina Sampaio
2017-01-01
A multiresidue method by UHPLC/MS-MS was optimized and validated for the screening and semiquantitative detection of antimicrobial residues from the tetracycline, aminoglycoside, quinolone, lincosamide, β-lactam, sulfonamide, and macrolide families in eggs. A qualitative approach was used to ensure adequate sensitivity to detect residues at the level of interest, defined as the maximum residue limit (MRL), or below. The applicability of the method was assessed by analyzing egg samples from hens that had been treated with neomycin, enrofloxacin, lincomycin, oxytetracycline, and doxycycline for five days and after discontinuation of the medication (10 days). The method was adequate for screening all studied analytes in eggs, since the performance parameters ensured a false-compliant rate at or below 5%, except for flumequine. In the analyses of eggs from laying hens subjected to pharmacological treatment, all antimicrobial residues were detected throughout the experimental period, even after discontinuation of the medication, except for neomycin, demonstrating the applicability of the method for the analysis of antimicrobial residues in eggs. PMID:29181222
Purchasing and Selecting School Lighting.
ERIC Educational Resources Information Center
Berman, Tim
2003-01-01
Discusses factors for schools to consider when deciding on a lighting system: purchase price, installation charges, maintenance costs, energy costs, and ensuring optimal educational environment. Presents best practices in these areas. (EV)
Persson, Oliver; Andersson, Niklas; Nilsson, Bernt
2018-01-05
Preparative liquid chromatography is a separation technique widely used in the manufacturing of fine chemicals and pharmaceuticals. A major drawback of the traditional single-column batch chromatography step is the trade-off between product purity and process performance. Recirculation of impure product can be used to make this trade-off more favorable. The aim of the present study was to investigate the use of a two-column batch-to-batch recirculation process step to increase performance, compared with single-column batch chromatography, at a high purity requirement. The separation of a ternary protein mixture on ion-exchange chromatography columns was used to evaluate the proposed process. The investigation comprised modeling and simulation of the process step, experimental validation, and optimization of the simulated process. In the presented case, the yield increases from 45.4% to 93.6% and the productivity increases 3.4-fold compared with the performance of a batch run for a nominal case. A rapid build-up of product concentration can be seen during the first cycles, before the process reaches a cyclic steady state with recurring concentration profiles. Optimization of the simulation model predicts that the recirculated salt can be used as a flying start for the elution, which would enhance process performance. The proposed process is more complex than a batch process, but it may improve separation performance, especially when operating at cyclic steady state. The recirculation of impure fractions reduces product losses and ensures separation of the product to a high degree of purity. Copyright © 2017 Elsevier B.V. All rights reserved.
Meaney, Peter A; Bobrow, Bentley J; Mancini, Mary E; Christenson, Jim; de Caen, Allan R; Bhanji, Farhan; Abella, Benjamin S; Kleinman, Monica E; Edelson, Dana P; Berg, Robert A; Aufderheide, Tom P; Menon, Venu; Leary, Marion
2013-07-23
The "2010 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care" increased the focus on methods to ensure that high-quality cardiopulmonary resuscitation (CPR) is performed in all resuscitation attempts. There are 5 critical components of high-quality CPR: minimizing interruptions in chest compressions, providing compressions of adequate rate, providing compressions of adequate depth, avoiding leaning between compressions, and avoiding excessive ventilation. Although it is clear that high-quality CPR is the primary component influencing survival from cardiac arrest, there is considerable variation in monitoring, implementation, and quality improvement. As such, CPR quality varies widely between systems and locations. Victims often do not receive high-quality CPR because of provider ambiguity in the prioritization of resuscitative efforts during an arrest. This ambiguity also impedes the development of optimal systems of care to increase survival from cardiac arrest. This consensus statement addresses the following key areas of CPR quality for the trained rescuer: metrics of CPR performance; monitoring, feedback, and integration of the patient's response to CPR; team-level logistics to ensure the performance of high-quality CPR; and continuous quality improvement at the provider, team, and systems levels. Clear definitions of metrics and methods to consistently deliver and improve the quality of CPR will narrow the gap between resuscitation science and victims, both in and out of the hospital, and lay the foundation for further improvements in the future.
Etching and oxidation of InAs in planar inductively coupled plasma
NASA Astrophysics Data System (ADS)
Dultsev, F. N.; Kesler, V. G.
2009-10-01
The surface of InAs (1 1 1)A was investigated under plasmachemical etching in the gas mixture CH4/H2/Ar. Etching was performed using RF (13.56 MHz) and ICP plasma with powers of 30-150 W and 50-300 W, respectively; the gas pressure in the reactor was 3-10 mTorr. It was demonstrated that the composition of the subsurface layer, less than 5 nm thick, changes during plasmachemical etching. A method of deep etching of InAs involving ICP plasma and hydrocarbon-based chemistry that preserves the surface relief is proposed. Optimal conditions and the composition of the gas phase for plasmachemical etching ensuring acceptable etch rates were selected.
NASA Technical Reports Server (NTRS)
Cole, Richard
1991-01-01
The major goals of this effort are as follows: (1) to examine technology insertion options to optimize Advanced Information Processing System (AIPS) performance in the Advanced Launch System (ALS) environment; (2) to examine the AIPS concepts to ensure that valuable new technologies are not excluded from the AIPS/ALS implementations; (3) to examine advanced microprocessors applicable to AIPS/ALS; (4) to examine radiation hardening technologies applicable to AIPS/ALS; (5) to reach conclusions on implementation technologies for the AIPS hardware building blocks; and (6) to reach conclusions on appropriate architectural improvements. The hardware building blocks are the Fault-Tolerant Processor, the Input/Output Sequencers (IOS), and the Intercomputer Interface Sequencers (ICIS).
VANET Clustering Based Routing Protocol Suitable for Deserts.
Nasr, Mohammed Mohsen Mohammed; Abdelgader, Abdeldime Mohamed Salih; Wang, Zhi-Gong; Shen, Lian-Feng
2016-04-06
In recent years, applications of vehicular ad hoc networks (VANETs) have emerged in security, safety, rescue, exploration, military, and communication-redundancy systems in non-populated areas, besides their ordinary use in urban environments as an essential part of intelligent transportation systems (ITS). This paper proposes a novel algorithm for the process of organizing a cluster structure and cluster head election (CHE) suitable for VANETs. Moreover, it presents a robust clustering-based routing protocol, which is appropriate for deserts and can achieve high communication efficiency, ensuring reliable information delivery and optimal exploitation of the equipment on each vehicle. A comprehensive simulation is conducted to evaluate the performance of the proposed CHE and routing algorithms.
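A sketch of a weighted CHE rule in this spirit is shown below; the score combines relative mobility, connectivity, and residual energy with illustrative weights, which are assumptions rather than the paper's exact fitness.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    speed: float        # m/s
    neighbors: int      # vehicles in radio range
    battery: float      # 0..1 residual energy

def che_score(v, mean_speed, w=(0.5, 0.3, 0.2)):
    """Weighted CHE fitness (illustrative weights): prefer vehicles
    moving near the cluster's mean speed (stable links), with many
    neighbors and ample residual energy."""
    mobility = 1.0 / (1.0 + abs(v.speed - mean_speed))
    return w[0] * mobility + w[1] * v.neighbors / 10 + w[2] * v.battery

def elect_head(cluster):
    mean_speed = sum(v.speed for v in cluster) / len(cluster)
    return max(cluster, key=lambda v: che_score(v, mean_speed))

cluster = [Vehicle(1, 24.0, 6, 0.9), Vehicle(2, 31.0, 8, 0.7),
           Vehicle(3, 25.5, 9, 0.8)]
print("elected head:", elect_head(cluster).vid)
```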
Anesthesiology and gastroenterology.
de Villiers, Willem J S
2009-03-01
A successful population-based colorectal cancer screening program requires efficient colonoscopy practices that incorporate high throughput, safety, and patient satisfaction. Several different modalities of nonanesthesiologist-administered sedation are currently available and in development that may fulfill these requirements. Modern gastroenterology endoscopic procedures are complex and demand the full attention of the attending gastroenterologist and the complete cooperation of the patient. Many of these procedures will also require the anesthesiologist's knowledge, skills, abilities, and experience to ensure optimal procedure results and good patient outcomes. The goals of this review are (1) to provide a gastroenterology perspective on the use of propofol in gastroenterology endoscopic practice, and (2) to describe newer GI endoscopy procedures performed by gastroenterologists that might involve anesthesiologists.
Heat transfer and phase transitions of water in multi-layer cryolithozone-surface systems
NASA Astrophysics Data System (ADS)
Khabibullin, I. L.; Nigametyanova, G. A.; Nazmutdinov, F. F.
2018-01-01
A mathematical model for calculating the temperature distribution and the dynamics of the phase transformations of water in multilayer systems on the permafrost-zone surface is proposed. The model allows one to perform calculations over the annual cycle, taking into account the surface temperature distribution in the warm and cold seasons. A system of four layers, a snow or land cover, a top layer of soil, a layer of thermal-insulation material, and a mineral soil, is analyzed. Calculations with the model allow one to choose the optimal thickness and composition of the layers to ensure the stability of structures built on the permafrost-zone surface.
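A minimal sketch of the conduction core of such a model is shown below: one explicit finite-difference step of 1-D heat flow through four stacked layers. Latent heat of the water phase change, which the paper's model includes, is omitted; layer properties are illustrative.

```python
import numpy as np

def step_temperature(T, k, rho_c, dx, dt):
    """One explicit finite-difference step of 1-D heat conduction through
    stacked layers (snow/soil/insulation/mineral soil), each with its own
    conductivity k and volumetric heat capacity rho_c. Phase change is
    omitted; the full model adds latent heat at the freezing front."""
    q = k[:-1] * (T[1:] - T[:-1]) / dx            # interface heat fluxes
    dT = np.zeros_like(T)
    dT[1:-1] = (q[1:] - q[:-1]) / (rho_c[1:-1] * dx)
    return T + dt * dT                             # boundary nodes held fixed

# Four-layer column, 10 nodes per layer, surface at -15 C, base at +2 C
n = 40
T = np.linspace(-15.0, 2.0, n)
k = np.repeat([0.3, 1.2, 0.05, 2.0], 10)           # W/(m K), per layer
rho_c = np.repeat([0.5e6, 2.0e6, 0.1e6, 2.5e6], 10)  # J/(m^3 K), per layer
for _ in range(10000):                             # march 60 s steps
    T = step_temperature(T, k, rho_c, dx=0.05, dt=60.0)
print(T[::10])                                     # one node per layer
```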
A new leadership role for pharmacists: a prescription for change.
Burgess, L Hayley; Cohen, Michael R; Denham, Charles R
2010-03-01
Pharmacists can play an important role as leaders to reduce patient safety risks, optimize the safe function of medication management systems, and align pharmacy services with national initiatives that measure and reward quality performance. The objective of this article is to determine the actions that pharmacists can take to create a visible and sustainable safe medication management structure and system in the health care environment. An evidence-based literature search was performed to determine what actions successful pharmacist leaders have taken to improve patient safety. There is a growing number of quality and patient safety standards, as well as measures that focus specifically on medication use and education. Health care organizations must be made aware of the valuable resources that pharmacists provide and of the complexity of medication management. There are steps that pharmacist leaders can take to achieve these goals. The 10 steps that pharmacist leaders can take to create a visible and sustainable safe medication management structure and system are the following: 1. Identify and mitigate medication management risks and hazards to reduce preventable patient harm. 2. Establish pharmacy leadership structures and systems to ensure organizational awareness of medication safety gaps. 3. Support an organizational culture of safe medication use. 4. Ensure evidence-based medication regimens for all patients. 5. Have daily check-in calls/meetings, with the primary focus on significant safety or quality issues. 6. Establish a medication safety committee. 7. Perform medication safety walk-rounds to evaluate medication processes, and request front-line staff's input about safe medication practices. 8. Ensure that pharmacy staff engage in teamwork, skill building, and communication training. 9. Engage in readiness planning for implementation of health information technology (HIT). 10. Include medication history-taking and reviews upon entry into the organization; medication counseling and training during the discharge process; and follow-up after the transition to home.
Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong
2014-01-01
The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
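A toy sketch of the two-stage LP idea is shown below using scipy.optimize.linprog: stage 1 solves an unconstrained-MU plan, and stage 2 re-solves after snapping undeliverable spots to zero or the minimum MU. The influence matrix and MU limits are illustrative assumptions, not the paper's DSIO formulation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(D, d, lb, ub):
    """min_x ||D x - d||_inf subject to lb <= x <= ub, posed as an LP
    with one auxiliary variable t and constraints |D x - d| <= t."""
    m, n = D.shape
    c = np.r_[np.zeros(n), 1.0]                       # minimize t
    A = np.vstack([np.c_[D, -np.ones(m)],             # D x - t <= d
                   np.c_[-D, -np.ones(m)]])           # -D x - t <= -d
    b = np.r_[d, -d]
    bounds = [(l, u) for l, u in zip(lb, ub)] + [(0, None)]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
D = rng.random((4, 6))          # toy influence matrix: 4 voxels x 6 spots
d = np.full(4, 2.0)             # prescribed dose per voxel
mu_min, mu_max = 0.2, 5.0       # deliverable MU window per spot

# Stage 1: optimize spot weights ignoring the minimum-MU constraint
x1 = solve_lp(D, d, np.zeros(6), np.full(6, mu_max))

# Stage 2: snap undeliverable spots (0 < x < mu_min) off or up, re-solve
lb, ub = np.zeros(6), np.full(6, mu_max)
ub[x1 < mu_min / 2] = 0.0                            # too small: turn off
lb[(x1 >= mu_min / 2) & (x1 < mu_min)] = mu_min      # keep: lift to mu_min
x2 = solve_lp(D, d, lb, ub)
print("stage 1:", np.round(x1, 3))
print("deliverable:", np.round(x2, 3))
```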
Real-Time Optimization of Distribution Grids for Increased Flexibility and
... ensure a stable system operation. Now let's go a little bit into the math, because there is some technical math. This one looks very complicated, but it's actually very simple, because, for example, you take stability and optimality. However, I'm not going to delve into the math. I'm going to move to some test ...
Optimizing nursing care by integrating theory-driven evidence-based practice.
Pipe, Teri Britt
2007-01-01
An emerging challenge for nursing leadership is how to convey the importance of both evidence-based practice (EBP) and theory-driven care in ensuring patient safety and optimizing outcomes. This article describes a specific example of a leadership strategy based on Rosswurm and Larrabee's model for change to EBP, which was effective in aligning the processes of EBP and theory-driven care.
Nováková, Lucie; Grand-Guillaume Perrenoud, Alexandre; Nicoli, Raul; Saugy, Martial; Veuthey, Jean-Luc; Guillarme, Davy
2015-01-01
The conditions for the analysis of selected doping substances by UHPSFC-MS/MS were optimized to ensure suitable peak shapes and maximized MS responses. A representative mixture of 31 acidic and basic doping agents was analyzed in both ESI+ and ESI- modes. The best compromise for all compounds in terms of MS sensitivity and chromatographic performance was obtained when adding 2% water and 10 mM ammonium formate to the CO2/MeOH mobile phase. Besides the mobile phase, the nature of the make-up solvent added when interfacing UHPSFC with MS was also evaluated. Ethanol was found to be the best candidate, as it was able to compensate for the negative effect of the 2% water addition in ESI- mode and provided a suitable MS response for all doping agents. The sensitivity of the optimized UHPSFC-MS/MS method was finally assessed and compared with the results obtained by conventional UHPLC-MS/MS. Sensitivity was improved 5- to 100-fold in UHPSFC-MS/MS vs. UHPLC-MS/MS for 56% of the compounds, while only one compound (bumetanide) offered a significantly higher MS response (4-fold) under UHPLC-MS/MS conditions. In the second paper of this series, the optimal conditions for UHPSFC-MS/MS analysis will be employed to screen >100 doping agents in urine matrix, and the results will be compared with those obtained by conventional UHPLC-MS/MS. Copyright © 2014 Elsevier B.V. All rights reserved.
Protocol for vital dye staining of corneal endothelial cells.
Park, Sunju; Fong, Alan G; Cho, Hyung; Zhang, Cheng; Gritz, David C; Mian, Gibran; Herzlich, Alexandra A; Gore, Patrick; Morganti, Ashley; Chuck, Roy S
2012-12-01
To describe a step-by-step methodology to establish a reproducible staining protocol for the evaluation of human corneal endothelial cells. Four procedures were performed to determine the best protocol. (1) To determine the optimal trypan blue staining method, goat corneas were stained with 4 dilutions of trypan blue (0.4%, 0.2%, 0.1%, and 0.05%) and 1% alizarin red. (2) To determine the optimal alizarin red staining method, goat corneas were stained with 2 dilutions of alizarin red (1% and 0.5%) and 0.2% trypan blue. (3) To ensure that trypan blue truly stains damaged cells, goat corneas were exposed to either 3% hydrogen peroxide or to balanced salt solution, and then stained with 0.2% trypan blue and 0.5% alizarin red. (4) Finally, fresh human corneal buttons were examined; 1 group was stained with 0.2% trypan blue and another group with 0.4% trypan blue. For the 4 procedures performed, the results are as follows: (1) trypan blue staining was not observed in any of the normal corneal samples; (2) 0.5% alizarin red demonstrated sharper cell borders than 1% alizarin red; (3) positive trypan blue staining was observed in the hydrogen peroxide exposed tissue in damaged areas; (4) 0.4% trypan blue showed more distinct positive staining than 0.2% trypan blue. We were able to determine the optimal vital dye staining conditions for human corneal endothelial cells using 0.4% trypan blue and 0.5% alizarin red.
Characteristic optimization of 1.55-μm InGaAsP/InP high-power diode laser
NASA Astrophysics Data System (ADS)
Ke, Qing; Tan, Shaoyang; Zhai, Teng; Zhang, Ruikang; Lu, Dan; Ji, Chen
2014-11-01
A comprehensive design optimization of 1.55-μm high-power InGaAsP/InP broad-area lasers is performed, aiming at increasing the internal quantum efficiency (IQE) while maintaining a low internal loss of the device. The P-doping profile and the band gap of the separate confinement heterostructure (SCH) layer are optimized with the commercial software Crosslight. Analysis of lasers with different p-doping profiles shows that, although heavy doping in the P-cladding layer increases the internal loss of the device, it ensures a high IQE, because the higher energy barrier at the SCH/P-cladding interface resulting from heavy doping helps reduce carrier leakage from the waveguide to the InP cladding layer. The band gap of the SCH layer is also optimized for high slope efficiency. A smaller band gap helps reduce vertical carrier leakage from the waveguide to the P-cladding layer, but the correspondingly higher carrier concentration in the SCH layer will cause some radiative recombination there, thus influencing the IQE. Moreover, as the injection current increases, the carrier concentration increases faster with a smaller band gap, so the output power saturates sooner. An optimized SCH-layer band gap of approximately 1.127 eV and heavy doping up to 1E18 cm-3 at the SCH/P-cladding interface are identified for our high-power laser design, and we achieved a high IQE of 94% and an internal loss of 2.99 cm-1.
Optimizing performance of hybrid FSO/RF networks in realistic dynamic scenarios
NASA Astrophysics Data System (ADS)
Llorca, Jaime; Desai, Aniket; Baskaran, Eswaran; Milner, Stuart; Davis, Christopher
2005-08-01
Hybrid Free Space Optical (FSO) and Radio Frequency (RF) networks promise highly available wireless broadband connectivity and quality of service (QoS), particularly suitable for emerging network applications involving extremely high data rate transmissions such as high-quality video-on-demand and real-time surveillance. FSO links are prone to atmospheric obscuration (fog, clouds, snow, etc.) and are difficult to align over long distances due to the use of narrow laser beams and the effect of atmospheric turbulence. These problems can be mitigated by using adjunct directional RF links, which provide backup connectivity. In this paper, methodologies for modeling and simulation of hybrid FSO/RF networks are described. Individual link propagation models are derived using scattering theory, as well as experimental measurements. MATLAB is used to generate realistic atmospheric obscuration scenarios, including moving cloud layers at different altitudes. These scenarios are then imported into a network simulator (OPNET) to emulate mobile hybrid FSO/RF networks. This framework allows accurate analysis of the effects of node mobility, atmospheric obscuration, and traffic demands on network performance, and precise evaluation of topology reconfiguration algorithms as they react to dynamic changes in the network. Results show how topology reconfiguration algorithms, together with enhancements to TCP/IP protocols that reduce the network response time, enable the network to rapidly detect and act upon link state changes in highly dynamic environments, ensuring optimized network performance and availability.
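A back-of-the-envelope link-budget check in this spirit is sketched below: the FSO margin under Beer-Lambert atmospheric attenuation decides when the hybrid controller would fail over to RF. All powers and attenuation values are illustrative.

```python
def fso_margin_db(p_tx_dbm, sens_dbm, alpha_db_per_km, length_km,
                  geo_loss_db=6.0):
    """Link margin of an FSO hop: transmit power minus receiver
    sensitivity, atmospheric (Beer-Lambert) attenuation, and a fixed
    geometric/pointing loss. Negative margin -> switch to the RF link."""
    return p_tx_dbm - sens_dbm - alpha_db_per_km * length_km - geo_loss_db

# Clear air ~0.5 dB/km vs. moderate fog ~50 dB/km over a 1.5 km hop
for alpha in (0.5, 50.0):
    m = fso_margin_db(p_tx_dbm=13, sens_dbm=-30, alpha_db_per_km=alpha,
                      length_km=1.5)
    print(f"attenuation {alpha:5.1f} dB/km -> margin {m:6.1f} dB",
          "(FSO up)" if m > 0 else "(switch to RF)")
```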
Martinho, Graça; Gomes, Ana; Santos, Pedro; Ramos, Mário; Cardoso, João; Silveira, Ana; Pires, Ana
2017-03-01
The need to increase packaging recycling rates has led to the study and analysis of recycling schemes from various perspectives, including technical, economic, social, and environmental. This paper is part one of a three-part study devoted to comparing two recyclable packaging waste collection systems operating in western Portugal: a mixed collection system, where curbside and drop-off collections are operated simultaneously (but where the curbside system was introduced after the drop-off system), and an exclusive drop-off system. This part of the study focuses on analyzing the operation and performance of the two waste collection systems. The mixed collection system is shown to yield higher material separation rates, higher recycling rates, and lower contamination rates compared with the exclusive drop-off system, a result of the curbside component in the former system. However, the operational efficiency of the curbside collection in the mixed system is lower than the drop-off collection in the mixed system and the exclusive drop-off system, mainly because of inefficiency of collection. A key recommendation is to ensure that the systems should be optimized in an attempt to improve performance. Optimization should be applied not only to logistical aspects but also to citizens' participation, which could be improved by conducting curbside collection awareness campaigns in the neighborhoods that have a mixed system. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, particularly in relation to activity on cloud computing platforms. Large-scale data processing, analysis, and storage models such as cloud computing were already widespread and are increasingly so. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms with Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the fine-grained monitoring procedure and generates the highest-efficacy and most cost-saving fault repair through three control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance optimization. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
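To illustrate the diagnosis step in isolation, the sketch below trains a Gaussian naive Bayes classifier on hypothetical per-window VM metrics; in the paper's pipeline such windows would arrive via Spark Streaming, which is not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical per-window VM metrics: [cpu_wait, disk_iops, net_errs];
# labels: 0 = healthy start-up, 1 = failed start-up. Synthetic data
# stands in for the monitoring stream.
rng = np.random.default_rng(0)
healthy = rng.normal([0.1, 200, 1], [0.05, 40, 1], (300, 3))
failed = rng.normal([0.6, 40, 15], [0.2, 20, 5], (300, 3))
X = np.vstack([healthy, failed])
y = np.r_[np.zeros(300), np.ones(300)]

clf = GaussianNB().fit(X, y)
window = np.array([[0.55, 55, 12]])        # one incoming monitoring window
print("P(start-up failure) =", clf.predict_proba(window)[0, 1])
```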
NASA Astrophysics Data System (ADS)
Rodriguez, Tony F.; Cushman, David A.
2003-06-01
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. Quantifying the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
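A minimal sketch of the Taguchi-loss idea is shown below: a nominal-is-best quadratic loss scored over a small factorial of hypothetical embedding settings (a full 2x2 factorial rather than a larger orthogonal array), with a main-effect summary per factor level. The runs and rates are invented for illustration.

```python
def taguchi_loss(y, target, k=1.0):
    """Nominal-is-best Taguchi loss: L(y) = k (y - target)^2,
    where k converts squared deviation into cost."""
    return k * (y - target) ** 2

# Hypothetical detection rates from runs over two watermark-embedding
# factors (strength level, block-size level), each at two levels
runs = {(1, 1): 0.91, (1, 2): 0.88, (2, 1): 0.97, (2, 2): 0.93}
target = 1.0            # ideal detection rate
losses = {cfg: taguchi_loss(rate, target, k=100)
          for cfg, rate in runs.items()}

# Main effect of factor 1: average loss at each of its levels
for level in (1, 2):
    avg = sum(l for (f1, _), l in losses.items() if f1 == level) / 2
    print(f"factor-1 level {level}: mean loss {avg:.2f}")
```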
A methodology for the synthesis of robust feedback systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Milich, David Albert
1988-01-01
A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The resulting closed-loop system is nominally stable and exhibits a known level of performance. In addition, robustness of the feedback system is guaranteed, i.e., stability and performance are retained in the presence of multiple unstructured uncertainty blocks located at various points in the feedback loop. The design technique is referred to as the Causality Recovery Methodology (CRM). The CRM relies on the Youla parameterization of all stabilizing compensators to ensure nominal stability of the feedback system. A frequency-domain inequality in terms of the structured singular value mu defines the robustness specification. The optimal compensator, with respect to the mu condition, is shown to be noncausal in general. The aim of the CRM is to find a stable, causal transfer function matrix that approximates the robustness characteristics of the optimal solution. The CRM, via a series of infinite-dimensional convex programs, produces a closed-loop system whose performance robustness is at least as good as that of any initial design. The algorithm is approximated by a finite-dimensional process for the purposes of implementation. Two numerical examples confirm the potential viability of the CRM concept; however, the robustness improvement comes at the expense of increased computational burden and compensator complexity.
Judging Surgical Research: How Should We Evaluate Performance and Measure Value?
Souba, Wiley W.; Wilmore, Douglas W.
2000-01-01
Objective To establish criteria to evaluate performance in surgical research, and to suggest strategies to optimize research in the future. Summary Background Data Research is an integral component of the academic mission, focusing on important clinical problems, accounting for surgical advances, and providing training and mentoring for young surgeons. With constraints on healthcare resources, there is increasing pressure to generate clinical revenues at the expense of the time and effort devoted to surgical research. An approach that would assess the value of research would allow prioritization of projects. Further, alignment of high-priority research projects with clinical goals would optimize research gains and maximize the clinical enterprise. Methods The authors reviewed performance criteria applied to industrial research and modified these criteria to apply to surgical research. They reviewed several programs that align research objectives with clinical goals. Results Performance criteria were categorized along several dimensions: internal measures (quality, productivity, innovation, learning, and development), customer satisfaction, market share, and financial indices (cost and profitability). A “report card” was proposed to allow the assessment of research in an individual department or division. Conclusions The department’s business strategy can no longer be divorced from its research strategy. Alignment between research and clinical goals will maximize the department’s objectives but will create the need to modify existing hierarchical structures and reward systems. Such alignment appears to be the best way to ensure the success of surgical research in the future. PMID:10862192
Cross-layer protocol design for QoS optimization in real-time wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2010-04-01
The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open System Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally tractable and lead to simplifications for any simulated or analytical performance evaluations of the cross-layer designs.
Barnes, Priscilla A; Curtis, Amy B; Hall-Downey, Laura; Moonesinghe, Ramal
2012-01-01
This study examines whether partnership-related measures in the second version of the National Public Health Performance Standards (NPHPS) are useful in evaluating level of activity as well as identifying latent constructs that exist among local public health systems (LPHSs). In a sample of 110 LPHSs, descriptive analysis was conducted to determine frequency and percentage of 18 partnership-related NPHPS measures. Principal components factor analysis was conducted to identify unobserved characteristics that promote effective partnerships among LPHSs. Results revealed that 13 of the 18 measures were most frequently reported at the minimal-moderate level (conducted 1%-49% of the time). Coordination of personal health and social services to optimize access (74.6%) was the most frequently reported measure at minimal-moderate levels. Optimal levels (conducted >75% of the time) were reported most frequently in 2 activities: participation in emergency preparedness coalitions and local health departments ensuring service provision by working with state health departments (67% and 61% of respondents, respectively) and the least optimally reported activity was review partnership effectiveness (4% of respondents). Factor analysis revealed categories of partnership-related measures in 4 domains: resources and activities contributing to relationship building, evaluating community leadership activities, research, and state and local linkages to support public health activities. System-oriented public health assessments may have questions that serve as proxy measures to examine levels of interorganizational partnerships. Several measures from the NPHPS were useful in establishing a national baseline of minimal and optimal activity levels as well as identifying factors to enhance the delivery of the 10 essential public health services among organizations and individuals in public health systems.
MO-FG-BRA-08: Swarm Intelligence-Based Personalized Respiratory Gating in Lung SAbR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modiri, A; Sabouri, P; Sawant, A
Purpose: Respiratory gating is widely deployed as a clinical motion-management strategy in lung radiotherapy. In conventional gating, the beam is turned on during a pre-determined phase window; typically, around end-exhalation. In this work, we challenge the notion that end-exhalation is always the optimal gating phase. Specifically, we use a swarm-intelligence-based, inverse planning approach to determine the optimal respiratory phase and MU for each beam with respect to (i) the state of the anatomy at each phase and (ii) the time spent in that state, estimated from long-term monitoring of the patient’s breathing motion. Methods: In a retrospective study of five lung cancer patients, we compared the dosimetric performance of our proposed personalized gating (PG) with that of conventional end-of-exhale gating (CEG) and a previously-developed, fully 4D-optimized plan (combined with MLC tracking delivery). For each patient, respiratory phase probabilities (indicative of the time duration of the phase) were estimated over 2 minutes from lung tumor motion traces recorded previously using the Synchrony system (Accuray Inc.). Based on this information, inverse planning optimization was performed to calculate the optimal respiratory gating phase and MU for each beam. To ensure practical deliverability, each PG beam was constrained to deliver the assigned MU over a time duration comparable to that of CEG delivery. Results: Maximum OAR sparing for the five patients achieved by the PG and the 4D plans compared to CEG plans was: Esophagus Dmax [PG:57%, 4D:37%], Heart Dmax [PG:71%, 4D:87%], Spinal cord Dmax [PG:18%, 4D:68%] and Lung V13 [PG:16%, 4D:31%]. While patients spent the most time in exhalation, the PG-optimization chose end-exhale only for 28% of beams. Conclusion: Our novel gating strategy achieved significant dosimetric improvements over conventional gating, and approached the upper limit represented by fully 4D optimized planning while being significantly simpler and more clinically translatable. This work was partially supported through research funding from National Institutes of Health (R01CA169102) and Varian Medical Systems, Palo Alto, CA, USA.
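The abstract gives the planning idea but no implementation details; the following is a minimal, hypothetical Python sketch of inverse planning over per-beam (gating phase, MU) pairs. The objective, constants, and the greedy stochastic search (standing in for the paper's swarm optimizer) are all illustrative assumptions, not the authors' code.

```python
import random

N_BEAMS, N_PHASES = 9, 10  # assumptions: 9 beams, 10 respiratory phase bins
# Illustrative phase probabilities (fraction of time spent per phase, summing to 1);
# in the study these came from ~2 minutes of Synchrony tumor-motion traces.
phase_prob = [0.05, 0.06, 0.08, 0.10, 0.13, 0.18, 0.16, 0.12, 0.07, 0.05]

def cost(plan):
    """Stand-in objective: a hypothetical OAR-dose proxy per (phase, MU) pair plus a
    delivery-time penalty for gating on rarely visited phases. Purely illustrative."""
    total = 0.0
    for phase, mu in plan:
        oar_dose = abs(phase - 5) * 0.1 * mu  # pretend anatomy-dependent dose term
        total += oar_dose + 0.5 * mu / max(phase_prob[phase], 1e-6)
    return total

def random_plan():
    return [(random.randrange(N_PHASES), random.uniform(50.0, 150.0))
            for _ in range(N_BEAMS)]

def perturb(plan):
    """Move one beam to a new (phase, MU) candidate."""
    new = plan[:]
    i = random.randrange(N_BEAMS)
    _, mu = plan[i]
    new[i] = (random.randrange(N_PHASES),
              min(150.0, max(50.0, mu + random.gauss(0.0, 10.0))))
    return new

# Parallel stochastic local search: each member keeps a perturbation only if better.
population = [random_plan() for _ in range(30)]
for _ in range(200):
    next_population = []
    for p in population:
        q = perturb(p)
        next_population.append(q if cost(q) < cost(p) else p)
    population = next_population

best = min(population, key=cost)
print("gating phase chosen per beam:", [phase for phase, _ in best])
```

Note that nothing forces the optimizer toward end-exhalation here; as in the study, a beam is gated at whatever phase best trades the anatomy-dependent term against the time spent in that phase.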
Hustrini, Ni Made; Siregar, Parlindungan; Nainggolan, Ginova; Harimurti, Kuntjoro
2017-04-01
Background: Optimal hydration represents adequate total daily fluid intake to compensate for daily water losses, ensure adequate urine output to reduce the risk of urolithiasis and renal function decline, and also avoid the production of arginine vasopressin (AVP). Twenty-four-hour urine osmolality has been used to assess hydration status, but it is challenging because of the possibility of spilling urine and limitation of daily activities. This study aimed to determine the performance of afternoon urine osmolality in assessing optimal hydration status compared with 24-hour urine osmolality. Methods: A cross-sectional study was conducted on healthy employees aged 18-59 years at Universitas Indonesia Medical Faculty/Cipto Mangunkusumo Hospital, with a consecutive sampling method. The ROC curve was analyzed to obtain the optimal cut-off point and the accuracy of afternoon urine osmolality in assessing optimal hydration status. Results: Between August and September 2016, 120 subjects (73.8% female, median age 32 years) met the study criteria, with a median 24-hour urine osmolality of 463.5 (95% CI, 136-1427) mOsm/kg H2O and a median afternoon urine osmolality of 513 (95% CI, 73-1267) mOsm/kg H2O. We found a moderate correlation (r=0.59; p<0.001) between afternoon urine osmolality and 24-hour urine osmolality. Using the ROC curve, the AUC value was 0.792 (95% CI, 0.708-0.875) with a cut-off of 528 mOsm/kg H2O. To assess optimal hydration status, afternoon urine osmolality had a sensitivity of 0.7 (95% CI, 0.585-0.795) and a specificity of 0.76 (95% CI, 0.626-0.857), likelihood ratio (LR) (+) of 2.917 (95% CI, 1.74-4.889) and LR (-) of 0.395 (95% CI, 0.267-0.583). Conclusion: Afternoon urine osmolality can be used as a diagnostic tool to assess optimal hydration status in a healthy population with a cut-off of 528 mOsm/kg H2O, sensitivity of 0.7, and specificity of 0.76.
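As a quick check, the reported likelihood ratios follow directly from the sensitivity and specificity at the 528 mOsm/kg H2O cut-off:

```latex
LR^{+} = \frac{\text{sens}}{1 - \text{spec}} = \frac{0.70}{1 - 0.76} \approx 2.92,
\qquad
LR^{-} = \frac{1 - \text{sens}}{\text{spec}} = \frac{1 - 0.70}{0.76} \approx 0.39,
```

consistent with the reported values of 2.917 and 0.395.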
NASA Astrophysics Data System (ADS)
Budilova, E. V.; Terekhin, A. T.; Chepurnov, S. A.
1994-09-01
A hypothetical neural scheme is proposed that ensures efficient decision making by an animal searching for food in a maze. Only the general structure of the network is fixed; its quantitative characteristics are found by numerical optimization that simulates the process of natural selection. Selection is aimed at maximization of the expected number of descendants, which is directly related to the energy stored during the reproductive cycle. The main parameters to be optimized are the increments of the interneuronal links and the working-memory constants.
NASA Astrophysics Data System (ADS)
Oh, Hyun-Ung; Lee, Min-Kyu; Shin, Somin; Hong, Joo-Sung
2011-09-01
Spaceborne pulse tube type cryocoolers are widely used for providing cryogenic temperatures for sensitive infrared, gamma-ray and X-ray detectors. Thermal control for the compressor of the cryocooler is one of the important technologies for the cooling performance, mission life time, and jitter stability of the cooler. The thermal design of the compressor assembly proposed in this study is basically composed of a heat pipe, a radiator, and a heater. In the present work, a method for heat pipe implementation is proposed and investigated to ensure the jitter stability of the compressor under the condition that one heat pipe is not working. An optimal design of the radiator that uses ribs for effective use by minimizing the temperature gradient on the radiator and reducing its weight is introduced. The effectiveness of the thermal design of the compressor assembly is demonstrated by on-orbit thermal analysis using the correlated thermal model obtained from the thermal balance test that is performed under a space simulating environment.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
NASA Astrophysics Data System (ADS)
Zeitz, Christian; Scheidat, Tobias; Dittmann, Jana; Vielhauer, Claus; González Agulla, Elisardo; Otero Muras, Enrique; García Mateo, Carmen; Alba Castro, José L.
2008-02-01
Besides the optimization of biometric error rates, the overall security system performance with respect to intentional security attacks plays an important role for biometric-enabled authentication schemes. As traditional user authentication schemes are knowledge- and/or possession-based, we first present a methodology for a security analysis of Internet-based biometric authentication systems, enhancing known methodologies such as the CERT attack taxonomy with a more detailed view on the OSI model. Secondly, as a proof of concept, the guidelines extracted from this methodology are strictly applied to an open-source Internet-based biometric authentication system (BioWebAuth). As case studies, two exemplary attacks, based on the security leaks found, are investigated, and the attack performance is presented to show that biometric authentication schemes require not only biometric error performance tuning but also attention to security issues. Finally, some design recommendations are given in order to ensure a minimum security level.
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.
Lochmatter, Samuel; Holliger, Christof
2014-08-01
The transformation of conventional flocculent sludge to aerobic granular sludge (AGS) biologically removing carbon, nitrogen and phosphorus (COD, N, P) is still a main challenge in startup of AGS sequencing batch reactors (AGS-SBRs). On the one hand a rapid granulation is desired, on the other hand good biological nutrient removal capacities have to be maintained. So far, several operation parameters have been studied separately, which makes it difficult to compare their impacts. We investigated seven operation parameters in parallel by applying a Plackett-Burman experimental design approach with the aim to propose an optimized startup strategy. Five out of the seven tested parameters had a significant impact on the startup duration. The conditions identified to allow a rapid startup of AGS-SBRs with good nutrient removal performances were (i) alternation of high and low dissolved oxygen phases during aeration, (ii) a settling strategy avoiding too high biomass washout during the first weeks of reactor operation, (iii) adaptation of the contaminant load in the early stage of the startup in order to ensure that all soluble COD was consumed before the beginning of the aeration phase, (iv) a temperature of 20 °C, and (v) a neutral pH. Under such conditions, it took less than 30 days to produce granular sludge with high removal performances for COD, N, and P. A control run using this optimized startup strategy produced again AGS with good nutrient removal performances within four weeks and the system was stable during the additional operation period of more than 50 days. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinic. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with a NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases, respectively, achieving a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enabled performing reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
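In generic consensus form (our notation; the study's exact data-fidelity and regularization terms may differ), the multi-GPU problem and its scaled-form ADMM updates can be sketched as:

```latex
\min_{x_1,\dots,x_K,\; z}\; \sum_{k=1}^{K}
  \Bigl(\tfrac{1}{2}\,\lVert A_k x_k - b_k \rVert_2^2 + \mu\,\mathrm{TV}(x_k)\Bigr)
\quad \text{s.t.} \quad x_k = z,\;\; k = 1,\dots,K,
```

```latex
x_k^{t+1} = \arg\min_{x}\;\tfrac{1}{2}\lVert A_k x - b_k\rVert_2^2
            + \mu\,\mathrm{TV}(x) + \tfrac{\rho}{2}\lVert x - z^{t} + u_k^{t}\rVert_2^2,
\qquad
z^{t+1} = \tfrac{1}{K}\sum_{k}\bigl(x_k^{t+1} + u_k^{t}\bigr),
\qquad
u_k^{t+1} = u_k^{t} + x_k^{t+1} - z^{t+1},
```

where A_k and b_k are the system matrix and the projection subset assigned to GPU k. The x-updates run independently on each GPU; only the averaged consensus image z and the dual variables are exchanged, which is what makes the multi-GPU split efficient.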
Dorofaeff, Tavey; Bandini, Rossella M; Lipman, Jeffrey; Ballot, Daynia E; Roberts, Jason A; Parker, Suzanne L
2016-09-01
With a decreasing supply of antibiotics that are effective against the pathogens that cause sepsis, it is critical that we learn to use currently available antibiotics optimally. Pharmacokinetic studies provide an evidence base from which we can optimize antibiotic dosing. However, these studies are challenging in critically ill neonate and pediatric patients due to the small blood volumes and associated risks and burden to the patient from taking blood. We investigate whether microsampling, that is, obtaining a biologic sample of low volume (<50 μL), can improve opportunities to conduct pharmacokinetic studies. We performed a literature search to find relevant articles using the following search terms: sepsis, critically ill, severe infection, intensive care AND antibiotic, pharmacokinetic, p(a)ediatric, neonate. For microsampling, we performed a search using antibiotics AND dried blood spots OR dried plasma spots OR volumetric absorptive microsampling OR solid-phase microextraction OR capillary microsampling OR microsampling. Databases searched include Web of Knowledge, PubMed, and EMbase. Of the 32 antibiotic pharmacokinetic studies performed on critically ill neonate or pediatric patients in this review, most of the authors identified changes to the pharmacokinetic properties in their patient group and recommended either further investigations into this patient population or therapeutic drug monitoring to ensure antibiotic doses are suitable. There remain considerable gaps in knowledge regarding the pharmacokinetic properties of antibiotics in critically ill pediatric patients. Implementing microsampling in an antibiotic pharmacokinetic study is contingent on the properties of the antibiotic, the pathophysiology of the patient (and how this can affect the microsample), and the location of the patient. A validation of the sampling technique is required before implementation. Current antibiotic regimens for critically ill neonate and pediatric patients are frequently suboptimal due to a poor understanding of altered pharmacokinetic properties. An assessment of the suitability of microsampling for pharmacokinetic studies in neonate and pediatric patients is recommended before wider use. The method of sampling, as well as the method of bioanalysis, also requires validation to ensure the data obtained reflect the true result. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
Enhanced method of fast re-routing with load balancing in software-defined networks
NASA Astrophysics Data System (ADS)
Lemeshko, Oleksandr; Yeremenko, Oleksandra
2017-11-01
A two-level method of fast re-routing with load balancing in a software-defined network (SDN) is proposed. The novelty of the method consists, firstly, in the introduction of a two-level hierarchy for calculating the routing variables responsible for the formation of the primary and backup paths, and secondly, in ensuring a balanced load on the communication links of the network, which meets the requirements of the traffic engineering concept. The method provides implementation of link, node, path, and bandwidth protection schemes for fast re-routing in SDN. Separating the calculation of the primary (lower level) and backup (upper level) routes across two hierarchical levels, in accordance with the interaction prediction principle, made it possible to replace the initial large, nonlinear optimization problem with the iterative solution of linear optimization problems of half the dimension. The analysis of the proposed method confirmed its efficiency and effectiveness in terms of obtaining optimal solutions for ensuring a balanced load on communication links and implementing the required network element protection schemes for fast re-routing in SDN.
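A generic way to express the balanced-load objective of the traffic engineering concept (a textbook formulation, not necessarily the authors' exact model) is to minimize the maximum link utilization α subject to per-flow conservation:

```latex
\min_{\alpha,\, x \ge 0}\; \alpha
\quad \text{s.t.} \quad
\sum_{j:(i,j)\in E} x^{f}_{ij} - \sum_{j:(j,i)\in E} x^{f}_{ji} = d^{f}_{i}
  \;\;\forall i \in V,\, f,
\qquad
\sum_{f} x^{f}_{ij} \le \alpha\, c_{ij} \;\;\forall (i,j) \in E,
```

where c_ij are link capacities and d_i^f is the demand of flow f at node i. Solving one such linear program per hierarchy level, for primary and then backup routes, matches the iterative linear-programming scheme described above.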
Research on Heat Dissipation of Electric Vehicle Based on Safety Architecture Optimization
NASA Astrophysics Data System (ADS)
Zhou, Chao; Guo, Yajuan; Huang, Wei; Jiang, Haitao; Wu, Liwei
2017-10-01
In order to solve the problem of excessive temperature during the discharge process of lithium-ion batteries and the temperature difference between batteries, a heat dissipation system for electric vehicles based on safety architecture optimization is designed. Simulation is used to optimize the temperature field of the battery heat dissipation. A reasonable heat dissipation control scheme is formulated to achieve heat dissipation requirements. The results show that the ideal working temperature range of the lithium-ion battery is 20 °C-45 °C, and the temperature difference between the batteries should be controlled within 5 °C. A cooling fan is arranged at the original air outlet of the battery model, and the two cooling fans work in turn to realize reciprocating flow. The temperature difference is controlled within 5 °C to ensure good temperature uniformity between the batteries of the electric vehicle. Based on the above findings, it is concluded that the heat dissipation design for electric vehicle batteries is safe and effective, and is among the most effective methods to ensure battery life and vehicle safety.
Entry Guidance for the 2011 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
Mendeck, Gavin F.; Craig, Lynn E.
2011-01-01
The 2011 Mars Science Laboratory will be the first Mars mission to attempt a guided entry to safely deliver the rover to a touchdown ellipse of 25 km x 20 km. The Entry Terminal Point Controller guidance algorithm is derived from the final phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control the range flown. For application to Mars landers which must make use of the tenuous Martian atmosphere, it is critical to balance the lift of the vehicle to minimize the range error while still ensuring a safe deploy altitude. An overview of the process to generate optimized guidance settings is presented, discussing improvements made over the last nine years. Key dispersions driving deploy ellipse and altitude performance are identified. Performance sensitivities including attitude initialization error and the velocity of transition from range control to heading alignment are presented.
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. The standard GSA (SGSA) utilizes the best agents without any randomization, thus it is more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
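The abstract gives the algorithmic idea but not the code; below is a minimal, hypothetical Python sketch of a GSA loop with the stochastic-leader modification, run on the Sphere benchmark. The constants, pool-shrinking rule, and velocity-update details are our assumptions, not the published implementation.

```python
import math
import random

def sphere(x):
    """Benchmark objective (Sphere): global minimum 0 at the origin."""
    return sum(v * v for v in x)

DIM, N_AGENTS, ITERS, G0 = 5, 20, 300, 100.0
agents = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_AGENTS)]
vel = [[0.0] * DIM for _ in range(N_AGENTS)]

for t in range(ITERS):
    fit = [sphere(a) for a in agents]
    best, worst = min(fit), max(fit)
    # Standard GSA mass assignment: better (lower-cost) agents are heavier.
    m = [(worst - f) / (worst - best + 1e-12) for f in fit]
    M = [mi / (sum(m) + 1e-12) for mi in m]
    G = G0 * math.exp(-20.0 * t / ITERS)  # decaying gravitational constant
    # Stochastic-leader step (the key change per the abstract): draw k agents at
    # random as the attracting set instead of deterministically taking the k best,
    # and shrink the candidate pool over time by dropping the poorest performers.
    k = max(2, int(N_AGENTS * (1.0 - t / ITERS)))
    pool = sorted(range(N_AGENTS), key=lambda i: fit[i])[: max(2 * k, 2)]
    leaders = random.sample(pool, min(k, len(pool)))
    prev = [a[:] for a in agents]  # snapshot so all forces use the same positions
    for i in range(N_AGENTS):
        for d in range(DIM):
            force = sum(
                random.random() * G * M[j]
                * (prev[j][d] - prev[i][d])
                / (math.dist(prev[i], prev[j]) + 1e-12)
                for j in leaders if j != i
            )
            vel[i][d] = random.random() * vel[i][d] + force
            agents[i][d] += vel[i][d]

print("best Sphere fitness found:", min(sphere(a) for a in agents))
```

The randomized attracting set keeps exploration alive early on, while the shrinking pool recovers the rapid late-stage convergence of the deterministic-leader SGSA.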
DNAPrint Genomics, Inc.: better drugs for segmented markets.
Frudakis, Tony
2008-02-01
The postgenome era promises more efficient drug-development cycles and medications targeted to compatible populations, resulting in improved outcomes, fewer drug-company failures, less litigation, fewer recalls and a refurbished image of 'pharma' in the mind of the customer. DNAPrint was founded to help precipitate these changes. Since 1999, we have developed and optimized novel methods for assessing patient response proclivities as individuals but also as constituents of populations, and we have introduced a computational platform for modeling drug biology. We expect these tools will allow us to maximize the efficiency of our clinical trials and, more importantly, ensure better postmarket performance parameters. With these tools, we are now carefully engineering select drug-development projects in an attempt to illustrate the viability of a novel drug-development model - one based on the application of intelligence and new technologies for superior drug performance in segmented markets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Nam-Gyu; Grätzel, Michael; Miyasaka, Tsutomu
Solar cells employing a halide perovskite with an organic cation now show power conversion efficiency of up to 22%. But, these cells are facing issues towards commercialization, such as the need to achieve long-term stability and the development of a manufacturing method for the reproducible fabrication of high-performance devices. We propose a strategy to obtain stable and commercially viable perovskite solar cells. A reproducible manufacturing method is suggested, as well as routes to manage grain boundaries and interfacial charge transport. Electroluminescence is regarded as a metric to gauge theoretical efficiency. We highlight how optimizing the design of device architectures is important not only for achieving high efficiency but also for hysteresis-free and stable performance. Here, we argue that reliable device characterization is needed to ensure the advance of this technology towards practical applications. We believe that perovskite-based devices can be competitive with silicon solar modules, and discuss issues related to the safe management of toxic material.
NASA Astrophysics Data System (ADS)
Baker, Jameson Todd
The complex dose patterns that arise in Intensity Modulated Radiation Therapy make the typical QA of a second calculation insufficient for ensuring safe treatment of patients. Many facilities choose to deliver the treatment to film inserted in a phantom and calculate the dose delivered as an additional check of the treatment plan. Radiochromic films allow for measurements without the use of a processor in the current digital age. International Specialty Products developed Gafchromic EBT film, a radiochromic film with a useful range of 1-800 cGy. EBT film properties are fully analyzed, including studies of uniformity, spectral absorption, exposure sensitivity, energy dependence, and post-exposure density growth. Dosimetric performance on commercially available digitizers is studied, with specific attention to their shortcomings. Finally, a custom-designed scanner is built specifically for EBT film and its unique properties. Performance of the EBT digitizer is analyzed and compared against currently available scanners.
ASM-Triggered ToO Observations of 100,000 c/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
One of the most valuable unique characteristics of the PCA is the high count rates (100,000 c/s) it can record, and the resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle-1 work on Sco X-1 has shown that performing high count rate observations is very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will almost certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
NASA Astrophysics Data System (ADS)
Abdelbaki, Chérifa; Benchaib, Mohamed Mouâd; Benziada, Salim; Mahmoudi, Hacène; Goosen, Mattheus
2017-06-01
For more effective management of a water distribution network in an arid region, MapInfo GIS (8.0) software was coupled with a hydraulic model (EPANET 2.0) and applied to a case study region, Chetouane, situated in the north-west of Algeria. The area is characterized not only by water scarcity but also by poor water management practices. The results showed that a combination of GIS and modeling permits network operators to better analyze malfunctions, with a resulting more rapid response, as well as facilitating an improved understanding of the work performed on the network. The grouping of GIS and modeling as an operating tool allows managers to diagnose a network, to study solutions to problems, and to predict future situations. The latter can assist them in making informed decisions to ensure an acceptable performance level for optimal network operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, P; Labby, Z; Bayliss, R A
Purpose: To develop a plan comparison tool that will ensure robustness and deliverability through analysis of baseline and online-adaptive radiotherapy plans using similarity metrics. Methods: The ViewRay MRIdian treatment planning system allows export of a plan file that contains plan and delivery information. A software tool was developed to read and compare two plans, providing information and metrics to assess their similarity. In addition to performing direct comparisons (e.g. demographics, ROI volumes, number of segments, total beam-on time), the tool computes and presents histograms of derived metrics (e.g. step-and-shoot segment field sizes, segment average leaf gaps). Such metrics were investigated for their ability to predict whether an online-adapted plan is reasonably similar to a baseline plan for which deliverability has already been established. Results: In the realm of online-adaptive planning, comparing ROI volumes offers a sanity check to verify observations found during contouring. Beyond ROI analysis, it has been found that simply editing contours and re-optimizing to adapt treatment can produce a delivery that is substantially different from the baseline plan (e.g. number of segments increased by 31%), with no changes in optimization parameters and only minor changes in anatomy. Currently the tool can quickly identify large omissions or deviations from baseline expectations. As our online-adaptive patient population increases, we will continue to develop and refine quantitative acceptance criteria for adapted plans and relate them to historical delivery QA measurements. Conclusion: The plan comparison tool is in clinical use and reports a wide range of comparison metrics, illustrating key differences between two plans. This independent check is accomplished in seconds and can be performed in parallel to other tasks in the online-adaptive workflow. Current use prevents large planning or delivery errors from occurring, and ongoing refinements will lead to increased assurance of plan quality.
The business case for building better hospitals through evidence-based design.
Sadler, Blair L; DuBose, Jennifer; Zimring, Craig
2008-01-01
After establishing the connection between building well-designed evidence-based facilities and improved safety and quality for patients, families, and staff, this article presents the compelling business case for doing so. It demonstrates why ongoing operating savings and initial capital costs must be analyzed and describes specific steps to ensure that design innovations are implemented effectively. Hospital leaders and boards are now beginning to face a new reality: They can no longer tolerate preventable hospital-acquired conditions such as infections, falls, and injuries to staff or unnecessary intra-hospital patient transfers that can increase errors. Nor can they subject patients and families to noisy, confusing environments that increase anxiety and stress. They must effectively deploy all reasonable quality improvement techniques available. To be optimally effective, a variety of tactics must be combined and implemented in an integrated way. Hospital leadership must understand the clear connection between building well-designed healing environments and improved healthcare safety and quality for patients, families, and staff, as well as the compelling business case for doing so. Emerging pay-for-performance (P4P) methodologies that reward hospitals for quality and refuse to pay hospitals for the harm they cause (e.g., infections and falls) further strengthen this business case. When planning to build a new hospital or to renovate an existing facility, healthcare leaders should address a key question: Will the proposed project incorporate all relevant and proven evidence-based design innovations to optimize patient safety, quality, and satisfaction as well as workforce safety, satisfaction, productivity, and energy efficiency? When conducting a business case analysis for a new project, hospital leaders should consider ongoing operating savings and the market share impact of evidence-based design interventions as well as initial capital costs. They should consider taking the 10 steps recommended to ensure an optimal, cost-effective hospital environment. A return-on-investment (ROI) framework is put forward for the use of individual organizations.
Funding breakthrough therapies: A systematic review and recommendation.
Hanna, E; Toumi, M; Dussart, C; Borissov, B; Dabbous, O; Badora, K; Auquier, P
2018-03-01
Advanced therapy medicinal products (ATMPs) are innovative therapies likely associated with high prices. Payers need guidance to create a balance between ensuring patient access to breakthrough therapies and maintaining the financial sustainability of the healthcare system. The aims of this study were to identify, define, classify and compare the approaches to funding high-cost medicines proposed in the literature, to analyze their appropriateness for ATMP funding and to suggest an optimal funding model for ATMPs. Forty-eight articles suggesting new funding models for innovative high-cost therapies were identified. The models were classified into 3 groups: financial agreement, health outcomes-based agreement and healthcoin. Financial agreement encompassed: discounts, rebates, price and volume caps, price-volume agreements, loans, cost-plus price, intellectual-based payment and fund-based payment. Health outcomes-based agreements were defined as agreements between manufacturers and payers based on drug performance, and were divided into performance-based payment and coverage with evidence development. Healthcoin described a new suggested tradeable currency used to assign monetary value to incremental outcomes. With a large number of ATMPs in development, it is time for stakeholders to start thinking about new pathways and funding strategies for these innovative high-cost therapies. An "ATMP-specific fund" may constitute a reasonable solution to ensure rapid patient access to innovation without threatening the sustainability of the health care system. Copyright © 2017 Elsevier B.V. All rights reserved.
Cost effective campaigning in social networks
NASA Astrophysics Data System (ADS)
Kotnis, Bhushan; Kuri, Joy
2016-05-01
Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be non-trivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is that, for a fairly general cost structure, we show that the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
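Schematically (in our notation; the paper's exact formulation may differ), the minimum-cost variant can be written as an allocation over node degrees, with the outreach size coming from a percolation fixed point:

```latex
\min_{p \in [0,1]^{K}} \;\sum_{k} n_k\, c_k\, p_k
\quad \text{s.t.} \quad S(p) \ge S_{\min},
```

where n_k is the number of degree-k individuals, c_k the per-individual incentive cost, p_k the fraction of degree-k individuals incentivized, and S(p) the expected outreach size obtained by numerically solving the percolation fixed-point equation. The contribution highlighted above is that, for a fairly general cost structure, this program reduces to a simple linear program.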
How could Health Information Exchange Better Meet the Needs of Care Practitioners?
Kaushal, R.; Vest, J.R.
2014-01-01
Summary Background: Health information exchange (HIE) has the potential to improve the quality of healthcare by enabling providers with better access to patient information from multiple sources at the point of care. However, HIE efforts have historically been difficult to establish in the US, and the failure rates of organizations created to foster HIE have been high. Objectives: We sought to better understand how RHIO-based HIE systems were used in practice and the challenges care practitioners face using them. The objective of our study was to investigate how HIE can better meet the needs of care practitioners. Methods: We performed a multiple-case study using qualitative methods in three communities in New York State. We conducted interviews onsite and by telephone with HIE users and non-users and observed the workflows of healthcare professionals at multiple healthcare organizations participating in a local HIE effort in New York State. Results: The empirical data analysis suggests that challenges still remain in increasing provider usage, optimizing HIE implementations and connecting HIE systems across geographic regions. Important determinants of system usage and perceived value include the level of available information users experience and the fit of system use with physician workflows. Conclusions: Challenges still remain in increasing provider adoption, optimizing HIE implementations, and demonstrating value. The inability to find information reduced usage of HIE. Healthcare organizations, HIE facilitating organizations, and states can help support HIE adoption by ensuring patient information is accessible to providers through increasing patient consents, fostering broader participation, and by ensuring systems are usable. PMID:25589903
NASA Astrophysics Data System (ADS)
Koreanschi, Andreea
In order to answer the problem of 'how to reduce the aerospace industry's environmental footprint?', new morphing technologies were developed. These technologies were aimed at reducing the aircraft's fuel consumption through reduction of the wing drag. The morphing concept used in the present research consists of replacing the conventional aluminium upper surface of the wing with a flexible composite skin for morphing abilities. For the ATR-42 'Morphing wing' project, the wing models were manufactured entirely from composite materials and the morphing region was optimized for flexibility. In this project two rigid wing models and an active morphing wing model were designed, manufactured and wind tunnel tested. For the CRIAQ MDO 505 project, a full scale wing-tip equipped with two types of ailerons, conventional and morphing, was designed, optimized, manufactured, bench and wind tunnel tested. The morphing concept was applied on a real wing internal structure and incorporated aerodynamic, structural and control constraints specific to a multidisciplinary approach. Numerical optimization, aerodynamic analysis and experimental validation were performed for both the CRIAQ MDO 505 full scale wing-tip demonstrator and the ATR-42 reduced scale wing models. In order to improve the aerodynamic performances of the ATR-42 and CRIAQ MDO 505 wing airfoils, three global optimization algorithms were developed, tested and compared. The three algorithms were: the genetic algorithm, the artificial bee colony and the gradient descent. The algorithms were coupled with the two-dimensional aerodynamic solver XFoil. XFoil is known for its rapid convergence, robustness and use of the semi-empirical e^n method for determining the position of the flow transition from laminar to turbulent. Based on the performance comparison between the algorithms, the genetic algorithm was chosen for the optimization of the ATR-42 and CRIAQ MDO 505 wing airfoils. The optimization algorithm was improved during the CRIAQ MDO 505 project for convergence speed by introducing a two-step cross-over function. Structural constraints were introduced in the algorithm at each aero-structural optimization iteration, allowing a better manipulation of the algorithm and giving it more capabilities of morphing combinations. The CRIAQ MDO 505 project envisioned a morphing aileron concept for the morphing upper surface wing. For this morphing aileron concept, two optimization methods were developed. The methods used the already developed genetic algorithm and each method had a different design concept. The first method was based on the morphing upper surface concept, using actuation points to achieve the desired shape. The second method was based on the hinge rotation concept of the conventional aileron but applied at multiple nodes along the aileron camber to achieve the desired shape. Both methods were constrained by manufacturing and aerodynamic requirements. The purpose of the morphing aileron methods was to obtain an aileron shape with a smoother pressure distribution gradient during deflection than the conventional aileron. The aerodynamic optimization results were used for the structural optimization and design of the wing, particularly the flexible composite skin. Due to the structural changes performed on the initial wing-tip structure, an aeroelastic behaviour analysis, more specifically on the flutter phenomenon, was performed. The analyses were done to ensure the structural integrity of the wing-tip demonstrator during wind tunnel tests.
Three wind tunnel tests were performed for the CRIAQ MDO 505 wing-tip demonstrator at the IAR-NRC subsonic wind tunnel facility in Ottawa. The first two tests were performed for the wing-tip equipped with the conventional aileron. The purpose of these tests was to validate the control system designed for the morphing upper surface, the numerical optimization and aerodynamic analysis, and to evaluate the optimization efficiency on the boundary layer behaviour and the wing drag. The third set of wind tunnel tests was performed on the wing-tip equipped with a morphing aileron. The purpose of this test was to evaluate the performances of the morphing aileron, in conjunction with the active morphing upper surface, and their effect on the lift, drag and boundary layer behaviour. Transition data, obtained from infrared thermography, and pressure data, extracted from Kulite and pressure tap recordings, were used to validate the numerical optimization and aerodynamic performances of the wing-tip demonstrator. A set of wind tunnel tests was performed on the ATR-42 rigid wing models at the Price-Paidoussis subsonic wind tunnel at the École de technologie supérieure. The results from the pressure tap recordings were used to validate the numerical optimization. A second derivative of the pressure distribution method was applied to evaluate the transition region on the upper surface of the wing models for comparison with the numerical transition values.
Optimization of Second Fault Detection Thresholds to Maximize Mission POS
NASA Technical Reports Server (NTRS)
Anzalone, Evan
2018-01-01
In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regards to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope, causing loss of mission or, worse, loss of crew. The criteria for operation under second faults allow for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system-level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize probability of mission success and to reduce the probability of false positives (defined as cases where the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper describes an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach and performance assessment of the results are presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.
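As a rough illustration of the optimization loop (entirely hypothetical: the chromosome layout, fault model, and fitness function below are our stand-ins, not the SLS analysis), a genetic algorithm can tune a schedule of detection thresholds over flight segments, scoring each candidate by Monte Carlo simulation that rewards detecting injected faults and penalizes false positives on nominal trials:

```python
import random

N_SEGMENTS = 10  # assumption: thresholds scheduled over 10 ascent segments

def fitness(thresholds, n_trials=300):
    """Stand-in Monte Carlo score: reward detecting injected second faults,
    penalize false alarms on nominal (no-fault) trials. Purely illustrative."""
    score = 0.0
    for _ in range(n_trials):
        seg = random.randrange(N_SEGMENTS)
        if random.random() < 0.5:                    # trial with an injected fault
            magnitude = random.expovariate(1.0)      # hypothetical fault magnitude
            score += 1.0 if magnitude > thresholds[seg] else 0.0
        else:                                        # nominal trial
            noise = abs(random.gauss(0.0, 0.3))      # hypothetical estimator noise
            score -= 2.0 if noise > thresholds[seg] else 0.0  # false-positive penalty
    return score / n_trials

def crossover(a, b):
    cut = random.randrange(1, N_SEGMENTS)
    return a[:cut] + b[cut:]

def mutate(c, rate=0.2):
    return [max(0.05, g + random.gauss(0.0, 0.1)) if random.random() < rate else g
            for g in c]

population = [[random.uniform(0.1, 2.0) for _ in range(N_SEGMENTS)]
              for _ in range(40)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [mutate(crossover(*random.sample(elite, 2)))
                          for _ in range(30)]

best = max(population, key=fitness)
print("best threshold schedule:", [round(g, 2) for g in best])
```

The real assessment would replace the toy fault and noise models with dispersed trajectory simulations, but the structure — a threshold schedule as the chromosome and a probability-of-success estimate as the fitness — mirrors the approach described above.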
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
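The three cost components can be sketched for single-integrator dynamics ẋ_i = u_i as follows (a generic form; the paper's nonquadratic avoidance term is constructed via inverse optimality):

```latex
J = \int_{0}^{\infty} \Bigl(
      \underbrace{\sum_{(i,j)\in\mathcal{E}} \lVert x_i - x_j \rVert^2}_{\text{consensus}}
    + \underbrace{\sum_{i} h(x_i)}_{\text{obstacle avoidance}}
    + \underbrace{\sum_{i} u_i^{\top} R\, u_i}_{\text{control effort}}
    \Bigr)\, dt,
```

where E is the edge set of the communication graph and h grows steeply as an agent approaches an obstacle, so the resulting optimal control law trades consensus speed against obstacle clearance and control effort using only neighbors' information.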
Thermally-Constrained Fuel-Optimal ISS Maneuvers
NASA Technical Reports Server (NTRS)
Bhatt, Sagar; Svecz, Andrew; Alaniz, Abran; Jang, Jiann-Woei; Nguyen, Louis; Spanos, Pol
2015-01-01
Optimal Propellant Maneuvers (OPMs) are now being used to rotate the International Space Station (ISS) and have saved hundreds of kilograms of propellant over the last two years. The savings are achieved by commanding the ISS to follow a pre-planned attitude trajectory optimized to take advantage of environmental torques. The trajectory is obtained by solving an optimal control problem. Prior to use on orbit, OPM trajectories are screened to ensure a static sun vector (SSV) does not occur during the maneuver. The SSV is an indicator that the ISS hardware temperatures may exceed thermal limits, causing damage to the components. In this paper, thermally-constrained fuel-optimal trajectories are presented that avoid an SSV and can be used throughout the year while still reducing propellant consumption significantly.
Optimal second order sliding mode control for nonlinear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-07-01
In this paper, a chattering free optimal second order sliding mode control (OSOSMC) method is proposed to stabilize nonlinear systems affected by uncertainties. The nonlinear optimal control strategy is based on the control Lyapunov function (CLF). For ensuring robustness of the optimal controller in the presence of parametric uncertainty and external disturbances, a sliding mode control scheme is realized by combining an integral and a terminal sliding surface. The resulting second order sliding mode can effectively reduce chattering in the control input. Simulation results confirm the supremacy of the proposed optimal second order sliding mode control over some existing sliding mode controllers in controlling nonlinear systems affected by uncertainty. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
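One common construction of such a combined surface (illustrative only; the paper's exact surfaces may differ) augments an integral sliding variable with a terminal term for the tracking error e:

```latex
s = \dot{e} + \lambda_1 e + \lambda_2 \int_{0}^{t} e\, d\tau,
\qquad
\sigma = \dot{s} + \beta\, s^{\,q/p}, \quad 0 < q/p < 1,
```

so that driving σ to zero enforces a second-order sliding mode in which the switching acts on the derivative of the control; the actual control input is obtained by integration and is therefore continuous, which is what suppresses chattering while the terminal exponent q/p yields finite-time convergence of s.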
Safe Onboard Guidance and Control Under Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars James
2011-01-01
An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
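The bounding idea can be sketched with a standard risk-allocation argument (consistent with the description above, though the paper's specific bounds may be tighter):

```latex
\Pr\Bigl(\,\bigcup_{t=1}^{T}\{x_t \notin \mathcal{S}\}\Bigr)
\;\le\; \sum_{t=1}^{T} \Pr\bigl(x_t \notin \mathcal{S}\bigr)
\;\le\; \sum_{t=1}^{T} \delta_t \;\le\; \Delta,
```

so satisfying per-step risk budgets δ_t guarantees the joint probability of leaving the safe set S over the horizon stays below Δ. For Gaussian uncertainty and linear constraints, each per-step chance constraint becomes a deterministic tightened constraint on the mean trajectory, the optimization remains convex, and no multivariate probability density ever has to be evaluated.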
NASA Astrophysics Data System (ADS)
Xiao, Jie
Polymer nanocomposites have a great potential to be a dominant coating material in a wide range of applications in the automotive, aerospace, ship-making, construction, and pharmaceutical industries. However, how to realize design sustainability of this type of nanostructured materials and how to ensure the true optimality of the product quality and process performance in coating manufacturing remain as a mountaintop area. The major challenges arise from the intrinsic multiscale nature of the material-process-product system and the need to manipulate the high levels of complexity and uncertainty in design and manufacturing processes. This research centers on the development of a comprehensive multiscale computational methodology and a computer-aided tool set that can facilitate multifunctional nanocoating design and application from novel function envisioning and idea refinement, to knowledge discovery and design solution derivation, and further to performance testing in industrial applications and life cycle analysis. The principal idea is to achieve exceptional system performance through concurrent characterization and optimization of materials, product and associated manufacturing processes covering a wide range of length and time scales. Multiscale modeling and simulation techniques ranging from microscopic molecular modeling to classical continuum modeling are seamlessly coupled. The tight integration of different methods and theories at individual scales allows the prediction of macroscopic coating performance from the fundamental molecular behavior. Goal-oriented design is also pursued by integrating additional methods for bio-inspired dynamic optimization and computational task management that can be implemented in a hierarchical computing architecture. Furthermore, multiscale systems methodologies are developed to achieve the best possible material application towards sustainable manufacturing. Automotive coating manufacturing, that involves paint spay and curing, is specifically discussed in this dissertation. Nevertheless, the multiscale considerations for sustainable manufacturing, the novel concept of IPP control, and the new PPDE-based optimization method are applicable to other types of manufacturing, e.g., metal coating development through electroplating. It is demonstrated that the methodological development in this dissertation can greatly facilitate experimentalists in novel material invention and new knowledge discovery. At the same time, they can provide scientific guidance and reveal various new opportunities and effective strategies for sustainable manufacturing.
Nurses' scope of practice and the implication for quality nursing care.
Lubbe, J C Irene; Roets, Lizeth
2014-01-01
This article provides an overview of the implications for patients' health status and care needs when assessments are performed by nurses not licensed or competent to perform this task. The Waterlow scale (Judy Waterlow, The Nook, Stroke Road, Henlade, TAUNTON, TA3 5LX) scenario is used as a practice example to illustrate this case. The international nursing regulatory bodies, in South Africa called the South African Nursing Council, set the scope of practice wherein nurses are allowed to practice. Different categories of nurses are allowed to practice according to specified competencies, in alignment with their scope of practice. A retrospective quantitative study was utilized. A checklist was used to perform an audit on a random sample of 157 out of an accessible population of 849 patient files. Data were gathered in May 2012, and the analysis was done using frequencies and percentages for categorical data. Reliability and validity were ensured, and all ethical principles were adhered to. Eighty percent of risk assessments were performed by nurses not licensed or enrolled to perform this task unsupervised. Areas such as tissue malnutrition, neurological deficits, and medication were inaccurately scored, resulting in 50% of the Waterlow risk-assessment scales, as an example, being incorrectly interpreted. This has implications for quality nursing care and might put the patient and the institution at risk. Lower-category nurses and student nurses should be allowed to perform only tasks within their scope of practice for which they are licensed or enrolled. Nurses with limited formal theoretical training are not adequately prepared to perform tasks unsupervised, even in the current global nursing shortage scenario. To optimize and ensure safe and quality patient care, risk assessments should be done by a registered professional nurse, who will then coordinate the nursing care of the patient with the assistance of the lower category of nurses. © 2013 The Authors. Journal of Nursing Scholarship published by Wiley Periodicals, Inc. on behalf of Sigma Theta Tau International.
Zeev, Yael Bar; Bonevski, Billie; Twyman, Laura; Watt, Kerrianne; Atkins, Lou; Palazzi, Kerrin; Oldmeadow, Christopher; Gould, Gillian S
2017-05-01
Similar to other high-income countries, smoking rates in pregnancy can be high in specific vulnerable groups in Australia. Several clinical guidelines exist, including the 5A's (Ask, Advice, Assess, Assist, and Arrange), ABCD (Ask, Brief advice, Cessation, and Discuss), and AAR (Ask, Advice, and Refer). There is lack of data on provision of smoking cessation care (SCC) of Australian General Practitioners (GPs) and Obstetricians. A cross-sectional survey explored the provision of SCC, barriers and enablers using the Theoretical Domains Framework, and the associations between them. Two samples were invited: (1) GPs and Obstetricians from a college database (n = 5571); (2) GPs from a special interest group for Indigenous health (n = 500). Dimension reduction for the Theoretical Domains Framework was achieved with factor analysis. Logistic regression was carried out for performing all the 5A's and the AAR. Performing all of the 5A's, ABCD, and AAR "often and always" was reported by 19.9%, 15.6%, and 49.2% respectively. "Internal influences" (such as confidence in counselling) were associated with higher performance of the 5A's (Adjusted OR 2.69 (95% CI 1.5, 4.8), p < .001), whereas "External influences" (such as workplace routine) were associated with higher performance of AAR (Adjusted OR 1.7 (95% CI 1, 2.8), p = .035). Performance in providing SCC to pregnant women is low among Australian GPs and Obstetricians. Training clinicians should focus on improving internal influences such as confidence and optimism. The AAR may be easier to implement, and interventions at the service level should focus on ensuring easy, effective, and acceptable referral mechanisms are in place. Improving provision of the 5A's approach should focus on the individual level, including better training for GPs and Obstetricians, designed to improve specific "internal" barriers such as confidence in counselling and optimism. The AAR may be easier to implement in view of the higher overall performance of this approach. Interventions on a more systemic level need to ensure easy, effective, and acceptable referral mechanisms are in place. More research is needed specifically on the acceptability of the Quitline for pregnant women, both Indigenous and non-Indigenous. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.
Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility's expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection; however, their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real-world data obtained from the University of Southern California's SmartGrid and show that our algorithm achieves near-exact curtailment values with errors in the range of 10⁻⁷ to 10⁻⁵, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer-strategy pairs our algorithm performs 10³ to 10⁷ times better in terms of the curtailment errors incurred.
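To make the formulation concrete, the following is a minimal sketch of such a customer/strategy selection ILP in Python with PuLP, assuming hypothetical per-strategy curtailment estimates and a hypothetical target; the paper's exact model (its variables, constraints, and data) is not reproduced here. The absolute curtailment error is linearized with two inequalities, a standard ILP device.

```python
# Sketch of a DR customer/strategy selection ILP; curtailment values,
# target, and the at-most-one-strategy rule are illustrative assumptions.
import pulp

target = 120.0  # target curtailment (kW), hypothetical
curtail = {"c1": {"s1": 30.0, "s2": 45.0},   # curtail[c][s]: estimated kW
           "c2": {"s1": 25.0, "s2": 60.0},
           "c3": {"s1": 40.0, "s2": 55.0}}

prob = pulp.LpProblem("dr_selection", pulp.LpMinimize)
x = {(c, s): pulp.LpVariable(f"x_{c}_{s}", cat="Binary")
     for c in curtail for s in curtail[c]}
err = pulp.LpVariable("abs_error", lowBound=0)

achieved = pulp.lpSum(curtail[c][s] * x[c, s] for (c, s) in x)
prob += err                          # objective: minimize |achieved - target|
prob += achieved - target <= err     # linearization, part 1
prob += target - achieved <= err     # linearization, part 2
for c in curtail:                    # each customer runs at most one strategy
    prob += pulp.lpSum(x[c, s] for s in curtail[c]) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [(c, s) for (c, s) in x if x[c, s].value() == 1]
```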
Zhang, Da-song; Chu, Jian
2014-01-01
Based on the 6D constraints of momentum change rate (CMCR), this paper puts forward a real-time, full-balance maintenance method for a humanoid robot during high-speed movement of its 7-DOF arm. First, the total momentum formula for the robot's two arms is given, and the momentum change rate is defined as the time derivative of the total momentum. The authors also illustrate the idea of full balance maintenance and analyze the physical meaning of the 6D CMCR and its fundamental relation to full balance maintenance. Moreover, a discretization and optimization solution of the CMCR is provided with the motion constraint of the auxiliary arm's joints, and the solving algorithm is optimized. The simulation results have shown the validity and generality of the proposed method for full balance maintenance in the 6 DOFs of the robot body under 6D CMCR. The method ensures 6D dynamic balance performance and provides an ample ZMP stability margin. The resulting motion of the auxiliary arm retains large redundancy in joint space, and the angular velocities and angular accelerations of these joints lie within the predefined limits. The proposed algorithm also has good real-time performance. PMID:24883404
Technical aspects on production of fluid extract from Brosimum gaudichaudii Trécul roots
Martins, Frederico Severino; Pascoa, Henrique; de Paula, José Realino; da Conceição, Edemilson Cardoso
2015-01-01
Introduction: Despite the increased use of Brosimum gaudichaudii roots as a raw material in medicines for the treatment of vitiligo, there are no studies showing the impact of unit operations on the quality and standardization of B. gaudichaudii extract. The quality of the herbal extract is essential to ensure the safety and efficacy of the pharmaceutical product. Given its medical and commercial importance, this study aimed to evaluate the impact of the extraction method (ultrasound or percolation) on the quality of the herbal extract and to optimize the extraction of psoralen and 8-methoxypsoralen (8-MOP) from B. gaudichaudii. Materials and Methods: The extraction recovery was evaluated by high-performance liquid chromatography (C8 reverse-phase column, acetonitrile:water 45:55, flow rate 0.6 mL/min). The extraction was performed by ultrasound-assisted extraction (UAE) or percolation using a Box-Behnken design. Results: For both chemical markers (psoralen and bergapten), the optimal conditions for UAE were an extraction time of 25 min, a mean particle size of 100 μm, and an ethanol:water ratio of 55:45 (v/v). Conclusion: The extraction by percolation revealed that ethanol 55% was more efficient than ethanol 80% at extracting psoralen and bergapten. PMID:25709236
Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam
2013-01-01
Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and high resource requirements. Field Programmable Gate Arrays (FPGAs) provide a competitive alternative for hardware acceleration with tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and an associated FPGA architecture framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture, optimized by spatial parallelism, ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. The framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average per frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746
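As a point of reference for the parallelism the paper exploits, the sketch below is a plain software model of straight-line Hough voting over candidate edge pixels; the theta loop is where the proposed design applies angle-level parallelism in hardware. Image size, angle count, and quantization here are illustrative, not the paper's parameters.

```python
# Software reference model of Hough-space voting; not the FPGA design.
import numpy as np

def hough_accumulate(edges, n_theta=180):
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))   # hardware parallelizes this loop
    acc = np.zeros((2 * diag, n_theta), dtype=np.uint32)
    ys, xs = np.nonzero(edges)                # candidate edge pixels from Canny
    for k, t in enumerate(thetas):
        rho = (xs * np.cos(t) + ys * np.sin(t)).round().astype(int) + diag
        np.add.at(acc, (rho, k), 1)           # vote for (rho, theta)
    return acc
```

Peaks in the accumulator correspond to detected line parameters (rho, theta).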
NASA Technical Reports Server (NTRS)
Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.
1976-01-01
Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of the relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented, and it is shown that, provided the sound source is well defined and the flow environment is known, conventional infinite-duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.
Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations
Chavez, Brittany K.; Agarabi, Cyrus D.; Read, Erik K.; ...
2016-01-01
Formulating appropriate storage conditions for biopharmaceutical proteins is essential for ensuring their stability, and thereby their purity, potency, and safety, over their shelf-life. Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging, worst-case antibody formulation. The stability of the antibody in each buffer formulation was assessed by UV/VIS absorbance at 280 nm and 410 nm and size-exclusion high-performance liquid chromatography (SEC) to determine overall solubility, opalescence, and aggregate formation, respectively. Upon preliminary testing, acetate was eliminated as a potential storage buffer due to significant visible precipitate formation. An additional 2⁴ full factorial DoE was performed that combined the stabilizing effect of arginine with the buffering capacity of histidine. From this final DoE, an optimized formulation of 200 mM arginine, 50 mM histidine, and 100 mM NaCl at a pH of 6.5 was identified to substantially improve stability under long-term storage conditions and after multiple freeze/thaw cycles. Therefore, our data highlight the power of DoE-based formulation screening approaches even for challenging monoclonal antibody molecules.
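For readers unfamiliar with the notation, a 2⁴ full factorial design simply enumerates all 16 combinations of two levels for four factors. The sketch below generates such a run table; the factor levels shown are illustrative stand-ins, not the study's exact settings.

```python
# Enumerate a 2^4 full factorial run table; levels are hypothetical.
from itertools import product

factors = {
    "arginine_mM": (100, 200),
    "histidine_mM": (25, 50),
    "NaCl_mM": (50, 100),
    "pH": (6.0, 6.5),
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
assert len(runs) == 16
for i, run in enumerate(runs, 1):
    print(i, run)
```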
Design and Analysis of Optimal Ascent Trajectories for Stratospheric Airships
NASA Astrophysics Data System (ADS)
Mueller, Joseph Bernard
Stratospheric airships are lighter-than-air vehicles that have the potential to provide a long-duration airborne presence at altitudes of 18-22 km. Designed to operate on solar power in the calm portion of the lower stratosphere, above all regulated air traffic and cloud cover, these vehicles represent an emerging platform that resides between conventional aircraft and satellites. A particular challenge for airship operation is the planning of ascent trajectories, as the slow-moving vehicle must traverse the high-wind region of the jet stream. Due to large changes in wind speed and direction across altitude and the susceptibility of airship motion to wind, the trajectory must be carefully planned, preferably optimized, in order to ensure that the desired station be reached within acceptable performance bounds of flight time and energy consumption. This thesis develops optimal ascent trajectories for stratospheric airships, examines the structure and sensitivity of these solutions, and presents a strategy for onboard guidance. Optimal ascent trajectories are developed that utilize wind energy to achieve minimum-time and minimum-energy flights. The airship is represented by a three-dimensional point-mass model, and the equations of motion include aerodynamic lift and drag, vectored thrust, added mass effects, and accelerations due to mass flow rate, wind rates, and Earth rotation. A representative wind profile is developed based on historical meteorological data and measurements. Trajectory optimization is performed by first defining an optimal control problem with both terminal and path constraints, then using direct transcription to develop an approximate nonlinear parameter optimization problem of finite dimension. Optimal ascent trajectories are determined using SNOPT for a variety of upwind, downwind, and crosswind launch locations. Results of extensive optimization solutions illustrate definitive patterns in the ascent path for minimum-time flights across varying launch locations, and show that significant energy savings can be realized with minimum-energy flights, compared to minimum-time flights, given small increases in flight time. The performance of the optimal trajectories is then studied with respect to solar energy production during ascent, as well as the sensitivity of the solutions to small changes in drag coefficient and wind model parameters. Results of solar power model simulations indicate that solar energy is sufficient to power ascent flights, but that significant energy loss can occur for certain types of trajectories. Sensitivity to the drag and wind model is approximated through numerical simulations, showing that optimal solutions change gradually with respect to changing wind and drag parameters and providing deeper insight into the characteristics of optimal airship flights. Finally, alternative methods are developed to generate near-optimal ascent trajectories in a manner suitable for onboard implementation. The structures and characteristics of previously developed minimum-time and minimum-energy ascent trajectories are used to construct simplified trajectory models, which are efficiently solved in a smaller numerical optimization problem. Comparison of these alternative solutions to the original SNOPT solutions shows excellent agreement, suggesting the alternate formulations are an effective means of developing near-optimal solutions in an onboard setting.
Xu, Tianhua; Karanov, Boris; Shevchenko, Nikita A; Lavery, Domaniç; Liga, Gabriele; Killey, Robert I; Bayvel, Polina
2017-10-11
Nyquist-spaced transmission and digital signal processing have proved effective in maximising the spectral efficiency and reach of optical communication systems. In these systems, Kerr nonlinearity determines the performance limits and leads to spectral broadening of the signals propagating in the fibre. Although digital nonlinearity compensation has been shown to be promising for mitigating Kerr nonlinearities, the impact of spectral broadening on nonlinearity compensation has never been quantified. In this paper, the performance of multi-channel digital back-propagation (MC-DBP) for compensating fibre nonlinearities in Nyquist-spaced optical communication systems is investigated, when the effect of signal spectral broadening is considered. It is found that accounting for the spectral broadening effect is crucial for achieving the best performance of DBP in both single-channel and multi-channel communication systems, independent of the modulation formats used. For multi-channel systems, the degradation of DBP performance due to neglecting the spectral broadening effect in the compensation is more significant for outer channels. Our work also quantifies the minimum bandwidths of optical receivers and signal processing devices required to ensure the optimal compensation of deterministic nonlinear distortions.
NASA Astrophysics Data System (ADS)
Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.
2016-12-01
Autonomous Underwater Vehicles (AUVs) are capable of operating for long periods to carry out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance an underwater vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time through a large-scale, waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners and guarantees maximization of mission productivity for a single-vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the contributed model and its feasible application in real experiments.
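For orientation, the core of BBO is a migration step in which fitter habitats (candidate solutions) tend to export features to less fit ones. The sketch below shows one minimal, generic version of that step with linear rank-based rates; the paper's encoding of routes and waypoints, and its exact rate model, are not reproduced here.

```python
# Generic BBO migration step; representation and rates are simplifying
# assumptions, not the paper's planner.
import random

def bbo_migrate(population, fitness):
    """population: list of feature lists; fitness: higher is better."""
    n = len(population)
    ranked = sorted(range(n), key=lambda i: fitness[i], reverse=True)
    lam = {i: (r + 1) / n for r, i in enumerate(ranked)}  # immigration rate
    mu = {i: 1.0 - lam[i] for i in ranked}                # emigration rate
    new_pop = [sol[:] for sol in population]
    for i in range(n):
        for d in range(len(population[i])):
            if random.random() < lam[i]:                  # habitat i immigrates
                donors = [j for j in range(n) if j != i]
                weights = [mu[j] for j in donors]
                j = random.choices(donors, weights=weights, k=1)[0]
                new_pop[i][d] = population[j][d]          # copy emigrant feature
    return new_pop
```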
Casa, Douglas J.
1999-01-01
Objective: To acquaint athletic trainers with the numerous interrelated components that must be considered when assisting athletes who exercise in hot environments. Useful guidelines to maximize performance and minimize detrimental health consequences are presented. Data Sources: The databases MEDLINE and SPORT Discus were searched from 1980 to 1999 with the terms “body cooling,” “dehydration,” “exercise,” “heat illnesses,” “heat,” “fluid replacement,” “acclimatization,” “hydration,” “rehydration,” “performance,” and “intravenous,” among others. Data Synthesis: This paper provides an in-depth look at issues regarding physiologic and performance considerations related to rehydration, strategies to maximize rehydration, modes of rehydration, health consequences of exercise in the heat, heat acclimatization, body cooling techniques, and practice and competition modifications. Conclusions/Recommendations: Athletic trainers have a responsibility to ensure that athletes who exercise in hot environments are prepared to do so in an optimal manner and to act properly to avoid the potentially harmful heat illnesses that can result from exercise in the heat. PMID:16558573
Cognitive Functioning in Space Exploration Missions: A Human Requirement
NASA Technical Reports Server (NTRS)
Fiedler, Edan; Woolford, Barbara
2005-01-01
Solving cognitive issues in the exploration missions will require implementing results from both Human Behavior and Performance, and Space Human Factors Engineering. Operational and research cognitive requirements need to reflect a coordinated management approach with appropriate oversight and guidance from NASA headquarters. First, this paper will discuss one proposed management method that would combine the resources of Space Medicine and Space Human Factors Engineering at JSC, other NASA agencies, the National Space Biomedical Research Institute, Wyle Labs, and other academic or industrial partners. The proposed management is based on a Human Centered Design that advocates full acceptance of the human as a system equal to other systems. Like other systems, the human is a system with many subsystems, each of which has strengths and limitations. Second, this paper will suggest ways to inform exploration policy about what is needed for optimal cognitive functioning of the astronaut crew, as well as requirements to ensure necessary assessment and intervention strategies for the human system if human limitations are reached. Assessment strategies will include clinical evaluation and fitness-to-perform evaluations. Clinical intervention tools and procedures will be available to the astronaut and space flight physician. Cognitive performance will be supported through systematic function allocation, task design, training, and scheduling. Human factors requirements and guidelines will lead to well-designed information displays and retrieval systems that reduce crew time and errors. Means of capturing process, design, and operational requirements to ensure crew performance will be discussed. Third, this paper will describe the current plan of action, and future challenges to be resolved before a lunar or Mars expedition. The presentation will include a proposed management plan for research, involvement of various organizations, and a timetable of deliverables.
The Role of Space Medicine in Management of Risk in Spaceflight
NASA Technical Reports Server (NTRS)
Clark, Jonathan B.
2001-01-01
The purpose of Space Medicine is to ensure mission success by providing quality and comprehensive health care throughout all mission phases to optimize crew health and performance and to prevent negative long-term health consequences. Space flight presents additional hazards and associated risks to crew health, performance, and safety. With an extended human presence in space, it is expected that illness and injury will occur on orbit, which may present a significant threat to crew health and performance and to mission success. Maintaining crew health, safety, and performance and preventing illness and injury are high priorities necessary for mission success and agency goals. Space flight health care should meet the standards of practice of evidence-based clinical medicine. The function of Space Medicine is expected to meet the agency goals as stated in the 1998 NASA Strategic Plan and the priorities established by the Critical Path Roadmap Project. The Critical Path Roadmap Project is an integrated NASA cross-disciplinary strategy to assess, understand, mitigate, and manage the risks associated with long-term exposure to the space flight environment. The evidence-based approach to space medicine should be a standardized, objective process yielding expected results and establishing clinical practice standards while balancing individual risk with mission (programmatic) risk. The ability to methodically apply available knowledge and expertise to individual and mission health issues will ensure that appropriate priorities are assigned and resources are allocated. The NASA Space Medicine risk management process is a combined clinical and engineering approach. Competition for weight, power, volume, cost, and crew time must be balanced in making decisions about the care of individual crew with competing agency resources.
ERIC Educational Resources Information Center
Stevens, Christopher John; Dascombe, Ben James
2015-01-01
Sports performance testing is one of the most common and important measures used in sport science. Performance testing protocols must have high reliability to ensure any changes are not due to measurement error or inter-individual differences. High validity is also important to ensure test performance reflects true performance. Time-trial…
CF60 Concrete Composition Design and Application on Fudiankou Xijiang Super Large Bridge
NASA Astrophysics Data System (ADS)
Qiu, Yi Mei; Wen, Sen Yuan; Chen, Jun Xiang
2018-06-01
The CF60 concrete used on the Fudiankou Xijiang super large bridge of the Wuzhou City (Guangxi) ring road is a new multi-phase composite high-performance concrete. Addressing the structure and characteristics of this project, in accordance with the principle of using local materials and the requirements of the technical specifications, and taking site conditions into account, the component materials, mix proportions, and technical performance of the CF60 high-performance concrete were determined, and the main physical and mechanical performance indexes were quantified. The main factors influencing these technical indexes are analyzed, the concrete mix design parameters are adjusted accordingly, and admixtures and multi-functional composite admixtures are employed to obtain an optimal mix proportion with good workability, processability, mechanical stability, and durability; a recommended mix is verified. The application technology of CF60 high-performance concrete on the Fudiankou Xijiang bridge is explored, the quality of the concrete construction is tracked and tested, and the key technologies and control points for construction of the concrete structure are proposed. The application effect and benefit of CF60 high-performance concrete in the actual project are evaluated to ensure the engineering quality and service life of the bridge structure, providing a basis and reference for the construction of super-long-span bridge projects.
Astronomical Instrumentation Systems Quality Management Planning: AISQMP
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse
2017-06-01
The capability of small aperture astronomical instrumentation systems (AIS) to make meaningful scientific contributions has never been better. The purpose of AIS quality management planning (AISQMP) is to ensure the quality of these contributions such that they are both valid and reliable. The first step involved with AISQMP is to specify objective quality measures not just for the AIS final product, but also for the instrumentation used in its production. The next step is to set up a process to track these measures and control for any unwanted variation. The final step is continual effort applied to reducing variation and obtaining measured values near optimal theoretical performance. This paper provides an overview of AISQMP while focusing on objective quality measures applied to astronomical imaging systems.
Advanced propulsion engine assessment based on a cermet reactor
NASA Technical Reports Server (NTRS)
Parsley, Randy C.
1993-01-01
A preferred Pratt & Whitney conceptual Nuclear Thermal Rocket Engine (NTRE) has been designed based on the fundamental NASA priorities of safety, reliability, cost, and performance. The basic philosophy underlying the design of the XNR2000 is the utilization of the most reliable form of ultrahigh temperature nuclear fuel and development of a core configuration which is optimized for uniform power distribution, operational flexibility, power maneuverability, weight, and robustness. The P&W NTRE system employs a fast spectrum, cermet fueled reactor configured in an expander cycle to ensure maximum operational safety. The cermet fuel form provides retention of fuel and fission products as well as high strength. A high level of confidence is provided by benchmark analysis and independent evaluations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Building science research supports installing exterior (soil-side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. This project describes an innovative, minimally invasive foundation insulation upgrade technique on an existing home that uses hydrovac excavation technology combined with a liquid insulating foam. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings, since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent.
Naimoli, Joseph F; Frymus, Diana E; Wuliji, Tana; Franco, Lynne M; Newsome, Martha H
2014-10-02
There has been a resurgence of interest in national Community Health Worker (CHW) programs in low- and middle-income countries (LMICs). A lack of strong research evidence persists, however, about the most efficient and effective strategies to ensure optimal, sustained performance of CHWs at scale. To facilitate learning and research to address this knowledge gap, the authors developed a generic CHW logic model that proposes a theoretical causal pathway to improved performance. The logic model draws upon available research and expert knowledge on CHWs in LMICs. Construction of the model entailed a multi-stage, inductive, two-year process. It began with the planning and implementation of a structured review of the existing research on community and health system support for enhanced CHW performance. It continued with a facilitated discussion of review findings with experts during a two-day consultation. The process culminated with the authors' review of consultation-generated documentation, additional analysis, and production of multiple iterations of the model. The generic CHW logic model posits that optimal CHW performance is a function of high quality CHW programming, which is reinforced, sustained, and brought to scale by robust, high-performing health and community systems, both of which mobilize inputs and put in place processes needed to fully achieve performance objectives. Multiple contextual factors can influence CHW programming, system functioning, and CHW performance. The model is a novel contribution to current thinking about CHWs. It places CHW performance at the center of the discussion about CHW programming, recognizes the strengths and limitations of discrete, targeted programs, and is comprehensive, reflecting the current state of both scientific and tacit knowledge about support for improving CHW performance. The model is also a practical tool that offers guidance for continuous learning about what works. Despite the model's limitations and several challenges in translating the potential for learning into tangible learning, the CHW generic logic model provides a solid basis for exploring and testing a causal pathway to improved performance.
Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen
2013-02-01
This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving it. Fortunately, the objective function in the fractional program is pseudoconvex on the feasible region, which leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
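A minimal numerical sketch of the idea, under stated assumptions: the fractional objective below is a pseudoconvex negative return-to-risk ratio, and the network dynamics are approximated by Euler integration of the standard projection model x' = -x + P(x - ∇f(x)) on the probability simplex. The data, step size, and objective are illustrative; this is a simplified stand-in for, not a reproduction of, the paper's discontinuous one-layer network.

```python
# Euler-integrated projection dynamics for a pseudoconvex fractional
# portfolio objective; returns and covariance below are hypothetical.
import numpy as np

mu_r = np.array([0.08, 0.12, 0.10])               # expected returns (assumed)
cov = np.array([[0.09, 0.01, 0.02],
                [0.01, 0.16, 0.03],
                [0.02, 0.03, 0.12]])              # covariance (assumed)

def proj_simplex(v):
    """Euclidean projection onto {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def grad_f(w):
    """Gradient of f(w) = -(mu.w)/sqrt(w.Cov.w)."""
    r, s = mu_r @ w, np.sqrt(w @ cov @ w)
    return -(mu_r / s - r * (cov @ w) / s**3)

w = np.ones(3) / 3
for _ in range(5000):                             # x' = -x + P(x - grad f(x))
    w = w + 0.01 * (-w + proj_simplex(w - grad_f(w)))
print(w)                                          # near-optimal weights
```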
Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulicek, Miroslav; Haslinger, Jaroslav; Malek, Josef
2009-10-15
We study a shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to an optimal control problem in which the control variable is the shape of the domain representing the header; the state problem is represented by a generalized stationary Navier-Stokes system with nontrivial mixed boundary conditions. In this paper we prove the existence of solutions both to the generalized Navier-Stokes system and to the shape optimization problem.
Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen
2018-01-01
The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance rank. Optimal features were selected according to a modified entropy criterion (EC) and a Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of the hardware evaluation verified the feasibility of our acquisition systems and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions by three sEMG-based paradigms, with average times of 46.02, 49.06 and 48.08 s, respectively. These experiments validate the feasibility of the proposed real-time wearable HCI system and algorithms, providing a potential assistive-device interface for persons with disabilities. PMID:29543737
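As one concrete reading of the FD criterion, the sketch below ranks features by a per-feature Fisher score (between-class scatter over within-class scatter); the paper's exact FD and modified-entropy criteria may differ in detail, so treat this as an assumption-laden illustration.

```python
# Per-feature Fisher-discrimination scores for feature ranking;
# a generic formulation, not necessarily the paper's exact FD criterion.
import numpy as np

def fisher_scores(X, y):
    """X: (n_samples, n_features); y: class labels. Higher is better."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

# ranking = np.argsort(fisher_scores(X, y))[::-1]  # best features first
```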
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, Jaroslav, E-mail: hasling@karlin.mff.cuni.cz; Stebel, Jan, E-mail: stebel@math.cas.cz
2011-04-15
We study the shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to the optimal control problem in which the control variable is the shape of the domain representing the header, the state problem is represented by the generalized Navier-Stokes system with nontrivial boundary conditions. This paper deals with numerical aspects of the problem.
Nutrition Issues for Space Exploration
NASA Technical Reports Server (NTRS)
Smith, Scott; Zwart, Sara R.
2006-01-01
Optimal nutrition will be critical for crew members who embark on space exploration missions. Nutritional assessment provides an opportunity to ensure that crew members begin their missions in optimal nutritional status, to document changes in status during a mission, and to assess changes after landing to facilitate return of the crew to their normal status as soon as possible after landing. Nutritional assessment provides the basis for intervention, if it is necessary, to maintain optimal status throughout the mission. We report here our nutritional assessment of the US astronauts who participated in the first twelve International Space Station missions.
NASA Astrophysics Data System (ADS)
Sameen, Maher Ibrahim; Pradhan, Biswajeet
2016-06-01
In this study, we propose a novel built-up spectral index developed using the particle swarm optimization (PSO) technique for Worldview-2 images. PSO was used to select the relevant bands from the eight spectral bands of a Worldview-2 image, which were then used for index development. Multiobjective optimization was used to minimize the number of selected spectral bands and to maximize the classification accuracy. The results showed that the most important and relevant spectral bands among the eight bands for built-up area extraction are band 4 (yellow) and band 7 (NIR1). Using those relevant spectral bands, the final spectral index was formulated by developing a normalized band ratio. Validation of the classification result showed that our novel spectral index performs well compared to the existing WV-BI index. The accuracy assessment showed that the proposed spectral index could extract built-up areas from a Worldview-2 image with an area under the curve (AUC) of 0.76, indicating the effectiveness of the developed spectral index. Further improvement could be achieved by using several datasets during the index development process to ensure the transferability of the index to other datasets and study areas.
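The abstract identifies the index as a normalized ratio of the two PSO-selected bands (band 4, yellow; band 7, NIR1). A sketch of that computation follows; the sign convention and the example threshold are assumptions, since the abstract does not give the exact formula.

```python
# Normalized ratio of the two selected Worldview-2 bands; band order
# in the ratio and the example threshold are assumptions.
import numpy as np

def builtup_index(yellow, nir1, eps=1e-9):
    yellow = yellow.astype(float)
    nir1 = nir1.astype(float)
    return (yellow - nir1) / (yellow + nir1 + eps)

# idx = builtup_index(band4, band7)      # band arrays from the scene
# builtup_mask = idx > 0.1               # hypothetical threshold
```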
NASA Astrophysics Data System (ADS)
Lim, Jun-Wei; Beh, Hoe-Guan; Ching, Dennis Ling Chuan; Ho, Yeek-Chia; Baloo, Lavania; Bashir, Mohammed J. K.; Wee, Seng-Kew
2017-11-01
The present study provides an insight into the optimization of a glucose and sucrose mixture to enhance the denitrification process. Central Composite Design was applied to design the batch experiments, with the factors of glucose and sucrose each measured as a carbon-to-nitrogen (C:N) ratio and the response being the percentage removal of nitrate-nitrogen (NO3⁻-N). Results showed that the polynomial regression model of NO3⁻-N removal was successfully derived and is capable of describing the interactive relationships of the glucose and sucrose mixture that influence the denitrification process. Furthermore, the presence of glucose was noticed to have a more consequential effect on NO3⁻-N removal than sucrose. The optimum carbon source mixture to achieve complete removal of NO3⁻-N required less glucose (C:N ratio of 1.0:1.0) than sucrose (C:N ratio of 2.4:1.0). At the optimum glucose and sucrose mixture, the activated sludge showed faster acclimation towards the glucose used to perform the denitrification process. Later, upon acclimation with sucrose, the glucose uptake rate by the activated sludge abated. Therefore, it is vital to optimize the added carbon source mixture to ensure the rapid and complete removal of NO3⁻-N via the denitrification process.
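The polynomial regression model implied by a two-factor Central Composite Design is a full quadratic in the two C:N ratios; a least-squares fit of that surface is sketched below. The design points and removal values are fabricated for illustration only (chosen so the optimum sits near the reported glucose 1.0 and sucrose 2.4 ratios); the study's measured data are not reproduced.

```python
# Fit removal ~ quadratic surface in (glucose C:N, sucrose C:N);
# all data points below are illustrative, not the study's results.
import numpy as np

X = np.array([[0.5, 0.5], [0.5, 2.4], [1.0, 1.0], [1.0, 2.4],
              [1.5, 1.5], [2.0, 0.5], [2.0, 2.4], [2.4, 1.0]])
y = np.array([55.0, 80.0, 90.0, 100.0, 96.0, 85.0, 97.0, 92.0])

g, s = X[:, 0], X[:, 1]
# Model: b0 + b1*g + b2*s + b12*g*s + b11*g^2 + b22*s^2
A = np.column_stack([np.ones_like(g), g, s, g * s, g**2, s**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(glucose_cn, sucrose_cn):
    v = np.array([1.0, glucose_cn, sucrose_cn,
                  glucose_cn * sucrose_cn, glucose_cn**2, sucrose_cn**2])
    return coef @ v
```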
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals. The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
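For intuition, the independence-case Lancaster procedure combines gene-level p-values by summing inverse chi-square transforms with degree-of-freedom weights, recovering Fisher's method when every weight is 2; the correlated version used in the paper additionally adjusts for between-gene correlation, which this sketch omits. The weights below are illustrative.

```python
# Lancaster combination of independent p-values (Fisher = all weights 2);
# the paper's correlated variant is not implemented here.
import numpy as np
from scipy import stats

def lancaster(pvals, weights):
    pvals, weights = np.asarray(pvals), np.asarray(weights, dtype=float)
    t = np.sum(stats.chi2.isf(pvals, df=weights))  # upper-tail inverse transform
    return stats.chi2.sf(t, df=weights.sum())      # combined p-value

p_genes = [0.01, 0.20, 0.03, 0.50]
print(lancaster(p_genes, [2, 2, 2, 2]))            # Fisher's method
print(lancaster(p_genes, [4, 1, 3, 2]))            # weighted, e.g., by gene size
```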
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search, since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
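To illustrate the poll-and-refine structure described above, here is a minimal coordinate-search sketch on a toy surrogate objective over beam angles; the paper's framework additionally uses a search step ordered by beam's-eye-view dose scores, which is not modeled here, and the step sizes are arbitrary.

```python
# Minimal pattern search (poll step + mesh refinement) on a toy
# beam-angle objective; the surrogate objective and parameters are assumptions.
import numpy as np

def pattern_search(f, x0, step=8.0, tol=0.5, max_iter=200):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):              # poll the mesh neighbors
            for s in (+step, -step):
                cand = x.copy()
                cand[i] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0                      # refine the mesh
            if step < tol:
                break
    return x, fx

angles0 = [0.0, 72.0, 144.0, 216.0, 288.0]   # 5 equispaced beams (degrees)
best, val = pattern_search(lambda a: np.sum(np.cos(np.deg2rad(a)) ** 2), angles0)
```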
Suzuki, Tasuma; Tanaka, Ryohei; Tahara, Marina; Isamu, Yuya; Niinae, Masakazu; Lin, Lin; Wang, Jingbo; Luh, Jeanne; Coronell, Orlando
2016-09-01
While it is known that the performance of reverse osmosis membranes is dependent on their physicochemical properties, the existing literature studying membranes used in treatment facilities generally focuses on foulant layers or performance changes due to fouling, not on the performance and physicochemical changes that occur to the membranes themselves. In this study, the performance and physicochemical properties of a polyamide reverse osmosis membrane used for three years in a seawater desalination plant were compared to those of a corresponding unused membrane. The relationship between performance changes during long-term use and changes in physicochemical properties was evaluated. The results showed that membrane performance deterioration (i.e., reduced water flux, reduced contaminant rejection, and increased fouling propensity) occurred as a result of membrane use in the desalination facility, and that the main physicochemical changes responsible for performance deterioration were reduction in PVA coating coverage and bromine uptake by polyamide. The latter was likely promoted by oxidant residual in the membrane feed water. Our findings indicate that the optimization of membrane materials and processes towards maximizing the stability of the PVA coating and ensuring complete removal of oxidants in feed waters would minimize membrane performance deterioration in water purification facilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to include large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space.
Understanding London's Water Supply Tradeoffs When Scheduling Interventions Under Deep Uncertainty
NASA Astrophysics Data System (ADS)
Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.
2015-12-01
Water supply planning in many major world cities faces several challenges associated with but not limited to climate change, population growth and insufficient land availability for infrastructure development. Long-term plans to maintain supply-demand balance and ecosystem services require careful consideration of uncertainties associated with future conditions. The current approach for London's water supply planning utilizes least cost optimization of future intervention schedules with limited uncertainty consideration. Recently, the focus of the long-term plans has shifted from solely least cost performance to robustness and resilience of the system. Identifying robust scheduling of interventions requires optimizing over a statistically representative sample of stochastic inputs which may be computationally difficult to achieve. In this study we optimize schedules using an ensemble of plausible scenarios and assess how manipulating that ensemble influences the different Pareto-approximate intervention schedules. We investigate how a major stress event's location in time as well as the optimization problem formulation influence the Pareto-approximate schedules. A bootstrapping method that respects the non-stationary trend of climate change scenarios and ensures the even distribution of the major stress event in the scenario ensemble is proposed. Different bootstrapped hydrological scenario ensembles are assessed using many-objective scenario optimization of London's future water supply and demand intervention scheduling. However, such a "fixed" scheduling of interventions approach does not aim to embed flexibility or adapt effectively as the future unfolds. Alternatively, making decisions based on the observations of occurred conditions could help planners who prefer adaptive planning. We will show how rules to guide the implementation of interventions based on observations may result in more flexible strategies.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of the imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with a further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, Emerson
2018-01-01
One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect on trajectory correction maneuver (TCM) design needed to correct for navigation errors. This approach differs from traditional covariance-analysis-based methods and makes no assumptions with regard to the propagation of the state dynamics, which allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine the necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must dispose into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal-maneuver state, vehicle constraints, and an a priori estimate of the state error covariance, and determines the optimal nominal disposal target. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions, resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal-maneuver states to determine the success of disposal for the achieved trajectory states provided; this is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization, so as to demonstrate the system's capability to meet performance constraints. Further discussion of the implementation of the disposal analysis will also be provided.
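The bisection idea mentioned for the disposal tool can be stated generically: find the largest dispersion scale for which a feasibility check (no lunar impact, Earth-escape C3 achieved) still passes, assuming feasibility is monotone in the scale. The sketch below shows only that skeleton; the feasibility-check hook is hypothetical.

```python
# Generic bisection for the maximum feasible dispersion scale;
# is_feasible is a hypothetical stand-in for the disposal check.
def max_feasible_scale(is_feasible, lo=0.0, hi=10.0, tol=1e-3):
    """Assumes is_feasible(lo) holds and feasibility is monotone in scale."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):   # e.g., no lunar impact and C3 > 0 at this scale
            lo = mid
        else:
            hi = mid
    return lo

# scale = max_feasible_scale(lambda s: run_disposal_check(s))  # hypothetical hook
```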
[Intraprofessional communication during shift change].
Martín Pérez, Sonsoles; Vázquez Calatayud, Mónica; Lizarraga Ursúa, Yolanta; Oroviogoicoechea Ortega, Cristina
2013-05-01
Effective communication between professionals is crucial to ensure patient safety. Objectives: 1) explore the intraprofessional communication process during the nurse shift change; 2) identify improvement strategies to facilitate an optimal communication process. An exploratory study was conducted from January to May 2011 in an intermediate care unit. Sixteen structured observations of the communication process were carried out, along with 4 semistructured interviews and 16 anonymous surveys (designed on the basis of the evidence, the interviews, and the observations) of the nurses who agreed to participate in the study. Strengths: a complete process and the usefulness of the computer record. Weaknesses: lack of a common structure, repetition and omission of information, numerous interruptions during the process, and noise. Of the nurses, 68.75% said that part of the transmitted information was irrelevant and too long. All of them perceived the need for changes in the existing process. Improvement strategies identified included the development of a guide based on the SBAR mnemonic, adapted to the structure of the software, and a change of location for the transmission of information. Effective intraprofessional communication is needed to ensure patient safety; the transmission of information during the shift change should follow a systematic process in a quiet place without interruptions.
Real-time information management environment (RIME)
NASA Astrophysics Data System (ADS)
DeCleene, Brian T.; Griffin, Sean; Matchett, Garry; Niejadlik, Richard
2000-08-01
Whereas data mining and exploitation improve the quality and quantity of information available to the user, there remains a mission requirement to assist the end-user in managing the access to this information and ensuring that the appropriate information is delivered to the right user in time to make decisions and take action. This paper discusses TASC's federated architecture to next- generation information management, contrasts the approach against emerging technologies, and quantifies the performance gains. This architecture and implementation, known as Real-time Information Management Environment (RIME), is based on two key concepts: information utility and content-based channelization. The introduction of utility allows users to express the importance and delivery requirements of their information needs in the context of their mission. Rather than competing for resources on a first-come/first-served basis, the infrastructure employs these utility functions to dynamically react to unanticipated loading by optimizing the delivered information utility. Furthermore, commander's resource policies shape these functions to ensure that resources are allocated according to military doctrine. Using information about the desired content, channelization identifies opportunities to aggregate users onto shared channels reducing redundant transmissions. Hence, channelization increases the information throughput of the system and balances sender/receiver processing load.
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of the problem of calculating time-average characteristics of optimal control sets for a nuclear reactor. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning restricts the admissible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, because the level of xenon poisoning is limited. There is thus a problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates are plotted as functions of the xenon limitation. The boundaries of the interval of averaging are thereby defined more precisely.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
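To make the structure of such a stochastic inversion concrete, the following is a minimal (mu + lambda) evolution strategy in the same spirit. It is not the authors' HEOA (which additionally refines candidates with locally weighted linear regression), and the forward model is a stand-in stub rather than a Fraunhofer/Mie calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 10.0, 200)          # scattering angles (arbitrary units)

def forward(params):
    # Stand-in forward model. In the real problem this would be the
    # Fraunhofer/Mie-computed angular intensity for a PSD with the given
    # parameters; here it is just a smooth two-parameter stub.
    mu, sigma = params
    return np.exp(-((theta - mu) ** 2) / (2.0 * sigma ** 2))

measured = forward((4.0, 1.5))               # synthetic "measurement"

def objective(p):
    return np.sum((forward(p) - measured) ** 2)

# Plain (mu + lambda) evolution strategy. The published HEOA additionally
# refines candidates with locally weighted linear regression (omitted here).
n_parents, n_children = 5, 20
pop = rng.uniform([0.1, 0.1], [10.0, 5.0], size=(n_parents, 2))
for generation in range(200):
    parents = pop[rng.integers(n_parents, size=n_children)]
    children = parents + rng.normal(0.0, 0.2, size=(n_children, 2))
    union = np.vstack([pop, children])
    union = union[np.argsort([objective(p) for p in union])]
    pop = union[:n_parents]                  # elitist survivor selection
print("best parameters:", pop[0], "residual:", objective(pop[0]))
```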
Tan, Thiam-Chye; Tan, Kim-Teng; Tee, John Cs
2007-09-01
The delivery of optimal and safe medical care is critical in healthcare. The traditional "See one, do one and teach one" approach to residency training is no longer acceptable. In the past, there was no structured residency training programme in our hospital, and there were several cases of organ injuries from surgeries performed by the residents. In 2005, we conducted a pilot study to organise a structured teaching, education, surgical accreditation and assessment (TESA) residency programme for 15 residents in the Division of Obstetrics and Gynaecology, KK Women's and Children's Hospital. We performed a written questionnaire survey of the residents on the new programme and of patients' expectations (n = 2926) as subjective outcomes over the 1-year follow-up. We also studied the complication rates of all minor and major surgeries performed by the residents in 2004 and 2005 as an objective outcome. All the residents (n = 15) surveyed supported the TESA programme. Patients' expectations improved significantly from 71% in 2004 (n = 1559) to 83% in 2005 (n = 1367) (P = 0.03). There were 10,755 surgeries in 2004 and 10,558 in 2005 performed by our residents, with 6 cases (5.6 per 10,000) of organ injuries in 2004 compared to 3 cases (2.8 per 10,000) in 2005; this reduction was not statistically significant. The TESA residency programme in our hospital has had an impact on the delivery of optimal and safe medical care while ensuring the training of residents to be competent specialists.
Cooperative and Integrated Vehicle and Intersection Control for Energy Efficiency (CIVIC-E²)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yunfei; Seliman, Salaheldeen M. S.; Wang, Enshu
Recent advances in connected vehicle technologies enable vehicles and signal controllers to cooperate and improve traffic management at intersections. This paper explores the opportunity for cooperative and integrated vehicle and intersection control for energy efficiency (CIVIC-E²) to contribute to a more sustainable transportation system. We propose a two-level approach that jointly optimizes the traffic signal timing and vehicles' approach speed, with the objective being to minimize total energy consumption for all vehicles passing through an isolated intersection. More specifically, at the intersection level, a dynamic programming algorithm is designed to find the optimal signal timing by explicitly considering the arrival time and energy profile of each vehicle. At the vehicle level, a model predictive control strategy is adopted to ensure that vehicles pass through the intersection in a timely fashion. Our simulation study has shown that the proposed CIVIC-E² system can significantly improve intersection performance under various traffic conditions. Compared with conventional fixed-time and actuated signal control strategies, the proposed algorithm can reduce energy consumption and queue length by up to 31% and 95%, respectively.
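The intersection-level idea, dynamic programming over signal timing given known vehicle arrivals, can be sketched in a few lines. The toy below is an editorial illustration with a crude energy proxy (vehicles stopped on red plus a phase-switch penalty), not the CIVIC-E² formulation; all arrival counts are invented.

```python
# Editorial toy of the intersection-level DP: pick which movement gets green
# in each time slot to minimize an energy proxy (vehicles stopped on red plus
# a penalty for switching phases). Arrival counts are invented.
arrivals_NS = [3, 1, 0, 4, 2, 0]
arrivals_EW = [0, 2, 3, 0, 1, 3]
SWITCH_COST = 2.0
T = len(arrivals_NS)

# cost[t][p] = minimal cost of slots t..T-1 when phase p (0=NS, 1=EW) is
# active entering slot t; choice[t][p] = the phase to select at slot t.
cost = [[0.0, 0.0] for _ in range(T + 1)]
choice = [[0, 0] for _ in range(T)]
for t in range(T - 1, -1, -1):
    for prev in (0, 1):
        best, arg = float("inf"), 0
        for phase in (0, 1):
            stopped = arrivals_EW[t] if phase == 0 else arrivals_NS[t]
            c = stopped + (SWITCH_COST if phase != prev else 0.0) + cost[t + 1][phase]
            if c < best:
                best, arg = c, phase
        cost[t][prev], choice[t][prev] = best, arg

phase, plan = 0, []
for t in range(T):
    phase = choice[t][phase]
    plan.append("NS" if phase == 0 else "EW")
print("signal plan:", plan, "total cost:", cost[0][0])
```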
Understanding and Taking Control of Surgical Learning Curves.
Gofton, Wade T; Papp, Steven R; Gofton, Tyson; Beaulé, Paul E
2016-01-01
As surgical techniques continue to evolve, surgeons will have to integrate new skills into their practice. A learning curve is associated with the integration of any new procedure; therefore, it is important for surgeons who are incorporating a new technique into their practice to understand what the reported learning curve might mean for them and their patients. A learning curve should not be perceived as negative because it can indicate progress; however, surgeons need to understand how to optimize the learning curve to ensure progress with minimal patient risk. It is essential for surgeons who are implementing new procedures or skills to define potential learning curves, examine how a reported learning curve may relate to an individual surgeon's in-practice learning and performance, and suggest methods in which an individual surgeon can modify his or her specific learning curve in order to optimize surgical outcomes and patient safety. A defined personal learning contract may be a practical method for surgeons to proactively manage their individual learning curve and provide evidence of their efforts to safely improve surgical practice.
A distributed approach to the OPF problem
NASA Astrophysics Data System (ADS)
Erseghe, Tomaso
2015-12-01
This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence, and a certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, which is mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires less local computational effort.
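The key twist relative to plain ADMM, constantly increasing penalty parameters, is easy to demonstrate on a toy consensus problem. This is a hedged sketch of the general mechanism, not of the paper's OPF solver; note that with scaled dual variables the multipliers must be rescaled whenever the penalty grows.

```python
# Toy consensus problem: f1(x) = (x - 1)^2 and f2(x) = 2 (x + 2)^2 must agree
# on a common x; the optimum of f1 + f2 is x = -1. Scaled-form ADMM with a
# growing penalty rho, mirroring the paper's key distinction (the fixed-rho
# convergence certificate is given up in exchange).
def local_step(a, b, z, u, rho):
    # closed-form argmin of a*(x - b)^2 + (rho/2)*(x - z + u)^2
    return (2.0 * a * b + rho * (z - u)) / (2.0 * a + rho)

rho, growth = 1.0, 1.05
x1 = x2 = z = u1 = u2 = 0.0
for _ in range(100):
    x1 = local_step(1.0, 1.0, z, u1, rho)    # local solves (done in parallel
    x2 = local_step(2.0, -2.0, z, u2, rho)   # by the agents in a real grid)
    z = 0.5 * (x1 + u1 + x2 + u2)            # consensus (averaging) step
    u1 += x1 - z                             # scaled dual updates
    u2 += x2 - z
    rho *= growth                            # penalty keeps increasing
    u1 /= growth                             # keep the unscaled multiplier
    u2 /= growth                             # rho*u continuous across the update
print(round(x1, 4), round(x2, 4), round(z, 4))   # all approach -1.0
```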
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to get robust and precise matching costs, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its submodular property ensures subproblem optimality, which is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding the local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.
Lim, Jun Young; Kim, Namhyun; Park, Jong-Chul; Yoo, Sun K; Shin, Dong Ah; Shim, Kyu-Won
2017-09-01
Cranioplasty for repairing skull defects carries the risk of a number of complications. Various materials are used, including autologous bone grafts, metallic materials, and non-metallic materials, each of which has advantages and disadvantages. When the use of autologous bone is not feasible, the artificial materials also have constraints in cases of complex anatomy and/or irregular defects. This study used metal 3D-printing technology to overcome these existing drawbacks and analyzed the clinical and mechanical performance requirements. To find an optimal structure that satisfied the structural and mechanical stability requirements, we evaluated biomechanical stability using finite element analysis (FEA) and mechanical testing. To ensure clinical applicability, the model was subjected to histological evaluation: each specimen was implanted in the femur of a rabbit and evaluated using histological measurements and a push-out test. We believe that our data will provide the basis for future applications of a variety of unit structures and further clinical trials and research, as well as direction for the study of other patient-specific implants.
Löck, Steffen; Roth, Klaus; Skripcak, Tomas; Worbs, Mario; Helmbrecht, Stephan; Jakobi, Annika; Just, Uwe; Krause, Mechthild; Baumann, Michael; Enghardt, Wolfgang; Lühr, Armin
2015-09-01
To guarantee equal access to optimal radiotherapy, a concept of patient assignment to photon or particle radiotherapy using remote treatment plan exchange and comparison - ReCompare - was proposed. We demonstrate the implementation of this concept and present its clinical applicability. The ReCompare concept was implemented using a client-server based software solution. A clinical workflow for the remote treatment plan exchange and comparison was defined. The steps required by the user and performed by the software for a complete plan transfer were described, and an additional module for dose-response modeling was added. The ReCompare software was successfully tested in cooperation with three external partner clinics and met all required specifications. It was compatible with several standard treatment planning systems, ensured patient data protection, and integrated into the clinical workflow. The ReCompare software can be applied to support non-particle radiotherapy institutions with the patient-specific treatment decision on the optimal irradiation modality by remote treatment plan exchange and comparison. Copyright © 2015. Published by Elsevier GmbH.
Flexible resources for quantum metrology
NASA Astrophysics Data System (ADS)
Friis, Nicolai; Orsucci, Davide; Skotiniotis, Michalis; Sekatski, Pavel; Dunjko, Vedran; Briegel, Hans J.; Dür, Wolfgang
2017-06-01
Quantum metrology offers a quadratic advantage over classical approaches to parameter estimation problems by utilising entanglement and nonclassicality. However, the hurdle of actually implementing the necessary quantum probe states and measurements, which vary drastically for different metrological scenarios, is usually not taken into account. We show that for a wide range of tasks in metrology, 2D cluster states (a particular family of states useful for measurement-based quantum computation) can serve as flexible resources that allow one to efficiently prepare any required state for sensing, and perform appropriate (entangled) measurements using only single qubit operations. Crucially, the overhead in the number of qubits is less than quadratic, thus preserving the quantum scaling advantage. This is ensured by using a compression to a logarithmically sized space that contains all relevant information for sensing. We specifically demonstrate how our method can be used to obtain optimal scaling for phase and frequency estimation in local estimation problems, as well as for the Bayesian equivalents with Gaussian priors of varying widths. Furthermore, we show that in the paradigmatic case of local phase estimation 1D cluster states are sufficient for optimal state preparation and measurement.
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.
1994-01-01
This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires over 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN, which provides a unified environment for structural and control design.
Accelerating atomic structure search with cluster regularization
NASA Astrophysics Data System (ADS)
Sørensen, K. H.; Jørgensen, M. S.; Bruix, A.; Hammer, B.
2018-06-01
We present a method for accelerating the global structure optimization of atomic compounds. The method is demonstrated to speed up the finding of the anatase TiO2(001)-(1 × 4) surface reconstruction within a density functional tight-binding theory framework using an evolutionary algorithm. As a key element of the method, we use unsupervised machine learning techniques to categorize atoms present in a diverse set of partially disordered surface structures into clusters of atoms having similar local atomic environments. Analysis of more than 1000 different structures shows that the total energy of the structures correlates with the summed distances of the atomic environments to their respective cluster centers in feature space, where the sum runs over all atoms in each structure. Our method is formulated as a gradient based minimization of this summed cluster distance for a given structure and alternates with a standard gradient based energy minimization. While the latter minimization ensures local relaxation within a given energy basin, the former enables escapes from meta-stable basins and hence increases the overall performance of the global optimization.
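The quantity driving the method, the summed distance of atomic environments to their cluster centers, is straightforward to compute once per-atom descriptors exist. A minimal sketch, using random stand-in descriptors and scikit-learn's KMeans in place of the paper's atomic fingerprints:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in local-environment descriptors for atoms pooled from many partially
# disordered structures; the paper uses physical fingerprints instead.
descriptors = rng.normal(size=(1000, 8))
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(descriptors)

def summed_cluster_distance(structure_descriptors):
    """Sum over a structure's atoms of the feature-space distance to the
    nearest cluster center -- the quantity minimized (alternating with an
    ordinary energy minimization) to escape meta-stable basins."""
    d = kmeans.transform(structure_descriptors)   # distances to all centers
    return d.min(axis=1).sum()

candidate = rng.normal(size=(40, 8))              # one candidate structure
print("cluster-regularization loss:", summed_cluster_distance(candidate))
```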
Design of off-statistics axial-flow fans by means of vortex law optimization
NASA Astrophysics Data System (ADS)
Lazari, Andrea; Cattanei, Andrea
2014-12-01
Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified and suitable ranges of variation are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. These new correlations help size a fan of given characteristics as well as study the feasibility of a given design.
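Structurally, the design step is a small constrained optimization: minimize the outlet swirl kinetic energy over the parameters of a vortex law subject to the duty-point work. The sketch below assumes a hypothetical three-parameter law c_u(r) = a + b·r + c/r and a single Euler-work equality constraint; the paper's actual parameterization and aerodynamic constraints differ.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

# Hypothetical three-parameter vortex law c_u(r) = a + b*r + c/r on the span;
# minimize the swirl kinetic energy leaving the rotor subject to delivering a
# prescribed mass-averaged Euler work U*c_u = omega*r*c_u. Purely a structural
# sketch: the paper's parameterization and constraints are different.
r = np.linspace(0.4, 1.0, 100)        # nondimensional radius, hub to tip
omega, target_work = 10.0, 6.0        # rotor speed and duty-point work

def c_u(p):
    a, b, c = p
    return a + b * r + c / r

def outlet_ke(p):
    return trapezoid(c_u(p) ** 2 * r, r)          # mass-flux-weighted swirl KE

def work_residual(p):
    mean_work = trapezoid(omega * r * c_u(p) * r, r) / trapezoid(r, r)
    return mean_work - target_work                # equality constraint = 0

res = minimize(outlet_ke, x0=[1.0, 0.0, 0.0],
               constraints=[{"type": "eq", "fun": work_residual}])
print("optimal (a, b, c):", res.x, " outlet swirl KE:", res.fun)
```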
Eco-friendly copper recovery process from waste printed circuit boards using Fe³⁺/Fe²⁺ redox system.
Fogarasi, Szabolcs; Imre-Lucaci, Florica; Egedy, Attila; Imre-Lucaci, Árpád; Ilea, Petru
2015-06-01
The present study aimed at developing an original and environmentally friendly process for the recovery of copper from waste printed circuit boards (WPCBs) by chemical dissolution with Fe³⁺ combined with the simultaneous electrowinning of copper and oxidant regeneration. The recovery of copper was achieved in an original set-up consisting of a three chamber electrochemical reactor (ER) connected in series with a chemical reactor (CR) equipped with a perforated rotating drum. Several experiments were performed in order to identify the optimal flow rate for the dissolution of copper in the CR and to ensure the lowest energy consumption for copper electrodeposition in the ER. The optimal hydrodynamic conditions were provided at 400 mL/min, leading to the 75% dissolution of metals and to a low specific energy consumption of 1.59 kW h/kg Cu for the electrodeposition process. In most experiments, the copper content of the obtained cathodic deposits was over 99.9%. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quality improvement in the use of medications through a drug use evaluation service.
Stevenson, J G; Bakst, C M; Zaran, F K; Rybak, M J; Smolarek, R T; Alexander, M R
1992-10-01
Continuous quality improvement methods have the potential to improve processes that cross several disciplines. The medication system is one in which coordination of activities between physicians, pharmacists, and nurses is essential for optimal therapy to occur. DUE services can play an important role in helping to ensure that patients receive high-quality pharmaceutical care. It is necessary for pharmacy managers to review the structure, goals, and outcomes of their DUE programs to ensure that they are consistent with a philosophy of continuous improvement in the quality of drug therapy.
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
NASA Astrophysics Data System (ADS)
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study, we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, for which it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
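As an illustration of the kind of query the elasticsearch-dsl high-level library enables, the snippet below builds a filtered search with a date-histogram bucketing of a time series. The index and field names are hypothetical, not those of the LHCbDIRAC deployment.

```python
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

# Hypothetical index and field names -- not the LHCbDIRAC schema. Builds a
# filtered search plus a date-histogram aggregation, the kind of dynamic
# time-series bucketing described above.
client = Elasticsearch("http://localhost:9200")

s = (Search(using=client, index="job-monitoring")
     .filter("term", status="Running")
     .filter("range", timestamp={"gte": "now-24h"}))
s.aggs.bucket("per_hour", "date_histogram",
              field="timestamp", fixed_interval="1h") \
      .metric("avg_jobs", "avg", field="running_jobs")

response = s.execute()
for bucket in response.aggregations.per_hour.buckets:
    print(bucket.key_as_string, bucket.avg_jobs.value)
```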
Code of Federal Regulations, 2010 CFR
2010-07-01
... Work and Performance Pay Program § 545.31 Training. The Warden shall ensure that staff receive training on their roles in, and on the operation of, the work and performance pay program. The Warden shall also ensure that the inmate population is informed of the work and performance pay program, and of the...
Practical Approaches to Quality Improvement for Radiologists.
Kelly, Aine Marie; Cronin, Paul
2015-10-01
Continuous quality improvement is a fundamental attribute of high-performing health care systems. Quality improvement is an essential component of health care, with the current emphasis on adding value. It is also a regulatory requirement, with reimbursements increasingly being linked to practice performance metrics. Practice quality improvement efforts must be demonstrated for credentialing purposes and for certification of radiologists in practice. Continuous quality improvement must occur for radiologists to remain competitive in an increasingly diverse health care market. This review provides an introduction to the main approaches available to undertake practice quality improvement, which will be useful for busy radiologists. Quality improvement plays multiple roles in radiology services, including ensuring and improving patient safety, providing a framework for implementing and improving processes to increase efficiency and reduce waste, analyzing and depicting performance data, monitoring performance and implementing change, enabling personnel assessment and development through continued education, and optimizing customer service and patient outcomes. The quality improvement approaches and underlying principles overlap, which is not surprising given that they all align with good patient care. The application of these principles to radiology practices not only benefits patients but also enhances practice performance through promotion of teamwork and achievement of goals. © RSNA, 2015.
NASA Astrophysics Data System (ADS)
Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael
2017-05-01
Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
Robust optimisation-based microgrid scheduling with islanding constraints
Liu, Guodong; Starke, Michael; Xiao, Bailu; ...
2017-02-17
This paper proposes a robust-optimization-based scheduling model for microgrid operation that accounts for islanding-capability constraints. Our objective is to minimize the total operation cost, including the generation cost and spinning reserve cost of local resources as well as the cost of purchasing energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling model.
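A deterministic skeleton of such a scheduling model, local generation, grid import, and spinning reserve sized so that islanding (loss of the grid import) plus the worst-case PV shortfall can be covered, can be written with cvxpy as below. All coefficients are invented for illustration, and the full robust counterpart over the uncertainty set is omitted.

```python
import cvxpy as cp
import numpy as np

# Deterministic skeleton of the scheduling model with invented coefficients;
# the robust counterpart over the full uncertainty set is omitted.
T = 24
load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))    # forecast demand (kW)
pv_low = 5.0 * np.ones(T)                                # worst-case PV (kW)
pv = 15.0 * np.ones(T)                                   # expected PV (kW)
gen_max = 80.0

gen = cp.Variable(T, nonneg=True)      # local dispatchable generation
grid = cp.Variable(T, nonneg=True)     # import from the main grid
r_up = cp.Variable(T, nonneg=True)     # spinning reserve, up
r_dn = cp.Variable(T, nonneg=True)     # spinning reserve, down

constraints = [
    gen + grid + pv == load,           # power balance at expected PV
    gen + r_up <= gen_max,             # headroom to deliver up-reserve
    r_dn <= gen,                       # footroom to deliver down-reserve
    # Islanding capability: if the grid supply is suddenly lost, up-reserve
    # must cover the lost import plus the worst-case PV shortfall.
    r_up >= grid + (pv - pv_low),
]
cost = cp.sum(0.10 * gen + 0.15 * grid + 0.02 * (r_up + r_dn))
cp.Problem(cp.Minimize(cost), constraints).solve()
print("total operation cost:", round(float(cost.value), 2))
```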
Autopilot regulation for the Linac4 H- ion source
NASA Astrophysics Data System (ADS)
Voulgarakis, G.; Lettry, J.; Mattei, S.; Lefort, B.; Costa, V. J. Correia
2017-08-01
Linac4 is a 160 MeV H- linear accelerator, part of the upgrade of the LHC injector chain. Its cesiated-surface H- source is designed to provide a beam intensity of 40-50 mA. It is operated with periodic Cs injection at typically 30-day intervals [1], which implies that the beam parameters slowly evolve during operation. Autopilot is a control software package extending the CERN-developed Inspector framework. The aim of Autopilot is to automate the mandatory optimization and cesiation processes and to derive performance indicators, thus keeping human intervention minimal. Autopilot has been developed by capitalizing on the experience from manually operating the source. It comprises various algorithms running in real time, which have been devised to:
• Optimize the ion source performance by regulation of H2 injection, RF power and frequency.
• Describe the performance of the source with performance indicators, which can be easily understood by operators.
• Identify failures, try to recover nominal operation, and send warnings in case of deviation from nominal operation.
• Make the performance indicators remotely available through Web pages.
Autopilot is at the same level of hierarchy as an operator in the CERN infrastructure. This allows the combination of all ion source devices, providing the required flexibility. Autopilot is executed on a dedicated server, ensuring unique and centralized control, yet allowing multiple operators to interact at runtime, always coordinating between them. Autopilot aims at flexibility, adaptability, portability and scalability, and can be extended to other components of CERN's accelerators. In this paper, a detailed description of the Autopilot algorithms is presented, along with first results of operating the Linac4 H- ion source with Autopilot.
Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Whorton, M. S.
2007-01-01
Solar sails provide a propellant-free form of space propulsion. These are large flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance, but they require an external power source that may add significant parasitic mass to the solar sail, which requires low mass for optimal performance. Active control techniques also typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative approach is passive vibration suppression: it does not require an external power supply and does not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control while avoiding their pitfalls: an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, semi-active control has been shown to outperform passive approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.
Designing Flood Management Systems for Joint Economic and Ecological Robustness
NASA Astrophysics Data System (ADS)
Spence, C. M.; Grantham, T.; Brown, C. M.; Poff, N. L.
2015-12-01
Freshwater ecosystems across the United States are threatened by hydrologic change caused by water management operations and non-stationary climate trends. Nonstationary hydrology also threatens flood management systems' performance. Ecosystem managers and flood risk managers need tools to design systems that achieve flood risk reduction objectives while sustaining ecosystem functions and services in an uncertain hydrologic future. Robust optimization is used in water resources engineering to guide system design under climate change uncertainty. Using principles introduced by Eco-Engineering Decision Scaling (EEDS), we extend robust optimization techniques to design flood management systems that meet both economic and ecological goals simultaneously across a broad range of future climate conditions. We use three alternative robustness indices to identify flood risk management solutions that preserve critical ecosystem functions in a case study from the Iowa River, where recent severe flooding has tested the limits of the existing flood management system. We seek design modifications to the system that both reduce expected cost of flood damage while increasing ecologically beneficial inundation of riparian floodplains across a wide range of plausible climate futures. The first robustness index measures robustness as the fraction of potential climate scenarios in which both engineering and ecological performance goals are met, implicitly weighting each climate scenario equally. The second index builds on the first by using climate projections to weight each climate scenario, prioritizing acceptable performance in climate scenarios most consistent with climate projections. The last index measures robustness as mean performance across all climate scenarios, but penalizes scenarios with worse performance than average, rewarding consistency. Results stemming from alternate robustness indices reflect implicit assumptions about attitudes toward risk and reveal the tradeoffs between using structural and non-structural flood management strategies to ensure economic and ecological robustness.
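The three robustness indices can be stated compactly in code. The sketch below evaluates one candidate design over synthetic climate scenarios; the thresholds, weights, and penalty form are editorial assumptions meant only to mirror the three definitions (equal-weight satisficing, projection-weighted satisficing, and mean performance penalized for below-average scenarios).

```python
import numpy as np

rng = np.random.default_rng(2)

# Performance of one candidate design over synthetic climate scenarios:
# column 0 = economic score, column 1 = ecological score (higher is better).
perf = rng.normal(loc=[0.6, 0.5], scale=0.2, size=(500, 2))
econ_ok = perf[:, 0] >= 0.5            # illustrative acceptability thresholds
eco_ok = perf[:, 1] >= 0.4
both_ok = econ_ok & eco_ok
weights = rng.dirichlet(np.ones(500))  # stand-in projection-consistency weights

# Index 1: fraction of scenarios meeting BOTH goals, all scenarios equal.
idx1 = both_ok.mean()

# Index 2: the same satisficing criterion, weighted by climate projections.
idx2 = float(np.sum(weights * both_ok))

# Index 3: mean performance, penalizing scenarios below the mean (consistency).
score = perf.mean(axis=1)
idx3 = score.mean() - np.mean(np.maximum(score.mean() - score, 0.0))

print(f"index 1: {idx1:.3f}, index 2: {idx2:.3f}, index 3: {idx3:.3f}")
```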
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd
2018-04-01
Remanufacturing is a sustainability strategy that transforms end-of-life products to as-new performance, with a warranty equal to or better than that of the original product. To quantify the advantages of this strategy, all processes must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting considering the critical parameters. By implementing the optimization, the remanufacturer produces at lower cost and with less waste during production, with higher potential for profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method that revealed the criticality of parameters. When the method was applied to the MAN engine model, 5 out of 6 crankpins were critical and required grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model saw no change because the criticality of its parameters did not change. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be achieved after observing the potential profit to be gained. The significance of this output is demonstrated through promoting sustainability, by reducing the re-melting of damaged parts and ensuring a consistent benefit from returned cores.
Shahbaz Mohammadi, Hamid; Mostafavi, Seyede Samaneh; Soleimani, Saeideh; Bozorgian, Sajad; Pooraskari, Maryam; Kianmehr, Anvarsadat
2015-04-01
Oxidoreductases are an important family of enzymes that are used in many biotechnological processes. An experimental design was applied to optimize the partition and purification of two recombinant oxidoreductases, glucose dehydrogenase (GDH) from Bacillus subtilis and d-galactose dehydrogenase (GalDH) from Pseudomonas fluorescens AK92, in aqueous two-phase systems (ATPS). Response surface methodology (RSM) with a central composite rotatable design (CCRD) was performed to optimize critical factors such as polyethylene glycol (PEG) concentration, salt concentration, and pH value. The best partitioning conditions were achieved in an ATPS composed of 12% PEG-6000 and 15% K2HPO4 at pH 7.5 and 25°C, which ensured partition coefficients (KE) of 66.6 and 45.7 for GDH and GalDH, respectively. Under these experimental conditions, the activity of GDH and GalDH was 569.5 U/ml and 673.7 U/ml, respectively. It was found that these enzymes preferentially partitioned into the top PEG-rich phase and appeared as single bands on SDS-PAGE gel. Meanwhile, the validity of the response model was confirmed by good agreement between predicted and experimental results. Collectively, the obtained data suggest that ATPS optimization using the RSM approach can be applied to the recovery and purification of any enzyme of the oxidoreductase family. Copyright © 2015 Elsevier Inc. All rights reserved.
High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems
NASA Technical Reports Server (NTRS)
Kolano, Paul Z.; Ciotti, Robert B.
2012-01-01
Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of the two improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
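The hash-tree idea that lets an inherently serial checksum run in parallel can be sketched independently of the mcp/msum code base: hash fixed-size chunks concurrently, then hash the concatenation of the chunk digests. This is a conceptual Python illustration, not the mutil implementation.

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024          # split size: 4 MiB per leaf

def _hash_chunk(path, offset):
    with open(path, "rb") as f:          # each worker opens its own handle
        f.seek(offset)
        return hashlib.md5(f.read(CHUNK)).digest()

def tree_checksum(path, workers=8):
    """Hash fixed-size chunks in parallel, then hash the concatenated chunk
    digests. Note the result intentionally differs from a plain md5sum of
    the whole file -- that is the price of parallelizing a serial hash."""
    offsets = range(0, os.path.getsize(path), CHUNK)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaves = list(pool.map(lambda off: _hash_chunk(path, off), offsets))
    return hashlib.md5(b"".join(leaves)).hexdigest()

# Example (hypothetical path): tree_checksum("/scratch/bigfile.dat")
```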
NASA Technical Reports Server (NTRS)
Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.
1999-01-01
This paper presents viewgraphs on Plastic Encapsulated Microcircuits (PEMs). Different approaches are addressed to ensure good performance and reliability of PEMs. The topics include: 1) Mitigating Risk; and 2) Program results.
Economic optimization of natural hazard protection - conceptual study of existing approaches
NASA Astrophysics Data System (ADS)
Spackova, Olga; Straub, Daniel
2013-04-01
Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, minimum values of BCR or MC can be required by the state, which must be achieved in each region. The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
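The role of minimum BCR/MC requirements under a limited budget can be illustrated with a toy greedy allocation: each region exposes discrete protection upgrades, and upgrades are bought in order of marginal benefit-cost ratio while the ratio exceeds 1 and the budget allows. All numbers are invented, and the paper's hierarchical formulation is richer than this sketch.

```python
# Invented example: each region offers discrete protection levels as
# (cumulative cost, residual risk) pairs; upgrades are bought greedily by
# marginal benefit-cost ratio (risk reduced per unit cost) while the ratio
# exceeds 1 and the state budget allows.
regions = {
    "A": [(0, 100), (10, 60), (25, 45)],
    "B": [(0, 80), (8, 50), (20, 42)],
    "C": [(0, 40), (5, 30), (15, 27)],
}
budget, spent = 30.0, 0.0
level = {name: 0 for name in regions}

while True:
    best, best_mbcr = None, 1.0        # minimum acceptable marginal BCR
    for name, steps in regions.items():
        if level[name] + 1 < len(steps):
            d_cost = steps[level[name] + 1][0] - steps[level[name]][0]
            d_risk = steps[level[name]][1] - steps[level[name] + 1][1]
            if spent + d_cost <= budget and d_risk / d_cost > best_mbcr:
                best, best_mbcr = name, d_risk / d_cost
    if best is None:
        break
    spent += regions[best][level[best] + 1][0] - regions[best][level[best]][0]
    level[best] += 1

print("chosen protection levels:", level, " budget spent:", spent)
```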
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For recognition principles based on an optimized single center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptively optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the number of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to adapt the number of optimized centers while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on data with different distribution characteristics.
Optimality of affine control system of several species in competition on a sequential batch reactor
NASA Astrophysics Data System (ADS)
Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.
2014-09-01
In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate in a sequential batch reactor, with the objective of reaching a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, to which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one-impulse (IOI) strategy is optimal. Some numerical experiments are included to illustrate our study and to show that those conditions are also necessary to ensure the optimality of the IOI strategy.
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a typical NP-hard problem: the optimization of the vehicle routing problem (VRP) and of city pipeline routing can both be reduced to the TSP, so efficient methods for solving it matter. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has limitations. Improving the selection operator and introducing an elite-retention strategy ensure the quality of selection; in the mutation operation, adaptive selection of mutation parameters improves the quality of search and variation; and, after a chromosome has evolved, a one-way reverse operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
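A runnable miniature of the described GA, with elite retention, a mutation rate that adapts over the generations, and a one-way reverse operation that keeps a segment reversal only when it improves the tour, might look as follows. The adaptation schedule and operators are illustrative choices, not the paper's exact ones.

```python
import random

random.seed(0)
CITIES = [(random.random(), random.random()) for _ in range(25)]
N, POP, GENS = len(CITIES), 100, 300

def tour_length(t):
    return sum(((CITIES[t[i]][0] - CITIES[t[i - 1]][0]) ** 2 +
                (CITIES[t[i]][1] - CITIES[t[i - 1]][1]) ** 2) ** 0.5
               for i in range(N))

def order_crossover(p1, p2):
    # copy a slice of p1, fill the remaining positions in p2's order
    a, b = sorted(random.sample(range(N), 2))
    hole = set(p1[a:b])
    rest = [c for c in p2 if c not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

pop = [random.sample(range(N), N) for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=tour_length)
    elite = pop[:5]                             # elite retention
    p_mut = 0.5 * (1 - gen / GENS) + 0.05       # adaptive mutation rate
    children = []
    while len(children) < POP - len(elite):
        p1, p2 = random.sample(pop[:POP // 2], 2)   # select from better half
        child = order_crossover(p1, p2)
        if random.random() < p_mut:                 # swap mutation
            i, j = random.sample(range(N), 2)
            child[i], child[j] = child[j], child[i]
        # one-way reverse operation: keep a segment reversal only if it helps
        i, j = sorted(random.sample(range(N), 2))
        rev = child[:i] + child[i:j][::-1] + child[j:]
        children.append(rev if tour_length(rev) < tour_length(child) else child)
    pop = elite + children

print("best tour length:", round(tour_length(min(pop, key=tour_length)), 4))
```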
A multiple functional connector for high-resolution optical satellites
NASA Astrophysics Data System (ADS)
She, Fengke; Zheng, Gangtie
2017-11-01
For earth observation satellites, perturbations from actuators, such as CMGs and momentum wheels, and thermal loadings from support structures often have a significant impact on the image quality of an optical instrument. Therefore, vibration isolators and thermal deformation releasing devices have become important parts of an imaging satellite. However, all these devices weaken the connection stiffness between the optical instrument and the satellite bus structure, which raises concern in attitude control system design about possible negative effects on attitude control. A connection design satisfying all three requirements is therefore a challenge for advanced imaging satellites. Chinese scientists have proposed a large-aperture high-resolution satellite for earth observation. To meet all these requirements and ensure image quality, multiple-function connectors are designed to meet three challenging requirements: isolating vibration, releasing thermal deformation, and ensuring whole-satellite dynamic properties [1]. In this paper, a parallel spring guide flexure is developed for both vibration isolation and thermal deformation releasing. The stiffness of the flexure is designed to meet the vibration isolation requirement. To attenuate vibration, and more importantly to satisfy the stability requirement of the attitude control system, metal damping, which has many merits for space applications, is applied in this connector to provide a high damping ratio and nonlinear stiffness. The capability of the connector for vibration isolation and attenuation is validated through numerical simulation and experiments. Connector parameter optimization is also conducted to meet both the thermal deformation releasing and attitude control requirements. Analysis results show that the in-orbit attitude control requirement is satisfied while the thermal releasing performance is optimized. The design methods and analysis results are also provided in the present paper.
Lagacé, François; Foucher, Delphine; Surette, Céline; Clarisse, Olivier
2017-05-15
Radium (Ra) at environmentally relevant levels in natural waters was determined by ICP-MS after an off-line pre-concentration procedure. The latter consisted of selective elution of Ra from potentially interfering elements (i.e. other alkaline earth cations: Ba²⁺, Sr²⁺, Ca²⁺, Mg²⁺) on a series of two different ion exchange resins (AG50W-X8 and Sr-resin). The overall analytical method was optimized according to the instrumental performance, the volume of water sample loaded on the resins, and the sample salinity. A longer acquisition time (up to 150 s) was required to ensure stable measurement of Ra by ICP-MS at ultra-trace level (1.0 pg L⁻¹). For a synthetic groundwater spiked with Ra at 10.0 pg L⁻¹, the analytical procedure demonstrated efficient separation of the analyte from its potential interfering elements and complete recovery, independent of the sample volume tested from 10 up to 100 mL. For synthetic seawater spiked at a level of 10.0 pg L⁻¹ of Ra, the total load of salts on the two resins should not exceed 0.35 g in order to ensure complete separation and recovery of Ra. The method was validated on natural waters (i.e. groundwater, freshwater and seawater samples) spiked with Ra at different levels (0.0, 0.5, 1.0 and 5.0 pg L⁻¹). Absolute Ra detection limits were determined at 0.020 pg L⁻¹ (0.73 mBq L⁻¹) and 0.12 pg L⁻¹ (4.4 mBq L⁻¹), respectively, for 60.0 mL of freshwater sample and for 10.0 mL of seawater. Copyright © 2017 Elsevier B.V. All rights reserved.
Geometry and gravity influences on strength capability
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert P.; Klute, Glenn K.
1994-01-01
Strength, defined as the capability of an individual to produce an external force, is one of the most important determining characteristics of human performance. Knowledge of strength capabilities of a group of individuals can be applied to designing equipment and workplaces, planning procedures and tasks, and training individuals. In the manned space program, with the high risk and cost associated with spaceflight, information pertaining to human performance is important to ensuring mission success and safety. Knowledge of individual's strength capabilities in weightlessness is of interest within many areas of NASA, including workplace design, tool development, and mission planning. The weightless environment of space places the human body in a completely different context. Astronauts perform a variety of manual tasks while in orbit. Their ability to perform these tasks is partly determined by their strength capability as demanded by that particular task. Thus, an important step in task planning, development, and evaluation is to determine the ability of the humans performing it. This can be accomplished by utilizing quantitative techniques to develop a database of human strength capabilities in weightlessness. Furthermore, if strength characteristics are known, equipment and tools can be built to optimize the operators' performance. This study examined strength in performing a simple task, specifically, using a tool to apply a torque to a fixture.
Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne
2017-09-01
Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Theory of sampling: four critical success factors before analysis.
Wagner, Claas; Esbensen, Kim H
2015-01-01
Food and feed materials characterization, risk assessment, and safety evaluations can only be ensured if QC measures are based on valid analytical data, stemming from representative samples. The Theory of Sampling (TOS) is the only comprehensive theoretical framework that fully defines all requirements to ensure sampling correctness and representativity, and to provide the guiding principles for sampling in practice. TOS also defines the concept of material heterogeneity and its impact on the sampling process, including the effects from all potential sampling errors. TOS's primary task is to eliminate bias-generating errors and to minimize sampling variability. Quantitative measures are provided to characterize material heterogeneity, on which an optimal sampling strategy should be based. Four critical success factors preceding analysis to ensure a representative sampling process are presented here.
Content-aware photo collage using circle packing.
Yu, Zongqiao; Lu, Lin; Guo, Yanwen; Fan, Rongfei; Liu, Mingming; Wang, Wenping
2014-02-01
In this paper, we present a novel approach for automatically creating a photo collage that naturally assembles the regions of interest of a given group of images. Previous methods for photo collage are generally built upon a well-defined optimization framework, which computes all the geometric parameters and layer indices for the input photos on the given canvas by optimizing a unified objective function. The complex nonlinear form of the optimization function limits their scalability and efficiency. From the geometric point of view, we recast the generation of a collage as a region partition problem such that each image is displayed in its corresponding region partitioned from the canvas. The core of this is an efficient power-diagram-based circle packing algorithm that arranges a series of circles, assigned to the input photos, compactly in the given canvas. To favor important photos, the circles are associated with image importances determined by an image ranking process. A heuristic search process is developed to ensure that the salient information of each photo is displayed in the polygonal area resulting from circle packing. With our new formulation, each factor influencing the state of a photo is optimized in an independent stage, and computation of the optimal states for neighboring photos is completely decoupled. This improves the scalability of collage results and ensures their diversity. We also devise a saliency-based image fusion scheme to generate a seamless composited collage. Our approach can generate collages on nonrectangular canvases and supports interactive collage, allowing the user to refine results according to his or her personal preferences. We conduct extensive experiments and show the superiority of our algorithm by comparing against previous methods.
Optimizing Oxygenation in the Mechanically Ventilated Patient: Nursing Practice Implications.
Barton, Glenn; Vanderspank-Wright, Brandi; Shea, Jacqueline
2016-12-01
Critical care nurses constitute front-line care provision for patients in the intensive care unit (ICU). Hypoxemic respiratory compromise/failure is a primary reason that patients require ICU admission and mechanical ventilation. Critical care nurses must possess advanced knowledge, skill, and judgment when caring for these patients to ensure that interventions aimed at optimizing oxygenation are both effective and safe. This article discusses fundamental aspects of respiratory physiology and clinical indices used to describe oxygenation status. Key nursing interventions including patient assessment, positioning, pharmacology, and managing hemodynamic parameters are discussed, emphasizing their effects toward mitigating ventilation-perfusion mismatch and optimizing oxygenation. Copyright © 2016 Elsevier Inc. All rights reserved.
Integer programming model for optimizing bus timetable using genetic algorithm
NASA Astrophysics Data System (ADS)
Wihartiko, F. D.; Buono, A.; Silalahi, B. P.
2017-01-01
A bus timetable gives passengers information that ensures the availability of bus services. The timetable is optimal when the frequency of bus trips adapts to passenger demand: in peak periods, the number of bus trips should be larger than in off-peak periods. If trips are more frequent than the optimal condition requires, the bus operator incurs high operating costs; conversely, if there are fewer trips than optimal, passengers receive poor-quality service. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. The modifications lie in the chromosome design, the initial-population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The model attains the optimal solution with an accuracy of 99.1%.
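The abstract does not reproduce the model, but a minimal integer program consistent with its description can be written as follows, where x_t is the number of trips in period t, d_t the forecast demand, kappa the vehicle capacity, c the cost per trip, and l, u service-quality bounds; these symbols are editorial, not the authors' notation.

```latex
\min_{x}\; \sum_{t=1}^{T} c\, x_t
\quad \text{s.t.} \quad
\kappa\, x_t \ge d_t, \qquad
\ell \le x_t \le u, \qquad
x_t \in \mathbb{Z}_{\ge 0}, \qquad t = 1, \dots, T
```

The genetic algorithm then searches over integer vectors x, with the elite-retention and reconstruction modifications keeping the population within these constraints.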
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Energy balance framework for Net Zero Energy buildings
Approaching a Net Zero Energy (NZE) building goal based on current definitions is flawed for two principal reasons: the definitions deal only with the energy quantities required for operations, and they do not establish a threshold that ensures buildings are optimized for reduced consum...
Committee Report: Metrics & Methods for MF/UF System Optimization
After a membrane filtration (i.e., microfiltration (MF) and ultrafiltration (UF)) system is designed, installed, and commissioned, it is essential that the plant is well-maintained in order to proactively identify potential design or equipment problems and ensure its proper opera...
Addressing safety through evaluation and optimization of permeable friction course mixtures.
DOT National Transportation Integrated Search
2010-01-01
Permeable friction course (PFC) mixtures are a special type of hot mix asphalt characterized by a high total air voids content to guarantee proper functionality and stone-on-stone contact of the coarse aggregate fraction to ensure adequate mixtur...
Packing of Fruit Fly Parasitoids for Augmentative Releases
Montoya, Pablo; Cancino, Jorge; Ruiz, Lía
2012-01-01
The successful application of Augmentative Biological Control (ABC) to control pest fruit flies (Diptera: Tephritidae) must meet two fundamental requirements: (1) the establishment of efficient mass-rearing procedures for the species to be released, and (2) the development of methodologies for the packing and release of parasitoids that permit a uniform distribution and optimal field performance under an area-wide approach. Parasitoid releases have been performed by ground and by air with moderate results; both options face challenges that remain to be addressed. Different devices and strategies have been used for these purposes, including paper bags and the chilled adult technique, both of which are commonly used when releasing sterile flies. However, insect parasitoids have morphological and behavioral characteristics that render the application of such methodologies suboptimal. In this paper, we discuss an alternate strategy for the augmentative release of parasitoids and describe packing conditions that favor the rearing and emergence of adult parasitoids for increased field performance. We conclude that the use of ABC, including the packaging of parasitoids, requires ongoing development to ensure that this technology remains a viable and effective control technique for pest fruit flies. PMID:26466634
Behavioral and fMRI evidence of the differing cognitive load of domain-specific assessments.
Howard, S J; Burianová, H; Ehrich, J; Kervin, L; Calleia, A; Barkus, E; Carmody, J; Humphry, S
2015-06-25
Standards-referenced educational reform has increased the prevalence of standardized testing; however, whether these tests accurately measure students' competencies has been questioned. This may be due to domain-specific assessments placing a differing domain-general cognitive load on test-takers. To investigate this possibility, functional magnetic resonance imaging (fMRI) was used to identify and quantify the neural correlates of performance on current, international standardized methods of spelling assessment. Out-of-scanner testing was used to further examine differences in assessment results. Results provide converging evidence that: (a) the spelling assessments differed in the cognitive load placed on test-takers; (b) performance decreased with increasing cognitive load of the assessment; and (c) brain regions associated with working memory were more highly activated during performance of assessments that were higher in cognitive load. These findings suggest that assessment design should optimize the cognitive load placed on test-takers, to ensure students' results are an accurate reflection of their true levels of competency. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Influence of dimension parameters of the gravity heat pipe on the thermal performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosa, Ľuboš, E-mail: lubos.kosa@fstroj.uniza.sk; Nemec, Patrik, E-mail: patrik.nemec@fstroj.uniza.sk; Jobb, Marián, E-mail: marian.jobb@fstroj.uniza.sk
With the increasing number of electronic devices, dissipating the Joule heat they generate has become a growing problem. Joule heating, also known as ohmic heating or resistive heating, is the process by which the passage of an electric current through a conductor releases heat. Effective dustproof cooling of electronic components ensures a longer equipment life. One alternative for transferring heat without mechanical equipment is the heat pipe. Heat pipes are easy to manufacture and maintain, with low investment cost. An advantage of the heat pipe is that it can be used in hermetically closed electronic devices, in which air exchange between the device and the environment is prevented. This experiment examines the influence of the tube diameter and the working fluid on the performance parameters of a gravity heat pipe: changing either alters its thermal performance. The result of this paper is the optimal diameter and working fluid for the greatest heat transfer through a tube with a 1 cm² cross-sectional area.
Surface plasmon resonance immunoassay analysis of pituitary hormones in urine and serum samples.
Treviño, Juan; Calle, Ana; Rodríguez-Frade, José Miguel; Mellado, Mario; Lechuga, Laura M
2009-05-01
Direct determination of four pituitary peptide hormones: human thyroid stimulating hormone (hTSH), growth hormone (hGH), follicle stimulating hormone (hFSH), and luteinizing hormone (hLH) has been carried out using a portable surface plasmon resonance (SPR) immunosensor. A commercial SPR biosensor was employed. The immobilization of the hormones was optimized and monoclonal antibodies were selected to obtain the best sensor performance. Assay parameters such as running buffer composition, regeneration solution composition, and antibody concentration were adjusted to achieve sensitive analyte detection. The performance of the assays was assessed in buffer solution, serum, and urine, showing sensitivity in the range of 1 to 6 ng/mL. The covalent attachment of the hormones ensured the stability of the SPR signal through repeated use over up to 100 consecutive assay cycles. Mean intra- and inter-day coefficients of variation were all <7%, while batch-to-batch variability using different sensor surfaces was <5%. Taking into account both the excellent reusability and the outstanding reproducibility, this SPR immunoassay method proves to be a highly reliable tool for endocrine monitoring in laboratory and point-of-care (POC) settings.
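The reproducibility figures quoted are coefficients of variation; for reference, a CV is computed as below (the replicate responses are hypothetical):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) = 100 * sample std / mean, the reproducibility metric above."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical replicate SPR responses (arbitrary units) for one hormone:
intra_day = [102.0, 98.5, 101.2, 99.8, 100.6]
print(f"intra-day CV = {coefficient_of_variation(intra_day):.1f}%")
```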
Achieving optimum diffraction based overlay performance
NASA Astrophysics Data System (ADS)
Leray, Philippe; Laidler, David; Cheng, Shaunee; Coogans, Martyn; Fuchs, Andreas; Ponomarenko, Mariya; van der Schaar, Maurits; Vanoppen, Peter
2010-03-01
Diffraction Based Overlay (DBO) metrology has been shown to have significantly reduced Total Measurement Uncertainty (TMU) compared to Image Based Overlay (IBO), primarily due to having no measurable Tool Induced Shift (TIS). However, the advantages of having no measurable TIS can be outweighed by increased susceptibility to WIS (Wafer Induced Shift) caused by target damage, process non-uniformities and variations. The path to optimum DBO performance lies in having well characterized metrology targets, which are insensitive to process non-uniformities and variations, in combination with optimized recipes which take advantage of advanced DBO designs. In this work we examine the impact of different degrees of process non-uniformity and target damage on DBO measurement gratings and study their impact on overlay measurement accuracy and precision. Multiple wavelength and dual polarization scatterometry are used to characterize the DBO design performance over the range of process variation. In conclusion, we describe the robustness of DBO metrology to target damage and show how to exploit the measurement capability of a multiple wavelength, dual polarization scatterometry tool to ensure the required measurement accuracy for current and future technology nodes.
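For reference, first-order DBO reduces overlay to a ratio of measured intensity asymmetries from a pair of deliberately biased gratings; a minimal sketch of that standard two-bias estimate (the signal values below are hypothetical):

```python
def dbo_overlay(A_plus: float, A_minus: float, d: float) -> float:
    """Two-bias DBO estimate: with asymmetries A± measured on gratings
    biased by ±d, and A ≈ K * (OV ± d) to first order, the unknown
    sensitivity K cancels: OV = d * (A+ + A-) / (A+ - A-)."""
    return d * (A_plus + A_minus) / (A_plus - A_minus)

# Hypothetical asymmetry signals for a ±20 nm bias pair:
print(dbo_overlay(A_plus=0.36, A_minus=-0.24, d=20.0))  # -> 4.0 nm overlay
```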
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. The network depth is determined automatically from the reconstruction error, and unsupervised pre-training is combined with supervised fine-tuning, ensuring prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed mean prediction accuracies of 91.26% and 89.78%, with variances of 5.78 and 4.46, respectively. PMID:29854369
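A rough sketch of reconstruction-error-driven depth selection in the spirit of the paper (the layer width, tolerance, and stopping rule are our assumptions, and scikit-learn's BernoulliRBM stands in for the paper's RBM training):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def grow_dbn(X, max_layers=5, tol=0.01, n_hidden=32, seed=0):
    """Stack RBMs greedily; stop adding layers when the layer-wise
    reconstruction error no longer improves by more than `tol`
    (a stand-in for the paper's reconstruction-error criterion)."""
    layers, data, prev_err = [], X, np.inf
    for _ in range(max_layers):
        rbm = BernoulliRBM(n_components=n_hidden, random_state=seed).fit(data)
        hidden = rbm.transform(data)                  # P(h = 1 | v)
        recon = 1.0 / (1.0 + np.exp(-(hidden @ rbm.components_
                                      + rbm.intercept_visible_)))
        err = np.mean((data - recon) ** 2)            # mean reconstruction error
        if prev_err - err < tol:                      # no meaningful improvement
            break
        layers.append(rbm)
        data, prev_err = hidden, err
    return layers   # depth chosen by the data; fine-tune supervised afterwards

X = (np.random.default_rng(0).random((200, 16)) > 0.5).astype(float)
print(len(grow_dbn(X)), "layers selected")
```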
Refractive laser beam shaping by means of a functional differential equation based design approach.
Duerr, Fabian; Thienpont, Hugo
2014-04-07
Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.
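For context, the classical geometric-optics starting point for such designs is the energy-conservation ray mapping between input and output irradiances, shown below for the Gaussian-to-flat-top case (this is the textbook mapping, not the authors' functional-differential-equation formalism):

```python
import numpy as np

def gaussian_to_flattop_map(r, w, r_ap, R_max):
    """Map input ray height r to output height R so that the encircled
    energy of the Gaussian beam equals that of a uniform disc of
    radius R_max (energy conservation within the aperture r_ap)."""
    enc = 1.0 - np.exp(-2.0 * r**2 / w**2)            # encircled Gaussian energy
    enc_total = 1.0 - np.exp(-2.0 * r_ap**2 / w**2)   # energy inside aperture
    return R_max * np.sqrt(enc / enc_total)

r = np.linspace(0.0, 1.0, 5)   # input ray heights; aperture = one beam waist (assumed)
print(gaussian_to_flattop_map(r, w=1.0, r_ap=1.0, R_max=2.0))
```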
Cross-coupled control for all-terrain rovers.
Reina, Giulio
2013-01-08
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.
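A minimal sketch of the cross-coupling idea for a two-wheel case (gains and structure are illustrative; the paper's 4-wheel-drive/4-wheel-steer controller is more elaborate):

```python
def cross_coupled_step(v_ref, v_left, v_right, kp=1.0, kc=0.5):
    """One control step: each wheel command blends its own tracking error
    with a shared synchronization error, so the wheels converge together
    instead of independently (reducing slip-inducing mismatch)."""
    e_l = v_ref - v_left          # individual tracking errors
    e_r = v_ref - v_right
    e_sync = v_left - v_right     # synchronization (coupling) error
    u_left = kp * e_l - kc * e_sync
    u_right = kp * e_r + kc * e_sync
    return u_left, u_right

# Example: the lagging left wheel gets extra effort, the right is held back.
print(cross_coupled_step(v_ref=1.0, v_left=0.8, v_right=1.0))  # (0.3, -0.1)
```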
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
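As a generic illustration of the workflow RadVel automates (this sketch uses emcee and a circular one-planet model rather than RadVel's own API; all data are synthetic):

```python
import numpy as np
import emcee

# Synthetic RV timeseries for one planet on a circular orbit.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 40))
P_true, K_true, tc = 12.3, 5.0, 2.0
rv = K_true * np.sin(2 * np.pi * (t - tc) / P_true) + rng.normal(0, 1.0, t.size)
err = np.full_like(t, 1.0)

def log_prob(theta):
    P, K, phi = theta
    if not (1.0 < P < 100.0 and 0.0 < K < 50.0):   # flat priors
        return -np.inf
    model = K * np.sin(2 * np.pi * t / P + phi)
    return -0.5 * np.sum(((rv - model) / err) ** 2)

ndim, nwalkers = 3, 32
p0 = np.array([12.0, 4.0, 0.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
# Posterior medians after discarding burn-in:
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
```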
Sevast'ianova, E V; Martynova, L P; Barilo, V N; Golyshevskaia, V I; Shul'gina, M V
2009-01-01
By taking into account present requirements for equipping laboratories, the authors have drawn up a minimum standard list of equipment, as well as a list of additional equipment, for the specialized bacteriological laboratory of a tuberculosis-control institution that performs microbiological studies for the diagnosis and monitoring of chemotherapy for tuberculosis. The specifications and characteristics of the basic types of equipment used to fit out such laboratories under present conditions are described. Equipping laboratories in accordance with the drawn-up lists is shown to ensure high-quality, effective, and safe work. Recommendations are given on how to supply laboratories with equipment, make optimal choices, and use consumables for testing.