SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we introduce the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, the task of system optimization becomes one of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as the platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
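As a concrete illustration of the third aspect, here is a minimal sketch of solving for an ITD by projected-gradient ascent over a simplex of imaging-time fractions. The concave figure of merit F and the system matrix A below are stand-ins, not the paper's objective, and the function names are hypothetical:

```python
import numpy as np

def project_to_simplex(v, total=1.0):
    """Euclidean projection of v onto {t >= 0, sum(t) = total} (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - total
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def optimize_itd(A, total_time=1.0, iters=500, lr=0.1):
    """Find the imaging-time distribution t over virtual detectors that
    maximizes a concave information proxy F(t) = sum_j log(1 + (A^T t)_j),
    where A[i, j] is virtual detector i's contribution to image voxel j."""
    n = A.shape[0]
    t = np.full(n, total_time / n)          # start from uniform sampling
    for _ in range(iters):
        g = A @ (1.0 / (1.0 + A.T @ t))     # gradient of F
        t = project_to_simplex(t + lr * g, total_time)
    return t                                 # near-zero entries mark ineffective detectors
```

Detectors whose time fraction is driven to zero are exactly those the abstract describes as having low relative importance.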
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The optimum hardware miniaturization level with the lowest cost impact for space biology hardware was determined. Space biology hardware and/or components/subassemblies/assemblies that are the most likely candidates for miniaturization are defined, and the relative cost impacts of such miniaturization are analyzed. A mathematical or statistical analysis method capable of supporting the development of parametric cost analysis impacts for levels of production design miniaturization is provided.
Space biology initiative program definition review. Trade study 4: Design modularity and commonality
NASA Technical Reports Server (NTRS)
Jackson, L. Neal; Crenshaw, John, Sr.; Davidson, William L.; Herbert, Frank J.; Bilodeau, James W.; Stoval, J. Michael; Sutton, Terry
1989-01-01
The relative cost impacts (up or down) of developing Space Biology hardware using design modularity and commonality are studied. Recommendations for how the hardware development should be accomplished to meet optimum design modularity requirements for Life Science investigation hardware are provided. In addition, the relative cost impacts of implementing commonality of hardware for all Space Biology hardware are defined. Cost analysis and supporting recommendations for levels of modularity and commonality are presented. A mathematical or statistical cost analysis method with the capability to support development of production design modularity and commonality impacts to parametric cost analysis is provided.
Simulate what is measured: next steps towards predictive simulations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Bussmann, Michael; Kluge, Thomas; Debus, Alexander; Hübl, Axel; Garten, Marco; Zacharias, Malte; Vorberger, Jan; Pausch, Richard; Widera, René; Schramm, Ulrich; Cowan, Thomas E.; Irman, Arie; Zeil, Karl; Kraus, Dominik
2017-05-01
Predictive simulations of laser-matter interaction at extreme intensities are nowadays within reach with codes that make optimum use of high-performance compute architectures. Nevertheless, this is mostly true for very specific settings where model parameters are well known from experiment and the underlying plasma dynamics is governed solely by Maxwell's equations. When atomic effects, prepulse influences, radiation reaction and other physical phenomena are included, the picture changes. Not only is it harder to evaluate the sensitivity of the simulation result to variations of the various model parameters, but the numerical models are less well tested, and their combination can lead to subtle side effects that influence the simulation outcome. We propose to make optimum use of future compute hardware to compute statistical and systematic errors rather than just find the optimum set of parameters fitting an experiment. This requires including experimental uncertainties, which is a challenge for current state-of-the-art techniques. Moreover, it demands better comparison to experiments, as simulating the response of the diagnostics becomes important. We strongly advocate the use of open standards for interoperability between codes in comparison studies, building complete tool chains for simulating laser-matter experiments from start to end.
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
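A minimal sketch of the group-wise idea on an Ising cost function: pick small blocks of variables and set each block to its exact joint optimum with the rest held fixed. This is plain block-coordinate descent standing in for the paper's hierarchical grouping, with hypothetical function names:

```python
import itertools
import numpy as np

def block_descent(J, h, block_size=3, sweeps=200, seed=0):
    """Greedy group optimisation of an Ising cost E(s) = s.J.s + h.s, s in {-1,+1}^n:
    repeatedly pick a small block of spins and replace it by its exact joint
    optimum, holding the remaining spins fixed."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    energy = lambda x: x @ J @ x + h @ x
    for _ in range(sweeps):
        block = rng.choice(n, size=block_size, replace=False)
        best, best_e = s, energy(s)
        for vals in itertools.product([-1, 1], repeat=block_size):
            trial = s.copy()
            trial[block] = vals
            e = energy(trial)
            if e < best_e:
                best, best_e = trial, e
        s = best
    return s, energy(s)
```

Larger blocks explore more of the cost landscape per step at exponentially growing cost, which is the trade-off a hierarchical grouping scheme tries to manage.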
Analysis and test of a breadboard cryogenic hydrogen/Freon heat exchanger
NASA Technical Reports Server (NTRS)
Desjardins, L. F.; Hooper, J.
1973-01-01
System studies required to verify a tube-in-tube cryogenic heat exchanger as optimum for the space shuttle mission are described. Design of the optimum configuration, which could be fabricated from commercially available hardware, is discussed. Finally, testing of the proposed configuration with supercritical hydrogen and Freon 21 is discussed and results are compared with thermal and dynamic analysis.
Sensitivity study and parameter optimization of OCD tool for 14nm finFET process
NASA Astrophysics Data System (ADS)
Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping
2016-03-01
Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between incident light and various materials with various topological structures; sensitivity analysis and parameter optimization are therefore critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration, to enhance metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra over a series of hardware configurations of incidence angles and azimuth angles was investigated, allowing the optimum hardware measurement configuration and spectrum parameter to be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.
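A sketch of the configuration screen described here, assuming access to some forward spectrum simulator; the `simulate` callable is a placeholder for an RCWA solver, and the angle grids and noise floor are invented for illustration:

```python
import numpy as np

def best_configuration(simulate, nominal_cd, delta=0.1, noise=1e-3,
                       incidence_angles=range(45, 76, 5), azimuths=range(0, 91, 15)):
    """Rank (incidence, azimuth) configurations by spectral sensitivity to a
    critical dimension: S = ||spectrum(cd+d) - spectrum(cd-d)|| / (2*d*noise).
    `simulate(cd, aoi, azimuth)` must return the modeled spectrum as an array."""
    scores = {}
    for aoi in incidence_angles:
        for az in azimuths:
            hi = simulate(nominal_cd + delta, aoi, az)
            lo = simulate(nominal_cd - delta, aoi, az)
            scores[(aoi, az)] = np.linalg.norm(hi - lo) / (2 * delta * noise)
    return max(scores, key=scores.get), scores
```

The configuration with the largest finite-difference sensitivity relative to the noise floor is the one expected to give the best fin-CD precision.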
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
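For flavor, a minimal adaptive delta modulator in Python. Note that the step rule below is the simple Jayant/Abate-style one-bit-memory adaptation, not the paper's optimum two-past-sample estimator, which is derived analytically and then approximated piecewise-linearly for hardware:

```python
def adm_encode(x, step0=0.1, grow=1.5, shrink=1/1.5):
    """Adaptive delta modulator: one bit per sample; the step grows when
    consecutive bits agree (slope overload) and shrinks when they alternate
    (granular noise)."""
    bits, est, step, prev = [], 0.0, step0, 1
    for sample in x:
        b = 1 if sample >= est else -1
        step *= grow if b == prev else shrink
        est += b * step
        bits.append(b)
        prev = b
    return bits

def adm_decode(bits, step0=0.1, grow=1.5, shrink=1/1.5):
    """Mirror of the encoder: rebuilds the estimate from the bit stream alone."""
    est, step, prev, out = 0.0, step0, 1, []
    for b in bits:
        step *= grow if b == prev else shrink
        est += b * step
        out.append(est)
        prev = b
    return out
```

Because the decoder repeats the encoder's step-size recursion exactly, no step-size side information needs to be transmitted, which is what makes such modulators attractive as hardware.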
Quantum Heterogeneous Computing for Satellite Positioning Optimization
NASA Astrophysics Data System (ADS)
Bass, G.; Kumar, V.; Dulny, J., III
2016-12-01
Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
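A toy version of the two-stage stack, with an exhaustive Hamming-ball search standing in for the quantum annealer; all function names are hypothetical, and the real second stage runs on QA hardware or the QMC simulator:

```python
import numpy as np
from itertools import combinations

def energy(s, J):                 # QUBO-style cost, s in {0,1}^n
    return s @ J @ s

def genetic_coarse(J, pop=40, gens=200, mut=0.05, seed=1):
    """Stage 1: a small genetic algorithm explores the full space and returns
    a bit string near a good minimum."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    P = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fit = np.array([energy(s, J) for s in P])
        parents = P[np.argsort(fit)[:pop // 2]]      # truncation selection
        kids = parents.copy()
        kids[rng.random(kids.shape) < mut] ^= 1      # bit-flip mutation
        P = np.vstack([parents, kids])
    return min(P, key=lambda s: energy(s, J))

def refine_local(s, J, radius=2):
    """Stage 2: exhaustively search the Hamming ball around s -- a classical
    stand-in for handing the reduced neighbourhood to a quantum annealer."""
    best, best_e = s.copy(), energy(s, J)
    for r in range(1, radius + 1):
        for idx in combinations(range(len(s)), r):
            t = s.copy()
            t[list(idx)] ^= 1
            if energy(t, J) < best_e:
                best, best_e = t, energy(t, J)
    return best, best_e
```

The division of labor mirrors the paper's: the memory-rich classical stage narrows the space, and the annealing stage only has to move within that neighborhood.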
Chemical calculations on Cray computers
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Schwenke, David W.
1989-01-01
The influence of recent developments in supercomputing on computational chemistry is discussed with particular reference to Cray computers and their pipelined vector/limited parallel architectures. After reviewing Cray hardware and software, the performance of different elementary program structures is examined, and effective methods for improving program performance are outlined. The computational strategies appropriate for obtaining optimum performance in applications to quantum chemistry and dynamics are discussed. Finally, some discussion is given of new developments and future hardware and software improvements.
Hardware-Based Non-Optimum Factors for Launch Vehicle Structural Design
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Cerro, Jeffrey A.
2010-01-01
During aerospace vehicle conceptual and preliminary design, empirical non-optimum factors are typically applied to predicted structural component weights to account for undefined manufacturing and design details. Non-optimum factors are developed here for 32 aluminum-lithium 2195 orthogrid panels comprising the liquid hydrogen tank barrel of the Space Shuttle External Tank using measured panel weights and manufacturing drawings. Minimum values for skin thickness, axial and circumferential blade stiffener thickness and spacing, and overall panel thickness are used to estimate individual panel weights. Panel non-optimum factors computed using a coarse weights model range from 1.21 to 1.77, and a refined weights model (including weld lands and skin and stiffener transition details) yields non-optimum factors of between 1.02 and 1.54. Acreage panels have an average 1.24 non-optimum factor using the coarse model, and 1.03 with the refined version. The observed consistency of these acreage non-optimum factors suggests that relatively simple models can be used to accurately predict large structural component weights for future launch vehicles.
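The definition implied throughout the abstract, with illustrative numbers rather than the paper's tabulated values:

\[
f_{\mathrm{NO}} \;=\; \frac{W_{\mathrm{actual}}}{W_{\mathrm{predicted}}}, \qquad
f_{\mathrm{NO}} = 1.24 \;\Rightarrow\; W_{\mathrm{actual}} = 1.24\,W_{\mathrm{predicted}}.
\]

In conceptual design the factor simply multiplies the idealized component weight estimate, so a coarse model with a consistent acreage factor of about 1.24 reproduces as-built panel weights without modeling weld lands or transition details.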
NASA Technical Reports Server (NTRS)
1973-01-01
Techniques are considered which would be used to characterize aerospace computers, with the space shuttle application as end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system, required as a laboratory tool to test aerospace computers, is defined. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.
Orbiter Auxiliary Power Unit Flight Support Plan
NASA Technical Reports Server (NTRS)
Guirl, Robert; Munroe, James; Scott, Walter
1990-01-01
This paper discusses the development of an integrated Orbiter Auxiliary Power Unit (APU) and Improved APU (IAPU) Flight Support Plan. The plan identifies hardware requirements for continued support of flight activities for the Space Shuttle Orbiter fleet. Each Orbiter vehicle has three APUs that provide power to the hydraulic system for flight control surface actuation, engine gimbaling, landing gear deployment, braking, and steering. The APUs contain hardware that has been found over the course of development and flight history to have operating time and on-vehicle exposure time limits. These APUs will be replaced by IAPUs with enhanced operating lives on a vehicle-by-vehicle basis during scheduled Orbiter modification periods. The Flight Support Plan is used by program management, engineering, logistics, contracts, and procurement groups to establish optimum use of available hardware, replacement quantities, and delivery requirements for APUs until vehicle modifications and incorporation of IAPUs. Changes to the flight manifest and program delays are evaluated relative to their impact on hardware availability.
NASA Technical Reports Server (NTRS)
Dodson, D. W.; Shields, N. L., Jr.
1979-01-01
Individual Spacelab experiments are responsible for developing their CRT display formats and interactive command scenarios for payload crew monitoring and control of experiment operations via the Spacelab Data Display System (DDS). In order to enhance crew training and flight operations, it was important to establish some standardization of the crew/experiment interface among different experiments by providing standard methods and techniques for data presentation and experiment commanding via the DDS. In order to establish optimum usage guidelines for the Spacelab DDS, the capabilities and limitations of the hardware and Experiment Computer Operating System design had to be considered. Since the operating system software and hardware design had already been established, the Display and Command Usage Guidelines were constrained to the capabilities of the existing system design. Empirical evaluations were conducted on a DDS simulator to determine optimum operator/system interface utilization of the system capabilities. Display parameters such as information location, display density, data organization, status presentation and dynamic update effects were evaluated in terms of response times and error rates.
Automated installation methods for photovoltaic arrays
NASA Astrophysics Data System (ADS)
Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.
1982-11-01
Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.
Xiao, Hao; Gao, Hengbo; Zheng, Tuokang; Zhao, Jianhui; Tian, Yingping
2016-04-01
This analysis critically compares publications discussing complications and functional outcomes of plate fixation (PF) versus intramedullary fixation (IF) for midshaft clavicle fractures. Relevant studies published between January 1990 and October 2014, without language restrictions, were identified in database searches of PubMed®, Medline®, Embase and the Chinese National Knowledge Infrastructure (CNKI). Studies that compared postoperative complications and functional outcomes between PF and IF for midshaft clavicle fractures, and provided sufficient data for analysis, were included in this meta-analysis. After strict evaluation, 12 studies were included in this meta-analysis. Studies encompassed 462 participants in the PF group and 440 in the IF group. Study participants were followed up for ≥1 year. Outcomes were superior with IF compared with PF in terms of shoulder Constant score at 6-month follow-up, fewer symptomatic hardware complications, lower rate of refracture after hardware removal and less hypertrophic scarring. In other aspects, such as functional recovery at 12 months and 24 months, Disability of Arm, Shoulder and Hand (DASH) questionnaire results at 12-month follow-up, shoulder motion range, rates of superficial infection, temporary brachial plexus lesion, nonunion, malunion, delayed union, implant failure and need for major revision, both techniques were similar. Findings of this meta-analysis suggest that, in many respects, IF was superior to PF for the management of midshaft clavicle fractures. This finding could aid surgeons in making decisions on the optimum internal fixation pattern for midshaft clavicular fractures.
STS propellant scavenging systems study. Part 2, volume 1: Executive summary and study results
NASA Technical Reports Server (NTRS)
Williams, Frank L.
1987-01-01
The major objective of the STS Propellant Scavenging Study is to define the hardware, operations, and life cycle costs for recovery of unused Space Transportation System propellants. Earlier phases were concerned exclusively with the recovery of cryogenic propellants from the main propulsion system of the manned STS. The phase of the study covered by this report (Part II Extension) modified the objectives to include cryogenic propellants delivered to orbit by the unmanned cargo vehicle. The Part II Extension had the following objectives: (1) predict OTV propellant requirements from 1995 to 2010; (2) investigate scavenging/transport tank reuse; (3) determine optimum tank sizing and arrangement; and (4) develop hardware concepts for tanks.
Research in software allocation for advanced manned mission communications and tracking systems
NASA Technical Reports Server (NTRS)
Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone
1990-01-01
An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of the Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory and hardware requirements were being modified to fit as a consequence of the change process, and although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included development of common software for some embedded controller functions, reduction of embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improvement in communications between software development personnel to enhance the integration process. Lastly, a critical observation was made that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.
ERIC Educational Resources Information Center
Lopeman, Holly
A survey of computer hardware and software access, network familiarity, and systems use was conducted to determine the optimum placement of two newly developed electronic Interlibrary Loan (ILL) forms at the Ohio State University Health Sciences Library. A sample of 205 ILL users were mailed a questionnaire, with a resultant 72% (n=148) response…
NASA Technical Reports Server (NTRS)
1982-01-01
Design and test data for packaging, deploying, and assembling structures for near-term space platform systems were provided by testing flight-type hardware in the Neutral Buoyancy Simulator. An optimum or near-optimum structural configuration for varying degrees of deployment utilizing different levels of EVA and RMS was achieved. The design of joints and connectors and their lock/release mechanisms was refined to improve performance and operational convenience. The incorporation of utilities into structural modules was evaluated to determine their effects on packaging and deployment. Through simulation tests, data were obtained for stowage, deployment, and assembly of the final structural system design to determine construction timelines and to evaluate system functioning and techniques.
NASA Technical Reports Server (NTRS)
1985-01-01
Fundamentally, the volumes of the oxidizer and fuel propellant scavenged from the orbiter and external tank determine the size and weight of the scavenging system. The optimization of system dimensions and weights is driven by the requirement to minimize the use of partial length of the orbiter payload bay. Thus, the cost estimates begin with weights established for the optimum design. Both the design, development, test, and evaluation costs and the theoretical first unit hardware production costs are estimated from parametric cost-weight scaling relations for four subsystems. For cryogenic propellants, the widely differing characteristics of the oxidizer and the fuel lead to two separate tank subsystems, in addition to the electrical and instrumentation subsystems. Hardware costs also involve quantity as an independent variable, since the number of production scavenging systems is not firm. For storable propellants, since the tankage volumes of the oxidizer and fuel are equal, the hardware production costs for developing these systems are lower than for cryogenic propellants.
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staller, G.E.; Hamilton, I.D.; Aker, M.F.
1978-02-01
A single-unit electron beam accelerator was designed, fabricated, and assembled in Sandia's Technical Area V to conduct magnetically insulated transmission experiments. Results of these experiments will be utilized in the future design of larger, more complex accelerators. This design makes optimum use of existing facilities and equipment. When designing new components, possible future applications were considered as well as compatibility with existing facilities and hardware.
Optimum Repair Level Analysis (ORLA) for the Space Transportation System (STS)
NASA Technical Reports Server (NTRS)
Henry, W. R.
1979-01-01
A repair level analysis method applied to a space shuttle scenario is presented. The method determines the most cost-effective level of repair for reparable hardware and the location for that repair, and defines a system that will accrue minimum total support costs within operational and technical constraints over the system design life. The method includes cost equations for comparison of selected costs to completion for assumed repair alternates.
Performance characterization of a Bosch CO sub 2 reduction subsystem
NASA Technical Reports Server (NTRS)
Heppner, D. B.; Hallick, T. M.; Schubert, F. H.
1980-01-01
The performance of Bosch hardware at the subsystem level (up to five-person capacity) was investigated in terms of five operating parameters: (1) reactor temperature, (2) recycle loop mass flow rate, (3) recycle loop gas composition (percent hydrogen), (4) recycle loop dew point and (5) catalyst density. Experiments were designed and conducted in which the five operating parameters were varied and Bosch performance was recorded. A total of 12 carbon collection cartridges provided approximately 250 hours of operating time; generally, one cartridge was used for each parameter that was varied. The Bosch hardware was found to perform reliably and reproducibly. No startup, reaction initiation or carbon containment problems were observed. Optimum performance points/ranges were identified for the five parameters investigated. The performance curves agreed with theoretical projections.
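For reference, the overall reaction that the recycle loop drives is the standard Bosch chemistry (the temperature range is the usual Bosch operating window, not necessarily this hardware's set point):

\[
\mathrm{CO_2} + 2\,\mathrm{H_2} \;\xrightarrow{\ \text{Fe catalyst, } \sim 530\text{--}730\,^{\circ}\mathrm{C}\ }\; \mathrm{C(s)} + 2\,\mathrm{H_2O}
\]

The hydrogen fraction and dew point parameters studied above matter because they set the equilibrium position of this reaction in the recycle loop.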
Thermal control extravehicular life support system
NASA Technical Reports Server (NTRS)
1975-01-01
The results of a comprehensive study which defined an Extravehicular Life Support System Thermal Control System (TCS) are presented. The design of the prototype hardware and a detail summary of the prototype TCS fabrication and test effort are given. Several heat rejection subsystems, water management subsystems, humidity control subsystems, pressure control schemes and temperature control schemes were evaluated. Alternative integrated TCS systems were studied, and an optimum system was selected based on quantitative weighing of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, bubble expansion tank for water management, a slurper and rotary separator for humidity control, and a pump, a temperature control valve, a gas separator and a vehicle umbilical connector for water transport. The prototype hardware complied with program objectives.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
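A brute-force stand-in for the optimization phase: enumerate one level per rendering/simulation knob and keep the highest-quality combination that fits the frame budget. Knob names, costs, and quality scores below are hypothetical; the paper solves this as a mixed integer program rather than by enumeration:

```python
import itertools

def choose_settings(budget_ms, cost_ms, quality):
    """Pick one level per knob (texture size, canvas resolution, sim domain)
    to maximize total quality subject to a per-frame time budget."""
    knobs = list(cost_ms)
    best, best_q = None, -1.0
    for levels in itertools.product(*(range(len(cost_ms[k])) for k in knobs)):
        t = sum(cost_ms[k][i] for k, i in zip(knobs, levels))
        q = sum(quality[k][i] for k, i in zip(knobs, levels))
        if t <= budget_ms and q > best_q:
            best, best_q = dict(zip(knobs, levels)), q
    return best, best_q

# Hypothetical per-level frame costs (ms) and quality scores, as the
# identification phase's proxy benchmark might report them:
cost_ms = {"texture": [1, 3, 7], "canvas": [2, 5, 11], "domain": [4, 9, 20]}
quality = {"texture": [1, 2, 3], "canvas": [1, 2, 3], "domain": [2, 4, 6]}
print(choose_settings(16.7, cost_ms, quality))   # 16.7 ms ~ 60 fps budget
```

With a handful of knobs the search space is tiny, which is why an integer program (or even enumeration) can run per client device in the identification phase.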
NASA Technical Reports Server (NTRS)
Dursch, Harry; Bohnhoff-Hlavacek, Gail; Blue, Donald; Hansen, Patricia
1995-01-01
The Long Duration Exposure Facility (LDEF) was retrieved in 1990 after spending 69 months in low-earth-orbit (LEO). A wide variety of mechanical, electrical, thermal, and optical systems, subsystems, and components were flown on LDEF. The Systems Special Investigation Group (Systems SIG) was formed by NASA to investigate the effects of the 69 month exposure on systems related hardware and to coordinate and collate all systems analysis of LDEF hardware. This report is the Systems SIG final report which updates earlier findings and compares LDEF systems findings to results from other retrieved spacecraft hardware such as Hubble Space Telescope. Also included are sections titled (1) Effects of Long Duration Space Exposure on Optical Scatter, (2) Contamination Survey of LDEF, and (3) Degradation of Optical Materials in Space.
Vacuum Technology Considerations For Mass Metrology
Abbott, Patrick J.; Jabour, Zeina J.
2011-01-01
Vacuum weighing of mass artifacts eliminates the necessity of air buoyancy correction and its contribution to the measurement uncertainty. Vacuum weighing is also an important process in the experiments currently underway for the redefinition of the SI mass unit, the kilogram. Creating the optimum vacuum environment for mass metrology requires careful design and selection of construction materials, plumbing components, pumping, and pressure gauging technologies. We review the vacuum technology required for mass metrology and suggest procedures and hardware for successful and reproducible operation. PMID:26989593
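The correction that vacuum weighing removes is the air buoyancy term in the weighing equation (standard mass metrology, with illustrative values for a stainless-steel kilogram):

\[
m \;=\; \frac{W}{g} + \rho_a V, \qquad
\rho_a V \approx 1.2\ \mathrm{kg\,m^{-3}} \times 125\ \mathrm{cm^{3}} \approx 150\ \mathrm{mg}
\]

for a 1 kg stainless-steel standard (\(\rho \approx 8000\ \mathrm{kg\,m^{-3}}\)), where W is the measured weight force and \(\rho_a\) the air density. In vacuum, \(\rho_a \to 0\), so both the roughly 150 mg correction and the uncertainty of the air-density determination drop out of the comparison.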
Integrated orbital servicing study follow-on. Volume 2: Technical analysis and system design
NASA Technical Reports Server (NTRS)
1978-01-01
In-orbit service functional and physical requirements to support both low and high Earth orbit servicing/maintenance operations were defined, an optimum servicing system configuration was developed and mockups and early prototype hardware were fabricated to demonstrate and validate the concepts selected. Significant issues addressed include criteria for concept selection; representative mission equipment and approaches to their design for serviceability; significant serviceable spacecraft design aspects; servicer mechanism operation in one-g; approaches for the demonstration/simulation; and service mechanism structure design approach.
Small-Bolt Torque-Tension Tester
NASA Technical Reports Server (NTRS)
Posey, Alan J.
2009-01-01
The device described here measures the torque-tension relationship for fasteners as small as #0. The small-bolt tester consists of a plate of high-strength steel into which three miniature load cells are recessed. The depth of the recess is sized so that the three load cells can be shimmed, the optimum height depending upon the test hardware. The three miniature load cells are arranged in an equilateral triangular configuration with the test bolt aligned with the centroid of the three. This is a kinematic arrangement.
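The quantity such a tester characterizes is usually summarized by the short-form torque-tension relation (standard fastener engineering, not a formula given in the abstract):

\[
T = K\,F\,d
\]

where T is the applied torque, F the bolt preload (here, the sum of the three load-cell readings), d the nominal fastener diameter, and K the empirical nut factor (roughly 0.2 for dry steel threads). Sweeping T while recording F determines K for each small fastener and lubrication condition.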
NASA Astrophysics Data System (ADS)
Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis
1997-11-01
Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while maintaining constant the word length when the scale increases.
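A minimal sketch of the word-length search, using a crude worst-case error bound and the 5/3 LeGall low-pass taps as the example filter; the paper's per-component error analysis is tighter than this bound:

```python
import numpy as np

def min_fraction_bits(coeffs, levels, max_input=255, target=0.5):
    """Smallest number of fractional bits n such that the worst-case
    accumulated error of a `levels`-deep filter cascade stays below
    `target` (0.5 LSB => integer-exact, i.e. lossless, reconstruction).
    Uses the crude bound err <= levels * max|x| * sum|c_q - c| for
    coefficients rounded to n fractional bits."""
    for n in range(1, 33):
        q = np.round(np.asarray(coeffs) * 2**n) / 2**n
        per_stage = max_input * np.abs(q - np.asarray(coeffs)).sum()
        if levels * per_stage < target:
            return n
    return None

# 5/3 LeGall analysis low-pass taps are exactly representable with 3 bits:
print(min_fraction_bits([-1/8, 2/8, 6/8, 2/8, -1/8], levels=3))  # -> 3
```

Filters whose taps are dyadic rationals (like the 5/3) reach zero coefficient error at a small n, which is one reason they are attractive for lossless medical-image hardware.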
Advanced composite elevator for Boeing 727 aircraft, volume 2
NASA Technical Reports Server (NTRS)
Chovil, D. V.; Grant, W. D.; Jamison, E. S.; Syder, H.; Desper, O. E.; Harvey, S. T.; Mccarty, J. E.
1980-01-01
Preliminary design activity consisted of developing and analyzing alternate design concepts and selecting the optimum elevator configuration. This included trade studies in which durability, inspectability, producibility, repairability, and customer acceptance were evaluated. Preliminary development efforts consisted of evaluating and selecting material, identifying ancillary structural development test requirements, and defining full scale ground and flight test requirements necessary to obtain Federal Aviation Administration (FAA) certification. After selection of the optimum elevator configuration, detail design was begun and included basic configuration design improvements resulting from manufacturing verification hardware, the ancillary test program, weight analysis, and structural analysis. Detail and assembly tools were designed and fabricated to support a full-scope production program, rather than a limited run. The producibility development programs were used to verify tooling approaches, fabrication processes, and inspection methods for the production mode. Quality parts were readily fabricated and assembled with a minimum rejection rate, using prior inspection methods.
An efficient and practical approach to obtain a better optimum solution for structural optimization
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu; Huang, Jyun-Hao
2013-08-01
For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
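A compact sketch of the two-step recipe, with random sampling plus a best-fraction bounding box standing in for the paper's classification/association/clustering step, and SciPy's SLSQP as the gradient-based searcher:

```python
import numpy as np
from scipy.optimize import minimize

def mine_then_sqp(f, bounds, n_samples=2000, keep=0.05, seed=0):
    """Sample the full design space, keep the best fraction (a crude stand-in
    for the paper's data-mining step), and run SLSQP inside the bounding box
    of the surviving points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    vals = np.apply_along_axis(f, 1, X)
    good = X[np.argsort(vals)[: int(keep * n_samples)]]
    box = list(zip(good.min(axis=0), good.max(axis=0)))   # reduced search space
    res = minimize(f, x0=good.mean(axis=0), method="SLSQP", bounds=box)
    return res.x, res.fun

# Example: a multimodal test function where SLSQP from a poor start stalls
# in a local minimum, but usually escapes from the mined box.
f = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))  # Rastrigin-like
print(mine_then_sqp(f, bounds=[(-5.12, 5.12)] * 4))
```

The point, as in the paper, is not a global-optimality guarantee but that the reduced box concentrates the gradient search where good designs are known to exist.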
The problem of the driverless vehicle specified path stability control
NASA Astrophysics Data System (ADS)
Buznikov, S. E.; Endachev, D. V.; Elkin, D. S.; Strukov, V. O.
2018-02-01
Currently the effort of many leading foreign companies is focused on the creation of driverless transport for cargo and passengers. Among the many practical problems arising in creating driverless vehicles, the problem of specified path stability control occupies a central place. The purpose of this paper is formalization of the problem in question in terms of a quadratic functional of control quality, comparative analysis of the possible solutions, and justification of the choice of the optimum technical solution. The square of the integral of the deviation from the specified path is proposed as the quadratic functional of control quality. For generation of the set of software and hardware solution variants, the Zwicky "morphological box" method is used within the hardware and software environments. The heading control algorithms use the wheel steering angle data and the deviation from the lane centerline (the specified path) calculated from the navigation data and the data from the video system. Where the video system does not detect the road marking, control is carried out based on the wheel navigation system data; where recognizable road marking exists, it is based on the video system data. The analysis of the test results supports the conclusion that the combined navigation system algorithms provide a quasi-optimum solution of the problem while meeting the strict functional limits on the technical and economic indicators of the driverless vehicle control system under development.
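Written out, the control-quality criterion as stated in the abstract, with e(t) the lateral deviation of the vehicle from the specified path over the evaluation interval [0, T]:

\[
J \;=\; \left(\int_{0}^{T} e(t)\,\mathrm{d}t\right)^{2} \;\longrightarrow\; \min
\]

(The more common integral-squared-error variant is \(J = \int_0^T e^2(t)\,\mathrm{d}t\); the abstract's wording specifies the square of the integral.)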
Optimum SNR data compression in hardware using an Eigencoil array.
King, Scott B; Varosi, Steve M; Duensing, G Randy
2010-05-01
With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels; with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems.
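In software terms, the Eigencoil combination amounts to noise-whitening the channels and keeping the principal eigenvectors; a sketch under that reading (array shapes and the function name are assumptions, and the actual device applies the fixed combination in RF hardware before the receivers):

```python
import numpy as np

def eigen_compress(signals, noise, keep=4):
    """Combine an array down to `keep` eigenchannels. `signals` and `noise`
    are (channels, samples) arrays; noise-whitening followed by projection
    onto the top eigenvectors of the whitened covariance preserves nearly
    all of the optimally-combined SNR in the first few channels."""
    Rn = np.cov(noise)                          # noise covariance
    W = np.linalg.inv(np.linalg.cholesky(Rn))   # whitening matrix
    sw = W @ signals
    evals, evecs = np.linalg.eigh(np.cov(sw))   # eigenvalues ascending
    V = evecs[:, ::-1][:, :keep]                # top-`keep` eigenvectors
    return V.T @ sw                             # compressed channel set
```

A plain sum-of-squares reconstruction of the compressed channels then approaches the optimal combination of all eight inputs, which is the effect the paper demonstrates in hardware.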
Improved orbiter waste collection system study
NASA Technical Reports Server (NTRS)
Bastin, P. H.
1984-01-01
Design concepts for improved fecal waste collection both on the Space Shuttle orbiter and as a precursor for the space station are discussed. In-flight usage problems associated with the existing orbiter waste collection subsystem are considered. A basis was sought for the selection of an optimum waste collection system concept which may ultimately result in the development of an orbiter flight test article for concept verification and subsequent production of new flight hardware. Two concepts selected for the orbiter are shown in detail. Additionally, one concept selected for application to the space station is presented.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantage of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
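As a minimal illustration of the dictionary-based side of the comparison, an LZW encoder and the kind of compression-ratio measurement the paper plots (fixed-width code-size estimate; the paper's Lempel-Ziv variant and adaptive Huffman coder differ in detail):

```python
import math

def lzw_encode(data: bytes):
    """Minimal LZW: grow a dictionary of seen byte strings and emit the
    index of the longest match -- enough to measure compression ratio."""
    table = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = len(table)   # extend the dictionary adaptively
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

data = b"the quick brown fox jumps over the lazy dog " * 200
codes = lzw_encode(data)
bits = len(codes) * math.ceil(math.log2(256 + len(codes)))  # fixed-width estimate
print(f"compression ratio: {len(data) * 8 / bits:.2f}")
```

Runs of repeated text compress well because ever-longer phrases enter the dictionary, which is the adaptivity the abstract refers to.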
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1966-01-01
The January issue of Hi-Tension News provides a detailed description of the advanced surge test facilities and procedures in daily operation at the OB High Voltage Laboratory in Barberton, Ohio. Technical competence achieved in this laboratory contributes essential design confirmation to basic studies of EHV insulation systems, conductor and hardware performance, and optimum tower construction. Known throughout the industry for the authenticity of its full-scale, all-weather outdoor testing, OB's High Voltage Laboratory is a full-fledged participant in the NEMA-sponsored program to make testing facilities available on a cooperative basis.
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
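A sketch of the SRGM fitting step for the delayed S-shaped (NHPP) model named above, with hypothetical defect-count data in place of the mission's defect reports:

```python
import numpy as np
from scipy.optimize import curve_fit

def s_shaped(t, a, b):
    """Delayed S-shaped NHPP mean-value function m(t) = a(1 - (1 + b t)e^{-b t}):
    a = total expected defects, b = defect detection rate."""
    return a * (1 - (1 + b * t) * np.exp(-b * t))

# Hypothetical cumulative defect counts per test week for one release:
weeks = np.arange(1, 13, dtype=float)
defects = np.array([2, 6, 13, 22, 31, 38, 44, 48, 51, 53, 54, 55], dtype=float)

(a, b), _ = curve_fit(s_shaped, weeks, defects, p0=(60, 0.3))
latent = a - defects[-1]   # expected defects still undiscovered
print(f"a = {a:.1f}, b = {b:.2f}, predicted latent defects = {latent:.1f}")
```

A release showing reliability decay would not fit such a growth curve well, which is the signal the authors use to flag releases tested on new hardware platforms.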
Bagrosky, Brian M; Hayes, Kari L; Koo, Phillip J; Fenton, Laura Z
2013-08-01
Evaluation of the child with spinal fusion hardware and concern for infection is challenging because of hardware artifact with standard imaging (CT and MRI) and difficult physical examination. Studies using (18)F-FDG PET/CT combine the benefit of functional imaging with anatomical localization. To discuss a case series of children and young adults with spinal fusion hardware and clinical concern for hardware infection. These people underwent FDG PET/CT imaging to determine the site of infection. We performed a retrospective review of whole-body FDG PET/CT scans at a tertiary children's hospital from December 2009 to January 2012 in children and young adults with spinal hardware and suspected hardware infection. The PET/CT scan findings were correlated with pertinent clinical information including laboratory values of inflammatory markers, postoperative notes and pathology results to evaluate the diagnostic accuracy of FDG PET/CT. An exempt status for this retrospective review was approved by the Institutional Review Board. Twenty-five FDG PET/CT scans were performed in 20 patients. Spinal fusion hardware infection was confirmed surgically and pathologically in six patients. The most common FDG PET/CT finding in patients with hardware infection was increased FDG uptake in the soft tissue and bone immediately adjacent to the posterior spinal fusion rods at multiple contiguous vertebral levels. Noninfectious hardware complications were diagnosed in ten patients and proved surgically in four. Alternative sources of infection were diagnosed by FDG PET/CT in seven patients (five with pneumonia, one with pyonephrosis and one with superficial wound infections). FDG PET/CT is helpful in evaluation of children and young adults with concern for spinal hardware infection. Noninfectious hardware complications and alternative sources of infection, including pneumonia and pyonephrosis, can be diagnosed. FDG PET/CT should be the first-line cross-sectional imaging study in patients with suspected spinal hardware infection. Because pneumonia was diagnosed as often as spinal hardware infection, initial chest radiography should also be performed.
Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft
NASA Technical Reports Server (NTRS)
Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.
2003-01-01
Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.
Telescope aperture optimization for spacebased coherent wind lidar
NASA Astrophysics Data System (ADS)
Ge, Xian-ying; Zhu, Jun; Cao, Qipeng; Zhang, Yinchao; Yin, Huan; Dong, Xiaojing; Wang, Chao; Zhang, Yongchao; Zhang, Ning
2015-08-01
Many studies have indicated that the optimum measurement approach for winds from space is a pulsed coherent wind lidar, an active remote sensing tool characterized by high spatial and temporal resolution, real-time detection, high mobility, ease of control and so on. Because of significant eye-safety, efficiency, size, and lifetime advantages, 2 μm wavelength solid-state laser lidar systems have attracted much attention in spacebased wind lidar plans. In this paper, the theory of coherent detection is presented and a 2 μm wavelength solid-state laser lidar system is introduced; the ideal aperture is then calculated from a signal-to-noise ratio (SNR) viewpoint at a 400 km orbit. In a real application, however, even if the lidar hardware is perfectly aligned, directional jitter of the laser beam, attitude changes of the lidar during the long round-trip time of the light to and from the atmosphere, and other factors introduce a misalignment angle. The influence of this misalignment angle is therefore considered and calculated, and an optimum telescope diameter of 0.45 m is obtained for a misalignment angle of 4 μrad. Through this analysis of the optimum aperture required for a spacebased coherent wind lidar system, we aim to provide design guidance for the telescope.
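The trade the paper quantifies can be caricatured in a few lines: collected signal grows with aperture area, while a fixed misalignment angle costs an increasingly severe heterodyne mixing loss as the diffraction-limited beamwidth shrinks with diameter. The toy model below assumes Gaussian-beam scalings and is only an order-of-magnitude illustration, not the paper's link budget:

```python
import numpy as np

lam = 2.05e-6        # 2-um laser wavelength [m]
theta = 4e-6         # misalignment angle [rad]

D = np.linspace(0.05, 1.0, 2000)         # candidate telescope diameters [m]
theta_b = 2 * lam / (np.pi * D)          # assumed Gaussian-beam diffraction half-angle
rel_snr = D**2 * np.exp(-(theta / theta_b)**2)   # area gain x assumed tilt loss

D_opt = D[np.argmax(rel_snr)]
print(f"toy-model optimum diameter: {D_opt:.2f} m")
# ~0.33 m here; the paper's fuller link-budget analysis gives 0.45 m
# for the same 4-urad misalignment, i.e. the same order of magnitude.
```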
Factors that determine the optimum dose for sub-20nm resist systems: DUV, EUV, and e-beam options
NASA Astrophysics Data System (ADS)
Preil, Moshe
2012-03-01
As EUV and e-beam direct write (EBDW) technologies move closer to insertion into pilot production, questions regarding cost effectiveness take on increasing importance. One of the most critical questions is determining the optimum dose which balances the requirements for cost-effective throughput vs. imaging performance. To date most of the dose requirements have been dictated by the hardware side of the industry. The exposure tool manufacturers have a vested interest in specifying the fastest resists possible in order to maximize the throughput even if it comes at the expense of optimum resist performance. This is especially true for both EUV and EBDW where source power is severely limited. We will explore the cost-benefit tradeoffs which drive the equipment side of the industry, and show how these considerations lead to the current throughput and dose requirements for volume production tools. We will then show how the resulting low doses may lead to shot noise problems and a resulting penalty in resist performance. By comparison to the history of 248 nm DUV resist development we will illustrate how setting unrealistic initial targets for resist dose may lead to unacceptable tradeoffs in resist performance and subsequently long delays in the development of production worthy resists.
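The shot-noise argument can be made concrete with a photon count. The sketch below uses illustrative numbers (92-eV EUV photons on a 20-nm pixel), not the paper's data:

```python
import math

E_PHOTON_J = 92.0 * 1.602e-19        # 13.5-nm EUV photon energy (~92 eV) in joules

def rel_shot_noise(dose_mj_cm2, feature_nm):
    """1-sigma relative photon-count noise over one feature_nm x feature_nm pixel."""
    area_cm2 = (feature_nm * 1e-7) ** 2            # nm -> cm
    n_photons = dose_mj_cm2 * 1e-3 * area_cm2 / E_PHOTON_J
    return 1.0 / math.sqrt(n_photons)

for dose in (5, 10, 20, 40):                       # mJ/cm^2
    print(f"{dose:>3} mJ/cm^2 on a 20-nm pixel: "
          f"{100 * rel_shot_noise(dose, 20):.1f} % shot noise")
```

Halving the dose raises the relative photon noise by a factor of √2, which is the throughput-versus-imaging tension described above.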
Digital receiver study and implementation
NASA Technical Reports Server (NTRS)
Fogle, D. A.; Lee, G. M.; Massey, J. C.
1972-01-01
Computer software was developed which makes it possible to use any general purpose computer with A/D conversion capability as a PSK receiver for low data rate telemetry processing. Carrier tracking, bit synchronization, and matched filter detection are all performed digitally. To aid in the implementation of optimum computer processors, a study of general digital processing techniques was performed which emphasized various techniques for digitizing general analog systems. In particular, the phase-locked loop was extensively analyzed as a typical non-linear communication element. Bayesian estimation techniques for PSK demodulation were studied. A hardware implementation of the digital Costas loop was developed.
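As a flavor of the all-digital processing described, a minimal software Costas loop for BPSK carrier tracking is sketched below; the sample rates, loop gains, and one-pole arm filters are assumptions for illustration, not the report's design:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, fc = 48_000.0, 3_000.0               # sample rate and nominal carrier [Hz]
n = np.arange(20_000)
bits = np.repeat(np.sign(rng.standard_normal(500)), 40)   # 1200-baud BPSK
rx = bits * np.cos(2*np.pi*(fc + 15.0)/fs*n + 0.7)        # +15 Hz offset, phase 0.7

phase, freq = 0.0, 2*np.pi*fc/fs         # NCO state (freq in rad/sample)
alpha, beta = 0.05, 0.002                # proportional and integral loop gains
lpf_i = lpf_q = 0.0
for x in rx:
    i_arm = x * np.cos(phase)            # mix received signal with the NCO
    q_arm = -x * np.sin(phase)
    lpf_i += 0.05 * (i_arm - lpf_i)      # one-pole arm filters reject the 2*fc term
    lpf_q += 0.05 * (q_arm - lpf_q)
    err = lpf_i * lpf_q                  # Costas I*Q phase detector
    freq += beta * err                   # loop filter: integral path ...
    phase += freq + alpha * err          # ... plus proportional path into the NCO

print(f"tracked carrier: {freq * fs / (2*np.pi):.1f} Hz (true 3015.0)")
```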
Parametric evaluation of ball milling of SiC in water
NASA Technical Reports Server (NTRS)
Kiser, J. D.; Herbell, T. P.; Freedman, M. R.
1985-01-01
A statistically designed experiment was conducted to determine optimum conditions for ball milling alpha-SiC in water. The influence of pH adjustment, volume percent solids loading, and mill rotational speed on grinding effectiveness was examined. An equation defining the effect of those milling variables on specific surface area was obtained. The volume percent solids loading of the slurry had the greatest influence on the grinding effectiveness in terms of increase in specific surface area. As grinding effectiveness improved, mill and media wear also increased. Contamination was minimized by use of sintered alpha-SiC milling hardware.
NASA Technical Reports Server (NTRS)
1975-01-01
A shuttle EVLSS Thermal Control System (TCS) is defined. Thirteen heat rejection subsystems, thirteen water management subsystems, nine humidity control subsystems, three pressure control schemes and five temperature control schemes are evaluated. Sixteen integrated TCS systems are studied, and an optimum system is selected based on quantitative weighting of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, a bubble expansion tank for water management, and a slurper and rotary separator for humidity control. Design of the selected subsystem prototype hardware is presented.
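The "quantitative weighting" used for such a selection is the classic weighted-sum trade matrix; a minimal sketch with invented weights and scores (not the study's data):

```python
import numpy as np

criteria = ["weight", "volume", "cost", "complexity", "other"]
wts = np.array([0.30, 0.15, 0.25, 0.20, 0.10])        # invented weights, sum to 1

# rows: candidate integrated TCS designs; columns: 0-10 score per criterion
scores = np.array([
    [8, 7, 6, 7, 6],      # e.g. sublimator + bubble expansion tank + slurper
    [6, 8, 7, 5, 7],
    [7, 6, 8, 6, 5],
])
totals = scores @ wts
print("weighted totals:", np.round(totals, 2), "-> select design", int(np.argmax(totals)))
```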
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
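The report's exact algorithm is not reproduced here, but one PC-friendly estimator consistent with the setup is worth sketching: for a single broadband source received in every feed with independent receiver noise, the maximum-SNR combining weights are proportional to the dominant eigenvector of the sample covariance matrix, which needs only simple accumulations rather than a real-time correlator. A toy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feeds, n_samp = 7, 100_000
gains = rng.normal(0.5, 0.2, n_feeds) + 1j * rng.normal(0.0, 0.2, n_feeds)

source = (rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)) / np.sqrt(2)
noise = (rng.standard_normal((n_feeds, n_samp)) +
         1j * rng.standard_normal((n_feeds, n_samp))) / np.sqrt(2)
x = gains[:, None] * source + noise          # per-feed baseband samples

R = x @ x.conj().T / n_samp                  # 7x7 sample covariance
vals, vecs = np.linalg.eigh(R)
w = vecs[:, -1]                              # dominant eigenvector = weight estimate
align = abs(np.vdot(w, gains)) / (np.linalg.norm(w) * np.linalg.norm(gains))
print(f"alignment of estimated weights with true channel gains: {align:.3f}")
```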
Multi-phase SPH modelling of violent hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.
2015-11-01
This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreyer, J
2007-09-18
During my internship at Lawrence Livermore National Laboratory I worked with microcalorimeter gamma-ray and fast-neutron detectors based on superconducting Transition Edge Sensors (TESs). These instruments are being developed for fundamental science and nuclear non-proliferation applications because of their extremely high energy resolution; however, this comes at the expense of a small pixel size and slow decay times. The small pixel sizes are being addressed by developing detector arrays, while the low count rate is being addressed by developing Digital Signal Processors (DSPs) that allow higher throughput than traditional pulse processing algorithms. Traditionally, low-temperature microcalorimeter pulses have been processed off-line with optimum filtering routines based on the measured spectral characteristics of the signal and the noise. These optimum filters rely on the spectral content of the signal being identical for all events, and therefore require capturing the entire pulse signal without pile-up. In contrast, the DSP algorithm being developed is based on differences in signal levels before and after a trigger event, and therefore does not require the waveform to fully decay, or even the signal level to be close to the baseline. The readout system allows for real-time data acquisition and analysis at count rates exceeding 100 Hz for pulses with decay times of several milliseconds, with minimal loss of energy resolution. Originally developed for gamma-ray analysis with HPGe detectors, the system's hardware and firmware have been modified to accommodate the slower TES signals, and the parameters of the filtering algorithm have been optimized to maximize either resolution or throughput. The following presents an overview of the digital signal processing hardware and discusses the results of characterization measurements made to determine the system's performance.
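The level-difference idea is easy to state in code: estimate each pulse height from the change in mean signal level across the trigger, using only short windows, so neither a full decay nor a quiescent baseline is required. A toy sketch with invented parameters:

```python
import numpy as np

def level_difference(trace, trig, pre=100, post=100, gap=10):
    """Pulse height = mean level just after the trigger minus mean level just before."""
    before = trace[trig - gap - pre : trig - gap].mean()
    after = trace[trig + gap : trig + gap + post].mean()
    return after - before

# demo: two piled-up exponential pulses with a slow (several-ms-like) decay
n = np.arange(6000)
tau = 8000.0                                  # decay long compared to the windows
pulse = lambda t0, amp: amp * np.exp(-(n - t0).clip(0) / tau) * (n >= t0)
trace = pulse(1000, 1.0) + pulse(2500, 0.7)
trace += np.random.default_rng(1).normal(0, 0.01, n.size)

est = level_difference(trace, 2500)
print(f"second pulse: {est:.2f} (true 0.70; small bias from the ongoing decay)")
```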
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
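A minimal sketch of the two-stage structure (a predictive preprocessor producing small non-negative "standard form" symbols, then an adaptive coder choosing a Rice parameter per block) is given below; it illustrates the idea, not the exact NASA-standard bitstream:

```python
import random

def preprocess(samples):
    """Delta-predict, then zigzag-map residuals to non-negative integers."""
    out, prev = [], 0
    for s in samples:
        d, prev = s - prev, s
        out.append(2 * d if d >= 0 else -2 * d - 1)    # 0,-1,1,-2,... -> 0,1,2,3,...
    return out

def rice_bits(symbol, k):
    """Bit cost of one symbol: unary quotient + stop bit + k remainder bits."""
    return (symbol >> k) + 1 + k

def best_block_cost(symbols, k_max=8):
    """Adaptive stage: pick the Rice parameter k minimizing the block's bit cost."""
    costs = {k: sum(rice_bits(s, k) for s in symbols) for k in range(k_max)}
    k = min(costs, key=costs.get)
    return k, costs[k]

random.seed(0)
data = [int(1000 + 30 * random.gauss(0, 1)) for _ in range(64)]   # smooth source
k, bits = best_block_cost(preprocess(data))
print(f"chose k={k}: {bits} bits vs {64 * 16} raw (16-bit samples assumed)")
```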
Towards composition of verified hardware devices
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, G. C.
1991-01-01
Computers are being used where no affordable level of testing is adequate. Safety- and life-critical systems must find a replacement for exhaustive testing to guarantee their correctness. Hardware verification research, which establishes correctness through mathematical proof, has focused on device verification and has largely ignored system composition verification. To address these deficiencies, we examine how the current hardware verification methodology can be extended to verify complete systems.
Preliminary development of digital signal processing in microwave radiometers
NASA Technical Reports Server (NTRS)
Stanley, W. D.
1980-01-01
Topics covered involve a number of closely related tasks including: the development of several control loop and dynamic noise model computer programs for simulating microwave radiometer measurements; computer modeling of an existing stepped frequency radiometer in an effort to determine its optimum operational characteristics; investigation of the classical second order analog control loop to determine its ability to reduce the estimation error in a microwave radiometer; investigation of several digital signal processing unit designs; initiation of efforts to develop required hardware and software for implementation of the digital signal processing unit; and investigation of the general characteristics and peculiarities of digital processing noiselike microwave radiometer signals.
MDO can help resolve the designer's dilemma. [multidisciplinary design optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Tulinius, Jan R.
1991-01-01
Multidisciplinary design optimization (MDO) is presented as a rapidly growing body of methods, algorithms, and techniques that will provide a quantum jump in the effectiveness and efficiency of the quantitative side of design, and will turn that side into an environment in which the qualitative side can thrive. MDO borrows from CAD/CAM for graphic visualization of geometrical and numerical data, from database technology, and from advances in computer software and hardware. Expected benefits from this methodology are a rational, mathematically consistent approach to hypersonic aircraft designs, designs pushed closer to the optimum, and a design process either shortened or leaving time available for different concepts to be explored.
Investigations to improve carbon dioxide control with amine and molecular sieve type sorbers
NASA Technical Reports Server (NTRS)
Bertrand, J. F.; Brose, H. F.; Kester, F. L.; Lunde, P. J.
1972-01-01
The optimization trends and operating parameters of an integral molecular sieve bed heat exchanger were investigated. The optimum combination of substrate and coating for the HS-B porous polymer was determined based on the CO2 dynamic capacity in the presence of water vapor. Full size HS-B canister performance was evaluated. An Amine CO2 Concentrator utilizing IR-45 sorber material and available Manned Orbiting Laboratory hardware was designed, fabricated and tested for use as an experiment in the NASA 90-day space simulator test of 1970. It supported four men in the simulator for 71 days out of the 90-day test duration.
Gritti, Fabrice; McDonald, Thomas; Gilar, Martin
2015-11-13
The impact of the column hardware volume (≃ 1.7 μL) on the optimum reduced plate heights of a series of short 2.1 mm × 50 mm columns (hold-up volume ≃ 80-90 μL) packed with 1.8 μm HSS-T3, 1.7 μm BEH-C18, 1.7 μm CSH-C18, 1.6 μm CORTECS-C18+, and 1.7 μm BEH-C4 particles was investigated. A rapid and non-invasive method based on the reduction of the system dispersion (to only 0.15 μL²) of an I-class Acquity system and on the corrected plate heights (for system dispersion) of five weakly retained n-alkanophenones in RPLC was proposed. Evidence for sample dispersion through the column hardware volume was also revealed from the experimental plot of the peak capacities for smooth linear gradients versus the corrected efficiency of a weakly retained alkanophenone (isocratic runs). The plot is built for a constant gradient steepness irrespective of the applied flow rates (0.01-0.30 mL/min) and column lengths (2, 3, 5, and 10 cm). The volume variance caused by column endfittings and frits was estimated in between 0.1 and 0.7 μL² depending on the applied flow rate. After correction for system and hardware dispersion, the minimum reduced plate heights of short (5 cm) and narrow-bore (2.1 mm i.d.) beds packed with sub-2 μm fully and superficially porous particles were found close to 1.5 and 0.7, respectively, instead of the classical h values of 2.0 and 1.4 for the whole column assembly.
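The correction described above amounts to subtracting the independently measured system variance from the observed peak variance before computing plate counts. A minimal sketch with invented values (the paper's columns and variances differ):

```python
def reduced_plate_height(sigma2_obs_ul2, sigma2_sys_ul2, v_r_ul,
                         col_len_mm=50.0, dp_um=1.7):
    """h = (L/N)/dp, with N computed from the column-only volume variance."""
    sigma2_col = sigma2_obs_ul2 - sigma2_sys_ul2     # uL^2, column contribution
    n_plates = v_r_ul**2 / sigma2_col                # N = V_R^2 / sigma_v^2
    plate_height_m = (col_len_mm * 1e-3) / n_plates  # H = L/N [m]
    return plate_height_m / (dp_um * 1e-6)           # reduced h = H/dp

# e.g. 150 uL retention volume, 1.40 uL^2 observed, 0.15 uL^2 system dispersion
print(f"corrected reduced plate height h = {reduced_plate_height(1.40, 0.15, 150.0):.2f}")
```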
The role of atomic absorption spectrometry in geochemical exploration
Viets, J.G.; O'Leary, R. M.
1992-01-01
In this paper we briefly describe the principles of atomic absorption spectrometry (AAS) and the basic hardware components necessary to make measurements of analyte concentrations. Then we discuss a variety of methods that have been developed for the introduction of analyte atoms into the light path of the spectrophotometer. This section deals with sample digestion, elimination of interferences, and optimum production of ground-state atoms, all critical considerations when choosing an AAS method. Other critical considerations are cost, speed, simplicity, precision, and applicability of the method to the wide range of materials sampled in geochemical exploration. We cannot attempt to review all of the AAS methods developed for geological materials but instead will restrict our discussion to some of those appropriate for geochemical exploration. Our background and familiarity are reflected in the methods we discuss, and we have no doubt overlooked many good methods. Our discussion should therefore be considered a starting point in finding the right method for the problem, rather than the end of the search. Finally, we discuss the future of AAS relative to other instrumental techniques and the promising new directions for AAS in geochemical exploration. © 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palomino Gallo, Jose Luis; /Rio de Janeiro, CBPF
The MINERvA experiment has a highly segmented, high-precision neutrino detector able to record events with high statistics (over 13 million in a four-year run). MINERvA uses the FERMILAB NuMI beamline. The detector will allow a detailed study of neutrino-nucleon interactions. Moreover, the detector has a target with different materials, allowing, for the first time, the study of nuclear effects in neutrino interactions. We present here the work done with the MINERvA reconstruction group that has resulted in: (a) development of new codes to be added to the RecPack package so it can be adapted to the MINERvA detector structure; (b) finding optimum values for two of the MegaTracker reconstruction package variables: PEcut = 4 (minimum number of photoelectrons for a signal to be accepted) and Chi2Cut = 200 (maximum value of χ² for a track to be accepted); (c) testing of the multi-anode photomultiplier tubes used at MINERvA in order to determine the correlation between different channels and to check the devices' dark counts.
Searching for Organics, Fossils, and Biology on Mars
NASA Technical Reports Server (NTRS)
McKay, Christopher P.; DeVincenzi, Donald (Technical Monitor)
2001-01-01
One of the goals of Astrobiology is to understand life on a fundamental level. All life on Earth is constructed from the same basic biochemical building blocks consisting of 20 amino acids with left handed symmetry, five nucleotides, a few sugars of right handed symmetry and some lipids. Using the metaphor of computers this is equivalent to saying that all life shares the same hardware. Beyond hardware similarity, it is now known that all life has fundamentally the same software. The genetic code of life is common to all organisms. Some have argued that the "hammer of evolution is heavy" and life anywhere is likely to be composed of identical biochemical and genetic patterns. However, in a system as complex as biochemistry it is likely that there are numerous local optima and the details of the optimum found by evolutionary selection on another world would likely depend on the initial conditions and random developments in the early biological history on that world. To address these fundamental questions in Astrobiology we need a second example of life: a second genesis.
Shuttle S-band communications technical concepts
NASA Technical Reports Server (NTRS)
Seyl, J. W.; Seibert, W. W.; Porter, J. A.; Eggers, D. S.; Novosad, S. W.; Vang, H. A.; Lenett, S. D.; Lewton, W. A.; Pawlowski, J. F.
1985-01-01
Using the S-band communications system, shuttle orbiter can communicate directly with the Earth via the Ground Spaceflight Tracking and Data Network (GSTDN) or via the Tracking and Data Relay Satellite System (TDRSS). The S-band frequencies provide the primary links for direct Earth and TDRSS communications during all launch and entry/landing phases of shuttle missions. On orbit, S-band links are used when TDRSS Ku-band is not available, when conditions require orbiter attitudes unfavorable to Ku-band communications, or when the payload bay doors are closed. The S-band communications functional requirements, the orbiter hardware configuration, and the NASA S-band communications network are described. The requirements and implementation concepts which resulted in techniques for shuttle S-band hardware development discussed include: (1) digital voice delta modulation; (2) convolutional coding/Viterbi decoding; (3) critical modulation index for phase modulation using a Costas loop (phase-shift keying) receiver; (4) optimum digital data modulation parameters for continuous-wave frequency modulation; (5) intermodulation effects of subcarrier ranging and time-division multiplexing data channels; (6) radiofrequency coverage; and (7) despreading techniques under poor signal-to-noise conditions. Channel performance is reviewed.
On the optimum polarizations of incoherently reflected waves
NASA Technical Reports Server (NTRS)
Van Zyl, Jakob J.; Elachi, Charles; Papas, Charles H.
1987-01-01
The Stokes scattering operator is noted to be the most useful characterization of incoherent scattering in radar imaging; the polarization that would yield an optimum amount of power received from the scatterer is obtained by assuming a knowledge of the Stokes scattering operator instead of the 2x2 scattering matrix with complex elements. It is thereby possible to find the optimum polarizations for the case in which the scatterers can only be fully characterized by their Stokes scattering operator, and the case in which the scatterer can be fully characterized by the complex 2x2 scattering matrix. It is shown that the optimum polarizations reported in the literature form the solution for a subset of a more general class of problems, so that six optimum polarizations can exist for incoherent scattering.
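As a numerical illustration of the optimization the paper carries out analytically, one can grid-search the Poincaré sphere for the polarization state maximizing copolarized received power under an assumed 4x4 Stokes (Kennaugh-type) operator; the matrix below is an arbitrary example, not data from the paper:

```python
import numpy as np

M = np.diag([1.0, 0.6, -0.3, 0.2])       # example Stokes scattering operator

def stokes(psi, chi):
    """Unit Stokes vector from orientation angle psi and ellipticity angle chi."""
    return np.array([1.0,
                     np.cos(2*chi) * np.cos(2*psi),
                     np.cos(2*chi) * np.sin(2*psi),
                     np.sin(2*chi)])

psis = np.linspace(0, np.pi, 181)
chis = np.linspace(-np.pi/4, np.pi/4, 91)
p, c = max(((p, c) for p in psis for c in chis),
           key=lambda pc: stokes(*pc) @ M @ stokes(*pc))   # copolarized power
print(f"max copolarized power {stokes(p, c) @ M @ stokes(p, c):.3f} "
      f"at psi={np.degrees(p):.0f} deg, chi={np.degrees(c):.0f} deg")
```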
Shuttle cryogenic supply system optimization study
NASA Technical Reports Server (NTRS)
1971-01-01
Technical information on different cryogenic supply systems is presented for selecting representative designs. Parametric data and sensitivity studies, and an evaluation of related technology status are included. An integrated mathematical model for hardware program support was developed. The life support system, power generation, and propellant supply are considered. The major study conclusions are the following: Optimum integrated systems tend towards maximizing liquid storage. Vacuum jacketing of tanks is a major effect on integrated systems. Subcritical storage advantages over supercritical storage decrease as the quantity of propellant or reactant decreases. Shuttle duty cycles are not severe. The operational mode has a significant effect on reliability. Components are available for most subsystem applications. Subsystems and components require a minimum amount of technology development.
Program manual for ASTOP, an Arbitrary space trajectory optimization program
NASA Technical Reports Server (NTRS)
Horsewood, J. L.
1974-01-01
The ASTOP program (an Arbitrary Space Trajectory Optimization Program) designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end conditions, and performance index options. The basic approach is conducive to future expansion of features such as the incorporation of new constraints and the addition of new end conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitledge, T.E.; Malloy, S.C.; Patton, C.J.
This manual was assembled for use as a guide for analyzing the nutrient content of seawater samples collected in the marine coastal zone of the Northeast United States and the Bering Sea. Some modifications (changes in dilution or sample pump tube sizes) may be necessary to achieve optimum measurements in very pronounced oligotrophic, eutrophic or brackish areas. Information is presented under the following section headings: theory and mechanics of automated analysis; continuous flow system description; operation of autoanalyzer system; cookbook of current nutrient methods; automated analyzer and data analysis software; computer interfacing and hardware modifications; and trouble shooting. The three appendixes are entitled: references and additional reading; manifold components and chemicals; and software listings.
[Study on extraction technology of soyasaponins from residual of bean ware].
Lu, Rumei; Zhang, Yizhen; Bi, Yi
2003-04-01
To find the optimum extraction technology for soyasaponins from the residue of bean ware. The optimum extraction conditions were investigated by orthogonal design, and the content of soyasaponins was determined by UV spectrophotometry. The optimum extraction technology was A3B1C1, i.e., adding 7 times and then 6 times the amount of 70% alcohol and refluxing twice, for 1.0 h each time. The selected technology gave a higher yield of soyasaponins, good stability and high efficiency.
A Subsystem Test Bed for Chinese Spectral Radioheliograph
NASA Astrophysics Data System (ADS)
Zhao, An; Yan, Yihua; Wang, Wei
2014-11-01
The Chinese Spectral Radioheliograph (CSRH) is a solar-dedicated radio interferometric array that will produce images of the Sun with high spatial, temporal, and spectral resolution simultaneously in the decimetre and centimetre wave ranges. Digital processing of the intermediate frequency (IF) signal is an important part of a radio telescope. This paper describes a flexible, high-speed digital down conversion (DDC) system for CSRH that applies complex mixing, parallel filtering, and decimation algorithms to process the IF signal, and incorporates canonic-signed-digit coding and a bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for CSRH. Software algorithms for simulation and FPGA-based hardware-language algorithms are written which use few hardware resources while achieving high performance, such as processing a high-speed (1 GHz) data flow with 10 MHz spectral resolution. An experiment with the test bed is illustrated using geostationary satellite data observed on March 20, 2014. Because the algorithms on the FPGA are easily altered, the data can be recomputed with different digital signal processing algorithms to select the optimum algorithm.
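The DDC chain itself (complex mixing to baseband, low-pass filtering, decimation) is compactly modeled in software; the sketch below uses invented parameters and scipy filters in place of the FPGA's CSD-coded parallel filters:

```python
import numpy as np
from scipy import signal

fs = 1.0e9                               # 1 GHz IF sample rate
f_if = 250.0e6                           # channel centre to extract
decim = 100                              # 1 GHz -> 10 MHz output rate

t = np.arange(200_000) / fs
x = np.cos(2*np.pi*f_if*t + 0.3) + 0.05*np.random.default_rng(0).standard_normal(t.size)

lo = np.exp(-2j*np.pi*f_if*t)            # digital local oscillator (complex mixing)
baseband = x * lo
taps = signal.firwin(255, cutoff=4.0e6, fs=fs)   # low-pass channel filter
filtered = signal.lfilter(taps, 1.0, baseband)
out = filtered[::decim]                  # decimate to the 10 MHz channel rate

print(f"output rate {fs/decim/1e6:.0f} MHz, {out.size} complex samples")
```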
Study of optimum methods of optical communication
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
Optimum methods of optical communication accounting for the effects of the turbulent atmosphere and quantum mechanics, both by the semi-classical method and by a full-fledged quantum theoretical model, are described. A concerted effort is made to apply the techniques of communication theory to the novel problems of optical communication through a careful study of realistic models and their statistical descriptions, the finding of appropriate optimum structures, the calculation of their performance and, insofar as possible, comparison with conventional and other suboptimal systems. In this unified way the bounds on performance and the structure of optimum communication systems for transmission of information, imaging, tracking, and estimation can be determined for optical channels.
Sensitivity and comparison evaluation of Saturn 5 liquid penetrants
NASA Technical Reports Server (NTRS)
Jones, G. H.
1973-01-01
Results of a sensitivity and comparison evaluation performed on six liquid penetrants that were used on the Saturn 5 vehicle and other space hardware to detect surface discontinuities are described. The relationship between penetrant materials and crack definition capabilities, the optimum penetrant materials evaluation method, and the optimum measurement methods for crack dimensions were investigated. A unique method of precise developer thickness control was evolved, utilizing clear radiographic film and a densitometer. The method of evaluation included five aluminum alloy, 2219-T87, specimens that were heated and then quenched in cold water to produce cracks. The six penetrants were then applied, one at a time, and the crack indications were counted and recorded for each penetrant for comparison purposes. Measurements were made by determining the visual crack indications per linear inch and then sectioning the specimens for a metallographic count of the cracks present. This method provided a numerical approach for assigning a sensitivity index number to the penetrants. Of the six penetrants evaluated, two were not satisfactory (one was not sufficiently sensitive and the other was too sensitive, giving false indications). The other four were satisfactory, with approximately the same sensitivity in the range of 78 to 80.5 percent of total cracks detected.
Coupling of Helmholtz resonators to improve acoustic liners for turbofan engines at low frequency
NASA Technical Reports Server (NTRS)
Dean, L. W.
1975-01-01
An analytical and test program was conducted to evaluate means for increasing the effectiveness of low frequency sound absorbing liners for aircraft turbine engines. Three schemes for coupling low frequency absorber elements were considered. These schemes were analytically modeled and their impedance was predicted over a frequency range of 50 to 1,000 Hz. An optimum and two off-optimum designs of the most promising, a parallel coupled scheme, were fabricated and tested in a flow duct facility. Impedance measurements were in good agreement with predicted values and validated the procedure used to transform modeled parameters to hardware designs. Measurements of attenuation for panels of coupled resonators were consistent with predictions based on measured impedance. All coupled resonator panels tested showed an increase in peak attenuation of about 50% and an increase in attenuation bandwidth of one one-third octave band over that measured for an uncoupled panel. These attenuation characteristics equate to about 35% greater reduction in source perceived noise level (PNL), relative to the uncoupled panel, or a reduction in treatment length of about 24% for constant PNL reduction. The increased effectiveness of the coupled resonator concept for attenuation of low frequency broad spectrum noise is demonstrated.
Distortion and regulation characterization of a Mapham inverter
NASA Technical Reports Server (NTRS)
Sundberg, Richard C.; Brush, Andrew S.; Button, Robert M.; Patterson, Alexander G.
1989-01-01
Output-voltage total harmonic distortion (THD) of a 20-kHz, 6-kVA Mapham resonant inverter is characterized as a function of its switching-to-resonant frequency ratio, f(s)/f(r), using the EASY5 Engineering Analysis System. EASY5 circuit simulation results are compared with hardware test results to verify the accuracy of the simulations. The effects of load on the THD versus f(s)/f(r) is investigated for resistive, leading, and lagging power factor load impedances. The effect of the series output capacitor on the Mapham inverter output-voltage distortion and inherent load regulation is characterized under loads of various power factors and magnitudes. An optimum series capacitor value which improves the inherent load regulation to better than 3 percent is identified. The optimum series capacitor value is different from the value predicted from a modeled frequency domain analysis. An explanation is proposed which takes into account the conduction overlap in the inductor pairs during steady-state inverter operation, which decreases the effective inductance of a Mapham inverter. A fault protection and current limit method is discussed which allows the Mapham inverter to operate into a short circuit, even when the inverter resonant circuit becomes overdamped.
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
Cost as a technology driver. [in aerospace R and D
NASA Technical Reports Server (NTRS)
Fitzgerald, P. E., Jr.; Savage, M.
1976-01-01
Cost management as a guiding factor in optimum development of technology, and proper timing of cost-saving programs in the development of a system or technology with payoffs in development and operational advances, are discussed and illustrated. Advances enhancing the performance of hardware, or software advances raising productivity or reducing cost, are outlined, with examples drawn from: thermochemical thrust maximization, development of cryogenic storage tanks, improvements in fuel cells for Space Shuttle, design of a spacecraft pyrotechnic initiator, cost cutting by reduction in the number of parts to be joined, and cost cutting by dramatic reductions in circuit component number with small-scale double-diffused integrated circuitry. Program-focused supporting research and technology models are devised to aid judicious timing of cost-conscious research programs.
Vlachogiannis, J G
2003-01-01
Taguchi's technique is a helpful tool for achieving experimental optimization of a large number of decision variables with a small number of off-line experiments. The technique appears to be an ideal tool for improving the performance of X-ray medical radiographic screens under a noise source. Many guides are currently available for improving the efficiency of X-ray medical radiographic screens. These guides can be refined using a second-stage parameter optimization based on Taguchi's technique, selecting the optimum levels of controllable X-ray radiographic screen factors. A real example of the proposed technique is presented, giving certain performance criteria. The present research proposes the reinforcement of X-ray radiography by Taguchi's technique as a novel hardware mechanism.
NASA Technical Reports Server (NTRS)
Roman, Monsi C.; Perry, Jay L.; Jan, Darrell L.
2012-01-01
The Advanced Exploration Systems Program's Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project is working to mature optimum atmosphere revitalization and environmental monitoring system architectures. It is the project's objective to enable exploration beyond Low Earth Orbit (LEO) and improve affordability by focusing on three primary goals: 1) achieving high reliability, 2) reducing dependence on a ground-based logistics resupply model, and 3) maximizing commonality between atmosphere revitalization subsystem components and those needed to support other exploration elements. The ARREM project's strengths include using existing developmental hardware and testing facilities when possible, and a well-coordinated effort among the NASA field centers that contributed to past ARS and EMS technology development projects.
LWS design replacement study: Optimum design and tradeoff analysis
NASA Technical Reports Server (NTRS)
1973-01-01
A design for two long-wavelength (LW) focal-plane and cooler assemblies, including associated preamplifiers and post-amplifiers is presented. The focal-planes and associated electronic assemblies are intended as direct replacement hardware to be installed into the existing 24-channel multispectral scanner used with the NASA Earth Observations Aircraft Program. An organization skilled in the art of LWIR systems can fabricate and deliver the two long-wavelength focal-plane assemblies described in this report when provided with the data and drawings developed during the performance of this contract. The concepts developed during the study including the alternative approaches and selection of components are discussed. Modifications to the preliminary design as reported in a preliminary design review meeting have also been included.
Performance limits for exo-clutter Ground Moving Target Indicator (GMTI) radar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2010-09-01
The performance of a Ground Moving Target Indicator (GMTI) radar system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to 'get your arms around' the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall GMTI radar system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the 'seek time'.
Solid state switch panel. [determination of optimum transducer type for required switches
NASA Technical Reports Server (NTRS)
Beenfeldt, E.
1973-01-01
An intensive study of various forms of transducers was conducted with application towards hermetically sealing the transducer and all electronics. The results of the study indicated that the Hall effect devices and a LED/phototransistor combination were the most practical for this type of application. Therefore, hardware was developed utilizing a magnet/Hall effect transducer for single action switches and LED/phototransistor transducers for rotary multiposition or potentiometer applications. All electronics could be housed in a hermetically sealed compartment. A number of switches were built and models were hermetically sealed to prove the feasibility of this type of fabrication. One of each type of switch was subjected to temperature cycling, vibration, and EMI tests. The results of these tests are presented.
Compiling quantum circuits to realistic hardware architectures using temporal planners
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy
2018-04-01
To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generate a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
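For concreteness, the sketch below shows the problem being solved in toy form: routing logical two-qubit gates onto a line of qubits with nearest-neighbour interactions, inserting swaps and accumulating gate durations into a makespan. It uses a greedy heuristic with assumed durations purely for illustration; the paper's contribution is to encode exactly this kind of problem (including the ordering freedom of commuting QAOA gates, which the greedy baseline ignores) for off-the-shelf temporal planners:

```python
GATE_T, SWAP_T = 3, 9                        # assumed durations (arbitrary units)

def compile_linear(n_qubits, gates):
    """Greedily schedule logical 2-qubit gates on a nearest-neighbour line."""
    pos = list(range(n_qubits))              # pos[logical qubit] = physical slot
    busy = [0] * n_qubits                    # busy-until time of each slot
    for a, b in gates:
        while abs(pos[a] - pos[b]) > 1:      # not adjacent: swap a one step toward b
            step = 1 if pos[b] > pos[a] else -1
            here, there = pos[a], pos[a] + step
            other = pos.index(there)         # logical qubit occupying slot 'there'
            start = max(busy[here], busy[there])
            busy[here] = busy[there] = start + SWAP_T
            pos[a], pos[other] = there, here
        start = max(busy[pos[a]], busy[pos[b]])
        busy[pos[a]] = busy[pos[b]] = start + GATE_T
    return max(busy)                         # circuit duration (makespan)

# one QAOA-style layer of commuting ZZ gates on five qubits
print("makespan:", compile_linear(5, [(0, 4), (1, 3), (0, 2), (2, 4)]))
```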
NASA Astrophysics Data System (ADS)
Brereton, Margot Felicity
A series of short engineering exercises and design projects was created to help students learn to apply abstract knowledge to physical experiences with hardware. The exercises involved designing machines from kits of materials and dissecting and analyzing familiar household products. Students worked in teams. During the activities students brought their knowledge of engineering fundamentals to bear. Videotape analysis was used to identify and characterize the ways in which hardware contributed to learning fundamental concepts. Structural and qualitative analyses of videotaped activities were undertaken. Structural analysis involved counting the references to theory and hardware and the extent of interleaving of references in activity. The analysis found that there was much more discussion linking fundamental concepts to hardware in some activities than in others. The analysis showed that the interleaving of references to theory and hardware in activity is observable and quantifiable. Qualitative analysis was used to investigate the dialog linking concepts and hardware. Students were found to advance their designs and their understanding of engineering fundamentals through a negotiation process in which they pitted abstract concepts against hardware behavior. Through this process students sorted out theoretical assumptions and causal relations. In addition they discovered design assumptions, functional connections and physical embodiments of abstract concepts in hardware, developing a repertoire of familiar hardware components and machines. Hardware was found to be integral to learning, affecting the course of inquiry and the dynamics of group interaction. Several case studies are presented to illustrate the processes at work. The research illustrates the importance of working across the boundary between abstractions and experiences with hardware in order to learn engineering and physical sciences. The research findings are: (a) the negotiation process by which students discover fundamental concepts in hardware (and three central causes of negotiation breakdown); (b) a characterization of the ways that material systems contribute to learning activities, (the seven roles of hardware in learning); (c) the characteristics of activities that support discovering fundamental concepts in hardware (plus several engineering exercises); (d) a research methodology to examine how students learn in practice.
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree-decomposition to provenly identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
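In miniature, the search problem looks as follows: choose one rotamer per position to minimize a sum of unary and pairwise energies. The toy below solves a 6-position, 4-rotamer instance by exhaustive enumeration with random stand-in energies; the paper's point is that exhaustive search is hopeless at 10^234 combinations, which is why branch and bound, arc consistency and tree decomposition are needed for provable optimality:

```python
import itertools, random

random.seed(0)
n_pos, n_rot = 6, 4                      # 4**6 = 4096 conformations (tiny!)
E1 = [[random.uniform(0, 2) for _ in range(n_rot)] for _ in range(n_pos)]
E2 = {(i, j): [[random.uniform(-1, 1) for _ in range(n_rot)] for _ in range(n_rot)]
      for i in range(n_pos) for j in range(i + 1, n_pos)}

def energy(assign):
    """Decomposable energy: per-position terms plus pairwise terms."""
    e = sum(E1[i][r] for i, r in enumerate(assign))
    e += sum(E2[i, j][assign[i]][assign[j]]
             for i in range(n_pos) for j in range(i + 1, n_pos))
    return e

gmec = min(itertools.product(range(n_rot), repeat=n_pos), key=energy)
print("GMEC rotamers:", gmec, f"E = {energy(gmec):.3f}")
```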
Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry
Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui
2014-01-01
Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
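A simplified statistical cartoon of the standard-addition logic (not the MALDI measurement model itself) is sketched below: spiking the target by roughly the original LOD lifts the peak above the noise floor, after which the endogenous contribution is inferred from the measured total minus the known spike response. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
noise_sd = 1.0
lod = 3 * noise_sd               # conventional 3-sigma limit of detection
endogenous = 1.2                 # true level, below the unspiked LOD

def measure(level, n=64):        # replicate measurements with additive noise
    return level + rng.normal(0, noise_sd, n)

spike = lod                      # spike ~ original LOD (the optimum found above)
spiked = measure(endogenous + spike)

# subtract the known spike response to reveal the endogenous contribution
est = spiked.mean() - spike
se = spiked.std(ddof=1) / np.sqrt(spiked.size)
print(f"endogenous estimate: {est:.2f} +/- {se:.2f} (true {endogenous})")
```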
A systematic approach for locating optimum sites
Angel Ramos; Isabel Otero
1979-01-01
The basic information collected for landscape planning studies may be given the form of an s × m matrix, where s is the number of landscape units and m the number of data gathered for each unit. The problem of finding the optimum location for a given project is translated into the problem of ranking the series of vectors in the matrix which represent landscape...
Performance limits for maritime Inverse Synthetic Aperture Radar (ISAR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2013-11-01
The performance of an Inverse Synthetic Aperture Radar (ISAR) system depends on a variety of factors, many of which are interdependent in some manner. In this report we specifically examine ISAR as applied to maritime targets (e.g., ships). It is often difficult to get your arms around the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall ISAR system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the seek time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusakovskii, K. B.; Zmaznov, E. Yu.; Katantsev, S. V.
The experience in the installation of modern digital systems for controlling converter units at the Vyborg converter substation on the basis of advanced microprocessor devices is considered. It is shown that debugging of a control and protection system on mathematical and physical models does not guarantee optimum control of actual converter devices. Examples of advancing the control and protection system are described, the necessity for which has become obvious in tests of actual equipment. Comparison of oscillograms of processes before optimization of the control system and after its optimization and adjustment shows that the digital control system makes it possible to improve substantially the algorithms of control and protection in the short term and without changing the hardware component.
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
Optimum Suction Distribution for Transition Control
NASA Technical Reports Server (NTRS)
Balakumar, P.; Hall, P.
1996-01-01
The optimum suction distribution which gives the longest laminar region for a given total suction is computed. The goal here is to provide the designer with a method to find the best suction distribution subject to some overall constraint applied to the suction. We formulate the problem using the Lagrangian multiplier method with constraints. The resulting non-linear system of equations is solved using the Newton-Raphson technique. The computations are performed for a Blasius boundary layer on a flat plate and for crossflow cases. For the Blasius boundary layer, the optimum suction distribution peaks upstream of the maximum growth rate region and remains flat in the middle before decreasing to zero at the transition point. For the stationary and travelling crossflow instabilities, the optimum suction peaks upstream of the maximum growth rate region and decreases gradually to zero.
Challenges to Cabin Humidity Removal Presented by Intermittent Condensing Conditions
NASA Technical Reports Server (NTRS)
vonJouanne, Roger G.; Williams, David E.
2007-01-01
On-orbit temperature and humidity control (THC) is more easily accomplished when the THC hardware is either consistently dry (i.e., no humidity control is occurring) or consistently wet. The system is especially challenged when intermittent wet/dry conditions occur. The first six years of on-orbit ISS operations have revealed specific concerns within the THC system, particularly in the condensing heat exchanger and the downstream air/water separator. Failed or degraded hardware has been returned to the ground and investigated. This paper presents the investigation findings and the recommended hardware and procedural revisions to prevent and recover from the effects of intermittent condensing conditions.
Optimizing coherent anti-Stokes Raman scattering by genetic algorithm controlled pulse shaping
NASA Astrophysics Data System (ADS)
Yang, Wenlong; Sokolov, Alexei
2010-10-01
Hybrid coherent anti-Stokes Raman scattering (CARS) has been successfully applied to fast, chemically sensitive detection. With the development of femtosecond pulse-shaping techniques, it is of great interest to find the optimum pulse shapes for CARS. The optimum pulse shapes should minimize the non-resonant four-wave mixing (NRFWM) background and maximize the CARS signal. A genetic algorithm (GA) is developed to perform a heuristic search for optimized pulse shapes, which give the best signal-to-background ratio. The GA is shown to be able to rediscover the hybrid CARS scheme and to find optimized pulse shapes for customized applications by itself.
Finding optimum airfoil shape to get maximum aerodynamic efficiency for a wind turbine
NASA Astrophysics Data System (ADS)
Sogukpinar, Haci; Bozkurt, Ismail
2017-02-01
In this study, the aerodynamic performance of the S-series wind turbine airfoil S 825 is investigated to find the optimum angle of attack. Aerodynamic performance calculations are carried out with a computational fluid dynamics (CFD) method based on a finite-volume approximation of the Reynolds-Averaged Navier-Stokes (RANS) equations. The lift and pressure coefficients and the lift-to-drag ratio of the S 825 airfoil are analyzed with the SST turbulence model, and the results are cross-checked against wind tunnel data to verify the accuracy of the CFD approximation. The comparison indicates that the SST turbulence model used in this study can predict the aerodynamic properties of the wind blade.
A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
2000-01-01
Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems, such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
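For illustration only, a small binary-coded genetic algorithm of the kind described, searching over candidate actuator locations; the per-location moment data are random placeholders rather than aerodynamic influence coefficients, and the fitness weighting is a guessed trade-off between pitch authority, coupling and actuator count:

    # Illustrative sketch only: a binary-coded GA for actuator placement.
    import numpy as np

    rng = np.random.default_rng(1)
    N_LOC = 20                                  # candidate actuator locations
    M = rng.normal(size=(N_LOC, 3))             # per-location (pitch, roll, yaw) moments

    def fitness(mask):
        if mask.sum() == 0:
            return -1e9
        m = M[mask.astype(bool)].sum(axis=0)    # combined moment of selected set
        pitch, roll, yaw = np.abs(m)
        # reward pure pitch authority, penalize coupling and actuator count
        return pitch - 5.0 * (roll + yaw) - 0.1 * mask.sum()

    def ga(pop_size=60, gens=100, p_mut=0.02):
        pop = rng.integers(0, 2, size=(pop_size, N_LOC))
        for _ in range(gens):
            fit = np.array([fitness(ind) for ind in pop])
            # tournament selection
            idx = rng.integers(0, pop_size, size=(pop_size, 2))
            parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                                   idx[:, 0], idx[:, 1])]
            # one-point crossover
            cut = rng.integers(1, N_LOC, size=pop_size // 2)
            kids = parents.copy()
            for k, c in enumerate(cut):
                kids[2*k, c:], kids[2*k+1, c:] = (parents[2*k+1, c:].copy(),
                                                  parents[2*k, c:].copy())
            # bit-flip mutation
            kids ^= (rng.random(kids.shape) < p_mut).astype(kids.dtype)
            pop = kids
        fit = np.array([fitness(ind) for ind in pop])
        return pop[fit.argmax()], fit.max()

    best, score = ga()
    print("selected locations:", np.flatnonzero(best), "fitness:", score)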
Environmental Control and Life Support (ECLS) Hardware Commonality for Exploration Vehicles
NASA Technical Reports Server (NTRS)
Carrasquillo, Robyn; Anderson, Molly
2012-01-01
In August 2011, the Environmental Control and Life Support Systems (ECLSS) technical community, along with associated stakeholders, held a workshop to review NASA's plans for Exploration missions and vehicles with two objectives: revisit the Exploration Atmospheres Working Group (EAWG) findings from 2006, and discuss preliminary ECLSS architecture concepts and technology choices for Exploration vehicles, identifying areas for potential common hardware or technologies to be utilized. Key considerations for selection of vehicle design total pressure and percent oxygen include operational concepts for extravehicular activity (EVA) and prebreathe protocols, materials flammability, and controllability within pressure and oxygen ranges. New data for these areas since the 2006 study were presented and discussed, and the community reached consensus on conclusions and recommendations for target design pressures for each Exploration vehicle concept. For the commonality study, the workshop identified many areas of potential commonality across the Exploration vehicles as well as with heritage International Space Station (ISS) and Shuttle hardware. Of the 36 ECLSS functions reviewed, 16 were considered to have strong potential for commonality, 13 were considered to have some potential commonality, and 7 were considered to have limited potential for commonality due to unique requirements or lack of sufficient heritage hardware. These findings, which will be utilized in architecture studies and budget exercises going forward, are presented in detail.
Performance analysis and optimization of power plants with gas turbines
NASA Astrophysics Data System (ADS)
Besharati-Givi, Maryam
The gas turbine is one of the most important applications for power generation. The purpose of this research is the performance analysis and optimization of power plants using different design systems at different operating conditions. In this research, accurate efficiency calculation and the search for optimum efficiency values in the design of chiller inlet cooling and blade-cooled gas turbines are investigated. This research shows how it is possible to find the optimum design for different operating conditions, such as ambient temperature, relative humidity, turbine inlet temperature, and compressor pressure ratio. The simulated designs include the chiller, with varied COP, and fogging cooling for the compressor. In addition, the overall thermal efficiency is improved by adding design systems such as reheat and regenerative heating. The other goal of this research focuses on the blade-cooled gas turbine for higher turbine inlet temperature and, consequently, higher efficiency. New film cooling equations, along with a varying film cooling effectiveness for the optimum cooling air requirement at the first-stage blades, and internal and trailing edge cooling for the second stage, are introduced for optimal efficiency calculation. This research sets the groundwork for using the optimum value of efficiency calculation while using inlet cooling and blade cooling designs. In the final step, the designed systems in the gas cycles are combined with a steam cycle for performance improvement.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi's signal-to-noise ratio. A confirmation test was done to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the ideal response quality in WEDM of Ni-Ti shape memory alloy.
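A hedged sketch of the desirability-function step (composite desirability as the geometric mean of per-response desirabilities, larger-the-better for cutting speed and smaller-the-better for kerf width and roughness); the response values below are invented placeholders, not the paper's WEDM data:

    # Hedged sketch of desirability function analysis (DFA) for three responses;
    # the response values are made-up placeholders, not the paper's data.
    import numpy as np

    # rows = experimental runs; columns = cutting speed, kerf width, roughness
    Y = np.array([[2.1, 0.30, 2.8],
                  [2.6, 0.28, 3.1],
                  [3.0, 0.33, 2.5],
                  [2.4, 0.26, 2.9]])

    def d_larger(y):   # larger-the-better (cutting speed)
        return (y - y.min()) / (y.max() - y.min())

    def d_smaller(y):  # smaller-the-better (kerf width, surface roughness)
        return (y.max() - y) / (y.max() - y.min())

    d = np.column_stack([d_larger(Y[:, 0]),
                         d_smaller(Y[:, 1]),
                         d_smaller(Y[:, 2])])
    D = d.prod(axis=1) ** (1.0 / 3.0)   # composite desirability (geometric mean)
    print("best run:", D.argmax(), "composite desirability:", D.round(3))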
2017-03-01
determine the optimum required operational capability of the unmanned aerial vehicles to support Korean rear area operations. We use Map Aware Non-Uniform Automata, an agent-based simulation software platform for computational experiments. The study models a scenario ... Through further experimentations and analyses, we were able to find the optimum characteristics of an improved unmanned aerial ...
Cognitive Code-Division Channelization
2011-04-01
[22] G. N. Karystinos and D. A. Pados, "New bounds on the total squared correlation and optimum design of DS-CDMA binary signature sets," IEEE Trans. Commun., vol. 51, pp. 48-51, Jan. 2003. [23] C. Ding, M. Golin, and T. Kløve, "Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary..." ... receiver pair coexisting with a primary code-division multiple-access (CDMA) system. Our objective is to find the optimum transmitting power and code
Pfeil, Thomas; Potjans, Tobias C; Schrader, Sven; Potjans, Wiebke; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2012-01-01
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists. PMID:22822388
Steinhaus, Benjamin; Garcia, Marcelo L; Shen, Amy Q; Angenent, Largus T
2007-03-01
Conventional studies of the optimum growth conditions for methanogens (methane-producing, obligate anaerobic archaea) are typically conducted with serum bottles or bioreactors. The use of microfluidics to culture methanogens allows direct microscopic observations of the time-integrated response of growth. Here, we developed a microbioreactor (microBR) with approximately 1-microl microchannels to study some optimum growth conditions for the methanogen Methanosaeta concilii. The microBR is contained in an anaerobic chamber specifically designed to place it directly onto an inverted light microscope stage while maintaining a N2-CO2 environment. The methanogen was cultured for months inside microchannels of different widths. Channel width was manipulated to create various fluid velocities, allowing the direct study of the behavior and responses of M. concilii to various shear stresses and revealing an optimum shear level of approximately 20 to 35 microPa. Gradients in a single microchannel were then used to find an optimum pH level of 7.6 and an optimum total NH4-N concentration of less than 1,100 mg/liter (<47 mg/liter as free NH3-N) for M. concilii under conditions of the previously determined ideal shear stress and pH and at a temperature of 35 degrees C.
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimum, fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is certainly a candidate for the optimum solution. The constraints on the generality of this possible solution remain to be clarified.
Stable polyurethane coatings for electronic circuits. NASA tech briefs, fall 1982, volume 7, no. 1
NASA Technical Reports Server (NTRS)
1982-01-01
One of the most severe deficiencies of polyurethanes as engineering materials for electrical applications has been their sensitivity to combined humidity and temperature environments. Gross failure by reversion of urethane connector potting materials has occurred under these conditions. This has resulted both in the scrapping of expensive hardware and, in other instances, in reduced reliability. A basic objective of this study has been to gain a more complete understanding of the mechanisms and interactions of moisture in urethane systems to guide the development of reversion-resistant materials for connector potting and conformal coating applications in high humidity environments. Basic polymer studies of molecular weight and distribution, polymer structure, and functionality were carried out to define those areas responsible for hydrolytic instability and to define polymer structural features conducive to optimum hydrolytic stability.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Dual throat thruster cold flow analysis
NASA Technical Reports Server (NTRS)
Lundgreen, R. B.; Nickerson, G. R.; Obrien, C. J.
1978-01-01
The concept was evaluated with cold flow (nitrogen gas) testing and through analysis for application as a tripropellant engine for single-stage-to-orbit type missions. Three modes of operation were tested and analyzed: (1) Mode 1 Series Burn, (2) Mode 1 Parallel Burn, and (3) Mode 2. Primary emphasis was placed on the Mode 2 plume attachment aerodynamics and performance. The conclusions from the test data analysis are as follows: (1) the concept is aerodynamically feasible, (2) the performance loss is as low as 0.5 percent, (3) the loss is minimized by an optimum nozzle spacing corresponding to an AF-ATS ratio of about 1.5 or an Le/Rtp ratio of 3.0 for the dual throat hardware tested, requiring only 4% bleed flow, (4) the Mode 1 and Mode 2 geometry requirements are compatible and pose no significant design problems.
NASA Technical Reports Server (NTRS)
Kovach, L. S.; Zdankiewicz, E. M.
1987-01-01
Vapor compression distillation technology for phase change recovery of potable water from wastewater has evolved as a technically mature approach for use aboard the Space Station. A program to parametrically test an advanced preprototype Vapor Compression Distillation Subsystem (VCDS) was completed during 1985 and 1986. In parallel with parametric testing, a hardware improvement program was initiated to test the feasibility of incorporating several key improvements into the advanced preprototype VCDS following initial parametric tests. Specific areas of improvement included long-life, self-lubricated bearings, a lightweight, highly-efficient compressor, and a long-life magnetic drive. With the exception of the self-lubricated bearings, these improvements are incorporated. The advanced preprototype VCDS was designed to reclaim 95 percent of the available wastewater at a nominal water recovery rate of 1.36 kg/h achieved at a solids concentration of 2.3 percent and 308 K condenser temperature. While this performance was maintained for the initial testing, a 300 percent improvement in water production rate with a corresponding lower specific energy was achieved following incorporation of the improvements. Testing involved the characterization of key VCDS performance factors as a function of recycle loop solids concentration, distillation unit temperature and fluids pump speed. The objective of this effort was to expand the VCDS data base to enable defining optimum performance characteristics for flight hardware development.
Chain-Based Communication in Cylindrical Underwater Wireless Sensor Networks
Javaid, Nadeem; Jafri, Mohsin Raza; Khan, Zahoor Ali; Alrajeh, Nabil; Imran, Muhammad; Vasilakos, Athanasios
2015-01-01
Appropriate network design is very significant for Underwater Wireless Sensor Networks (UWSNs). Application-oriented UWSNs are planned to achieve certain objectives. Therefore, there is always a demand for efficient data routing schemes which can fulfill certain requirements of application-oriented UWSNs. These networks can be of any shape, i.e., rectangular, cylindrical or square. In this paper, we propose chain-based routing schemes for application-oriented cylindrical networks and also formulate mathematical models to find a global optimum path for data transmission. In the first scheme, we devise four interconnected chains of sensor nodes to perform data communication. In the second scheme, we propose a routing scheme in which two chains of sensor nodes are interconnected, whereas in the third scheme single-chain-based routing is performed in cylindrical networks. After finding local optimum paths in separate chains, we find global optimum paths through their interconnection. Moreover, we develop a computational model for the analysis of end-to-end delay. We compare the performance of the above three proposed schemes with that of Power Efficient Gathering System in Sensor Information Systems (PEGASIS) and Congestion adjusted PEGASIS (C-PEGASIS). Simulation results show that our proposed 4-chain based scheme performs better than the other selected schemes in terms of network lifetime, end-to-end delay, path loss, transmission loss, and packet sending rate. PMID:25658394
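A rough sketch of the chain-then-interconnect idea under simplifying assumptions (2-D node coordinates, a PEGASIS-style greedy chain per group, and the chain member nearest the sink acting as leader); the coordinates and group layout are invented for illustration and do not reproduce the paper's cylindrical geometry or delay model:

    # Rough sketch: build a local chain per node group, then pick a leader
    # per chain to interconnect toward the sink (coordinates are invented).
    import numpy as np

    rng = np.random.default_rng(2)
    sink = np.array([0.0, 0.0])
    groups = [rng.uniform(-50, 50, size=(8, 2)) + off
              for off in ([60, 0], [-60, 0], [0, 60], [0, -60])]

    def greedy_chain(nodes):
        """PEGASIS-style chain: start at the node farthest from the sink,
        repeatedly hop to the nearest unvisited node."""
        order = [int(np.argmax(np.linalg.norm(nodes - sink, axis=1)))]
        left = set(range(len(nodes))) - set(order)
        while left:
            last = nodes[order[-1]]
            nxt = min(left, key=lambda i: np.linalg.norm(nodes[i] - last))
            order.append(nxt); left.remove(nxt)
        return order

    for g, nodes in enumerate(groups):
        chain = greedy_chain(nodes)
        # the chain member nearest the sink acts as leader for interconnection
        leader = min(chain, key=lambda i: np.linalg.norm(nodes[i] - sink))
        print(f"chain {g}: visit order {chain}, leader node {leader}, "
              f"leader-to-sink distance {np.linalg.norm(nodes[leader] - sink):.1f}")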
Study of strong turbulence effects for optical wireless links
NASA Astrophysics Data System (ADS)
Yuksel, Heba; Meric, Hasim; Kunter, Fulya
2012-10-01
Strong turbulence measurements taken using real-time optical wireless experimental setups are valuable when studying the effects of turbulence regimes on a propagating optical beam. In any kind of FSO system, knowing the strength of the turbulence, and thus the refractive index structure constant, is beneficial for choosing an optimum communication bandwidth. Even if the FSO link is placed well above the ground so that turbulence effects are weak, severe atmospheric conditions can still change the turbulence regime. A successful theory covering all regimes would allow the image to be processed directly on existing or additional hardware, and the optimum bandwidth of the communication line to be decided firsthand. For this purpose, strong turbulence data were collected using an outdoor optical wireless setup placed about 85 centimeters above the ground, with an acceptable declination and a path length of about 250 meters, inducing strong turbulence in the propagating beam. Variations of turbulence strength estimation methods as well as frame image analysis techniques were then applied to the experimental data in order to study the effects of different parameters on the result. The strong turbulence data are compared with existing weak and intermediate turbulence data. The aperture averaging factor for different turbulence regimes is also investigated.
Development of a shuttle recovery Commercial Materials Processing in Space (CMPS) program
NASA Technical Reports Server (NTRS)
1989-01-01
The work performed has covered the following tasks: update commercial users' requirements; assess availability of carriers and facilities; shuttle availability assessment; development of an optimum accommodations plan; and payload documentation requirements assessment. The results from the first four tasks are presented. To update commercial user requirements, contacts were made with the JEA and CCDS partners to obtain copies of their most recent official flight requests. From these requests, the commercial partners' short- and long-range plans for flight dates, flight frequency, experiment hardware and carriers were determined. A 34 by 44 inch chart was completed to give a snapshot view of the progress of commercialization in space. Further, an assessment was made of the availability of carriers and facilities. Both existing carriers and those under development were identified for use by the commercial partners. A database was compiled to show the capabilities of the carriers. A shuttle availability assessment was performed using the primary and secondary shuttle manifests released by NASA. Analysis of the manifests produced a flight-by-flight list of flight opportunities available to commercial users. Using inputs from the first three tasks, an Optimum Accommodations Plan was developed. The Accommodation Plan shows the commercial users manifested by flight, the experiment flown, the carrier used and a complete list of commercial users that could not be manifested in each calendar year.
Laser cutting metallic plates using a 2kW direct diode laser source
NASA Astrophysics Data System (ADS)
Fallahi Sichani, E.; Hauschild, D.; Meinschien, J.; Powell, J.; Assunção, E. G.; Blackburn, J.; Khan, A. H.; Kong, C. Y.
2015-07-01
This paper investigates the feasibility of using a 2kW direct diode laser source for producing high-quality cuts in a variety of materials. Cutting trials were performed in a two-stage experimental procedure. The first phase of trials was based on a one-factor-at-a-time change of process parameters aimed at exploring the process window and finding a semi-optimum set of parameters for each material/thickness combination. In the second phase, a full factorial experimental matrix was performed for each material and thickness, as a result of which, the optimum cutting parameters were identified. Characteristic values of the optimum cuts were then measured as per BS EN ISO 9013:2002.
Influence of operating conditions on the air gasification of dry refinery sludge in updraft gasifier
NASA Astrophysics Data System (ADS)
Ahmed, R.; Sinnathambi, C. M.
2013-06-01
In the present work, details of the equilibrium modeling of dry refinery sludge (DRS) gasification in an updraft gasifier are presented using the ASPEN PLUS simulator. Because little information is available in the open literature on refinery sludge gasification in updraft gasifiers, an evaluation of its optimum gasification conditions is presented in this paper. For this purpose, a Taguchi orthogonal array design (statistical software) is applied to find optimum conditions for DRS gasification. The goal is to identify the most significant process variables in DRS gasification. The process variables examined include oxidation zone temperature, equivalence ratio, and operating pressure. Attention was focused on the effect of the optimum operating conditions on the gas composition of H2 and CO (desirable) and CO2 (undesirable) in terms of mass fraction. From our results and findings, it can be concluded that the syngas (H2 and CO) yield in terms of mass fraction favors a high oxidation zone temperature and atmospheric pressure, while the CO2 acid gas is favored at a high equivalence ratio and air flow rate, tending towards complete combustion.
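As a hedged illustration of the Taguchi signal-to-noise step used in such screening studies (larger-the-better form, S/N = -10 log10(mean(1/y^2))), with placeholder yield values rather than the paper's simulation results:

    # Illustrative Taguchi signal-to-noise computation (values are placeholders).
    # Larger-the-better S/N for syngas yield: S/N = -10*log10(mean(1/y^2)).
    import numpy as np

    # replicated H2+CO mass-fraction results for three settings of one factor
    runs = {"T_low": [0.31, 0.33], "T_mid": [0.42, 0.40], "T_high": [0.47, 0.49]}
    for level, y in runs.items():
        y = np.asarray(y, dtype=float)
        sn = -10.0 * np.log10(np.mean(1.0 / y**2))
        print(f"{level}: S/N = {sn:.2f} dB")
    # the level with the highest S/N is taken as the optimum for that factor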
Megalamanegowdru, Jayachandra; Ankola, Anil V; Vathar, Jagadishchandra; Vishwakarma, Prashanthkumar; Dhanappa, Kirankumar B; Balappanavar, Aswini Y
2012-01-01
To assess and compare the periodontal health status among permanent residents of low, optimum and high fluoride areas in Kolar District, India. A house-to-house survey was conducted in a population consisting of 925 permanent residents aged 35 to 44 years in three villages having different levels of fluoride concentrations in the drinking water. The fluoride concentrations in selected villages were 0.48 ppm (low), 1.03 ppm (optimum) and 3.21 ppm (high). The ion selective electrode method was used to estimate the fluoride concentration in the drinking water. Periodontal status was assessed using the Community Periodontal Index (CPI) and loss of attachment (LOA). Results were analysed using the chi-square test and logistic regression. The chi-square test was used to find the group differences and logistic regression to find association between the variables. The overall prevalence of periodontitis was 72.9%; specifically, prevalences were 95.4%, 76.3% and 45.7% in low, optimum and high fluoride areas, respectively. The number of sextants with shallow or deep pockets decreased (shallow pockets: 525, 438, 217; deep pockets: 183, 81, 34) from low to high fluoride areas (odds ratio: 71.3). The low fluoride area had a 7.9-fold higher risk of periodontitis than the optimum fluoride area and a 30-fold higher risk than the high fluoride area, which was highly significant (χ2 = 53.5, P < 0.0001 and χ2 = 192.8, P < 0.001, respectively). The severity of periodontal disease is inversely associated with the fluoride concentrations in drinking water. This relation can provide an approach to fluoride treatments to reduce the prevalence or incidence of this disease.
NASA Astrophysics Data System (ADS)
Sunarmani; Setyadjit; Ermi, S.
2018-05-01
Ongol-ongol is a product for food diversification made by mixing composite flour of taro, banana and mung bean and then steaming with hot air. The purpose of this study was to find the optimum way to produce 'ongol-ongol' from composite flour and to determine its storage life by a prediction method. The research consisted of two stages, namely the determination of the optimum formula of 'ongol-ongol' with the Design Expert DX 8.1.6 software and the estimation of the shelf life of the optimum formula by the ASLT (Accelerated Shelf Life Test) method. The optimum formula of the steamed meal was produced from composite flour and arenga flour at a ratio of 50:50 and a flour-to-water ratio of 1:1. The proximate composition of the steamed meal of the optimum formula is 36.53% moisture, 1.36% ash, 14.48% fat, 28.5% protein, and 44.77% carbohydrate (w/w). The energy value obtained from 100 g of 'ongol-ongol' was 320.8 kcal. The recommended storage life of the steamed meal is 12.54 days at ambient temperature.
NASA Technical Reports Server (NTRS)
Hall, Nancy R.; Stocker, Dennis P.; DeLombard, Richard
2011-01-01
This paper describes two student competition programs that allow student teams to conceive a science or engineering experiment for a microgravity environment. Selected teams design and build their experimental hardware, conduct baseline tests, and ship their experiment to NASA, where it is operated in the 2.2 Second Drop Tower. The hardware and acquired data are provided to the teams after the tests are conducted so that the teams can prepare their final reports about their findings.
Optimum Actuator Selection with a Genetic Algorithm for Aircraft Control
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes, while minimizing the coupling. For example, the desired actuators produce a pure roll moment without at the same time causing much pitch or yaw. For a typical wing, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements and mission constraints. A genetic algorithm has been developed for finding the best placement for four actuators to produce an uncoupled pitch moment. The genetic algorithm has been extended to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control. A simplified, untapered, unswept wing is the model for each application.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
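A short sketch of the standard variational argument behind this square-root law (our paraphrase, not quoted from the paper): if a draw from the search density q finds a target at x with probability q(x)v, where v is the small detection volume, the expected number of trials and its constrained minimization read

    E[n] = \int \frac{p(x)}{v\,q(x)}\,dx , \qquad \text{subject to } \int q(x)\,dx = 1 ,

    \frac{\delta}{\delta q}\left[ \int \frac{p}{v\,q}\,dx
        + \lambda \left( \int q\,dx - 1 \right) \right] = 0
    \;\Rightarrow\; \frac{p(x)}{v\,q(x)^{2}} = \lambda
    \;\Rightarrow\; q(x) \propto \sqrt{p(x)} ,

independent of dimension, consistent with the abstract's statement.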
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
3D Reconfigurable MPSoC for Unmanned Spacecraft Navigation
NASA Astrophysics Data System (ADS)
Dekoulis, George
2016-07-01
This paper describes the design of a new lightweight spacecraft navigation system for unmanned space missions. The system addresses the demands for more efficient autonomous navigation in the near-Earth environment or deep space. The proposed instrumentation is directly suitable for unmanned systems operation and testing of new airborne prototypes for remote sensing applications. The system features a new sensor technology and significant improvements over existing solutions. Fluxgate type sensors have been traditionally used in unmanned defense systems such as target drones, guided missiles, rockets and satellites, however, the guidance sensors' configurations exhibit lower specifications than the presented solution. The current implementation is based on a recently developed material in a reengineered optimum sensor configuration for unprecedented low-power consumption. The new sensor's performance characteristics qualify it for spacecraft navigation applications. A major advantage of the system is the efficiency in redundancy reduction achieved in terms of both hardware and software requirements.
Aircraft vortex marking program
NASA Technical Reports Server (NTRS)
Pompa, M. F.
1979-01-01
A simple, reliable device for identifying atmospheric vortices, principally as generated by in-flight aircraft and with emphasis on the use of nonpolluting aerosols for marking by injection into such vortices, is presented. The refractive index and droplet size were determined, from an analysis of aerosol optical and transport properties, to be the most significant parameters in achieving optimum light scattering from the vortex (for visual sighting) and a visual persistency of at least 300 sec. The analysis also showed that a steam-ejected tetraethylene glycol aerosol with droplet size near 1 micron and refractive index of approximately 1.45 could be a promising candidate for vortex marking. A marking aerosol was successfully generated with the steam-tetraethylene glycol mixture from breadboard system hardware. A compact 25 lbf thrust (nominal) H2O2 rocket chamber was the key component of the system, which produced the required steam by catalytic decomposition of the supplied H2O2.
III Lead ECG Pulse Measurement Sensor
NASA Astrophysics Data System (ADS)
Thangaraju, S. K.; Munisamy, K.
2015-09-01
Heart rate sensing is very important in patient monitoring. A method of measuring the heart pulse using an electrocardiogram (ECG) technique is described. An electrocardiogram is a measurement of the potential difference (the electrical pulse) generated by cardiac tissue, mainly the heart. This paper also reports the development of a three-lead ECG hardware system that would be the basis for developing a more cost-efficient, portable and easy-to-use ECG machine. Einthoven's three-lead method [1] is used for ECG signal extraction. The system is developed using amplifiers such as the instrumentation amplifier AD620BN and the conventional operational amplifier uA741 to amplify the extracted ECG signal. The signal is then filtered from noise using Butterworth filter techniques to obtain optimum output. A right-leg guard was also implemented as a safety feature of this system. Simulation for the development of the system was carried out using the PSpice program.
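A minimal sketch of the Butterworth noise-filtering step, expressed in Python/scipy rather than the paper's analog hardware and PSpice workflow; the sampling rate, filter order and 0.5-40 Hz passband are typical ECG choices assumed here, not taken from the paper:

    # Hedged sketch: Butterworth band-pass filtering of a synthetic ECG-like
    # signal; cutoffs are typical choices, not necessarily the paper's.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 500.0                                   # sampling rate, Hz
    t = np.arange(0, 5, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 63      # crude periodic "QRS-like" spikes
    noisy = ecg + 0.2 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)

    b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
    clean = filtfilt(b, a, noisy)                # zero-phase filtering
    print("noise power before: %.4f, after: %.4f"
          % (np.var(noisy - ecg), np.var(clean - ecg)))

The 50 Hz term stands in for mains interference, which the band-pass attenuates while preserving the QRS-band content.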
NASA Astrophysics Data System (ADS)
Datta, Kanan K.; Jensen, Hannes; Majumdar, Suman; Mellema, Garrelt; Iliev, Ilian T.; Mao, Yi; Shapiro, Paul R.; Ahn, Kyungjin
2014-08-01
Measurements of the H I 21-cm power spectra from the reionization epoch will be influenced by the evolution of the signal along the line-of-sight direction of any observed volume. We use numerical as well as seminumerical simulations of reionization in a cubic volume of 607 Mpc across to study this so-called light-cone effect on the H I 21-cm power spectrum. We find that the light-cone effect has the largest impact at two different stages of reionization: one when reionization is ~20 per cent completed and the other when it is ~80 per cent completed. We find a factor of ~4 amplification of the power spectrum at the largest scale available in our simulations. We do not find any significant anisotropy in the 21-cm power spectrum due to the light-cone effect. We argue that for the power spectrum to become anisotropic, the light-cone effect would have to make the ionized bubbles significantly elongated or compressed along the line of sight, which would require extreme reionization scenarios. We also calculate the two-point correlation functions parallel and perpendicular to the line of sight and find them to differ. Finally, we calculate an optimum frequency bandwidth below which the light-cone effect can be neglected when extracting power spectra from observations. We find that if one is willing to accept a 10 per cent error due to the light-cone effect, the optimum frequency bandwidth for k = 0.056 Mpc^-1 is ~7.5 MHz. For k = 0.15 and 0.41 Mpc^-1, the optimum bandwidth is ~11 and ~16 MHz, respectively.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented on a Spartan 6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
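For orientation, a much-simplified local stereo matcher (plain sum-of-absolute-differences with winner-takes-all selection); the paper's weighted matching, preprocessing and FPGA pipeline are not reproduced here, and the window size and disparity range are arbitrary assumptions:

    # Simplified local stereo matching by sum-of-absolute-differences (SAD).
    import numpy as np
    from scipy.ndimage import convolve

    def sad_disparity(left, right, max_disp=16, win=3):
        """Per-pixel disparity for rectified grayscale images (float arrays)."""
        h, w = left.shape
        cost = np.full((max_disp, h, w), np.inf)
        k = np.ones((win, win)) / win**2         # box filter aggregates SAD
        for d in range(max_disp):
            diff = np.abs(left[:, d:] - right[:, :w - d])
            cost[d, :, d:] = convolve(diff, k, mode="nearest")
        return cost.argmin(axis=0)               # winner-takes-all disparity

    left = np.random.rand(64, 96)
    right = np.roll(left, -5, axis=1)            # synthetic 5-pixel disparity
    disp = sad_disparity(left, right)
    print("median disparity (expect ~5):", np.median(disp[:, 16:-16]))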
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in the cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400
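A loose sketch of the general influence-matrix idea behind such cable-force tuning (not the paper's multiconstraint formulation): choose pretensions that drive selected structural responses toward targets, here via least squares with a crude non-negativity clip standing in for real constraints, on random placeholder data:

    # Sketch of the general idea (not the paper's method): cable pretensions T
    # chosen so that deck responses approach targets via an influence matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    n_resp, n_cables = 12, 6
    A = rng.normal(size=(n_resp, n_cables))   # response per unit cable force
    r0 = rng.normal(size=n_resp)              # dead-load responses (placeholder)

    # least-squares pretension that drives responses toward zero,
    # then clipped as a crude stand-in for feasibility constraints
    T = np.linalg.lstsq(A, -r0, rcond=None)[0]
    T = np.clip(T, 0.0, None)                 # cables cannot push
    print("pretension forces:", T.round(2))
    print("residual response norm:", np.linalg.norm(r0 + A @ T).round(3))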
Bell, Sarah; Shaw-Dunn, John; Gollee, Henrik; Allan, David B; Fraser, Matthew H; McLean, Alan N
2007-08-01
Patients with tetraplegia often have respiratory complications because of paralysis of the abdominal and intercostal muscles. Functional electrical stimulation (FES) has been used to improve breathing in these patients by applying surface stimulation to the abdominal muscles. We aimed to find the best nerves to stimulate directly to increase tidal volume and make cough more effective. Surface electrodes were placed on a patient's abdominal wall to find the optimum points for surface stimulation. These positions were plotted on a transparent sheet. The abdomino-intercostal nerves were dissected in five male dissecting room cadavers matched for size with the patient. The plastic sheet was then superimposed over each of the dissections to clarify the relationship between optimum surface stimulation points and the underlying nerves. Results show that the optimum surface stimulation points overlie the course of abdomino-intercostal nerves T9, 10, and 11. The success with selecting stimulation points associated with T9, 10, and 11 is probably because of the large mass of abdominal muscle supplied by these nerves. The constant position of the nerves below the ribs makes the intercostal space a possible site for direct stimulation of the abdomino-intercostal nerves.
Evaluation of pressurized water cleaning systems for hardware refurbishment
NASA Technical Reports Server (NTRS)
Dillard, Terry W.; Deweese, Charles D.; Hoppe, David T.; Vickers, John H.; Swenson, Gary J.; Hutchens, Dale E.
1995-01-01
Historically, refurbishment processes for RSRM motor cases and components have employed environmentally harmful materials. Specifically, vapor degreasing processes consume and emit large amounts of ozone depleting compounds. This program evaluates the use of pressurized water cleaning systems as a replacement for the vapor degreasing process. Tests have been conducted to determine if high pressure water washing, without any form of additive cleaner, is a viable candidate for replacing vapor degreasing processes. This paper discusses the findings thus far of Engineering Test Plan - 1168 (ETP-1168), 'Evaluation of Pressurized Water Cleaning Systems for Hardware Refurbishment.'
Long Duration Exposure Facility (LDEF) optical systems SIG summary and database
NASA Astrophysics Data System (ADS)
Bohnhoff-Hlavacek, Gail
1992-09-01
The main objectives of the Long Duration Exposure Facility (LDEF) Optical Systems Special Investigative Group (SIG) Discipline are to develop a database of experimental findings on LDEF optical systems and elements hardware, and provide an optical system overview. Unlike the electrical and mechanical disciplines, the optics effort relies primarily on the testing of hardware at the various principal investigator's laboratories, since minimal testing of optical hardware was done at Boeing. This is because all space-exposed optics hardware are part of other individual experiments. At this time, all optical systems and elements testing by experiment investigator teams is not complete, and in some cases has hardly begun. Most experiment results to date, document observations and measurements that 'show what happened'. Still to come from many principal investigators is a critical analysis to explain 'why it happened' and future design implications. The original optical system related concerns and the lessons learned at a preliminary stage in the Optical Systems Investigations are summarized. The design of the Optical Experiments Database and how to acquire and use the database to review the LDEF results are described.
VIDANA: Data Management System for Nano Satellites
NASA Astrophysics Data System (ADS)
Montenegro, Sergio; Walter, Thomas; Dilger, Erik
2013-08-01
A VIDANA data management system is a network of software and hardware components. This implies a software network, a hardware network and a smooth connection between the two. Our strategy is based on our innovative middleware: a reliable interconnection network (SW & HW) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements ... and software components! Component failures are detected, the affected device is disabled and its function is taken over by a redundant component. Our middleware connects not only software, but also devices and software together. Software and hardware communicate with each other without having to distinguish which functions are in software and which are implemented in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified HW/SW communication protocols. For many of these aspects we "learn from nature", where we can find astonishing reference implementations.
Cost efficient CFD simulations: Proper selection of domain partitioning strategies
NASA Astrophysics Data System (ADS)
Haddadi, Bahram; Jordan, Christian; Harasek, Michael
2017-10-01
Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of Computational Fluid Dynamics is the extreme hardware demand. Nowadays supercomputers (e.g. High Performance Computing, HPC) featuring multiple CPU cores are applied for solving: the simulation domain is split into partitions, one for each core. Some of the different methods for partitioning are investigated in this paper. As a practical example, a new open-source-based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for removal of trace gases from a gas stream or for the production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created; one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) have been used with different numbers of sub-domains. The effect of the different methods and of the number of processor cores on the simulation speedup and also on energy consumption was investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. Optimized simulation speed, lower energy consumption and the consequent cost effects are reported here.
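To make the underlying trade-off concrete, a toy cost model (entirely assumed coefficients, not the paper's measurements) in which per-rank compute time falls as 1/n while halo-exchange communication scales with the partition surface area, yielding an optimum rank count and a cells-per-core figure:

    # Toy cost model (assumed coefficients, not measured): compute time falls
    # as 1/n, halo-exchange communication grows with partition surface area.
    import numpy as np

    cells = 60e6                     # mesh size from the paper's example
    t_cell = 2e-7                    # s per cell per step (hypothetical)
    t_face = 5e-7                    # s per halo face per step (hypothetical)

    n = np.arange(16, 4097, 16)
    compute = cells * t_cell / n
    comm = t_face * (cells / n) ** (2.0 / 3.0)   # surface/volume scaling per rank
    total = compute + comm
    best = n[total.argmin()]
    print(f"optimum ranks ~ {best}, i.e. ~{cells / best:,.0f} cells per core")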
Abboud, Talal; Bamsey, Matthew; Paul, Anna-Lisa; Graham, Thomas; Braham, Stephen; Noumeir, Rita; Berinstain, Alain; Ferl, Robert
2013-01-01
Higher plants are an integral part of strategies for sustained human presence in space. Space-based greenhouses have the potential to provide closed-loop recycling of oxygen, water and food. Plant monitoring systems with the capacity to remotely observe the condition of crops in real-time within these systems would permit operators to take immediate action to ensure optimum system yield and reliability. One such plant health monitoring technique involves the use of reporter genes driving fluorescent proteins as biological sensors of plant stress. In 2006 an initial prototype green fluorescent protein imager system was deployed at the Arthur Clarke Mars Greenhouse located in the Canadian High Arctic. This prototype demonstrated the advantages of this biosensor technology and underscored the challenges in collecting and managing telemetric data from exigent environments. We present here the design and deployment of a second prototype imaging system deployed within and connected to the infrastructure of the Arthur Clarke Mars Greenhouse. This is the first imager to run autonomously for one year in the un-crewed greenhouse with command and control conducted through the greenhouse satellite control system. Images were saved locally in high resolution and sent telemetrically in low resolution. Imager hardware is described, including the custom designed LED growth light and fluorescent excitation light boards, filters, data acquisition and control system, and basic sensing and environmental control. Several critical lessons learned related to the hardware of small plant growth payloads are also elaborated. PMID:23486220
2001-01-24
Close-up view of the Binary Colloidal Alloy Test during an experiment run aboard the Russian Mir space station. BCAT is part of an extensive series of experiments planned to investigate the fundamental properties of colloids so that scientists can make colloids more useful for technological applications. Some of the colloids studied in BCAT are made of two different sized particles (binary colloidal alloys) that are very tiny, uniform plastic spheres. Under the proper conditions, these colloids can arrange themselves in a pattern to form crystals, which may have many unique properties that may form the basis of new classes of light switches, displays, and optical devices that can fuel the evolution of the next generation of computer and communication technologies. This Slow Growth hardware consisted of a 35-mm camera aimed toward a module which contained 10 separate colloid samples. To begin the experiment, one of the astronauts would mix the samples to disperse the colloidal particles. Then the hardware operated autonomously, taking photos of the colloidal samples over a 90-day period. The investigation proved that gravity plays a central role in the formation and stability of these types of colloidal crystal structures. The investigation also helped identify the optimum conditions for the formation of colloidal crystals, which will be used for optimizing future microgravity experiments in the study of colloidal physics. Dr. David Weitz of the University of Pennsylvania and Dr. Peter Pusey of the University of Edinburgh, United Kingdom, are the principal investigators.
Gimenez, Sonia; Roger, Sandra; Baracca, Paolo; Martín-Sacristán, David; Monserrat, Jose F; Braun, Volker; Halbauer, Hardy
2016-09-22
The use of massive multiple-input multiple-output (MIMO) techniques for communication at millimeter-wave (mmW) frequency bands has become a key enabler to meet the data rate demands of the upcoming fifth generation (5G) cellular systems. In particular, analog and hybrid beamforming solutions are receiving increasing attention as less expensive and more power efficient alternatives to fully digital precoding schemes. Despite their proven good performance in simple setups, their suitability for realistic cellular systems with many interfering base stations and users is still unclear. Furthermore, the performance of massive MIMO beamforming and precoding methods is in practice also affected by practical limitations and hardware constraints. In this sense, this paper assesses the performance of digital precoding and analog beamforming in an urban cellular system with an accurate mmW channel model under both ideal and realistic assumptions. The results show that analog beamforming can reach the performance of fully digital maximum ratio transmission under line of sight conditions and with a sufficient number of parallel radio-frequency (RF) chains, especially when the practical limitations of outdated channel information and per-antenna power constraints are considered. This work also shows the impact of the phase shifter errors and combiner losses introduced by real phase shifter and combiner implementations on analog beamforming, where the former have a minor impact on performance, while the latter determine the optimum number of RF chains to be used in practice.
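A small numerical check of the reported line-of-sight trend, under the assumption of a simple uniform-linear-array steering-vector channel (sizes and angle are illustrative): phase-only analog beamforming attains the maximum ratio transmission (MRT) gain, and coarse phase-shifter quantization costs little:

    # Numerical check: under a rank-one LOS channel, phase-only analog
    # beamforming matches digital MRT; channel model is an assumption.
    import numpy as np

    N = 64                                        # transmit antennas
    theta = np.deg2rad(20.0)
    h = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))   # LOS steering vector

    w_mrt = h.conj() / np.linalg.norm(h)          # fully digital MRT precoder
    w_analog = np.exp(-1j * np.angle(h)) / np.sqrt(N)       # phase shifters only

    print("MRT gain:   ", abs(h @ w_mrt) ** 2)
    print("analog gain:", abs(h @ w_analog) ** 2)
    # with quantized phase shifters (e.g. 4-bit), the loss stays small:
    q = 2 * np.pi / 16
    w_q = np.exp(-1j * np.round(np.angle(h) / q) * q) / np.sqrt(N)
    print("4-bit analog gain:", abs(h @ w_q) ** 2)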
The use of UNIX in a real-time environment
NASA Technical Reports Server (NTRS)
Luken, R. D.; Simons, P. C.
1986-01-01
This paper describes a project to evaluate the feasibility of using commercial off-the-shelf hardware and the UNIX operating system, to implement a real-time control and monitor system. A functional subset of the Checkout, Control and Monitor System was chosen as the test bed for the project. The project consists of three separate architecture implementations: a local area bus network, a star network, and a central host. The motivation for this project stemmed from the need to find a way to implement real-time systems, without the cost burden of developing and maintaining custom hardware and unique software. This has always been accepted as the only option because of the need to optimize the implementation for performance. However, with the cost/performance of today's hardware, the inefficiencies of high-level languages and portable operating systems can be effectively overcome.
Experiments in dilution jet mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Berenfeld, A.
1983-01-01
Experimental results are presented on the mixing of a single row of jets with an isothermal mainstream in a straight duct, with flow and geometric variations typical of combustion chambers in gas turbine engines included. It is found that at a constant momentum ratio, variations in the density ratio have only a second-order effect on the profiles. A first-order approximation to the mixing of jets with a variable temperature mainstream can, it is found, be obtained by superimposing the jets-in-an-isothermal-crossflow and mainstream profiles. Another finding is that the flow area convergence, especially injection-wall convergence, significantly improves the mixing. For opposed rows of jets with the orifice cone centerlines in-line, the optimum ratio of orifice spacing to duct height is determined to be 1/2 of the optimum value for single injection at the same momentum ratio. For opposed rows of jets with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is found to be twice the optimum value for single side injection at the same momentum ratio.
Optimum design of structures subject to general periodic loads
NASA Technical Reports Server (NTRS)
Reiss, Robert; Qian, B.
1989-01-01
A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's functional and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.
Generation Process of Large-Amplitude Upper-Band Chorus Emissions Observed by Van Allen Probes
Kubota, Yuko; Omura, Yoshiharu; Kletzing, Craig; ...
2018-04-19
In this paper, we analyze large-amplitude upper-band chorus emissions measured near the magnetic equator by the Electric and Magnetic Field Instrument Suite and Integrated Science instrument package on board the Van Allen Probes. In setting up the parameters of source electrons exciting the emissions based on theoretical analyses and observational results measured by the Helium Oxygen Proton Electron instrument, we calculate threshold and optimum amplitudes with the nonlinear wave growth theory. We find that the optimum amplitude is larger than the threshold amplitude obtained in the frequency range of the chorus emissions and that the wave amplitudes grow between the threshold and optimum amplitudes. Finally, in the frame of the wave growth process, the nonlinear growth rates are much greater than the linear growth rates.
NASA Astrophysics Data System (ADS)
Parlak, Zekeriya
2018-05-01
Microchannel heat exchangers are being designed with new flow channel configurations to reduce the pressure drop and improve heat transfer performance. The study aims to find the optimum microchannel design providing the best flow and heat transfer performance in a heat sink. Therefore, three different types of water-cooled microchannels have been studied: straight, wavy and zigzag. The optimization was performed with ANSYS's Response Surface Optimization Tool to find the optimum geometry. First, CFD analyses were performed on a parameterized wavy microchannel geometry. The optimum wavy microchannel design was obtained from a response surface built over inlet velocities from 0.5 to 5, wave amplitudes from 0.06 to 0.3, microchannel heights from 0.1 to 0.2, microchannel widths from 0.1 to 0.2, and sinusoidal wavelengths from 0.25 to 2.0. All simulations were performed in the laminar regime for Reynolds numbers ranging from 100 to 900. Results showed that the Reynolds number range corresponding to industrial pressure drop limits is between 100 and 400. Nu values obtained in this range for the optimum wavy geometry were about 10% higher than those of the zigzag channel and 40% higher than those of the straight channels. In addition, while the pressure values of the straight channel did not exceed 10 kPa, the inlet pressure data calculated for the zigzag and wavy channels almost coincided with each other.
LDEF systems special investigation group overview
NASA Technical Reports Server (NTRS)
Mason, Jim; Dursch, Harry
1995-01-01
The Systems Special Investigation Group (Systems SIG), formed by the LDEF Project Office to perform post-flight analysis of LDEF systems hardware, was chartered to investigate the effects of the extended LDEF mission on both satellite and experiment systems and to coordinate and integrate all systems related analyses performed during post-flight investigations. The Systems SIG published a summary report in April, 1992 titled 'Analysis of Systems Hardware Flown on LDEF - Results of the Systems Special Investigation Group' that described findings through the end of 1991. The Systems SIG, unfunded in FY 92 and FY93, has been funded in FY 94 to update this report with all new systems related findings. This paper provides a brief summary of the highlights of earlier Systems SIG accomplishments and describes tasks the Systems SIG has been funded to accomplish in FY 94.
Training Scalable Restricted Boltzmann Machines Using a Quantum Annealer
NASA Astrophysics Data System (ADS)
Kumar, V.; Bass, G.; Dulny, J., III
2016-12-01
Machine learning and the optimization involved therein are of critical importance for commercial and military applications. Due to the computational complexity of many-variable optimization, the conventional approach is to employ meta-heuristic techniques to find suboptimal solutions. Quantum Annealing (QA) hardware offers a completely novel approach with the potential to obtain significantly better solutions with large speed-ups compared to traditional computing. In this presentation, we describe our development of new machine learning algorithms tailored for QA hardware. We are training restricted Boltzmann machines (RBMs) using QA hardware on large, high-dimensional commercial datasets. Traditional optimization heuristics such as contrastive divergence and other closely related techniques are slow to converge, especially on large datasets. Recent studies have indicated that QA hardware, when used as a sampler, provides better training performance compared to conventional approaches. Most of these studies have been limited to moderately sized datasets due to the hardware restrictions imposed by existing QA devices, which make it difficult to solve real-world problems at scale. In this work we develop novel strategies to circumvent this issue. We discuss scale-up techniques such as enhanced embedding and partitioned RBMs which allow large commercial datasets to be learned using QA hardware. We present our initial results obtained by training an RBM as an autoencoder on an image dataset. The results obtained so far indicate that the convergence rates can be improved significantly by increasing RBM network connectivity. These ideas can be readily applied to generalized Boltzmann machines and we are currently investigating this in an ongoing project.
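For context, the classical baseline the abstract calls slow is contrastive divergence. Below is a minimal CD-1 sketch (toy dimensions and random binary data, all assumed for illustration); in the QA approach described above, the Gibbs-style reconstruction step would be replaced by samples drawn from the annealer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr, rng):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    ph0 = sigmoid(v0 @ W + c)                  # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b)                # reconstruction P(v=1 | h0)
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

rng = np.random.default_rng(1)
n_vis, n_hid = 64, 16
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((500, n_vis)) < 0.3) * 1.0  # toy binary "images"
for epoch in range(10):
    cd1_step(data, W, b, c, lr=0.1, rng=rng)
```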
From Here to Technology. How To Fund Hardware, Software, and More.
ERIC Educational Resources Information Center
Hunter, Barbara M.
Faced with shrinking state and local tax support and an increased demand for K-12 educational reform, school leaders must use creative means to find money to improve and deliver instruction and services to their schools. This handbook describes innovative strategies that school leaders have used to find scarce dollars for purchasing educational…
Effect of double air injection on performance characteristics of centrifugal compressor
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Takano, Mizuki; Tsujita, Hoshio
2015-02-01
In the operation of a turbocharger centrifugal compressor, instability phenomena such as rotating stall and surge are induced at lower flow rates close to the maximum pressure ratio. In this study, to suppress the surge phenomenon and thereby extend the stable operating range of the centrifugal compressor to lower flow rates, the compressed air at the compressor exit was re-circulated and injected into the impeller inlet using a double injection nozzle system. Experiments were performed to find the optimum circumferential position of the second nozzle relative to the fixed first one and the optimum inner diameter of the injection nozzles, which most effectively reduce the flow rate of surge inception. Moreover, in order to examine the universality of these optimum values, the experiments were carried out for two types of compressors.
NASA Astrophysics Data System (ADS)
Saberi, Maliheh; Ashkarran, Ali Akbar
Tungsten-doped TiO2 gas sensors were successfully synthesized using a sol-gel process and spin coating technique. The fabricated sensor was characterized by field emission scanning electron microscopy (FE-SEM), ultraviolet visible (UV-Vis) spectroscopy, transmission electron microscopy (TEM), X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy. Gas sensing properties of pristine and tungsten-doped TiO2 nanolayers (NLs) were probed by detection of CO2 gas. A series of experiments were conducted in order to find the optimum operating temperature of the prepared sensors and also the optimum tungsten concentration in the TiO2 matrix. It was found that introducing tungsten into the TiO2 matrix enhanced the gas sensing performance. The maximum response (1.37) was obtained for 0.001 g tungsten-doped TiO2 NLs at the optimum operating temperature of 200 °C.
Attempting nanolocalization of all-optical switching through nano-holes in an Al-mask
NASA Astrophysics Data System (ADS)
Savoini, M.; Reid, A. H.; Wang, T.; Graves, C. E.; Hoffmann, M. C.; Liu, T.-M.; Tsukamoto, A.; Stöhr, J.; Dürr, H. A.; Kirilyuk, A.; Kimel, A. V.; Rasing, T.
2014-08-01
We investigate the light-induced magnetization reversal in samples of rare-earth transition metal alloys, where we aim to spatially confine the switched region at the nanoscale, with the help of nano-holes in an Al-mask covering the sample. First of all, an optimum multilayer structure is designed for the optimum absorption of the incident light. Next, using finite difference time domain simulations we investigate light penetration through nano-holes of different diameter. We find that the holes of 200 nm diameter combine an optimum transmittance with a localization better than λ/4. Further, we have manufactured samples with the help of focused ion beam milling of Al-capped TbCoFe layers. Finally, employing magnetization-sensitive X-ray holography techniques, we have investigated the magnetization reversal with extremely high resolution. The results show severe processing effects on the switching characteristics of the magnetic layers.
A methodology for selecting optimum organizations for space communities
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1978-01-01
This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.
NASA Astrophysics Data System (ADS)
Ezhilarasu, P. Megavarna; Inbavalli, M.; Murali, K.; Thamilmaran, K.
2018-07-01
In this paper, we report the dynamical transitions to strange non-chaotic attractors in a quasiperiodically forced state controlled-cellular neural network (SC-CNN)-based MLC circuit via two different mechanisms, namely the Heagy-Hammel route and the gradual fractalisation route. These transitions were observed through numerical simulations and hardware experiments and confirmed using statistical tools, such as maximal Lyapunov exponent spectrum and its variance and singular continuous spectral analysis. We find that there is a remarkable agreement of the results from both numerical simulations as well as from hardware experiments.
Finding the Best Logo for Your Students.
ERIC Educational Resources Information Center
Harvey, Brian
1987-01-01
Variations among different versions of Logo are discussed in terms of special hardware needed, size and speed, bugs and misfeatures, and syntax. LogoWriter and Object Logo are also described. A comparison chart is included. (MNS)
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1975-01-01
An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level project type total matrix model will optimize the efficiency and effectiveness of space base technologists.
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Milne, Stephen D; Seoudi, Ihab; Al Hamad, Hanadi; Talal, Talal K; Anoop, Anzila A; Allahverdi, Niloofar; Zakaria, Zain; Menzies, Robert; Connolly, Patricia
2016-12-01
Wound moisture is known to be a key parameter to ensure optimum healing conditions in wound care. This study tests the moisture content of wounds in normal practice in order to observe the moisture condition of the wound at the point of dressing change. This study is also the first large-scale observational study that investigates wound moisture status at dressing change. The WoundSense sensor is a commercially available moisture sensor which sits directly on the wound in order to find the moisture status of the wound without disturbing or removing the dressing. The results show that of the 588 dressing changes recorded, 44.9% were made when the moisture reading was in the optimum moisture zone. Of the 30 patients recruited for this study, 11 patients had an optimum moisture reading for at least 50% of the measurements before dressing change. These results suggest that a large number of unnecessary dressing changes are being made. This is a significant finding of the study as it suggests that the protocols currently followed can be modified to allow fewer dressing changes and less disturbance of the healing wound bed. © 2015 The Authors. International Wound Journal published by Medicalhelplines.com Inc and John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Caruso, Salvadore V.; Cox, Jack A.; McGee, Kathleen A.
1998-01-01
Marshall Space Flight Center (MSFC) of the National Aeronautics and Space Administration performs many research and development programs that require hardware and assemblies to be cleaned to levels that are compatible with fuels and oxidizers (liquid oxygen, solid propellants, etc.). Also, MSFC is responsible for developing large telescope satellites which require a variety of optical systems to be cleaned. A precision cleaning shop is operated within MSFC by the Fabrication Services Division of the Materials & Processes Laboratory. Verification of cleanliness is performed for all precision cleaned articles in the Environmental and Analytical Chemistry Branch. Since the Montreal Protocol was instituted, MSFC had to find substitutes for many materials that have been in use for many years, including cleaning agents and organic solvents. As MSFC is a research center, there is a great variety of hardware that is processed in the Precision Cleaning Shop. This entails the use of many different chemicals and solvents, depending on the nature and configuration of the hardware and softgoods being cleaned. A review of the manufacturing cleaning and verification processes, cleaning materials and solvents used at MSFC and changes that resulted from the Montreal Protocol will be presented.
NASA Technical Reports Server (NTRS)
Caruso, Salvadore V.
1999-01-01
Marshall Space Flight Center (MSFC) of the National Aeronautics and Space Administration (NASA) performs many research and development programs that require hardware and assemblies to be cleaned to levels that are compatible with fuels and oxidizers (liquid oxygen, solid propellants, etc.). Also, the Center is responsible for developing large telescope satellites which require a variety of optical systems to be cleaned. A precision cleaning shop is operated within MSFC by the Fabrication Services Division of the Materials & Processes Division. Verification of cleanliness is performed for all precision cleaned articles in the Analytical Chemistry Branch. Since the Montreal Protocol was instituted, MSFC had to find substitutes for many materials that have been in use for many years, including cleaning agents and organic solvents. As MSFC is a research center, there is a great variety of hardware that is processed in the Precision Cleaning Shop. This entails the use of many different chemicals and solvents, depending on the nature and configuration of the hardware and softgoods being cleaned. A review of the manufacturing cleaning and verification processes, cleaning materials and solvents used at MSFC, and changes that resulted from the Montreal Protocol will be presented.
On the Complexity of the Metric TSP under Stability Considerations
NASA Astrophysics Data System (ADS)
Mihalák, Matúš; Schöngens, Marcel; Šrámek, Rastislav; Widmayer, Peter
We consider the metric Traveling Salesman Problem (Δ-TSP for short) and study how stability (as defined by Bilu and Linial [3]) influences the complexity of the problem. On an intuitive level, an instance of Δ-TSP is γ-stable (γ > 1) if there is a unique optimum Hamiltonian tour and any perturbation of arbitrary edge weights by a factor of at most γ does not change the edge set of the optimal solution (i.e., there is a significant gap between the optimum tour and all other tours). We show that for γ ≥ 1.8 a simple greedy algorithm (resembling Prim's algorithm for constructing a minimum spanning tree) computes the optimum Hamiltonian tour for every γ-stable instance of the Δ-TSP, whereas a simple local search algorithm can fail to find the optimum even if γ is arbitrarily large. We further show that there are γ-stable instances of Δ-TSP for every 1 < γ < 2. These results provide a different view on the hardness of the Δ-TSP and give rise to a new class of problem instances which are substantially easier to solve than instances of the general Δ-TSP.
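The abstract does not spell out the greedy algorithm; one plausible Prim-like reading is nearest-neighbor tour growth, sketched below on a random Euclidean (hence metric) instance. On a sufficiently stable instance, the claim above is that such a greedy recovers the unique optimum tour:

```python
import numpy as np

def greedy_tour(dist, start=0):
    """Grow a Hamiltonian tour by always moving to the nearest unvisited city
    (a Prim-like greedy heuristic), then close the cycle back to the start."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[last][j])
        tour.append(nxt)
        visited.add(nxt)
    return tour

rng = np.random.default_rng(2)
pts = rng.random((8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # Euclidean, metric
tour = greedy_tour(dist)
length = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(tour, round(length, 3))
```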
Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R
2007-09-01
A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
FISHER'S GEOMETRIC MODEL WITH A MOVING OPTIMUM
Matuszewski, Sebastian; Hermisson, Joachim; Kopp, Michael
2014-01-01
Fisher's geometric model has been widely used to study the effects of pleiotropy and organismic complexity on phenotypic adaptation. Here, we study a version of Fisher's model in which a population adapts to a gradually moving optimum. Key parameters are the rate of environmental change, the dimensionality of phenotype space, and the patterns of mutational and selectional correlations. We focus on the distribution of adaptive substitutions, that is, the multivariate distribution of the phenotypic effects of fixed beneficial mutations. Our main results are based on an “adaptive-walk approximation,” which is checked against individual-based simulations. We find that (1) the distribution of adaptive substitutions is strongly affected by the ecological dynamics and largely depends on a single composite parameter γ, which scales the rate of environmental change by the “adaptive potential” of the population; (2) the distribution of adaptive substitutions reflects the shape of the fitness landscape if the environment changes slowly, whereas it mirrors the distribution of new mutations if the environment changes fast; (3) in contrast to classical models of adaptation assuming a constant optimum, with a moving optimum, more complex organisms evolve via larger adaptive steps. PMID:24898080
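As a crude caricature of the adaptive-walk approximation (all parameter values below are arbitrary, and real individual-based simulations track finite populations and fixation probabilities rather than accepting every beneficial mutation), one can watch a mean phenotype chase a moving optimum under Gaussian mutations:

```python
import numpy as np

rng = np.random.default_rng(3)
n, v, sigma_mut = 3, 0.01, 0.05    # dimensions, optimum speed, mutation size
z = np.zeros(n)                    # population mean phenotype
opt = np.zeros(n)
steps = []
for t in range(20000):
    opt[0] += v                    # optimum drifts along one axis
    mut = sigma_mut * rng.standard_normal(n)
    # Crude fixation rule: keep a mutation iff it moves the phenotype
    # closer to the current optimum (i.e., it is beneficial).
    if np.linalg.norm(z + mut - opt) < np.linalg.norm(z - opt):
        z = z + mut
        steps.append(np.linalg.norm(mut))
print(f"mean adaptive step: {np.mean(steps):.4f}, lag: {np.linalg.norm(z - opt):.3f}")
```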
Jeong, Yeseul; Jang, Nulee; Yasin, Muhammad; Park, Shinyoung; Chang, In Seop
2016-02-01
This study determines and compares the intrinsic kinetic parameters (Ks and Ki) of selected Thermococcus onnurineus NA1 strains (wild-type (WT), and mutants MC01, MC02, and WTC156T) using the substrate inhibition model. Ks and Ki values were used to find the optimum dissolved CO (CL) conditions inside the reactor. The results showed that, in terms of the maximum specific CO consumption rate (qCO(max)), the optimum activities of WT, MC01, MC02, and WTC156T can be achieved by maintaining the CL levels at 0.56 mM, 0.52 mM, 0.58 mM, and 0.75 mM, respectively. The qCO(max) value of WTC156T at 0.75 mM was found to be 1.5-fold higher than that of the WT strain, confirming its superiority. Kinetic modeling was then used to predict the conditions required to maintain the optimum CL levels and high cell concentrations in the reactor, based on the kinetic parameters of the WTC156T strain. Copyright © 2015 Elsevier Ltd. All rights reserved.
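The abstract does not print the model; assuming the standard Andrews/Haldane substrate-inhibition form (an assumption, though it is the usual choice for CO-inhibited kinetics), the optimum dissolved CO level follows directly from setting the derivative to zero:

$$q_{\mathrm{CO}}(C_L) = \frac{q_{\mathrm{CO}}^{\max}\, C_L}{K_s + C_L + C_L^2/K_i}, \qquad \frac{dq_{\mathrm{CO}}}{dC_L} = 0 \;\Rightarrow\; C_L^{\mathrm{opt}} = \sqrt{K_s K_i}.$$

Under this reading, the reported strain-specific optima of 0.52-0.75 mM would each correspond to a particular (Ks, Ki) pair.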
Wheelchair and Occupant Restraint on School Buses
DOT National Transportation Integrated Search
1990-05-01
This report presents the findings of a literature survey, wheelchair hardware survey, wheelchair usage on school buses survey and assessment of current worldwide standards to address securement of wheelchairs on school buses and other modes of public...
SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel
NASA Astrophysics Data System (ADS)
Nolle, Lars
The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.
2014-01-01
Background Subcutaneous vein localization is usually performed manually by medical staff to find a suitable vein in which to insert a catheter for medication delivery or blood sampling. The rule of thumb is to find a vein large and straight enough for the medication to flow inside the selected blood vessel without any obstruction. The problem of difficult peripheral venous access arises when a patient's veins are not visible for any reason, such as dark skin tone, presence of hair, high body fat or a dehydrated condition. Methods To enhance the visibility of veins, near infrared imaging systems are used to assist medical staff in the vein localization process. Optimum illumination is crucial to obtain better image contrast and quality, taking into consideration the limited power and space on portable imaging systems. In this work a hyperspectral image quality assessment is performed to obtain the optimum range of illumination for a venous imaging system. A database of hyperspectral images from 80 subjects was created, and subjects were divided into four different classes on the basis of their skin tone. In this paper the results of hyperspectral image analyses are presented as a function of the skin tone of patients. For each patient, four mean images were constructed by taking the mean over 50 nm spectral spans within the near infrared range, i.e. 750–950 nm. Statistical quality measures were used to analyse these images. Conclusion It is concluded that the wavelength range of 800 to 850 nm serves as the optimum illumination range to obtain the best near infrared venous image quality for each type of skin tone. PMID:25087016
Sahib, Mouayad A.; Gambardella, Luca M.; Afzal, Wasif; Zamli, Kamal Z.
2016-01-01
Combinatorial test design is a test planning approach that aims to reduce the number of test cases systematically by choosing a subset of the test cases based on combinations of input variables. The subset covers all possible combinations of a given strength and hence tries to match the effectiveness of the exhaustive set. This mechanism of reduction has been used successfully in software testing research with t-way testing (where t indicates the interaction strength of combinations). Potentially, other systems may exhibit many similarities with this approach. Hence, it could form an emerging application in different areas of research due to its usefulness. To this end, it has recently been applied successfully in a few research areas. In this paper, we explore the applicability of the combinatorial test design technique to the design of a fractional-order proportional-integral-derivative (FOPID) controller for an automatic voltage regulator (AVR) system. Throughout the paper, we justify this new application theoretically and practically through simulations. In addition, we report on first experiments indicating its practical use in this field. We design different algorithms and adapt other strategies to cover all the combinations with an optimum and effective test set. Our findings indicate that combinatorial test design can find the combinations that lead to the optimum design. Besides this, we also found that by increasing the strength of combination we can approach the optimum design, such that with only a 4-way combinatorial set we can obtain the effectiveness of an exhaustive test set. This significantly reduces the number of tests needed and thus leads to an approach that optimizes the design of parameters quickly. PMID:27829025
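To illustrate the mechanism (a toy greedy generator, not the authors' algorithms; the factor counts and levels below are made up), a pairwise (2-way) suite for four controller parameters on three candidate levels each needs only about nine tests instead of the exhaustive 81:

```python
from itertools import combinations, product

def pairwise_suite(levels):
    """Greedy 2-way (pairwise) covering array over a list of factor domains:
    repeatedly pick the candidate test covering the most uncovered pairs."""
    uncovered = {(i, vi, j, vj)
                 for i, j in combinations(range(len(levels)), 2)
                 for vi in levels[i] for vj in levels[j]}
    candidates = list(product(*levels))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: sum(
            (i, t[i], j, t[j]) in uncovered
            for i, j in combinations(range(len(t)), 2)))
        suite.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(best)), 2)}
    return suite

# e.g. 4 controller parameters (Kp, Ki, Kd, lambda), 3 candidate levels each:
suite = pairwise_suite([[0, 1, 2]] * 4)
print(len(suite), "tests instead of", 3 ** 4)
```

Raising the strength from 2-way toward 4-way grows the suite toward the exhaustive set, which is the trade-off the abstract reports.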
Monotonicity of fitness landscapes and mutation rate control.
Belavkin, Roman V; Channon, Alastair; Aston, Elizabeth; Aston, John; Krašovec, Rok; Knight, Christopher G
2016-12-01
A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
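A concrete instance of the geometric case on binary sequences (a standard textbook calculation, offered here as an illustration rather than the paper's derivation): if the quantity optimised is the probability of mutating straight to the optimum from Hamming distance d in a length-L genome, then the optimal per-bit mutation rate is

$$P(\mu) = \mu^{d}(1-\mu)^{L-d}, \qquad \frac{dP}{d\mu} = 0 \;\Rightarrow\; \mu^{*} = \frac{d}{L},$$

which indeed increases as fitness (proximity to the optimum) decreases, matching the hypothesis tested above.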
Selection of a Brine Processor Technology for NASA Manned Missions
NASA Technical Reports Server (NTRS)
Carter, Donald L.; Gleich, Andrew F.
2016-01-01
The current ISS Water Recovery System (WRS) reclaims water from crew urine, humidity condensate, and Sabatier product water. Urine is initially processed by the Urine Processor Assembly (UPA) which recovers 75% of the urine as distillate. The remainder of the water is present in the waste brine which is currently disposed of as trash on ISS. For future missions this additional water must be reclaimed due to the significant resupply penalty for missions beyond Low Earth Orbit (LEO). NASA has pursued various technology development programs for a brine processor in the past several years. This effort has culminated in a technology down-select to identify the optimum technology for future manned missions. The technology selection is based on various criteria, including mass, power, reliability, maintainability, and safety. Beginning in 2016 the selected technology will be transitioned to a flight hardware program for demonstration on ISS. This paper summarizes the technology selection process, the competing technologies, and the rationale for the technology selected for future manned missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemp, B.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kofler, J.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooley, R.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan
2017-01-01
Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess digital control network impact on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.
A microcontroller-based three degree-of-freedom manipulator testbed. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Robert Michael, Jr.
1995-01-01
A wheeled exploratory vehicle is under construction at the Mars Mission Research Center at North Carolina State University. In order to serve as more than an inspection tool, this vehicle requires the ability to interact with its surroundings. A crane-type manipulator, as well as the necessary control hardware and software, has been developed for use as a sample gathering tool on this vehicle. The system is controlled by a network of four Motorola M68HC11 microcontrollers. Control hardware and software were developed in a modular fashion so that the system can be used to test future control algorithms and hardware. Actuators include three stepper motors and one solenoid. Sensors include three optical encoders and one cable tensiometer. The vehicle supervisor computer provides the manipulator system with the approximate coordinates of the target object. This system maps the workspace surrounding the given location by lowering the claw, along a set of evenly spaced vertical lines, until contact occurs. Based on this measured height information and prior knowledge of the target object size, the system determines if the object exists in the searched area. The system can find and retrieve a 1.25 in. diameter by 1.25 in. tall cylinder placed within the 47.5 sq in search area in less than 12 minutes. This manipulator hardware may be used for future control algorithm verification and serves as a prototype for other manipulator hardware.
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) multiobject generalization of OASM and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm, wherein the first level operates at the pixel level and aims to find optimal oriented boundary segments between successive landmarks, the second level operates at the landmark level and aims to find optimal locations for the landmarks, and the third level operates at the object level and aims to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
How to create successful Open Hardware projects — About White Rabbits and open fields
NASA Astrophysics Data System (ADS)
van der Bij, E.; Arruat, M.; Cattin, M.; Daniluk, G.; Gonzalez Cobas, J. D.; Gousiou, E.; Lewis, J.; Lipinski, M. M.; Serrano, J.; Stana, T.; Voumard, N.; Wlostowski, T.
2013-12-01
CERN's accelerator control group has embraced "Open Hardware" (OH) to facilitate peer review, avoid vendor lock-in and make support tasks scalable. A web-based tool for easing collaborative work was set up and the CERN OH Licence was created. New ADC, TDC, fine delay and carrier cards based on VITA and PCI-SIG standards were designed, and drivers for Linux were written. Often industry was paid for developments, while quality and documentation were controlled by CERN. An innovative timing network was also developed with the OH paradigm. Industry now sells and supports these designs, which find their way into new fields.
Arizona Geology Trip - February 25-28, 2008
NASA Technical Reports Server (NTRS)
Thomas, Gretchen A.; Ross, Amy J.
2008-01-01
A variety of hardware developers, crew, mission planners, and headquarters personnel traveled to Gila Bend, Arizona, in February 2008 for a CxP Lunar Surface Systems Team geology experience. Participating in this field trip were the CxP Space Suit System (EC5) leads, Thomas (PLSS) and Ross (PGS), who presented the activities and findings learned from being in the field during this KC. For the design of a new spacesuit system, this experience allowed the engineers to understand the demands this type of activity will place on NASA's hardware, systems, and planning efforts. The engineers also experienced the methods and tools required for lunar surface activity.
Optimal Design of Functionally Graded Metallic Foam Insulations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Sankar, Bhavani; Venkataraman, Satchi; Zhu, Huadong
2002-01-01
The focus of our work has been on developing an insight into the physics that govern the optimum design of thermal insulation for use in thermal protection systems of launch vehicles. Of particular interest was to obtain optimality criteria for designing foam insulations whose density (or porosity) distributions through the thickness give optimum thermal performance. We investigate the optimum design of functionally graded thermal insulation for steady state heat transfer through the foam. We showed that the heat transfer in the foam has competing modes of radiation and conduction. The problem assumed a fixed inside temperature of 400 K and varied the aerodynamic surface heating on the outside surface from 0.2 to 1.0 MW/sq m. The thermal insulation develops a high temperature gradient through the thickness. Investigation of the model developed for heat conduction in foams showed that at high temperatures (as on the outside wall) intracellular radiation dominates the heat transfer in the foam. Minimizing radiation requires reducing the pore size, which increases the density of the foam. At low temperatures (as on the inside wall), intracellular conduction (of the metal and air) dominates the heat transfer. Minimizing conduction requires increasing the pore size. This indicated that for every temperature there is an optimum value of density that minimizes the heat transfer coefficient. Two optimization studies were performed. One was to minimize the heat transmitted through a fixed-thickness insulation by varying the density profile. The second was to obtain the minimum mass insulation for a specified thickness. Analytical optimality criteria were derived for the cases considered. The optimality condition for minimum heat transfer requires that at each temperature we find the density that minimizes the heat transfer coefficient. Once a relationship between the optimum heat transfer coefficient and the temperature was found, the design problem reduced to the solution of a simple nonlinear differential equation. Preliminary results of this work were presented at the American Society of Composites meeting, and the final version was submitted for publication in the AIAA Journal. In addition to minimizing the transmitted heat, we investigated the optimum design for minimum weight given an acceptable level of heat transmission through the insulation. The optimality criterion developed was different from that obtained for minimizing the heat transfer coefficient. For minimum mass design, we had to find, for a given temperature, the optimum density that minimizes the logarithmic derivative of the insulation thermal conductivity with respect to its density. The logarithmic derivative is defined as the ratio of the relative change in the dependent response (thermal conductivity) to the relative change in the independent variable (density). The results have been documented as a conference paper that will be presented at an upcoming AIAA conference.
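In symbols (a restatement of the two criteria described above, writing k(ρ, T) for the foam's effective thermal conductivity as a function of local density and temperature), the two pointwise optimality conditions read:

$$\text{minimum heat:}\quad \rho^{*}(T) = \arg\min_{\rho}\, k(\rho, T), \qquad \text{minimum mass:}\quad \rho^{*}(T) = \arg\min_{\rho}\, \frac{\partial \ln k(\rho, T)}{\partial \ln \rho}.$$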
Low cost digital electronics for isotope analysis with microcalorimeters - final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Hennig
2006-09-11
The overall goal of the Phase I research was to demonstrate that the digital readout electronics and filter algorithms developed by XIA for use with HPGe detectors can be adapted to high precision, cryogenic gamma detectors (microcalorimeters) and not only match the current state of the art in terms of energy resolution, but do so at a significantly reduced cost. This would make it economically feasible to instrument large arrays of microcalorimeters and would also allow automation of the setup, calibration and operation of large numbers of channels through software. We expected, and have demonstrated, that this approach would further allow much higher count rates than the optimum filter algorithms currently used. In particular, in measurements with a microcalorimeter at LLNL, the adapted Pixie-16 spectrometer achieved an energy resolution of 0.062%, significantly better than the targeted resolution of 0.1% in the Phase I proposal and easily matching resolutions obtained with LLNL readout electronics and optimum filtering (0.066%). The theoretical maximum output count rate for the filter settings used to achieve this resolution is about 120 cps. If the filter is adjusted for maximum throughput with an energy resolution of 0.1% or better, rates of 260 cps are possible. This is 20-50 times higher than the maximum count rate of about 5 cps with optimum filters for this detector. While microcalorimeter measurements were limited to count rates of ~1.3 cps due to the strength of available sources, pulser measurements demonstrated that the measured energy resolutions were independent of counting rate up to output counting rates well in excess of 200 cps. We also developed a preliminary hardware design of a spectrometer module, consisting of a digital processing core and several input options that can be implemented on daughter boards. Depending upon the daughter board, the total parts cost per channel ranged between $12 and $27, resulting in projected product prices of $80 to $160 per channel. This demonstrates that a price of $100 per channel is economically very feasible for large microcalorimeter arrays.
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt.44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
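A minimal sketch of the Maloney-Wandell style linear recovery underlying these simulations is given below (the basis and sensor curves are random stand-ins; a real system would use a PCA basis of measured skylight spectra and the Gaussian sensor responsivities discussed above):

```python
import numpy as np

rng = np.random.default_rng(4)
n_wl, n_basis, n_sensors = 61, 3, 4          # e.g. 400-700 nm in 5 nm steps

B = rng.standard_normal((n_wl, n_basis))     # stand-in linear basis (use PCA
                                             # of training skylight spectra)
S = np.abs(rng.standard_normal((n_wl, n_sensors)))  # sensor responsivities

def recover(responses, S, B):
    """Solve responses = S.T @ B @ coeffs for the basis coefficients in the
    least-squares sense, then reconstruct the spectrum as B @ coeffs."""
    A = S.T @ B                              # maps coefficients -> responses
    coeffs, *_ = np.linalg.lstsq(A, responses, rcond=None)
    return B @ coeffs

true = B @ rng.standard_normal(n_basis)      # a spectrum lying in the basis
noisy = S.T @ true + 0.01 * rng.standard_normal(n_sensors)
est = recover(noisy, S, B)
print("relative recovery error:", np.linalg.norm(est - true) / np.linalg.norm(true))
```

Varying the number of sensors, the basis dimension, the training set, and the noise level in this loop is exactly the kind of sweep the study above performs.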
Extended Duration Orbiter Medical Project
NASA Technical Reports Server (NTRS)
Sawin, Charles F. (Editor); Taylor, Gerald R. (Editor); Smith, Wanda L. (Editor); Brown, J. Travis (Technical Monitor)
1999-01-01
Biomedical issues have presented a challenge to flight physicians, scientists, and engineers ever since the advent of high-speed, high-altitude airplane flight in the 1940s. In 1958, preparations began for the first manned space flights of Project Mercury. The medical data and flight experience gained through Mercury's six flights and the Gemini, Apollo, and Skylab projects, as well as subsequent space flights, comprised the knowledge base that was used to develop and implement the Extended Duration Orbiter Medical Project (EDOMP). The EDOMP yielded substantial amounts of data in six areas of space biomedical research. In addition, a significant amount of hardware was developed and tested under the EDOMP. This hardware was designed to improve data gathering capabilities and maintain crew physical fitness, while minimizing the overall impact to the microgravity environment. The biomedical findings as well as the hardware development results realized from the EDOMP have been important to the continuing success of extended Space Shuttle flights and have formed the basis for medical studies of crew members living for three to five months aboard the Russian space station, Mir. EDOMP data and hardware are also being used in preparation for the construction and habitation of International Space Station. All data sets were grouped to be non-attributable to individuals, and submitted to NASA s Life Sciences Data Archive.
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to find how to decide the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
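A minimal sketch of thresholded centroiding in the TkCoG spirit follows (the synthetic spot, noise level, and the subtract-and-clip thresholding variant are all assumptions for illustration, not the paper's exact model):

```python
import numpy as np

def cog(img, threshold):
    """Center of gravity after thresholding: subtract the threshold and clip
    negative pixels to zero, then take the intensity-weighted mean position."""
    w = np.where(img > threshold, img - threshold, 0.0)
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic Gaussian spot with additive detector noise.
rng = np.random.default_rng(5)
ys, xs = np.indices((64, 64))
x0, y0, sigma = 30.3, 33.7, 4.0
img = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
img += 0.02 * rng.standard_normal(img.shape)

mu_n, sigma_n = 0.0, 0.02            # background noise statistics
for k in (0.0, 3.0):                 # TkCoG threshold: mu_n + k * sigma_n
    x, y = cog(img, mu_n + k * sigma_n)
    print(f"k={k}: centroid error = {np.hypot(x - x0, y - y0):.3f} px")
```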
Current and future graphics requirements for LaRC and proposed future graphics system
NASA Technical Reports Server (NTRS)
Taylor, N. L.; Bowen, J. T.; Randall, D. P.; Gates, R. L.
1984-01-01
The findings of an investigation to assess the current and future graphics requirements of the LaRC researchers with respect to both hardware and software are presented. A graphics system designed to meet these requirements is proposed.
The Optimum Level of Argumentativeness for Employed Women.
ERIC Educational Resources Information Center
Schullery, Nancy M.
1998-01-01
Examines the relationship between argumentativeness and women's supervisory level in organizations. Finds no simple relationship between supervisory level and argumentativeness for women, but indicates that moderation in argumentativeness increases with supervisory level. Notes implications for pedagogy: would-be female executives should be…
Evaluating the performance of microbial fuel cells powering electronic devices
NASA Astrophysics Data System (ADS)
Dewan, Alim; Donovan, Conrad; Heo, Deukhyoun; Beyenal, Haluk
A microbial fuel cell (MFC) is capable of powering an electronic device if we store the energy in an external storage device, such as a capacitor, and dispense that energy intermittently in bursts of high power when needed. Therefore its performance needs to be evaluated using an energy-storing device such as a capacitor, which can be charged and discharged, rather than by other evaluation techniques, such as continuous energy dissipation through a resistor. In this study, we develop a method of testing microbial fuel cell performance based on storing energy in a capacitor. When a capacitor is connected to a MFC it acts like a variable resistor and stores energy from the MFC at a variable rate. In practice the application of this method to testing microbial fuel cells is very challenging and time consuming; therefore we have custom-designed a microbial fuel cell tester (MFCT). The MFCT evaluates the performance of a MFC as a power source. It uses a capacitor as an energy storing device and waits until a desired amount of energy is stored, then discharges the capacitor. The entire process is controlled using an analog-to-digital converter (ADC) board controlled by a custom-written computer program. The utility of our method and the MFCT is demonstrated using a laboratory microbial fuel cell (LMFC) and a sediment microbial fuel cell (SMFC). We determine (1) how frequently a MFC can charge a capacitor, (2) which electrode is current-limiting, (3) what capacitor value will allow the maximum harvested energy from a MFC, which is called the "optimum charging capacitor value," and (4) what capacitor charging potential will harvest the maximum energy from a MFC, which is called the "optimum charging potential." Using an LMFC we find that (1) the time needed to charge a 3-F capacitor from 0 to 500 mV is 108 min, (2) the optimum charging capacitor value is 3 F, and (3) the optimum charging potential is 300 mV. Using an SMFC we find that (1) the time needed to charge a 3-F capacitor from 0 to 500 mV is 5 min, (2) the optimum charging capacitor value is 3 F, and (3) the optimum charging potential is 500 mV. Our results demonstrate that the developed method and the MFCT can be used to evaluate and optimize energy harvesting when a MFC is used with a capacitor to power wireless sensors monitoring the environment.
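From the numbers reported above, the energy banked per charge cycle and the implied average harvesting power follow directly from the capacitor energy formula (ignoring charging losses, so these are lower bounds on what each MFC actually delivered):

$$E = \tfrac{1}{2} C V^{2} = \tfrac{1}{2}(3\,\mathrm{F})(0.5\,\mathrm{V})^{2} = 0.375\,\mathrm{J}, \quad \bar{P}_{\mathrm{LMFC}} \approx \frac{0.375\,\mathrm{J}}{108 \times 60\,\mathrm{s}} \approx 58\,\mu\mathrm{W}, \quad \bar{P}_{\mathrm{SMFC}} \approx \frac{0.375\,\mathrm{J}}{300\,\mathrm{s}} = 1.25\,\mathrm{mW}.$$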
Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.
Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C
2016-04-01
As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7 patients who underwent surgery for ASD after a remote fusion. These patients later developed a fracture of the fusion mass after hardware removal from their previously successfully fused segment. All patients had a high sagittal imbalance and had previously undergone multiple spinal operations. The development of a spontaneous fracture of the fusion mass may be related to sagittal imbalance. Consideration should be given to reimplanting hardware for these patients, even across good fusions, to prevent spontaneous fracture of these areas if the sagittal imbalance is not corrected.
Effect of colloidal silica on rheological properties of common pharmaceutical excipients.
Majerová, Diana; Kulaviak, Lukáš; Růžička, Marek; Štěpánek, František; Zámostný, Petr
2016-09-01
In the pharmaceutical industry, the use of lubricants is still mostly based on historical experience or trial-and-error methods. This can be demanding in terms of material consumption and may result in a sub-optimal drug composition. Powder rheology enables more accurate monitoring of flow properties, and because the measurements need only a small sample, it is well suited to rare or expensive substances. In this work, the rheological properties of four common excipients (pregelatinized maize starch, microcrystalline cellulose, croscarmellose sodium and magnesium stearate) were studied with the FT4 Powder Rheometer, which was used to measure the compressibility index with a piston and the flow properties of the powders with a rotational shear cell. After an initial set of measurements, two excipients (pregelatinized maize starch and microcrystalline cellulose) were chosen and mixed, in varying amounts, with anhydrous colloidal silicon dioxide (Aerosil 200) used as a glidant. The bulk (conditioned and compressed densities, compressibility index), dynamic (basic flowability energy) and shear (friction coefficient, flow factor) properties were determined to find an optimum ratio of the glidant. Simultaneously, particle size data were obtained using a low-angle laser light scattering (LALLS) system, and scanning electron microscopy was performed to examine the relationship between the rheological properties and the inner structure of the materials. The optimum mixture composition for flowability was found to correspond to empirical findings known from the general literature. In addition, a mechanism by which colloidal silicon dioxide improves flowability was suggested, and the hypothesis was confirmed by an independent test. These findings represent progress towards determining the optimum concentration of glidant from the basic characteristics of the powder in pharmaceutical research and development. Copyright © 2016 Elsevier B.V. All rights reserved.
Takahashi, Shotaro; Tanaka, Nobuyuki; Okimoto, Tomoaki; Tanaka, Toshiki; Ueda, Kazuhiro; Matsumoto, Tsuneo; Ashizawa, Kazuto; Kunihiro, Yoshie; Kido, Shoji; Matsunaga, Naofumi
2012-04-01
To identify the optimum follow-up period for pure ground-glass nodules (GGNs) measuring less than 15 mm in diameter, and to evaluate whether the initial HRCT findings can be used as predictors of the progression of pure GGNs. A total of 150 pure GGNs in 111 patients were evaluated. The series of HRCT images for each GGN at the time of initial detection, 2 years after detection, and at the final follow-up were evaluated. The HRCT findings were compared between the "increasing nodule" and "non-increasing nodule" groups. Most (87.3%) pure GGNs did not increase, whereas some nodules (12.7%) eventually increased after long-term follow-up (mean 66.0 ± 25.0 months). Six (31.6%) of the 19 increasing nodules were regarded as stable at the 2-year follow-up examination. Some morphological findings on initial HRCT, including a size greater than 10 mm (p = 0.001), lobulated margins (p = 0.015), and a bubble-like appearance (p = 0.002), were significantly associated with the growth of pure GGNs. More than 2 years of follow-up are necessary to detect the growth of pure GGNs. Some characteristic findings indicate a high likelihood of future growth of a GGN.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled as a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled as a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization, and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in the balance between exploration (extensive search) and exploitation (intensive search) of the search space, and efficiently mimics the seal pup's lair-finding behavior to provide a new algorithm for global optimization problems.
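To make the two-state movement model concrete, below is a minimal, hypothetical 1-D sketch of the normal/urgent switching: Gaussian steps stand in for the Brownian (intensive) phase and heavy-tailed Pareto-distributed steps for the Lévy (extensive) phase. The step scales and the noise probability are invented for illustration and are not the parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_walk(n_steps: int, noise_prob: float = 0.1, levy_alpha: float = 1.5):
    """Toy 1-D seal-pup walk: Brownian steps in the normal state, Levy-like
    steps in the urgent state; 'predator noise' events flip the state."""
    pos, urgent, path = 0.0, False, [0.0]
    for _ in range(n_steps):
        if rng.random() < noise_prob:          # noise emitted by a predator
            urgent = not urgent
        if urgent:                             # extensive search: heavy-tailed step
            step = (1.0 + rng.pareto(levy_alpha)) * rng.choice((-1.0, 1.0))
        else:                                  # intensive search: small Gaussian step
            step = rng.normal(scale=0.1)
        pos += step
        path.append(pos)
    return np.array(path)

print(rss_walk(10))
```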
Clouds Versus Carbon: Predicting Vegetation Roughness by Maximizing Productivity
NASA Technical Reports Server (NTRS)
Olsen, Lola M.
2004-01-01
Surface roughness is one of the dominant vegetation properties that affects land surface exchange of energy, water, carbon, and momentum with the overlying atmosphere. We hypothesize that the canopy structure of terrestrial vegetation adapts optimally to climate by maximizing productivity, leading to an optimum surface roughness. An optimum should exist because increasing values of surface roughness cause increased surface exchange, leading to increased supply of carbon dioxide for photosynthesis. At the same time, increased roughness enhances evapotranspiration and cloud cover, thereby reducing the supply of photosynthetically active radiation. We demonstrate the optimum through sensitivity simulations using a coupled dynamic vegetation-climate model for present day conditions, in which we vary the value of surface roughness for vegetated surfaces. We find that the maximum in productivity occurs at a roughness length of 2 meters, a value commonly used to describe the roughness of today's forested surfaces. The sensitivity simulations also illustrate the strong climatic impacts of vegetation roughness on the energy and water balances over land: with increasing vegetation roughness, solar radiation is reduced by up to 20 W/sq m in the global land mean, causing shifts in the energy partitioning and leading to general cooling of the surface by 1.5 K. We conclude that the roughness of vegetated surfaces can be understood as a reflection of optimum adaptation, and it is associated with substantial changes in the surface energy and water balances over land. The role of the cloud feedback in shaping the optimum underlines the importance of an integrated perspective that views vegetation and its adaptive nature as an integrated component of the Earth system.
NASA Astrophysics Data System (ADS)
Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.
2017-12-01
Cutter spacing is an essential parameter in TBM design. However, few efforts have been made to study the optimum cutter spacing in conjunction with penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests was performed in a biaxial compression state. The effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy and maximum angle of lateral cracking were investigated. Results show that the total mass of chips, the groove volume and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It is also found that the total mass of chips could serve as an alternative means of determining the optimum cutter spacing. In addition, analysis of the chip size distribution suggests that the mass of large chips is governed by both cutter spacing and pre-set penetration depth. Fractal dimension analysis shows that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips, which are formed by the squeezing of cutters and by surface abrasion caused by shear failure. Analysis of specific energy indicates that the observed optimum spacing/penetration ratio is 10 for this sandstone; at this ratio, the specific energy and the maximum angle of lateral cracks are smallest. These findings contribute to a better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide guidelines for cutter arrangement.
Effects of long-term exposure on LDEF fastener assemblies
NASA Astrophysics Data System (ADS)
Spear, Steve; Dursch, Harry
1992-09-01
This presentation summarizes the Systems Special Investigations Group (SIG) findings from testing and analysis of fastener assemblies used on the Long Duration Exposure Facility (LDEF) structure, the tray mounting clamps, and by the various experimenters. The LDEF deintegration team and several experimenters noted severe fastener damage and hardware removal difficulties during post-flight activities. The System SIG has investigated all reported instances, and in all cases examined to date, the difficulties were attributed to galling during installation or post-flight removal. To date, no evidence of coldwelding was found. Correct selection of materials and lubricants as well as proper mechanical procedures is essential to ensure successful on-orbit or post-flight installation and removal of hardware.
Space Shuttle STS-1 SRB damage investigation
NASA Technical Reports Server (NTRS)
Nevins, C. D.
1982-01-01
The physical damage incurred by the solid rocket boosters during reentry on the initial space shuttle flight raised the question of whether the hardware, as designed, would yield the low cost per flight desired. The damage was quantified, the cause determined, and specific design changes recommended which would preclude recurrence. Flight data, postflight analyses, and laboratory hardware examinations were used. The resultant findings pointed to two principal causes: failure of the aft skirt thermal curtain at the onset of reentry aerodynamic heating, and overloading of the aft skirt stiffening rings during water impact. Design changes were recommended for both the thermal curtain and the aft skirt structural members to prevent similar damage on future missions.
2010-04-08
S131-E-008357 (9 April 2010) --- NASA astronaut Dorothy Metcalf-Lindenburger, STS-131 mission specialist, finds floating room hard to come by inside the multi-purpose logistics module Leonardo, which is filled with supplies and hardware for the International Space Station, to which it is temporarily docked.
Tuple spaces in hardware for accelerated implicit routing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent; Tripp, Justin
2010-12-01
Organizing and optimizing data objects on networks with support for data migration and failing nodes is a complicated problem to handle as systems grow. The goal of this work is to demonstrate that high levels of speedup can be achieved by moving responsibility for finding, fetching, and staging data into an FPGA-based network card. We present a system for implicit routing of data via FPGA-based network cards. In this system, data structures are requested by name, and the network of FPGAs finds the data within the network and relays the structure to the requester. This is achieved through successive examination of hardware hash tables implemented in the FPGA. By avoiding software stacks between nodes, the data is quickly fetched entirely through FPGA-FPGA interaction. The performance of this system is orders of magnitude faster than software implementations due to the improved speed of the hash tables and the lowered latency between network nodes.
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Ippolito, Corey; Plice, Laura; Young, Larry A.; Lau, Benton
2003-01-01
This paper details the development and demonstration of an autonomous aerial vehicle embodying search-and-find mission planning and execution strategies inspired by foraging behaviors found in biology. It begins by describing key characteristics required by an aerial explorer to support science and planetary exploration goals, and illustrates these through a hypothetical mission profile. It next outlines a conceptual bio-inspired search-and-find autonomy architecture that implements observations, decisions, and actions through an "ecology" of producer, consumer, and decomposer agents. Moving from concepts to development activities, it then presents the results of mission-representative UAV aerial surveys at a Mars analog site. It next describes hardware and software enhancements made to a commercial small fixed-wing UAV system, which include a new development architecture that also provides hardware-in-the-loop simulation capability. After presenting the results of simulated and actual flights of bio-inspired flight algorithms, it concludes with a discussion of future development, including an expansion of system capabilities and field science support.
Greenwood, Duncan J.; Mckee, John M. T.; Fuller, Deborah P.; Burns, Ian G.; Mulholland, Barry J.
2007-01-01
Background and Aims Growth of bedding plants, in small peat plugs, relies on nutrients in the irrigation solution. The object of the study was to find a way of modifying the nutrient supply so that good-quality seedlings can be grown rapidly and yet have the high root:shoot ratios essential for efficient transplanting. Methods A new procedure was devised in which the concentrations of nutrients in the irrigation solution were modified during growth according to changing plant demand, instead of maintaining the same concentrations throughout growth. The new procedure depends on published algorithms for the dependence of growth rate and optimal plant nutrient concentrations on shoot dry weight Ws (g m−2), and on measuring evapotranspiration rates and shoot dry weights at weekly intervals. Pansy, Viola tricolor 'Universal plus yellow', and petunia, Petunia hybrida 'Multiflora light salmon vein', were grown in four independent experiments with the expected optimum nutrient concentration and fractions of the optimum. Root and shoot weights were measured during growth. Key Results For each level of nutrient supply, Ws increased with time (t) in days according to the equation ΔWs/Δt = K2·Ws/(100 + Ws), in which the growth rate coefficient (K2) remained approximately constant throughout growth. The value of K2 for the optimum treatment was defined by incoming radiation and temperature. The value of K2 for each sub-optimum treatment relative to that for the optimum treatment was logarithmically related to the sub-optimal nutrient supply. Provided the aerial environment was optimal, Rsb/Ro ≈ Wo/Wsb, where R is the root:shoot ratio, W is the shoot dry weight, and the subscripts sb and o indicate sub-optimum and optimum nutrient supplies, respectively. Sub-optimal nutrient concentrations also depressed shoot growth without appreciably affecting root growth when the aerial environment was non-limiting. Conclusion The new procedure can predict the effects of nutrient supply, incoming radiation and temperature on the time course of shoot growth and the root:shoot ratio for a range of growing conditions. PMID:17210608
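To illustrate the growth relation ΔWs/Δt = K2·Ws/(100 + Ws), the sketch below integrates it day by day; the values of K2, the initial shoot weight, and the sub-optimal scaling are invented for demonstration and are not taken from the paper:

```python
def simulate_shoot_growth(ws0: float, k2: float, days: int) -> list:
    """Integrate shoot dry weight Ws (g m-2) forward with dWs/dt = K2*Ws/(100+Ws)."""
    ws, series = ws0, [ws0]
    for _ in range(days):
        ws += k2 * ws / (100.0 + ws)       # daily increment (dt = 1 day)
        series.append(ws)
    return series

optimum = simulate_shoot_growth(ws0=1.0, k2=20.0, days=42)        # full nutrient supply
sub_opt = simulate_shoot_growth(ws0=1.0, k2=0.6 * 20.0, days=42)  # reduced K2
print(f"day 42: optimum {optimum[-1]:.1f} vs sub-optimum {sub_opt[-1]:.1f} g m-2")
```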
Computer-aided design of high-contact-ratio gears for minimum dynamic load and stress
NASA Technical Reports Server (NTRS)
Lin, Hsiang Hsi; Lee, Chinwai; Oswald, Fred B.; Townsend, Dennis P.
1990-01-01
A computer-aided design procedure is presented for minimizing dynamic effects on high-contact-ratio gears by modification of the tooth profile. Both linear and parabolic tooth profile modifications of high-contact-ratio gears under various loading conditions are examined and compared. The effects of the total amount of modification and the length of the modification zone were systematically studied at various loads and speeds to find the optimum profile design for minimizing the dynamic load and the tooth bending stress. Parabolic profile modification is preferred over linear profile modification for high-contact-ratio gears because of its lower sensitivity to manufacturing errors. For parabolic modification, a greater amount of modification at the tooth tip and a longer modification zone are required. Design charts are presented for high-contact-ratio gears with various profile modifications operating under a range of loads. A procedure is illustrated for using the charts to find the optimum profile design.
Launders, J H; McArdle, S; Workman, A; Cowen, A R
1995-01-01
The significance of varying the viewing conditions that may affect the perceived threshold contrast of X-ray television fluoroscopy systems has been investigated. Factors investigated include the ambient room lighting and the viewing distance. The purpose of this study is to find the optimum viewing protocol with which to measure the threshold detection index. This is a particular problem when trying to compare the image quality of television fluoroscopy systems in different input field sizes. The results show that the viewing distance makes a significant difference to the perceived threshold contrast, whereas the ambient light conditions make no significant difference. Experienced observers were found to be capable of finding the optimum viewing distance for detecting details of each size, in effect using a flexible viewing distance. This allows the results from different field sizes to be normalized to account for both the magnification and the entrance air kerma rate differences, which in turn allow for a direct comparison of performance in different field sizes.
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
Pancha, Imran; Chokshi, Kaumeel; Ghosh, Tonmoy; Paliwal, Chetan; Maurya, Rahulkumar; Mishra, Sandhya
2015-10-01
The aim of the present study was to find the optimum sodium bicarbonate concentration to produce higher biomass with higher lipid and carbohydrate contents in the microalga Scenedesmus sp. CCNM 1077. The role of bicarbonate supplementation under different nutritional starvation conditions was also evaluated. The results clearly indicate that 0.6 g/L sodium bicarbonate was the optimum concentration, resulting in 20.91% total lipid and 25.56% carbohydrate content along with a 23% increase in biomass production compared to the normal growth condition. Addition of sodium bicarbonate increased the activity of nutrient-assimilatory enzymes and the biomass, lipid and carbohydrate contents under different nutritional starvation conditions. Nitrogen starvation with bicarbonate supplementation resulted in 54.03% carbohydrate and 34.44% total lipid content in Scenedesmus sp. CCNM 1077. These findings support the use of bicarbonate-grown Scenedesmus sp. CCNM 1077 as a promising feedstock for biodiesel and bioethanol production. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
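The trade-off the authors quantify — too few MSD points give a noisy slope, while too many add strongly correlated, biased points — can be reproduced on synthetic data. The sketch below is a generic MSD computation and fit, not the authors' analysis code:

```python
import numpy as np

def msd(track: np.ndarray, max_lag: int) -> np.ndarray:
    """Time-averaged mean squared displacement of an (N, d) trajectory."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def fit_diffusion(track: np.ndarray, dt: float, n_fit: int) -> float:
    """Estimate D from a line fit to the first n_fit MSD points (MSD = 2*d*D*t)."""
    d = track.shape[1]
    lags = np.arange(1, n_fit + 1) * dt
    slope = np.polyfit(lags, msd(track, n_fit), 1)[0]
    return slope / (2 * d)

# synthetic 2-D Brownian segment with D = 1.0 and dt = 0.01
rng = np.random.default_rng(1)
dt, D = 0.01, 1.0
track = np.cumsum(rng.normal(scale=np.sqrt(2 * D * dt), size=(1000, 2)), axis=0)
for n_fit in (2, 5, 20, 100):   # few points: noisy; many points: correlated/biased
    print(f"n_fit = {n_fit:3d}: D_est = {fit_diffusion(track, dt, n_fit):.3f}")
```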
Terrain modeling for real-time simulation
NASA Astrophysics Data System (ADS)
Devarajan, Venkat; McArthur, Donald E.
1993-10-01
There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, an address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons which may be displayed in each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then to select LODs for display as a function of range. This approach can lead to anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree, with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and they may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which is useful for optimizing system performance with a limited display capability.
Marschall, Jonas; Lane, Michael A.; Beekmann, Susan E.; Polgreen, Philip; Babcock, Hilary M.
2013-01-01
There is a dearth of guidance on the management of prosthetic joint infections (PJIs), in particular because of the lack of high-quality evidence for optimal antibiotics. Thus, we designed a nine-question survey of current practices and preferences among members of the Emerging Infections Network, a CDC-sponsored network of infectious diseases physicians, which was distributed in May 2012. In total, 556 (47.2%) of 1178 network members responded. As first-line antibiotic choice for MSSA PJI, 59% of responders indicated oxacillin/nafcillin, 33% cefazolin and 7% ceftriaxone; the commonest alternative was cefazolin (46%). For MRSA PJI, 90% preferred vancomycin, 7% daptomycin and 0.8% ceftaroline; the commonest alternative was daptomycin (65%). Antibiotic selection for coagulase-negative staphylococci varied depending on meticillin susceptibility. For staphylococcal PJIs with retained hardware, most providers would add rifampicin. Propionibacterium is usually treated with vancomycin (40%), penicillin (23%) or ceftriaxone (17%). Most responders thought 10–19% of all PJIs were culture-negative. Culture-negative PJIs of the lower extremities are usually treated with a vancomycin/fluoroquinolone combination, and culture-negative shoulder PJIs with vancomycin/ceftriaxone. The most cited criteria for selecting antibiotics were ease of administration and the safety profile. A treatment duration of 6–8 weeks is preferred (by 77% of responders) and is mostly guided by clinical response and inflammatory markers. Ninety-nine percent of responders recommend oral antibiotic suppression (for varying durations) in patients with retained hardware. In conclusion, there is considerable variation in treatment of PJIs both with identified pathogens and those with negative cultures. Future studies should aim to identify optimum treatment strategies. PMID:23312602
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAVs). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm, modified from the conventional EMD (Elementary Motion Detector) algorithm, to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, is retained in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing.
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame-rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
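A minimal sketch of the jitter-correction idea, using plain linear interpolation onto a uniform time base as a stand-in for the non-linear resampling the abstract mentions (the authors' exact method is not specified); the frame timestamps and pulse waveform below are synthetic:

```python
import numpy as np

def resample_uniform(t_s: np.ndarray, samples: np.ndarray, target_fps: float = 30.0):
    """Map a jittery frame sequence onto a uniform time base by interpolation."""
    t_uniform = np.arange(t_s[0], t_s[-1], 1.0 / target_fps)
    return t_uniform, np.interp(t_uniform, t_s, samples)

# a nominally 30-fps capture of a 1.2 Hz (72 bpm) pulse, with frame-time jitter
rng = np.random.default_rng(2)
t = np.cumsum(rng.normal(loc=1 / 30, scale=0.004, size=300))
ppg = np.sin(2 * np.pi * 1.2 * t)
t_u, ppg_u = resample_uniform(t, ppg)   # ppg_u is now safe for FFT-based HR estimation
```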
Algorithms for optimization of branching gravity-driven water networks
NASA Astrophysics Data System (ADS)
Dardani, Ian; Jones, Gerard F.
2018-05-01
The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
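As a sketch of the discrete-diameter backtracking idea with a cost bound for pruning, the skeleton below assigns one diameter per pipe. The hydraulic feasibility test is a placeholder (a real GDWN solver would evaluate head losses against pressure constraints), and the diameters and prices are invented:

```python
DIAMETERS_M = (0.025, 0.032, 0.050, 0.063)                     # candidate sizes
COST_PER_M = {0.025: 1.0, 0.032: 1.5, 0.050: 3.0, 0.063: 4.8}  # illustrative prices

def hydraulically_feasible(diams, lengths) -> bool:
    """Placeholder: a real check computes head losses along each flow path
    (e.g. Hazen-Williams) and rejects assignments violating minimum pressure."""
    return True

def backtrack(lengths, partial=(), cost=0.0, best=None):
    best = best if best is not None else [float("inf"), None]
    if cost >= best[0]:                      # bound: prune already-too-costly branches
        return best
    if len(partial) == len(lengths):
        if hydraulically_feasible(partial, lengths):
            best[:] = [cost, partial]
        return best
    i = len(partial)
    for d in DIAMETERS_M:                    # cheapest sizes first
        backtrack(lengths, partial + (d,), cost + COST_PER_M[d] * lengths[i], best)
    return best

print(backtrack([120.0, 80.0, 60.0]))        # smallest sizes win when unconstrained
```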
A new mathematical modeling approach for the energy of threonine molecule
NASA Astrophysics Data System (ADS)
Sahiner, Ahmet; Kapusuz, Gulden; Yilmaz, Nurullah
2017-07-01
In this paper, we propose an improved methodology for finding optimum energy values in energy conformation problems. First, we construct Bezier surfaces near local minimizers based on data obtained from Density Functional Theory (DFT) calculations. Second, we blend the constructed surfaces to obtain a single smooth model. Finally, we apply a global optimization algorithm to find the two torsion angles that minimize the energy of the molecule.
Electronic processing and control system with programmable hardware
NASA Technical Reports Server (NTRS)
Alkalaj, Leon (Inventor); Fang, Wai-Chi (Inventor); Newell, Michael A. (Inventor)
1998-01-01
A computer system with reprogrammable hardware allows hardware resources to be allocated dynamically for different functions and provides adaptability to different processors and different operating platforms. All hardware resources are physically partitioned into system-user hardware and application-user hardware depending on the specific operational requirements. A reprogrammable interface preferably interconnects the system-user hardware and the application-user hardware.
NASA Astrophysics Data System (ADS)
Barthelat, Francois
2014-12-01
Nacre, bone and spider silk are staggered composites in which inclusions of high aspect ratio reinforce a softer matrix. Such staggered composites have emerged through natural selection as the best configuration to produce stiffness, strength and toughness simultaneously. As a result, these remarkable materials increasingly serve as models for synthetic composites with unusual and attractive performance. While several models have been developed to predict basic properties of biological and bio-inspired staggered composites, the designer is still left to struggle with finding optimum parameters. Unresolved issues include choosing optimum properties for the inclusions and matrix, and resolving the contradictory effects of certain design variables. Here we overcome these difficulties with a multi-objective optimization for simultaneous high stiffness, strength and energy absorption in staggered composites. Our optimization scheme includes the material properties of the inclusions and matrix as design variables. This process reveals new guidelines; for example, the staggered microstructure is only advantageous if the tablets are at least five times stronger than the interfaces, and only if high volume concentrations of tablets are used. We finally compile the results into a step-by-step optimization procedure which can be applied to the design of any type of high-performance staggered composite at any length scale. The procedure produces optimum designs which are consistent with the materials and microstructure of natural nacre, confirming that this natural material is indeed optimized for mechanical performance.
Human equivalent power: towards an optimum energy level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafner, E.
1979-01-01
How much energy would be needed to support the average individual in an efficient technological culture? Present knowledge provides information about minimum dietary power needs, but so far we have not been able to find ways of analyzing other human needs which, in a civilized society, rise far above the power of metabolism. Thus we understand the level at its minimum but not at its optimum. This paper attempts to quantify an optimum power level for civilized society. The author describes a method he uses in seminars to quantify how many servants, in units of human equivalent power (HEP), are needed to supply a person with an upper-middle-class lifestyle. Typical seminar participants determine that a per-capita power budget of 15 HEPs (perfect servants) would be required. Each human being on earth today is, according to the author, the master of forty slaves; in the U.S., he says, the number is close to 200. He concludes that a highly civilized standard of living may be closely associated with an optimum per-capita power budget of 1500 watts; and since the average individual in the U.S. participates in energy turnover at almost ten times the rate he knows intuitively to be reasonable, reformation of American power habits will require reconstruction that shakes the house from top to bottom.
NASA Astrophysics Data System (ADS)
Ibrahim, Raheek I.; Wong, Z. H.; Mohammad, A. W.
2015-04-01
Palm oil mill effluent (POME) wastewater is produced in huge amounts in Malaysia, and if discharged into the environment it causes serious problems because of its high nutrient content. This study addresses the treatment of POME wastewater with microalgae. The main objective was to find the optimum conditions (retention time and pH) for the microalgal treatment of POME wastewater, with retention time considered the most important parameter in algal treatment, since beyond the optimum conditions further increases in time and pH have an adverse effect and the process becomes costly. To our knowledge, no existing study has optimized retention time and pH against the percentage removal of nutrients (ammonia nitrogen NH3-N and orthophosphate PO4(3-)) for the microalgal treatment of POME wastewater. To achieve this optimization, a central composite rotatable design with a second-order polynomial model was used, and regression coefficients and goodness-of-fit results for the removal percentages of nutrients (NH3-N and PO4(3-)) were estimated. The WinQSB package was used to optimize the response-surface objective function for the developed model, and experiments were performed to validate the model results. The optimum conditions were found to be a retention time of 18 days at pH 9.22 for ammonia nitrogen, and a retention time of 15 days at pH 9.2 for orthophosphate.
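The response-surface step can be illustrated generically: fit a second-order polynomial in retention time and pH by least squares, then solve for its stationary point. The data below are synthetic and merely seeded near the reported optimum; this is not the paper's dataset or its WinQSB workflow:

```python
import numpy as np

def fit_quadratic_surface(t, ph, y):
    """Least-squares fit of y = b0 + b1*t + b2*ph + b3*t^2 + b4*ph^2 + b5*t*ph."""
    X = np.column_stack([np.ones_like(t), t, ph, t**2, ph**2, t * ph])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(3)
t = rng.uniform(5, 25, 30)                                   # retention time, days
ph = rng.uniform(7, 11, 30)
y = 90 - 0.3 * (t - 18) ** 2 - 5 * (ph - 9.2) ** 2 + rng.normal(0, 1, 30)  # removal %
b = fit_quadratic_surface(t, ph, y)

# stationary point of the fitted surface: solve grad(y) = 0 for (t*, pH*)
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
t_opt, ph_opt = np.linalg.solve(A, -b[1:3])
print(f"fitted optimum: {t_opt:.1f} days at pH {ph_opt:.2f}")
```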
Rotor vibration reduction with polymeric sectors
NASA Astrophysics Data System (ADS)
Dutt, J. K.; Toi, T.
2003-05-01
This work has been undertaken principally with the aim of improving the dynamic performance of rotor-shaft systems, which often suffer from two major problems, (a) resonance and (b) loss of stability, resulting in excessive vibration of such systems. Polymeric material in the form of sectors has been considered in this work as bearing supports, since both the stiffness and the loss factor of such materials vary with the frequency of excitation. The stiffness and loss factor have been determined for the proposed support system comprising polymeric sectors. Depending upon the frequency of excitation, the system matrix changes, and the dynamic performance of the rotor-shaft system changes accordingly. In this work, avoidance of resonance and application of optimum damping in the support have been investigated by finding the optimum dimensions, i.e., the optimum thickness and optimum length of the sectors. It has been found theoretically that the use of such sectors reduces the rotor unbalance response, increases the stability limit speed for simple rotor-shaft systems, and thus improves the dynamic characteristics. Parameters of the system have been presented in terms of non-dimensional quantities. Many examples have been presented in support of the conclusions. The life of such supports, particularly in the presence of chemicals and other reagents, has not been investigated.
Enantioselective synthesis of (S)-naproxen using immobilized lipase on chitosan beads.
Gilani, Saeedeh L; Najafpour, Ghasem D; Heydarzadeh, Hamid D; Moghadamnia, Aliakbar
2017-06-01
(S)-naproxen was produced by the enantioselective hydrolysis of racemic naproxen methyl ester using immobilized lipase. The lipase enzyme was immobilized on chitosan beads, chitosan beads activated with glutaraldehyde, and Amberlite XAD7. To find an appropriate support for the hydrolysis reaction of racemic naproxen methyl ester, the conversion and enantioselectivity of all carriers were compared. In addition, the effects of the volumetric ratio of the two phases in different organic solvents, the addition of cosolvent and surfactant, the optimum pH and temperature, reusability, and the inhibitory effect of methanol were investigated. The optimum volumetric ratio of the two phases was found to be 3:2 of aqueous phase to organic phase. Various water-miscible and water-immiscible solvents were examined. Finally, isooctane was chosen as the organic solvent, while 2-ethoxyethanol was added as a cosolvent in the organic phase of the reaction mixture. The optimum reaction conditions were determined to be 35 °C, pH 7, and 24 h. Addition of Tween-80 to the organic phase increased the accessibility of the immobilized enzyme to the reactant. The optimum organic-phase composition used a 3:7 volumetric ratio of 2-ethoxyethanol to isooctane with 0.1% (v/v) Tween-80. The best conversion and enantioselectivity of the enzyme immobilized on glutaraldehyde-activated chitosan beads were 0.45 and 185, respectively. © 2017 Wiley Periodicals, Inc.
Optimum Waveforms for Differential Ion Mobility Spectrometry (FAIMS)
Shvartsburg, Alexandre A.; Smith, Richard D.
2009-01-01
Differential mobility spectrometry, or field asymmetric waveform ion mobility spectrometry (FAIMS), is a new tool for separation and identification of gas-phase ions, particularly in conjunction with mass spectrometry. In FAIMS, ions are filtered by the difference between their mobilities in gases (K) at high and low electric field intensity (E) using asymmetric waveforms. The infinite number of possible waveform profiles makes maximizing performance within engineering constraints a major issue for FAIMS technology refinement. Earlier optimizations assumed the non-constant component of mobility to scale as E^2, producing the same result for all ions. Here we show that the optimum profiles are defined by the full series expansion of K(E), which includes terms beyond the first term proportional to E^2. For many ion/gas pairs, the first two terms have different signs, and the optimum profiles at sufficiently high E in FAIMS may differ substantially from those previously reported, improving the resolving power by up to 2.2 times. This situation arises for some ions in all FAIMS systems, but becomes more common in recent miniaturized devices that employ higher E. With realistic K(E) dependences, the maximum waveform amplitude is not necessarily optimum, and reducing it by up to ~20-30% is beneficial in some cases. The present findings are particularly relevant to targeted analyses where separation depends on the difference between the K(E) functions of specific ions. PMID:18585054
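For concreteness, a commonly used bisinusoidal FAIMS profile (one of many admissible asymmetric waveforms, and not necessarily the optimum shape derived in this paper) can be generated and checked for the properties that matter: zero mean but a nonzero odd moment:

```python
import numpy as np

# Bisinusoidal profile f(t) = (2/3) sin(wt) + (1/3) sin(2wt - pi/2):
# <f> = 0 (no net DC drift) while <f^3> != 0 (the asymmetry that separates ions).
t = np.linspace(0.0, 1.0, 10_000, endpoint=False)      # one period, w = 2*pi
f = (2 / 3) * np.sin(2 * np.pi * t) + (1 / 3) * np.sin(4 * np.pi * t - np.pi / 2)

print(f"<f>   = {np.mean(f):+.4f}")     # ~0
print(f"<f^3> = {np.mean(f**3):+.4f}")  # nonzero
print(f"peak  = {f.max():.3f}")
```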
Efficient hash tables for network applications.
Zink, Thomas; Waldvogel, Marcel
2015-01-01
Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
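The flavor of hashing with multiple choices (the principle behind d-left hashing) fits in a few lines: every key has two candidate buckets and is stored in the less loaded one, which keeps the worst-case bucket depth, and hence lookup time, small and predictable. This is a generic sketch, not the authors' design:

```python
import hashlib

N_BUCKETS = 64
buckets = [[] for _ in range(N_BUCKETS)]

def h(key: str, salt: str) -> int:
    """Independent-looking hash choices derived from salted SHA-256."""
    return int(hashlib.sha256((salt + key).encode()).hexdigest(), 16) % N_BUCKETS

def insert(key: str) -> None:
    b1, b2 = h(key, "left"), h(key, "right")
    target = b1 if len(buckets[b1]) <= len(buckets[b2]) else b2  # less loaded wins
    buckets[target].append(key)

def lookup(key: str) -> bool:
    return any(key in buckets[b] for b in (h(key, "left"), h(key, "right")))

for i in range(500):
    insert(f"flow-{i}")
print("max bucket depth:", max(map(len, buckets)))   # stays small with high probability
print("hit:", lookup("flow-42"), " miss:", lookup("flow-999"))
```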
Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes.
Salomons, Etto L; Havinga, Paul J M; van Leeuwen, Henk
2016-09-27
A wireless sensor network consisting of nodes with sound sensors can be used to obtain context awareness in home environments. However, the limited processing power of wireless nodes poses a challenge when extracting features from the signal and subsequently classifying the source. Although multiple papers can be found on different methods of sound classification, none of them are aimed at limited hardware or take the efficiency of the algorithms into account. In this paper, we compare and evaluate several classification methods on a real sensor platform using different feature types and classifiers, in order to find an approach that results in a good classifier that can run on limited hardware. To be as realistic as possible, we trained our classifiers using sound waves from many different sources. We conclude that despite the fact that the classifiers are often of low quality due to the highly restricted hardware resources, sufficient performance can be achieved when (1) the window length for our classifiers is increased, and (2) a two-step approach is applied in which a refined classification follows a global classification.
Effect of data truncation in an implementation of pixel clustering on a custom computing machine
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Theiler, James P.; Estlick, Michael; Kitaryeva, Natalya V.; Szymanski, John J.
2000-10-01
We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism, and ultimately a more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find, however, that the clustering algorithm can tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.
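The core of the experiment, i.e. quantize the data to a few bits, re-cluster, and compare against full-precision labels, can be mocked up on synthetic data. The quantizer and plain k-means below are generic sketches, not the FPGA implementation; both runs share the same initialization so the labels are directly comparable:

```python
import numpy as np

def truncate(x, bits, lo, hi):
    """Uniformly quantize x to 2**bits levels over [lo, hi]."""
    levels = 2 ** bits - 1
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def kmeans(x, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(4)
x = np.vstack([rng.normal(m, 0.5, (200, 8)) for m in (0.0, 2.0, 4.0)])  # 8 "channels"
ref = kmeans(x, 3)                                    # full-precision reference labels
for bits in (8, 4, 3, 2):
    agree = np.mean(kmeans(truncate(x, bits, x.min(), x.max()), 3) == ref)
    print(f"{bits} bits/channel: {agree:.1%} of pixels keep their cluster label")
```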
Human performance interfaces in air traffic control.
Chang, Yu-Hern; Yeh, Chung-Hsing
2010-01-01
This paper examines how human performance factors in air traffic control (ATC) affect each other through their mutual interactions. The paper extends the conceptual SHEL model of ergonomics to describe the ATC system as human performance interfaces in which the air traffic controllers interact with other human performance factors including other controllers, software, hardware, environment, and organisation. New research hypotheses about the relationships between human performance interfaces of the system are developed and tested on data collected from air traffic controllers, using structural equation modelling. The research result suggests that organisation influences play a more significant role than individual differences or peer influences on how the controllers interact with the software, hardware, and environment of the ATC system. There are mutual influences between the controller-software, controller-hardware, controller-environment, and controller-organisation interfaces of the ATC system, with the exception of the controller-controller interface. Research findings of this study provide practical insights in managing human performance interfaces of the ATC system in the face of internal or external change, particularly in understanding its possible consequences in relation to the interactions between human performance factors.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves.
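A kurtosis detector of the kind compared here can be sketched in a few lines: Gaussian thermal noise pins the statistic at its Gaussian value (3 for a real signal, 2 for the fourth-moment ratio of circular complex noise), and intermittent RFI pulls it away. The test signal, amplitude, and duty cycle below are invented; this is not the SMAP algorithm or the ROACH firmware:

```python
import numpy as np

def real_kurtosis(x: np.ndarray) -> float:
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2                  # = 3 for Gaussian

def complex_kurtosis(z: np.ndarray) -> float:
    return np.mean(np.abs(z) ** 4) / np.mean(np.abs(z) ** 2) ** 2  # = 2 for complex noise

rng = np.random.default_rng(5)
n = 100_000
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)     # unit-power noise
cw = np.exp(2j * np.pi * 0.123 * np.arange(n)) * (rng.random(n) < 0.1)  # 10%-duty CW RFI

print(f"clean    : {complex_kurtosis(noise):.3f}")       # close to 2.0
print(f"with RFI : {complex_kurtosis(noise + cw):.3f}")  # deviates from 2.0 -> flag
```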
ERIC Educational Resources Information Center
Bertot, John Carlo; McClure, Charles R.
This report describes the results of an assessment of Sailor, Maryland's Online Public Information Network, which provides statewide Internet connection to 100% of Maryland public libraries. The concept of a "statewide networked environment" includes information services, products, hardware and software, telecommunications…
Getting to the Point in Pinpoint Landing
NASA Technical Reports Server (NTRS)
1998-01-01
Assisted by Langley Research Center's Small Business Technology Transfer (STTR) Program, IntegriNautics has developed a commercialized precision landing system. The idea finds its origins in Stanford University work on a satellite test of Einstein's General Theory of Relativity, for which Stanford designed new high-performance attitude-determining hardware.
An Application of Computerized Instructional Television in Biology.
ERIC Educational Resources Information Center
Kendrick, Bryce
Computerized instructional television was used to teach undergraduate students about 100,000 or more extant fungi through an interactive, self testing, teaching program. Students did not find this sophisticated hardware an adequate substitute for the lecture experience and ultimately gave their professor a strong vote of confidence. (Author/JEG)
Parents' Perception on De La Salle University-Dasmarinas Services
ERIC Educational Resources Information Center
Cortez-Antig, Carmelyn
2011-01-01
The study was conducted to find out the parents' perception on the De La Salle University-Dasmarinas services which are grouped as follows: (1) Academic instruction factor; (2) Quality of human ware (includes faculty, administration, staff support through medical services, guidance and discipline); (3) Quality of hardware (dorm facilities,…
Analysis of Software Systems for Specialized Computers,
computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of... purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)
NASA Astrophysics Data System (ADS)
Kushnir, A. F.; Troitsky, E. V.; Haikin, L. M.; Dainty, A.
1999-06-01
A semi-automatic procedure has been developed to achieve statistically optimum discrimination between earthquakes and explosions at local or regional distances based on a learning set specific to a given region. The method is used for step-by-step testing of candidate discrimination features to find the optimum (combination) subset of features, with the decision taken on a rigorous statistical basis. Linear (LDF) and Quadratic (QDF) Discriminant Functions based on Gaussian distributions of the discrimination features are implemented and statistically grounded; the features may be transformed by the Box-Cox transformation z = (y^α − 1)/α to make them more Gaussian. Tests of the method were successfully conducted on seismograms from the Israel Seismic Network using features consisting of spectral ratios between and within phases. Results showed that the QDF was more effective than the LDF and required five features out of 18 candidates for the optimum set. It was found that discrimination improved with increasing distance within the local range, and that eliminating transformation of the features and failing to correct for noise led to degradation of discrimination.
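A compact sketch of the discrimination pipeline described above, with synthetic data standing in for the Israel Seismic Network features; `boxcox` estimates the transformation exponent per feature, and a quadratic discriminant plays the role of the QDF:

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical feature matrix: rows are events, columns are spectral
# ratios between/within phases; y labels earthquakes (0) vs explosions (1).
rng = np.random.default_rng(1)
X = np.vstack([rng.lognormal(0.0, 0.4, (60, 5)),
               rng.lognormal(0.5, 0.4, (60, 5))])
y = np.r_[np.zeros(60), np.ones(60)]

# Box-Cox each (positive-valued) feature toward Gaussianity.
Xg = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

qdf = QuadraticDiscriminantAnalysis().fit(Xg, y)
print("training accuracy:", qdf.score(Xg, y))
```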
Probabilistic determination of probe locations from distance data
Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels
2013-01-01
Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585
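The abstract's core idea, combining probability functions over the full scoring space rather than optimizing a single score, can be illustrated with a small grid-based sketch; the anchor positions, measured distances, and Gaussian error model below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

# Combine Gaussian likelihoods for measured probe-to-anchor distances
# over a 3-D grid, then read off the global optimum and a credible mass.
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0]])  # hypothetical sites
d_meas = np.array([7.1, 7.3, 7.0])                       # measured distances
sigma = 0.5                                              # distance uncertainty

ax = np.linspace(-2, 12, 71)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)

logp = np.zeros(X.shape)
for a, d in zip(anchors, d_meas):
    r = np.linalg.norm(grid - a, axis=-1)
    logp += -0.5 * ((r - d) / sigma) ** 2

p = np.exp(logp - logp.max())
p /= p.sum()                                   # posterior over the whole grid
best = np.unravel_index(p.argmax(), p.shape)
print("MAP location:", grid[best])
print("mass in top-1% cells:", p[p >= p.max() * 0.01].sum())
```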
Operating characteristics of a new ion source for KSTAR neutral beam injection system.
Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul
2014-02-01
A new positive ion source for the Korea Superconducting Tokamak Advanced Research neutral beam injection (KSTAR NBI-1) system was designed, fabricated, and assembled in 2011. The characteristics of the arc discharge and beam extraction were investigated using hydrogen and helium gas to find the optimum operating parameters of the arc power, filament voltage, gas pressure, extracting voltage, accelerating voltage, and decelerating voltage at the neutral beam test stand at the Korea Atomic Energy Research Institute in 2012. Based on the optimum operating conditions, the new ion source was then conditioned, and performance tests were largely completed. The accelerator system with enlarged apertures can extract a maximum 65 A ion beam at a beam energy of 100 keV. The arc efficiency and the optimum beam perveance, at which the beam divergence is at a minimum, are estimated to be 1.0 A/kW and 2.5 μP, respectively. The beam extraction tests show that the design goal of delivering a 2 MW deuterium neutral beam into the KSTAR Tokamak plasma is achievable.
Optimizing the Dopant and Carrier Concentration of Ca5Al2Sb6 for High Thermoelectric Efficiency
Yan, Yuli; Zhang, Guangbiao; Wang, Chao; Peng, Chengxiao; Zhang, Peihong; Wang, Yuanxu; Ren, Wei
2016-01-01
The effects of doping on the transport properties of Ca5Al2Sb6 are investigated using first-principles electronic structure methods and Boltzmann transport theory. The calculated results show that a maximum ZT value of 1.45 is achieved at the optimum carrier concentration at 1000 K. However, experimental studies have shown that the maximum ZT value is no more than 1 at 1000 K. By comparing the calculated Seebeck coefficient with experimental values, we find that the low dopant solubility in this material is not conducive to achieving the optimum carrier concentration, leading to a smaller experimental value of the maximum ZT. Interestingly, the calculated dopant formation energies suggest that optimum carrier concentrations can be achieved when the dopants and Sb atoms have similar electronic configurations. Therefore, it might be possible to achieve a maximum ZT value of 1.45 at 1000 K with suitable dopants. These results provide valuable theoretical guidance for the synthesis of high-performance bulk thermoelectric materials through dopant optimization. PMID:27406178
Chase, T J; Nowicki, J P; Coker, D J
2018-06-06
In situ observations of the diurnal foraging behaviour of a common site-attached shallow-reef mesopredator, Parapercis australis, during late summer revealed that although diet composition was unaffected by seawater temperature (range 28.3-32.4°C), feeding strikes and distance moved increased with temperature up to 30.5°C, beyond which they sharply declined, indicating a population currently living beyond its thermal optimum. Diel feeding strikes and distance moved were, however, tightly linked to ambient temperature as it related to the population's apparent thermal optimum, peaking at times when it was approached (1230 and 1700 hours) and declining up to fourfold at times deviating from it. These findings suggest that although this population may be currently living beyond its thermal optimum, it copes by down-regulating energetically costly foraging movement and consumption; under future oceanic temperatures, these behavioural modifications are probably insufficient to avoid deleterious effects on population viability without the aid of long-term acclimation or adaptation. This article is protected by copyright. All rights reserved.
Arboreal nests of Phenacomys longicaudus in Oregon.
A.M. Gillesberg; A.B. Carey
1991-01-01
Searching felled trees proved effective for finding nests of Phenacomys longicaudus; 117 nests were found in 50 trees. Nests were located throughout the live crowns, but were concentrated in the lower two-thirds of the canopy. Abundance of nests increased with tree size; old-growth forests provide optimum habitat.
ERTS-A: a new apogee for mineral finding
Carter, William D.
1971-01-01
The EROS Program will continue investigations to select or develop optimum, economical airborne and space systems that will expand man's ability to observe and profit from natural resources. It is to be hoped that several of these systems will eventually prove useful supplements to current and developing mineral exploration technology.
NASA Astrophysics Data System (ADS)
Biswas, G.; Kumari, M.; Adhikari, K.; Dutta, S.
2017-12-01
Fluoride pollution in groundwater is a major concern in rural areas. The flower petal of Shorea robusta, commonly known as the sal tree, is used in the present study, both in its native form and in a Ca-impregnated activated form, to remove excess fluoride from simulated wastewater. Response surface methodology (RSM) was used to design the experiments and analyze the optimum conditions for carbonization and calcium impregnation in preparing the adsorbent. During carbonization, temperature, time and the weight ratio of calcium chloride to sal flower petal (SFP) were considered as input factors and percentage removal of fluoride as the response. The optimum carbonization condition was obtained as temperature, 500 °C; time, 1 h; and weight ratio, 2.5, and the sample prepared has been termed calcium-impregnated carbonized sal flower petal (CCSFP). The optimum condition as analyzed by the one-factor-at-a-time (OFAT) method is initial fluoride concentration, 2.91 mg/L; pH 3; and adsorbent dose, 4 g/L. CCSFP shows a maximum removal of 98.5% at this condition. RSM has also been used to find the optimum condition for defluoridation, considering initial concentration, pH and adsorbent dose as input parameters. The optimum condition as analyzed by RSM is: initial concentration, 5 mg/L; pH 3.5; and adsorbent dose, 2 g/L. Kinetic and equilibrium data follow the Ho pseudo-second-order kinetic model and the Freundlich isotherm model, respectively. The adsorption capacity of CCSFP has been found to be 5.465 mg/g. At the optimized condition, CCSFP efficiently removed fluoride (80.4%) from groundwater collected from Bankura district in West Bengal, a fluoride-contaminated province in India.
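For reference, the two models named in the abstract have these standard textbook forms (symbols as commented; none of the values here come from this study):

```latex
% Ho pseudo-second-order kinetics (q_t: uptake at time t, q_e: equilibrium
% uptake, k_2: rate constant), with its common linearized form:
\[ \frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
   \qquad\Longrightarrow\qquad
   \frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e} \]
% Freundlich isotherm (C_e: equilibrium concentration; K_F, n: fitted):
\[ q_e = K_F\,C_e^{1/n} \]
```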
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Development and Testing of Mechanism Technology for Space Exploration in Extreme Environments
NASA Technical Reports Server (NTRS)
Tyler, Tony R.; Levanas, Greg; Mojarradi, Mohammad M.; Abel, Phillip B.
2011-01-01
The NASA Jet Propulsion Lab (JPL), Glenn Research Center (GRC), Langley Research Center (LaRC), and Aeroflex, Inc. have partnered to develop and test actuator hardware that will survive the stringent environment of the moon, and which can also be leveraged for other challenging space exploration missions. Prototype actuators have been built and tested in a unique low-temperature test bed with motor interface temperatures as low as 14 K. Several years of work have resulted in specialized electro-mechanical hardware that survives extreme space exploration environments, a test program that verifies the designs and finds their limitations at extreme temperatures, and a growing knowledge base that can be leveraged by future space exploration missions.
Optimum aim point biasing in case of a planetary quarantine constraint.
NASA Technical Reports Server (NTRS)
Gedeon, G. S.; Dvornychenko, V. N.
1972-01-01
It is assumed that the probability of impact for each maneuver is the same, and that the orbit determination and execution errors of each maneuver affect only the targeting. An approximation of the equal-probability-of-impact contour is derived. It is assumed that the quarantine constraint is satisfied if the aim point is not inside the impact contour. A method is devised to find on each contour the optimum aim point, which minimizes the so-called bias velocity required to bring the spacecraft back from the biased aim point to the originally desired aim point. The method is an improvement over the approach presented by Light (1965), and Craven and Wolfson (1967).
NASA Technical Reports Server (NTRS)
Chapman, P. K.; Bugos, B. J.; Csigi, K. I.; Glaser, P. E.; Schimke, G. R.; Thomas, R. G.
1979-01-01
The feasibility of finding potential sites for Solar Power Satellite (SPS) receiving antennas (rectennas) in the continental United States was evaluated, in sufficient numbers to permit the SPS to make a major contribution to U.S. generating facilities and to give statistical validity to an assessment of the characteristics of such sites and their implications for the design of the SPS system. It is found that the cost-optimum power output of the SPS does not depend on the particular value assigned to the cost per unit area of a rectenna and its site, as long as it is independent of rectenna area. Many characteristics of the sites chosen affect the optimum design of the rectenna itself.
OPDOT: A computer program for the optimum preliminary design of a transport airplane
NASA Technical Reports Server (NTRS)
Sliwa, S. M.; Arbuckle, P. D.
1980-01-01
A description of the computer program OPDOT for the optimum preliminary design of transport aircraft is given. OPDOT utilizes constrained parameter optimization to minimize a performance index (e.g., direct operating cost per block hour) while satisfying operating constraints. The approach in OPDOT uses geometric descriptors as independent design variables, which are systematically iterated to find the optimum design. The technical development of the program is provided, and a program listing with sample input and output illustrates its use in preliminary design. The report is not meant to be a user's guide, but rather a description of a useful design tool developed for studying the application of new technologies to transport airplanes.
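A toy analogue of OPDOT's formulation, sketched with a generic constrained optimizer; the cost surrogate, constraint, variables and bounds below are invented for illustration and are not the program's actual models:

```python
import numpy as np
from scipy.optimize import minimize

# Iterate geometric design variables to minimize a performance index,
# here a made-up direct-operating-cost surrogate, subject to an
# operating constraint. Variables: wing area S (m^2), aspect ratio AR.
def doc_per_hour(x):
    S, AR = x
    return 0.02 * S + 50.0 / AR + 0.001 * S * AR  # hypothetical cost model

def landing_speed_margin(x):
    S, AR = x
    return S - 90.0  # hypothetical constraint: feasible when S >= 90 m^2

res = minimize(doc_per_hour, x0=np.array([120.0, 8.0]),
               constraints=[{"type": "ineq", "fun": landing_speed_margin}],
               bounds=[(60.0, 200.0), (5.0, 12.0)])
print("optimum design variables:", res.x, " cost index:", res.fun)
```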
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tosun, Ozgur; Sanlidilek, Umman; Cetin, Huseyin
2007-09-15
Magnetic resonance angiography and digital subtraction angiography (DSA) findings in a rare case of congenital thoracoabdominal aortic hypoplasia and a common celiacomesenteric trunk variation with occlusion of the infrarenal abdominal aorta are described here. To our knowledge, this aortic anomaly has not been previously described in the English literature. DSA is the optimum imaging modality for determining aortic hypoplasia, associated vascular malformations, collateral vessels, and the direction of flow within vessels.
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
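As a point of reference for the method, a bare-bones (mu, lambda) evolution strategy looks like the sketch below; the fitness function and parameters are illustrative, and the paper's contribution is evaluating very large populations of point-cloud superpositions in parallel on the GPU, which corresponds to the batched fitness-evaluation step here:

```python
import numpy as np

# Minimal (mu, lambda) evolution strategy on a multimodal test function.
def rastrigin(X):
    return 10 * X.shape[1] + np.sum(X**2 - 10 * np.cos(2 * np.pi * X), axis=1)

rng = np.random.default_rng(0)
mu, lam, dim, sigma = 20, 140, 6, 0.5
parents = rng.uniform(-5, 5, (mu, dim))
for gen in range(200):
    idx = rng.integers(0, mu, lam)
    offspring = parents[idx] + sigma * rng.standard_normal((lam, dim))
    fit = rastrigin(offspring)            # one big, parallelizable batch
    parents = offspring[np.argsort(fit)[:mu]]
    sigma *= 0.99                         # simple annealing of the step size
print("best fitness:", rastrigin(parents[:1])[0])
```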
WE-G-209-00: Identifying Image Artifacts, Their Causes, and How to Fix Them
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
A potential flight evaluation of an upper-surface-blowing/circulation-control-wing concept
NASA Technical Reports Server (NTRS)
Riddle, Dennis W.; Eppel, Joseph C.
1987-01-01
The technology data base for powered lift aircraft design has advanced over the last 15 years. NASA's Quiet Short Haul Research Aircraft (QSRA) has provided a flight verification of upper surface blowing (USB) technology. The A-6 Circulation Control Wing flight demonstration aricraft has provide data for circulation control wing (CCW) technology. Recent small scale wind tunnel model tests and full scale static flow turning test have shown the potential of combining USB with CCW technology. A flight research program is deemed necessary to fully explore the performance and control aspects of CCW jet substitution for the mechanical USB Coanda flap. The required hardware design would also address questions about the development of flight weight ducts and CCW jets and the engine bleed-air capabilities vs requirements. NASA's QSRA would be an optimum flight research vehicle for modification to the USB/CCW configuration. The existing QSRA data base, the design simplicity of the QSRA wing trailing edge controls, availability of engine bleed-air, and the low risk, low cost potential of the suggested program is discussed.
Sequential injection system with multi-parameter analysis capability for water quality measurement.
Kaewwonglom, Natcha; Jakmunee, Jaroon
2015-11-01
A simple sequential injection (SI) system capable of determining multiple parameters has been developed for the determination of iron, manganese, phosphate and ammonium. A simple and compact colorimeter was fabricated in the laboratory to be employed as a detector. The system was optimized for suitable conditions for determining each parameter by changing the software program, without reconfiguring the hardware. Under the optimum conditions, the methods showed linear ranges of 0.2-10 mg L(-1) for iron and manganese determinations, and 0.3-5.0 mg L(-1) for phosphate and ammonium determinations, with correlation coefficients of 0.9998, 0.9973, 0.9987 and 0.9983, respectively. The system provided detection limits of 0.01, 0.14, 0.004 and 0.02 mg L(-1) for iron, manganese, phosphate and ammonium, respectively. The proposed system has good precision, low chemical consumption and high throughput. It was applied to monitoring the water quality of the Ping River in Chiang Mai, Thailand. Recoveries of the analysis were in the range of 82-119%. Copyright © 2015 Elsevier B.V. All rights reserved.
Sustainability of health information systems in developing countries: the case of Fiji.
Soar, Jeffrey; Gow, Jeff; Caniogo, Vili
This paper examines the future sustainability of the Fijian Ministry of Health's (MoH) information and communication technology (ICT) system for patient management (PATIS). PATIS was developed with AusAID funding and, as the owner of the system, AusAID has no commercial competence or interest in further development of the system. Thus, the question that arises is: should Fiji adopt a commercially available patient administration system or retain the existing PATIS? In-depth consultations with senior executives and line managers of units that were major users of PATIS were undertaken. Semi-structured interviews and focus group discussion approaches were utilised. The consensus or majority view of the users was that the existing PATIS performed more than adequately. The future sustainability of the system is threatened by the lack of investment in resources (e.g. hardware maintenance and human resources) required to keep the system operating at its optimum. It was found that PATIS provides Fiji with a satisfactory patient administration system. The identified problems with the system are not related to the application per se but rather to an under-investment in resources for its utilisation.
Design, fabrication and acceptance testing of a zero gravity whole body shower, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The effort to design a whole-body shower for the space station prototype is reported. Clothes and dish washer/dryer concepts were formulated with consideration given to integrating such a system with the overall shower design. Water recycling methods to effect vehicle weight savings were investigated, and it was concluded that reusing wash and/or rinse water resulted in weight savings that were not sufficient to outweigh the added degree of hardware complexity. The formulation of preliminary and final designs for the shower is described. A detailed comparison of the air drag vs. vacuum pickup methods indicated that the air drag concept results in more severe space station weight penalties; therefore, the preliminary system design was based on the vacuum pickup method. Tests were performed to determine the optimum methods of storing, heating and sterilizing the cleansing agent used in the shower; it was concluded that individual packages of pre-sterilized cleansing agent should be used. Integration features with the space station prototype system were defined and incorporated into the shower design as necessary.
WE-G-209-01: Digital Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schueler, B.
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
NASA Astrophysics Data System (ADS)
Mesbah, M.; Pattey, E.; Jégo, G.; Geng, X.; Tremblay, N.; Didier, A.
2017-12-01
Identifying the optimum nitrogen (N) application rate is essential for increasing agricultural production while limiting potential environmental contamination caused by the release of reactive N, especially for crops with a high N demand such as corn. The central question of N management is then how the optimum N rate is affected by climate variability for a given soil. The experimental determination of optimum N rates involves analyses of variance on the mean value of crop yield response to the various N application rates used in factorial plot-based experiments over a few years in several regions. This traditional approach has limitations in capturing 1) the non-linear response of yield to N application rates, due to large incremental N rates (often more than 40 kg N ha-1), and 2) the ecophysiological response of the crop to climate variability, because of the limited number of growing seasons considered. Modeling, on the other hand, does not have such limitations, and hence we use a crop model and propose a model-based methodology called Finding NEMO (N Ecophysiologically Modelled Optimum) to identify the optimum N rates for variable agro-climatic conditions and given soil properties. The performance of the methodology is illustrated using the STICS crop model adapted for rainfed corn in the Mixedwood Plains ecozone of eastern Canada (42.3°N 83°W-46.8°N 71°W), where more than 90% of Canadian corn is produced. The simulations were performed using a small increment of preplant N application rate (10 kg N ha-1), long time series of daily climatic data (48 to 61 years) for 5 regions along the ecozone, and three contrasting soils per region. The results show that N recommendations should be region- and soil-specific. Soils with lower available water capacity required more N than soils with higher available water capacity. When N rates were at their ecophysiologically optimum level, a 10 to 17 kg increase in dry yield could be achieved by adding 1 kg N. Expected yield also affected the optimum N rate for a region and soil. For instance, the probability of achieving a yield of 9.2 t ha-1 at 15% grain moisture on a loamy soil varied from 0 to 73% along the ecozone. For this level of expected yield, the recommended N rates ranged from 64 to 155 kg ha-1, which is less than the current provincial recommendations in Ontario and Quebec (120-170 kg ha-1).
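The kind of marginal analysis implied by the "10 to 17 kg of yield per kg N" figure can be sketched as follows; the quadratic-plateau response curve and cutoff below are hypothetical, standing in for STICS simulation output:

```python
import numpy as np

# Scan preplant N in 10 kg/ha steps on a hypothetical quadratic-plateau
# yield response and report the first rate where adding 1 kg N returns
# less than a chosen yield gain.
def yield_t_ha(n, a=5.0, b=0.055, c=-0.00018, n_plateau=160.0):
    n = np.minimum(n, n_plateau)          # plateau beyond n_plateau
    return a + b * n + c * n**2

rates = np.arange(0, 210, 10.0)
yields = yield_t_ha(rates)
marginal = np.gradient(yields, rates) * 1000.0  # kg yield per kg N
cutoff = 10.0                                   # stop below 10 kg grain / kg N
optimum = rates[np.argmax(marginal < cutoff)]
print("optimum preplant N rate:", optimum, "kg N/ha")
```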
Direction Finding Using an Antenna with Direction Dependent Impulse Response
NASA Technical Reports Server (NTRS)
Foltz, Heinrich; Kegege, Obadiah
2016-01-01
Wideband antennas may be designed to have an impulse response that is direction dependent, not only in amplitude but also in waveform shape. This property can be used to perform direction finding using a single fixed antenna, without the need for an array or antenna rotation. In this paper direction finding is demonstrated using a simple candelabra-shaped monopole operating in the 1-3 GHz range. The method requires a known transmitted pulse shape and high signal-to-noise ratio, and is not as accurate or robust as conventional methods. However, it can add direction finding capability to a wideband communication system without the addition of any hardware.
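A minimal sketch of the idea, assuming a library of direction-dependent impulse responses (synthetic here) and a known transmitted pulse: predict the received waveform for each candidate direction and pick the best normalized match. This mirrors the matched-template reasoning in the abstract, not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
pulse = np.sinc(np.linspace(-4, 4, 64))          # known transmitted pulse
angles = np.arange(0, 180, 5)
# Synthetic stand-ins for the antenna's direction-dependent impulse responses
H = {a: rng.standard_normal(8) * np.exp(-np.arange(8) / 3) for a in angles}

def received(theta, snr_db=20):
    """Simulate the waveform received from direction theta, plus noise."""
    y = np.convolve(pulse, H[theta])
    noise = rng.standard_normal(y.size)
    return y + noise * np.linalg.norm(y) / np.linalg.norm(noise) / 10**(snr_db / 20)

y = received(45)
# Normalized matched-filter score against each direction's template
scores = {a: np.abs(np.dot(y, np.convolve(pulse, H[a]))) /
             np.linalg.norm(np.convolve(pulse, H[a])) for a in angles}
print("estimated direction:", max(scores, key=scores.get), "deg")
```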
NASA Astrophysics Data System (ADS)
Zhang, Zheng
2017-10-01
Modern radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding depends significantly on the effectiveness of these digital signal processing algorithms. Direction of Arrival (DOA) algorithms are used to estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm performs well for radio direction finding.
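A standard MUSIC sketch for a uniform linear array in white noise is shown below; the array size, snapshot count, and source angles are illustrative choices rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, d = 8, 200, 0.5                 # sensors, snapshots, spacing (wavelengths)
true_deg = np.array([-20.0, 30.0])

def steering(deg):
    """M x len(deg) array of steering vectors for a uniform linear array."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(deg))
    return np.exp(1j * np.outer(np.arange(M), k))

A = steering(true_deg)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                # sample covariance
w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
En = V[:, : M - 2]                    # noise subspace (2 sources assumed)

grid = np.linspace(-90, 90, 721)
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2
loc = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1  # local maxima
print("estimated DOAs (deg):", np.sort(grid[loc[np.argsort(P[loc])[-2:]]]))
```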
Read All about It: Motivate Your Students with These Exercises
ERIC Educational Resources Information Center
Tuttle, Harry Grover
2007-01-01
Educators at elementary, middle, and high school levels will find that integrating digital tools and resources--many commonly used by students in their "out of school" lives--can be a springboard to creativity and new skills. In this article, the author describes how word processors, presentation software and hardware, mind-mapping applications,…
The Use of Computers and Video Games in Brain Damage Therapy.
ERIC Educational Resources Information Center
Lorimer, David
The use of computer assisted therapy (CAT) in the rehabilitation of individuals with brain damage is examined. Hardware considerations are explored, and the variety of software programs available for brain injury rehabilitation is discussed. Structured testing and treatment programs in time measurement, memory, and direction finding are described,…
A Survey on the Use of Microcomputers in Special Libraries.
ERIC Educational Resources Information Center
Krieger, Tillie
1986-01-01
Describes a survey on the use of microcomputers in special libraries. The discussion of the findings includes types of hardware and software in use; applications in public services, technical processes, and administrative tasks; data back-up techniques; training received; evaluation of software; and future plans for microcomputer applications. (1…
Enhancing Teaching and Learning Wi-Fi Networking Using Limited Resources to Undergraduates
ERIC Educational Resources Information Center
Sarkar, Nurul I.
2013-01-01
Motivating students to learn Wi-Fi (wireless fidelity) wireless networking to undergraduate students is often difficult because many students find the subject rather technical and abstract when presented in traditional lecture format. This paper focuses on the teaching and learning aspects of Wi-Fi networking using limited hardware resources. It…
Stories about Struggling Readers and Technology
ERIC Educational Resources Information Center
Anderson, Rebecca; Balajthy, Ernest
2009-01-01
Educators have moved on from older models of technology-based education that focused on the attributes and basic uses of the hardware and software available to them. Now they are finding the necessary creative and innovative ways to harness technology's power for school and community educational improvement. In this column, the authors tell four…
Choosing the Optimum Mix of Duration and Effort in Education.
ERIC Educational Resources Information Center
Oosterbeek, Hessel
1995-01-01
Employs a simple economic model to analyze determinants of Dutch college students' expected study duration and weekly effort. Findings show that the duration/effort ratio is determined by the relative prices of these inputs into the learning process. A higher socioeconomic status increases the duration/effort ratio. Higher ability levels decrease…
Line Lengths and Starch Scores.
ERIC Educational Resources Information Center
Moriarty, Sandra E.
1986-01-01
Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud
2018-03-01
Unlike conventional approaches, where optimization is performed on a unique component of a specific product, optimum design of a set of components for use across a product family can significantly reduce costs. Increasing the commonality and the performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods have been reported to solve such MOPs. What is less discussed, however, is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes employing the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method in obtaining trade-off points with the best possible performance while maximizing the common parts.
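A small sketch of the TOPSIS step applied to a Pareto front (both objectives to be minimized); the front and weights below are hypothetical, not the suspension-system results:

```python
import numpy as np

def topsis(F, weights):
    """Rank non-dominated designs (rows) by closeness to the ideal point."""
    V = F / np.linalg.norm(F, axis=0) * weights   # weighted, normalized matrix
    ideal, nadir = V.min(axis=0), V.max(axis=0)   # both objectives minimized
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - nadir, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argmax(closeness), closeness

# Hypothetical 2-objective front: (performance loss, commonality loss)
front = np.array([[0.1, 0.9], [0.3, 0.5], [0.5, 0.3], [0.9, 0.1]])
best, c = topsis(front, weights=np.array([0.5, 0.5]))
print("trade-off design:", best, " closeness:", np.round(c, 3))
```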
Jaric, Slobodan; Garcia Ramos, Amador
2018-05-01
Loturco and co-workers (2017) recently published data in the Journal of Sports Sciences presenting the optimum loading magnitudes for maximizing the "mean propulsive power" of the leg and arm muscles. Among the most important findings were that (1) the recorded power in the squat and squat jump exercises was markedly low, (2) the optimum external load that maximized power in the same exercises was close to 100% of body weight, while (3) the ballistic bench press throw revealed smaller power than the regular bench press, which is typically performed with a relatively low level of muscle activation towards the end of the propulsive lifting phase. These findings are either counter-intuitive, or contradict the literature, or both, and we believe that they originate from apparent methodological flaws. The first is neglecting the force acting against the body segments moved together with the external load, which is particularly high in squat exercises. The second is an erroneous calculation of the propulsive phase that included a part of the bar's flight time. Both of these methodological flaws are frequent in the literature and can be associated with the improper use and calculation of variables when utilizing linear position transducers.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
NASA Astrophysics Data System (ADS)
Mansour, F. A.; Nizam, M.; Anwar, M.
2017-02-01
This research aims to predict the optimum surface orientation angles for solar panel installation to achieve maximum solar radiation. Incident solar radiation is calculated using the Koronakis mathematical model. Particle Swarm Optimization (PSO) is used as the computational method to find the optimum orientation angles for solar panel installation in order to obtain maximum solar radiation. A series of simulations was carried out to calculate solar radiation for monthly, seasonal, semi-yearly and yearly adjustment periods. A south-facing orientation (azimuth 0°) was also calculated for comparison with the proposed method. The proposed method attains higher incident-radiation predictions than south-facing, recording 2511.03 kWh/m2 for monthly adjustment, and about 2486.49 kWh/m2, 2482.13 kWh/m2 and 2367.68 kWh/m2 for seasonal, semi-yearly and yearly adjustment, respectively. South-facing predicted approximately 2496.89 kWh/m2, 2472.40 kWh/m2, 2468.96 kWh/m2 and 2356.09 kWh/m2 for the monthly, seasonal, semi-yearly and yearly periods, respectively. Semi-yearly adjustment is the best choice because it requires only two adjustments of the solar panel per year; adjusting the panel position every season or every month brings no significant increase in solar radiation over semi-yearly adjustment, and solar tracking devices remain costly in solar energy systems. PSO predicted accurately while being conceptually simple, easy to implement and computationally efficient, as shown by its fast convergence to the best fitness.
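A bare-bones PSO for the two orientation variables might look like the following; the smooth irradiation surrogate stands in for the Koronakis model, and all parameters are illustrative:

```python
import numpy as np

# Maximize a modeled annual irradiation I(tilt, azimuth); the objective
# here is a smooth stand-in with its optimum at tilt = 25, azimuth = 0.
def irradiation(x):
    tilt, azim = x[..., 0], x[..., 1]
    return -((tilt - 25.0) ** 2 / 400.0 + (azim - 0.0) ** 2 / 1600.0)

rng = np.random.default_rng(4)
n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform([0.0, -90.0], [90.0, 90.0], (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), irradiation(pos)
gbest = pbest[np.argmax(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [0.0, -90.0], [90.0, 90.0])
    f = irradiation(pos)
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmax(pbest_f)]

print("optimum tilt, azimuth:", np.round(gbest, 2))
```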
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.
Huang, Shuqiang; Tao, Ming
2017-01-22
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively keep the population from falling into a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment than the PSO or K-medoids algorithms.
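The pairwise-competition update that distinguishes CSO from PSO can be sketched as follows (after Cheng and Jin's formulation; the objective is a generic placeholder rather than the gateway-coverage model):

```python
import numpy as np

# Each generation the swarm is split into random pairs; the loser of each
# pair learns from the winner and from the swarm mean, while the winner
# passes to the next generation unchanged.
def sphere(X):
    return np.sum(X ** 2, axis=1)

rng = np.random.default_rng(5)
n, dim, phi, iters = 40, 10, 0.1, 300
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))

for _ in range(iters):
    order = rng.permutation(n)
    mean = X.mean(axis=0)
    for i, j in zip(order[: n // 2], order[n // 2:]):
        w, l = (i, j) if sphere(X[[i]])[0] < sphere(X[[j]])[0] else (j, i)
        r1, r2, r3 = rng.random((3, dim))
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean - X[l])
        X[l] = X[l] + V[l]

print("best value:", sphere(X).min())
```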
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Ringed Seal Search for Global Optimization via a Sensitive Search Model
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space. RSS can efficiently mimic seal pup behavior to find the best lair, and provides a new algorithm for use in global optimization problems. PMID:26790131
Sharifi Dehsari, Hamed; Harris, Richard Anthony; Ribeiro, Anielen Halda; Tremel, Wolfgang; Asadi, Kamal
2018-06-05
Despite the great progress in the synthesis of iron oxide nanoparticles (NPs) using the thermal decomposition method, the production of NPs with a low polydispersity index is still challenging. In a thermal decomposition synthesis, oleic acid (OAC) and oleylamine (OAM) are used as surfactants. The surfactants bind to the growth species, thereby controlling the reaction kinetics and hence playing a critical role in the final size and size distribution of the NPs. Finding an optimum molar ratio between the surfactants, OAC/OAM, is therefore crucial. A systematic experimental and theoretical study on the role of the surfactant ratio, however, is still missing. Here, we present a detailed experimental study on the role of the surfactant ratio in size distribution. We found an optimum OAC/OAM ratio of 3, at which the synthesis yielded truly monodisperse (polydispersity less than 7%) iron oxide NPs without employing any post-synthesis size-selective procedures. We performed molecular dynamics simulations and showed that the binding energy of oleate to the NP is maximized at an OAC/OAM ratio of 3. The optimum OAC/OAM ratio of 3 allowed control of the NP size with nanometer precision by simply changing the reaction heating rate. The optimum OAC/OAM ratio has no influence on the crystallinity or the superparamagnetic behavior of the Fe3O4 NPs and can therefore be adopted for the scaled-up production of size-controlled monodisperse Fe3O4 NPs.
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
The Impact of Flight Hardware Scavenging on Space Logistics
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.
2011-01-01
For a given fixed launch vehicle capacity the logistics payload delivered to the moon may be only roughly 20 percent of the payload delivered to the International Space Station (ISS). This is compounded by the much lower flight frequency to the moon and thus low availability of spares for maintenance. This implies that lunar hardware is much more scarce and more costly per kilogram than ISS and thus there is much more incentive to preserve hardware. The Constellation Lunar Surface System (LSS) program is considering ways of utilizing hardware scavenged from vehicles including the Altair lunar lander. In general, the hardware will have only had a matter of hours of operation yet there may be years of operational life remaining. By scavenging this hardware the program, in effect, is treating vehicle hardware as part of the payload. Flight hardware may provide logistics spares for system maintenance and reduce the overall logistics footprint. This hardware has a wide array of potential applications including expanding the power infrastructure, and exploiting in-situ resources. Scavenging can also be seen as a way of recovering the value of, literally, billions of dollars worth of hardware that would normally be discarded. Scavenging flight hardware adds operational complexity and steps must be taken to augment the crew's capability with robotics, capabilities embedded in flight hardware itself, and external processes. New embedded technologies are needed to make hardware more serviceable and scavengable. Process technologies are needed to extract hardware, evaluate hardware, reconfigure or repair hardware, and reintegrate it into new applications. This paper also illustrates how scavenging can be used to drive down the cost of the overall program by exploiting the intrinsic value of otherwise discarded flight hardware.
NASA Technical Reports Server (NTRS)
Frady, Greg; Smaolloey, Kurt; LaVerde, Bruce; Bishop, Jim
2004-01-01
The paper will discuss practical and analytical findings of a test program conducted to assist engineers in determining which analytical strain fields are most appropriate for describing the crack-initiating and crack-propagating stresses in thin-walled cylindrical hardware that serves as part of the Space Shuttle Main Engine's fuel system. In service, the hardware is excited by fluctuating dynamic pressures in a cryogenic fuel that arise from turbulent flow/pump cavitation. A bench test using a simplified system was conducted using acoustic energy in air to excite the test articles. Strain measurements were used to reveal the response characteristics of two Flowliner test articles that are assembled as a pair when installed in the engine feed system.
Navier-Stokes flow field analysis of compressible flow in a high pressure safety relief valve
NASA Technical Reports Server (NTRS)
Vu, Bruce; Wang, Ten-See; Shih, Ming-Hsin; Soni, Bharat
1993-01-01
The objective of this study is to investigate the complex three-dimensional flowfield of an oxygen safety pressure relief valve during an incident, with a computational fluid dynamics (CFD) analysis. Specifically, the analysis will provide a flow pattern that would lead to the expansion of the eventual erosion pattern of the hardware, so as to combine it with other findings to piece together a most likely scenario for the investigation. The CFD model is a pressure-based solver. An adaptive upwind difference scheme is employed for the spatial discretization, and a predictor/multiple-corrector method is used for the velocity-pressure coupling. The computational result indicated vortex formation near the opening of the valve which matched the erosion pattern of the damaged hardware.
Preliminary Findings from the SHERE ISS Experiment
NASA Technical Reports Server (NTRS)
Hall, Nancy R.; McKinley, Gareth H.; Erni, Philipp; Soulages, Johannes; Magee, Kevin S.
2009-01-01
The Shear History Extensional Rheology Experiment (SHERE) is an International Space Station (ISS) glovebox experiment designed to study the effect of preshear on the transient evolution of the microstructure and viscoelastic tensile stresses of monodisperse dilute polymer solutions. The SHERE experiment hardware was launched on Shuttle Mission STS-120 (ISS Flight 10A) on October 22, 2007, and 20 fluid samples were launched on Shuttle Mission STS-123 (ISS Flight 1J/A) on March 11, 2008. Astronaut Gregory Chamitoff performed experiments during Increment 17 on the ISS between June and September 2008. A summary of the ten-year history of the hardware development, the experiment's science objectives, and Increment 17's flight operations is given in the paper. A brief summary of the preliminary science results is also discussed.
Opportunities and choice in a new vector era
NASA Astrophysics Data System (ADS)
Nowak, A.
2014-06-01
This work discusses the significant changes in the computing landscape related to the progression of Moore's Law, and their implications for scientific computing. Particular attention is devoted to the High Energy Physics (HEP) domain, which has always made good use of threading, but has often left levels of parallelism closer to the hardware underutilized. Findings of the CERN openlab Platform Competence Center are reported in the context of expanding "performance dimensions", and especially the resurgence of vectors. These suggest that data-oriented designs are feasible in HEP and have considerable potential for performance improvements on multiple levels, but will rarely trump algorithmic enhancements. Finally, an analysis of upcoming hardware and software technologies identifies heterogeneity as a major challenge for software, which will require more emphasis on scalable, efficient design.
QCE: A Simulator for Quantum Computer Hardware
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; de Raedt, Hans
2003-09-01
The Quantum Computer Emulator (QCE) described in this paper consists of a simulator of a generic, general purpose quantum computer and a graphical user interface. The latter is used to control the simulator, to define the hardware of the quantum computer and to debug and execute quantum algorithms. QCE runs in a Windows 98/NT/2000/ME/XP environment. It can be used to validate designs of physically realizable quantum processors and as an interactive educational tool to learn about quantum computers and quantum algorithms. A detailed exposition is given of the implementation of the CNOT and the Toffoli gate, the quantum Fourier transform, Grover's database search algorithm, an order finding algorithm, Shor's algorithm, a three-input adder and a number partitioning algorithm. We also review the results of simulations of an NMR-like quantum computer.
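At its core, such an emulator applies unitary gate matrices to a 2^n-dimensional state vector; the sketch below is a generic illustration (not QCE's actual code, which also models physical, e.g. NMR-like, hardware) that prepares a Bell state with a Hadamard and a CNOT:

```python
import numpy as np

def apply_gate(state, gate, targets, n):
    """Apply a k-qubit gate (2^k x 2^k matrix) to the given target qubits."""
    state = state.reshape([2] * n)
    perm = targets + [q for q in range(n) if q not in targets]
    state = np.transpose(state, perm).reshape(2 ** len(targets), -1)
    state = gate @ state
    state = state.reshape([2] * n)
    return np.transpose(state, np.argsort(perm)).reshape(-1)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.zeros(4, dtype=complex); psi[0] = 1.0   # start in |00>
psi = apply_gate(psi, H, [0], 2)                  # Hadamard on qubit 0
psi = apply_gate(psi, CNOT, [0, 1], 2)            # entangle -> Bell state
print(np.round(psi, 3))                           # (|00> + |11>)/sqrt(2)
```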
Statistical Analysis of Complexity Generators for Cost Estimation
NASA Technical Reports Server (NTRS)
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
Safe to Fly: Certifying COTS Hardware for Spaceflight
NASA Technical Reports Server (NTRS)
Fichuk, Jessica L.
2011-01-01
Providing hardware for the astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. This process, from start to finish, can be completed in as little as one week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One area of cost and schedule savings that NASA implements is buying Commercial Off the Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves the time of developing, designing, and building the hardware from scratch, as well as time in the certification process. By utilizing COTS hardware, the current detailed certification process can be simplified, which results in schedule savings. Cost savings is another important benefit of flying COTS hardware. Procuring COTS hardware for space use can be more economical than custom building the hardware. This paper investigates the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.
EHW Approach to Temperature Compensation of Electronics
NASA Technical Reports Server (NTRS)
Stoica, Adrian
2004-01-01
Efforts are under way to apply the concept of evolvable hardware (EHW) to compensate for variations, with temperature, in the operational characteristics of electronic circuits. To maintain the required functionality of a given circuit at a temperature above or below the nominal operating temperature for which the circuit was originally designed, a new circuit would be evolved; moreover, to obtain the required functionality over a very wide temperature range, a number of circuits would be evolved, each of which would satisfy the performance requirements over a small part of the total temperature range. The basic concepts and some specific implementations of EHW were described in a number of previous NASA Tech Briefs articles, namely, "Reconfigurable Arrays of Transistors for Evolvable Hardware" (NPO-20078), Vol. 25, No. 2 (February 2001), page 36; "Evolutionary Automated Synthesis of Electronic Circuits" (NPO-20535), Vol. 26, No. 7 (July 2002), page 37; "Designing Reconfigurable Antennas Through Hardware Evolution" (NPO-20666), Vol. 26, No. 7 (July 2002), page 38; "Morphing in Evolutionary Synthesis of Electronic Circuits" (NPO-20837), Vol. 26, No. 8 (August 2002), page 31; "Mixtrinsic Evolutionary Synthesis of Electronic Circuits" (NPO-20773), Vol. 26, No. 8 (August 2002), page 32; and "Synthesis of Fuzzy-Logic Circuits in Evolvable Hardware" (NPO-21095), Vol. 26, No. 11 (November 2002), page 38. To recapitulate from the cited prior articles: EHW is characterized as evolutionary in a quasi-genetic sense. The essence of EHW is to construct and test a sequence of populations of circuits that function as incrementally better solutions of a given design problem through the selective, repetitive connection and/or disconnection of capacitors, transistors, amplifiers, inverters, and/or other circuit building blocks. The connection and disconnection can be effected by use of field-programmable transistor arrays (FPTAs). The evolution is guided by a search-and-optimization algorithm (in particular, a genetic algorithm) that operates in the space of possible circuits to find a circuit that exhibits an acceptably close approximation of the desired functionality. The evolved circuits can be tested by mathematical modeling (that is, computational simulation) only, tested in real hardware, or tested in combinations of computational simulation and real hardware.
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who prefer the schematic approach. The proposed visual hardware behavioral language can graphically express design information using visual parallel models (blocks), visual sequential models (processes), and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify concurrent and sequential hardware functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) to hardware designers, enabling them to capture design ideas easily and explicitly.
NASA Astrophysics Data System (ADS)
Yang, M.; Geng, X.; Wang, Y. L.; Li, D. X.
2017-05-01
Three orthogonal tests are designed separately for each hydrometallurgical gold leaching process to find the optimum reaction conditions for dissolving gold and palladium in each process. Under the optimum conditions, the measured amounts of gold and palladium reach 2.87 g/kg and 8.34 g/kg in aqua regia-hydrofluoric acid, 2.39 g/kg and 8.12 g/kg in sodium thiosulfate, and 2.51 g/kg and 7.84 g/kg in potassium iodide. From these results, the gold and palladium content obtained with the leaching process combining aqua regia, hydrofluoric acid, and hydrogen peroxide is relatively higher than with the other processes. In addition, the aqua regia digestion procedure is easy to operate, uses less equipment, and takes a short time.
Simplified analysis and optimization of space base and space shuttle heat rejection systems
NASA Technical Reports Server (NTRS)
Wulff, W.
1972-01-01
A simplified radiator system analysis was performed to predict steady state radiator system performance. The system performance was found to be describable in terms of five non-dimensional system parameters. The governing differential equations are integrated numerically to yield the enthalpy rejection for the coolant fluid. The simplified analysis was extended to produce the derivatives of the coolant exit temperature with respect to the governing system parameters. A procedure was developed to find the optimum set of system parameters which yields the lowest possible coolant exit temperature for either a given projected area or a given total mass. The process can be inverted to yield either the minimum area or the minimum mass, together with the optimum geometry, for a specified heat rejection rate.
Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bolosky, William Joseph
1993-01-01
Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software-maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. The results show that in properly built systems, software-maintained coherence can perform comparably to, or even better than, hardware-maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.
2013-01-01
The volume, diversity and velocity of biomedical data are exponentially increasing, providing petabytes of new neuroimaging and genetics data every year. At the same time, tens of thousands of computational algorithms are developed and reported in the literature, along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276
Salisbury, C M; Gillespie, R B; Tan, H Z; Barbagli, F; Salisbury, J K
2011-01-01
In this paper, we extend the concept of the contrast sensitivity function - used to evaluate video projectors - to the evaluation of haptic devices. We propose using human observers to determine if vibrations rendered using a given haptic device are accompanied by artifacts detectable to humans. This determination produces a performance measure that carries particular relevance to applications involving texture rendering. For cases in which a device produces detectable artifacts, we have developed a protocol that localizes deficiencies in device design and/or hardware implementation. In this paper, we present results from human vibration detection experiments carried out using three commercial haptic devices and one high performance voice coil motor. We found that all three commercial devices produced perceptible artifacts when rendering vibrations near human detection thresholds. Our protocol allowed us to pinpoint the deficiencies, however, and we were able to show that minor modifications to the haptic hardware were sufficient to make these devices well suited for rendering vibrations, and by extension, the vibratory components of textures. We generalize our findings to provide quantitative design guidelines that ensure the ability of haptic devices to proficiently render the vibratory components of textures.
Adaptive Neuromorphic Circuit for Stereoscopic Disparity Using Ocular Dominance Map
Sharma, Sheena; Gupta, Priti; Markan, C. M.
2016-01-01
Stereopsis or depth perception is a critical aspect of information processing in the brain and is computed from the positional shift or disparity between the images seen by the two eyes. Various algorithms and their hardware implementation that compute disparity in real time have been proposed; however, most of them compute disparity through complex mathematical calculations that are difficult to realize in hardware and are biologically unrealistic. The brain presumably uses simpler methods to extract depth information from the environment, and hence newer methodologies that could perform stereopsis with brain-like elegance need to be explored. This paper proposes an innovative aVLSI design that leverages the columnar organization of ocular dominance in the brain and uses time-staggered Winner Take All (ts-WTA) to adaptively create disparity-tuned cells. Physiological findings support the presence of disparity cells in the visual cortex and show that these cells surface as a result of binocular stimulation received after birth. Therefore, creating, in hardware, cells that can learn different disparities with experience is not only novel but also biologically more realistic. These disparity cells, when allowed to interact diffusively on a larger scale, can be used to adaptively create stable topological disparity maps in silicon. PMID:27243029
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
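The abstract's core primitive, a Moore-Penrose pseudoinverse obtained by iteration rather than direct factorization, can be sketched in floating point as below; this uses the classical Newton-Schulz iteration as a conventional stand-in and is not the paper's Hopfield-network or TrueNorth implementation.

```python
# Hedged sketch: iterative Moore-Penrose pseudoinverse via Newton-Schulz,
# a stand-in for the paper's Hopfield-network solver (assumption: not the
# TrueNorth implementation).
import numpy as np

def pinv_iterative(A, iters=50):
    # X0 = alpha * A^T with 0 < alpha < 2 / sigma_max(A)^2 guarantees convergence
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    X = alpha * A.T
    for _ in range(iters):
        X = X @ (2 * np.eye(A.shape[0]) - A @ X)  # X_{k+1} = X_k (2I - A X_k)
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.allclose(pinv_iterative(A), np.linalg.pinv(A), atol=1e-6))  # True
```

Iterations like this, built from repeated matrix-vector products, are exactly the kind of workload that maps naturally onto recurrent spiking substrates, which is what makes range and precision guarantees essential.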
Replacement Technologies for Precision Cleaning of Aerospace Hardware for Propellant Service
NASA Technical Reports Server (NTRS)
Beeson, Harold; Kirsch, Mike; Hornung, Steven; Biesinger, Paul
1997-01-01
The NASA White Sands Test Facility (WSTF) is developing cleaning and verification processes to replace currently used chlorofluorocarbon-113- (CFC-113-) based processes. The processes being evaluated include both aqueous- and solvent-based techniques. Replacement technologies are being investigated for aerospace hardware and for gauges and instrumentation. This paper includes the findings of investigations of aqueous cleaning and verification of aerospace hardware using known contaminants, such as hydraulic fluid and commonly used oils. The results correlate nonvolatile residue with CFC-113. The studies also include enhancements to aqueous sampling for organic and particulate contamination. Although aqueous alternatives have been identified for several processes, a need still exists for nonaqueous solvent cleaning, such as the cleaning and cleanliness verification of gauges used for oxygen service. The cleaning effectiveness of tetrachloroethylene (PCE), trichloroethylene (TCE), ethanol, hydrochlorofluorocarbon 225 (HCFC 225), HCFC 141b, HFE 7100(R), and Vertrel MCA(R) was evaluated using aerospace gauges and precision instruments and then compared to the cleaning effectiveness of CFC-113. Solvents considered for use in oxygen systems were also tested for oxygen compatibility using high-pressure oxygen autogenous ignition and liquid oxygen mechanical impact testing.
Demonstration of a small programmable quantum computer with atomic qubits.
Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C
2016-08-04
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
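A minimal sketch of the inversion recipe the abstract describes, assuming a generic forward matrix rather than the SXRIS geometry: Tikhonov-regularized least squares scanned over the regularization parameter, with the L-curve corner located as the point of maximum curvature.

```python
# Illustrative sketch (assumed, not the SXRIS code): Tikhonov inversion
# with an L-curve scan over the regularization parameter lambda.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))                 # stand-in line-integration matrix
x_true = np.zeros(50); x_true[20:30] = 1.0
b = A @ x_true + 0.05 * rng.normal(size=100)   # noisy "image"

lams = np.logspace(-4, 2, 60)
res_norm, sol_norm = [], []
for lam in lams:
    # solve min ||Ax - b||^2 + lam^2 ||x||^2 as an augmented least squares
    x = np.linalg.lstsq(np.vstack([A, lam * np.eye(50)]),
                        np.concatenate([b, np.zeros(50)]), rcond=None)[0]
    res_norm.append(np.log(np.linalg.norm(A @ x - b)))
    sol_norm.append(np.log(np.linalg.norm(x)))

# L-curve corner ~ point of maximum curvature of (log residual, log solution)
d1 = np.gradient(sol_norm, res_norm)
d2 = np.gradient(d1, res_norm)
curvature = np.abs(d2) / (1 + d1 ** 2) ** 1.5
print("optimum lambda ~", lams[np.argmax(curvature)])
```

The paper's generalized SVD approach factors the problem once and reuses it for every lambda; the brute-force scan above trades that efficiency for brevity.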
Carbon sequestration, optimum forest rotation and their environmental impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kula, Erhun, E-mail: erhun.kula@bahcesehir.edu.tr; Gunalay, Yavuz, E-mail: yavuz.gunalay@bahcesehir.edu.tr
2012-11-15
Due to their large biomass, forests assume an important role in the global carbon cycle by moderating the greenhouse effect of atmospheric pollution. The Kyoto Protocol recognises this contribution by allocating carbon credits to countries which are able to create new forest areas. Sequestrated carbon provides an environmental benefit and thus must be taken into account in cost-benefit analysis of afforestation projects. Furthermore, like timber output, carbon credits are now tradable assets in the carbon exchange. By using British data, this paper looks at the issue of identifying the optimum felling age by considering carbon sequestration benefits simultaneously with timber yields. The results of this analysis show that the inclusion of carbon benefits prolongs the optimum cutting age by requiring trees to stand longer in order to soak up more CO2. Consequently this finding must be considered in any carbon accounting calculations. Highlights: Carbon sequestration in forestry is an environmental benefit. It moderates the problem of global warming. It prolongs the gestation period in harvesting. This paper uses British data in less favoured districts for growing Sitka spruce species.
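A toy illustration of the paper's central point, with an entirely hypothetical yield curve and prices: crediting each year's carbon increment shifts the net-present-value-maximizing felling age later.

```python
# Toy rotation-age model (all numbers invented, not the paper's British data):
# carbon credits on annual growth increments lengthen the optimum rotation.
def vol(t):
    return 8.0 * t ** 1.3 / (1.0 + 0.002 * t ** 2)   # made-up yield curve (m^3/ha)

def npv(T, r=0.03, p_timber=30.0, p_carbon=15.0):
    timber = p_timber * vol(T) / (1 + r) ** T         # revenue at felling age T
    carbon = sum(p_carbon * (vol(t) - vol(t - 1)) / (1 + r) ** t
                 for t in range(1, T + 1))            # credit each year's increment
    return timber + carbon

best_timber_only = max(range(10, 120), key=lambda T: npv(T, p_carbon=0.0))
best_with_carbon = max(range(10, 120), key=npv)
print(best_timber_only, best_with_carbon)   # carbon benefits delay felling
```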
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
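For reference, the canonical DE/rand/1/bin loop analyzed in the paper can be stated compactly as below (textbook form; population size, F, and Cr are illustrative choices, not values from the paper).

```python
# Canonical DE/rand/1/bin sketch (textbook form of the variant analyzed).
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.8, Cr=0.9, gens=200, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])      # DE/rand/1 mutation
            cross = rng.random(dim) < Cr
            cross[rng.integers(dim)] = True                 # force one gene over
            trial = np.where(cross, mutant, pop[i])         # binomial crossover
            ft = f(trial)
            if ft <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
x, fx = de_rand_1_bin(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
print(fx)   # ~0: the population concentrates around the global optimum
```

Running this on a unimodal objective gives an empirical picture of the paper's analytical result: the distribution of trial solutions collapses toward the optimum over generations.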
An Orbit And Dispersion Correction Scheme for the PEP II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Donald, M.; Shoaee, H.
2011-09-01
To achieve optimum luminosity in a storage ring it is vital to control the residual vertical dispersion. In the original PEP storage ring, a scheme to control the residual dispersion function was implemented using the ring orbit as the controlling element, the 'best' orbit not necessarily giving the lowest vertical dispersion. A similar scheme has been implemented in both the on-line control code and in the simulation code LEGO. The method involves finding the response matrices (sensitivity of orbit/dispersion at each Beam-Position-Monitor (BPM) to each orbit corrector) and solving in a least squares sense for minimum orbit, dispersion function or both. The optimum solution is usually a subset of the full least squares solution. A scheme of simultaneously correcting the orbits and dispersion has been implemented in the simulation code and on-line control system for PEP-II. The scheme is based on the eigenvector decomposition method. An important ingredient of the scheme is to choose the optimum eigenvectors that minimize the orbit, dispersion and corrector strength. Simulations indicate this to be a very effective way to control the vertical residual dispersion.
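The least-squares machinery the abstract describes can be sketched generically as follows, with random stand-ins for the measured response matrices: stack weighted orbit and dispersion responses, take the SVD, and keep only the dominant eigenvectors when solving for corrector strengths.

```python
# Schematic sketch (not the PEP-II code): simultaneous orbit/dispersion
# correction by truncated SVD of a stacked, weighted response matrix.
import numpy as np

rng = np.random.default_rng(2)
n_bpm, n_corr = 40, 12
R_orbit = rng.normal(size=(n_bpm, n_corr))   # orbit response (assumed measured)
R_disp = rng.normal(size=(n_bpm, n_corr))    # dispersion response
w = 0.5                                      # relative weight on dispersion

orbit = rng.normal(size=n_bpm)               # measured orbit error
disp = rng.normal(size=n_bpm)                # measured dispersion error

A = np.vstack([R_orbit, w * R_disp])
b = np.concatenate([orbit, w * disp])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 8                                        # keep only the dominant eigenvectors
theta = -Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])   # corrector strengths

print(np.linalg.norm(b + A @ theta) / np.linalg.norm(b))  # residual reduction
```

Truncating at k eigenvectors is what keeps corrector strengths modest: the discarded directions are exactly the ones that demand large kicks for little orbit/dispersion benefit.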
A survey of the state of the art and focused research in range systems, task 2
NASA Technical Reports Server (NTRS)
Yao, K.
1986-01-01
Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems result in full precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions which are equivalent to powers-of-two multiplications. Since, in general, the obvious operation of rounding the infinite precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations on the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions are given. Specific demonstration of this methodology on the design of a linear data equalizer and its implementation in assembly language on an 8080 microprocessor with a 12-bit A/D converter are reported. This simple microprocessor implementation with optimized TDL coefficients achieves a system performance comparable to optimum linear equalization with full precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is duly applicable to many other microprocessor-controlled information processing systems.
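A toy version of the idea, with invented channel and tap values: because rounding each tap independently ignores system performance, the search below scores whole powers-of-two combinations by residual intersymbol interference (a brute-force stand-in for the report's branch-and-bound).

```python
# Toy powers-of-two tap search (channel and taps invented; brute force
# stands in for the report's branch-and-bound).
import itertools
import math
import numpy as np

channel = np.array([1.0, 0.45, 0.2])                 # hypothetical channel response
ideal = np.array([0.93, -0.41, 0.22, -0.09, 0.05])   # full-precision equalizer taps

def residual_isi(taps):
    y = np.convolve(channel, taps)                   # equalized impulse response
    d = np.zeros_like(y)
    m = np.argmax(np.abs(y)); d[m] = y[m]            # keep the main tap
    return float(np.sum((y - d) ** 2))               # energy off the main tap

def pow2_pair(c):
    e = math.log2(abs(c))
    return [math.copysign(2 ** math.floor(e), c),
            math.copysign(2 ** math.ceil(e), c)]     # two nearest powers of two

best = min(itertools.product(*(pow2_pair(c) for c in ideal)), key=residual_isi)
print(best, residual_isi(np.array(best)), residual_isi(ideal))
```

With 5 taps this is only 32 combinations; branch-and-bound matters when the tap count makes exhaustive enumeration infeasible.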
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
Looking Forward: Comment on Morgante, Zolfaghari, and Johnson
ERIC Educational Resources Information Center
Creel, Sarah C.
2012-01-01
Morgante et al. (in press) find inconsistencies in the time reporting of a Tobii T60XL eye tracker. Their study raises important questions about the use of the Tobii T-series in particular, and various software and hardware in general, in different infant eye tracking paradigms. It leaves open the question of the source of the inconsistencies.…
The MGS Avionics System Architecture: Exploring the Limits of Inheritance
NASA Technical Reports Server (NTRS)
Bunker, R.
1994-01-01
Mars Global Surveyor (MGS) avionics system architecture comprises much of the electronics on board the spacecraft: electrical power, attitude and articulation control, command and data handling, telecommunications, and flight software. Schedule and cost constraints dictated a mix of new and inherited designs, especially hardware upgrades based on findings of the Mars Observer failure review boards.
2017-04-19
A display at the Kennedy Space Center Visitor Complex describes the purpose of Swarmies. Computer scientists are developing these robots focusing not so much on the hardware, but the software. In the spaceport's annual Swarmathon, students from 12 colleges and universities across the nation were invited to develop software code to operate Swarmies to help find resources when astronauts explore distant planets, such as Mars.
On Compact Book Storage in Libraries.
ERIC Educational Resources Information Center
Ravindran, Arunachalam
The optimal storage of books by size in libraries is considered in this paper. It is shown that for a given collection of books of various sizes, the optimum number of shelf heights to use can be determined by finding the shortest path in an equivalent network. Applications of this model to inventory control, assortment and packaging problems are…
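A compact sketch of the idea with invented data: choosing shelf-height classes becomes a shortest path in a DAG whose edge costs combine a fixed per-class cost with the vertical space consumed. Node i means "all books up to height h[i] are shelved", and an edge j->i is one shelf class of height h[i] storing every book taller than h[j].

```python
# Shortest-path/DP sketch of shelf-height selection (data and the per-class
# setup cost are hypothetical, not from the paper).
heights = [18, 20, 22, 25, 28, 32]       # cm, sorted distinct book heights
counts  = [120, 340, 510, 200, 90, 40]   # books of each height
SETUP = 500.0                            # fixed cost per shelf-height class

INF = float("inf")
best = [INF] * (len(heights) + 1); best[0] = 0.0
choice = [0] * (len(heights) + 1)
for i in range(1, len(heights) + 1):
    for j in range(i):                               # edge j -> i
        stored = sum(counts[j:i])                    # books in this class
        cost = best[j] + SETUP + heights[i - 1] * stored
        if cost < best[i]:
            best[i], choice[i] = cost, j

# Walk the shortest path back to list the chosen shelf heights
i, classes = len(heights), []
while i > 0:
    classes.append(heights[i - 1]); i = choice[i]
print(best[-1], sorted(classes))
```

The setup cost is what makes the optimum number of shelf heights finite: without it, one class per distinct height is always best.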
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
OPTIGRAMI: Optimum lumber grade mix program for hardwood dimension parts
David G. Martens; Robert L. Nevel, Jr.
1985-01-01
With rapidly increasing lumber prices and shortages of some grades and species, the furniture industry must find ways to use its hardwood lumber resource more efficiently. A computer program called OPTIGRAMI is designed to help managers determine the best lumber to use in producing furniture parts. OPTIGRAMI determines the least-cost grade mix of lumber required to...
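A hedged sketch of what such a least-cost grade-mix model looks like as a linear program (grades, prices, yields, and availability below are invented for illustration, not OPTIGRAMI's data).

```python
# Least-cost lumber grade-mix LP sketch (all numbers hypothetical).
from scipy.optimize import linprog

# decision variables: board feet to buy in grades FAS, No.1 Common, No.2 Common
prices = [1.80, 1.10, 0.70]          # $/board foot
yields = [0.65, 0.45, 0.30]          # usable cutting yield per grade
demand = 10_000                      # board feet of clear parts required

# minimize cost  s.t.  sum(yield_i * x_i) >= demand,  0 <= x_i <= availability
res = linprog(c=prices,
              A_ub=[[-y for y in yields]], b_ub=[-demand],
              bounds=[(0, 15_000)] * 3,   # limited availability per grade
              method="highs")
print(res.x, res.fun)                # optimum mix and its total cost
```

With unlimited supply the LP would buy only the grade with the cheapest cost per usable board foot; the availability bounds are what force a genuine grade mix.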
Environmental Conditions for Space Flight Hardware: A Survey
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Lee, Brandon
2005-01-01
Interest in generalizing the physical environment experienced by NASA hardware, from the natural Earth environment (on the launch pad), the man-made environment on Earth (storage, acceptance and qualification testing), the launch environment, and the space environment, is aimed at finding commonality among our hardware in an effort to reduce cost and complexity. NASA is entering a period of increase in its number of planetary missions, and it is important to understand how our qualification requirements will evolve with and track these new environments. Environmental conditions are described for NASA projects in several ways for the different periods of the mission life cycle. At the beginning, the mission manager defines survivability requirements based on the mission length, orbit, launch date, launch vehicle, and other factors, such as the use of reactor engines. Margins are then applied to these values (temperature extremes, vibration extremes, radiation tolerances, etc.) and a new set of conditions is generalized for design requirements. Mission assurance documents will then assign an additional margin for reliability, and a third set of values is provided for testing. A fourth set of environmental condition values may evolve intermittently from heritage hardware that has been tested to a level beyond the actual mission requirement. These various sets of environment figures can make it quite confusing and difficult to capture common hardware environmental requirements. Environmental requirement information can be found in a wide variety of places. The most obvious is with the individual projects. We can easily get answers to questions about temperature extremes being used and radiation tolerance goals, but it is more difficult to map the answers to the process that created these requirements: for design, for qualification, and for the actual environment with no margin applied. Not everyone assigned to a NASA project may have that kind of insight, as many have only the environmental requirement numbers needed to do their jobs but do not necessarily have a programmatic-level understanding of how all of the environmental requirements fit together.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
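The detection statistic itself is easy to state in a few lines of NumPy (a numerical illustration, not the ROACH firmware): for circular complex Gaussian noise the normalized fourth moment is 2, and a constant-envelope interferer pulls it away from that value.

```python
# Complex-signal kurtosis RFI test, numerical sketch (not the firmware).
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
cw = 0.3 * np.exp(2j * np.pi * 0.12 * np.arange(n))   # continuous-wave RFI

def complex_kurtosis(x):
    x = x - x.mean()
    p = np.mean(np.abs(x) ** 2)
    return np.mean(np.abs(x) ** 4) / p ** 2            # = 2 for complex Gaussian

print(complex_kurtosis(noise))        # ~2.0 -> consistent with clean sky
print(complex_kurtosis(noise + cw))   # deviates from 2 -> flag as RFI
```

In practice the statistic is computed per block and per subband, and the flagging threshold is set from the sampling distribution of the estimator, which is how ROC curves like those in the paper are generated.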
Laser Peening Effects on Friction Stir Welding
NASA Technical Reports Server (NTRS)
Hatameleh, Omar
2009-01-01
The laser peening process can result in considerable improvement in resistance to crack initiation and propagation and in the mechanical properties of friction stir welds (FSW), which equates to longer hardware service life. Processed hardware safety is improved by producing more failure-tolerant hardware and reducing risk. Longer hardware service life and lower hardware down time in turn lower hardware maintenance cost. Application of this proposed technology will result in substantial benefits and savings throughout the life of the treated components.
The Evolution of Exercise Hardware on ISS: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.
2017-01-01
During 16 years in low-Earth orbit, the suite of exercise hardware aboard the International Space Station (ISS) has matured significantly. Today, the countermeasure system supports an array of physical-training protocols and serves as an extensive research platform. Future hardware designs are required to have smaller operational envelopes and must also mitigate known physiologic issues observed in long-duration spaceflight. Taking lessons learned from the long history of space exercise will be important to successful development and implementation of future, compact exercise hardware. The evolution of exercise hardware as deployed on the ISS has implications for future exercise hardware and operations. Key lessons learned from the early days of ISS have helped to: 1. Enhance hardware performance (increased speed and loads). 2. Mature software interfaces. 3. Compare inflight exercise workloads to pre-, in-, and post-flight musculoskeletal and aerobic conditions. 4. Improve exercise comfort. 5. Develop complimentary hardware for research and operations. Current ISS exercise hardware includes both custom and commercial-off-the-shelf (COTS) hardware. Benefits and challenges to this approach have prepared engineering teams to take a hybrid approach when designing and implementing future exercise hardware. Significant effort has gone into consideration of hardware instrumentation and wearable devices that provide important data to monitor crew health and performance.
GIS-Based Route Finding Using Ant Colony Optimization and Urban Traffic Data from Different Sources
NASA Astrophysics Data System (ADS)
Davoodi, M.; Mesgari, M. S.
2015-12-01
Nowadays traffic data is obtained from multiple sources including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied in this paper because the concept of this method (movement of ants in a network) is similar to urban road network and movements of cars. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
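A minimal ant-colony route-finding sketch on a toy weighted road graph (graph, pheromone rule, and constants are illustrative, not the paper's model): ants choose edges by pheromone and inverse cost, and shorter tours deposit more pheromone.

```python
# Toy ant colony optimization for shortest route (illustrative constants).
import random

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4},
         "C": {"D": 2}, "D": {}}
tau = {(u, v): 1.0 for u in graph for v in graph[u]}     # pheromone per edge
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0

def walk(src, dst):
    path, node = [src], src
    while node != dst:
        nbrs = [v for v in graph[node] if v not in path]
        if not nbrs:
            return None, float("inf")
        w = [tau[(node, v)] ** ALPHA * (1 / graph[node][v]) ** BETA for v in nbrs]
        node = random.choices(nbrs, weights=w)[0]
        path.append(node)
    return path, sum(graph[a][b] for a, b in zip(path, path[1:]))

best = (None, float("inf"))
for _ in range(200):                                     # one ant per iteration
    path, cost = walk("A", "D")
    if path:
        for e in zip(path, path[1:]):                    # evaporate + deposit
            tau[e] = (1 - RHO) * tau[e] + Q / cost       # (traversed edges only)
        if cost < best[1]:
            best = (path, cost)
print(best)    # expected: (['A', 'B', 'C', 'D'], 5)
```

Fusing multi-source traffic data in this framework amounts to setting the edge weights from the integrated travel-time estimates rather than from a single sensor type.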
Analog Approach to Constraint Satisfaction Enabled by Spin Orbit Torque Magnetic Tunnel Junctions.
Wijesinghe, Parami; Liyanagedera, Chamika; Roy, Kaushik
2018-05-02
Boolean satisfiability (k-SAT) is an NP-complete (k ≥ 3) problem that constitutes one of the hardest classes of constraint satisfaction problems. In this work, we provide a proof-of-concept hardware-based analog k-SAT solver, built using Magnetic Tunnel Junctions (MTJs). The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog satisfiability (SAT) solver. In the presence of thermal noise, the MTJ based system can successfully solve Boolean satisfiability problems. Most importantly, our results exhibit that the proposed MTJ based hardware SAT solver is capable of finding a solution to a significant fraction (at least 85%) of hard 3-SAT problems, within a time that has a polynomial relationship with the number of variables (<50).
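A software analogue of noise-driven SAT solving (a WalkSAT-style local search, offered only as an assumed illustration of the dynamics; the paper's solver is analog MTJ hardware):

```python
# WalkSAT-style stochastic 3-SAT sketch: random noisy flips play the role
# that thermal noise plays in the analog MTJ solver (an analogy, not the
# paper's method). Literals are signed ints: +v means v, -v means NOT v.
import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5, seed=3):
    random.seed(seed)
    assign = [random.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)

    def unsat_after_flip(v):                 # score a candidate flip
        assign[v] = not assign[v]
        n = sum(1 for c in clauses if not sat(c))
        assign[v] = not assign[v]
        return n

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign[1:]                # satisfying assignment found
        c = random.choice(unsat)
        v = (abs(random.choice(c)) if random.random() < p   # noise-like flip
             else min((abs(l) for l in c), key=unsat_after_flip))  # greedy flip
        assign[v] = not assign[v]
    return None

clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
print(walksat(clauses, 3))
```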
FPGA based data processing in the ALICE High Level Trigger in LHC Run 2
NASA Astrophysics Data System (ADS)
Engel, Heiko; Alt, Torsten; Kebschull, Udo
2017-10-01
The ALICE High Level Trigger (HLT) is a computing cluster dedicated to the online compression, reconstruction and calibration of experimental data. The HLT receives detector data via serial optical links into FPGA based readout boards that process the data on a per-link level already inside the FPGA and provide it to the host machines connected with a data transport framework. FPGA based data pre-processing is enabled for the biggest detector of ALICE, the Time Projection Chamber (TPC), with a hardware cluster finding algorithm. This algorithm was ported to the Common Read-Out Receiver Card (C-RORC) as used in the HLT for Run 2. It was improved to handle double the input bandwidth and adjusted to the upgraded TPC Readout Control Unit (RCU2). A flexible firmware implementation in the HLT handles both the old and the new TPC data format and link rates transparently. Extended protocol and data error detection, error handling and the enhanced RCU2 data ordering scheme provide an improved physics performance of the cluster finder. The performance of the cluster finder was verified against large sets of reference data both in terms of throughput and algorithmic correctness. Comparisons with a software reference implementation confirm significant savings on CPU processing power using the hardware implementation. The C-RORC hardware with the cluster finder for RCU1 data has been in use in the HLT since the start of Run 2. The extended hardware cluster finder implementation for the RCU2 with doubled throughput has been active since the upgrade of the TPC readout electronics in early 2016.
A neuromorphic network for generic multivariate data classification
Schmuker, Michael; Pfeil, Thomas; Nawrot, Martin Paul
2014-01-01
Computational neuroscience has uncovered a number of computational principles used by nervous systems. At the same time, neuromorphic hardware has matured to a state where fast silicon implementations of complex neural networks have become feasible. En route to future technical applications of neuromorphic computing the current challenge lies in the identification and implementation of functional brain algorithms. Taking inspiration from the olfactory system of insects, we constructed a spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. In this model, real-valued multivariate data are converted into spike trains using “virtual receptors” (VRs). Their output is processed by lateral inhibition and drives a winner-take-all circuit that supports supervised learning. VRs are conveniently implemented in software, whereas the lateral inhibition and classification stages run on accelerated neuromorphic hardware. When trained and tested on real-world datasets, we find that the classification performance is on par with a naïve Bayes classifier. An analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, matching the time-to-decision reported for the insect nervous system. Through leveraging a population code, the network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems. PMID:24469794
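The processing chain the abstract describes, virtual-receptor rates followed by lateral inhibition and a winner-take-all readout, can be caricatured in a few NumPy lines (rates and inhibition strength are hypothetical):

```python
# Schematic NumPy caricature of the model's pipeline (not the neuromorphic
# hardware): VR firing rates -> lateral inhibition -> winner-take-all label.
import numpy as np

rates = np.array([12.0, 30.0, 25.0, 8.0])       # hypothetical VR firing rates
others_mean = (rates.sum() - rates) / (len(rates) - 1)
inhibited = np.clip(rates - 0.5 * others_mean, 0.0, None)  # lateral inhibition
winner = int(np.argmax(inhibited))              # WTA picks the class label
print(inhibited, winner)
```

Lateral inhibition sharpens the contrast between populations before the WTA stage, which is what lets the hardware reach stable decisions despite noisy, variable neuronal transfer functions.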
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators—parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
Robust electromagnetic absorption by graphene/polymer heterostructures
NASA Astrophysics Data System (ADS)
Lobet, Michaël; Reckinger, Nicolas; Henrard, Luc; Lambin, Philippe
2015-07-01
Polymer/graphene heterostructures present good shielding efficiency against GHz electromagnetic perturbations. Theory and experiments demonstrate that there is an optimum number of graphene planes, separated by thin polymer spacers, leading to maximum absorption for millimeter waves (Batrakov et al 2014 Sci. Rep. 4 7191). Here, the electrodynamics of an ideal polymer/graphene multilayered material is first approached with a well-adapted continued-fraction formalism. In a second stage, rigorous coupled wave analysis is used to account for the presence of defects in graphene that are typical of samples produced by chemical vapor deposition, namely microscopic holes, microscopic dots (embryos of a second layer) and grain boundaries. It is shown that the optimum absorbance of graphene/polymer multilayers does not weaken to first order in defect concentration. This finding testifies to the robustness of the shielding efficiency of the proposed absorption device.
A Space-Saving Approximation Algorithm for Grammar-Based Compression
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroshi; Maruyama, Shirou; Kida, Takuya; Shimozono, Shinichi
A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For the input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log … log n > 1. This ratio is thus regarded as almost O(log n), which is currently the best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n / log_k n) for strings over a k-letter alphabet [12].
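For intuition, grammar-based compression can be illustrated with the simpler Re-Pair-style scheme below (a different algorithm from the paper's, shown only to make the problem concrete): repeatedly replace the most frequent adjacent pair by a fresh nonterminal.

```python
# Re-Pair-style grammar compression toy (not the paper's algorithm):
# replace the most frequent adjacent symbol pair with a new nonterminal
# until no pair occurs twice.
from collections import Counter

def grammar_compress(s):
    seq, rules, next_id = list(s), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))      # naive pair counts
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f"R{next_id}"; next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                     # left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

print(grammar_compress("abababab"))
# (['R1', 'R1'], {'R0': ('a', 'b'), 'R1': ('R0', 'R0')})
```

The start sequence plus the rule set together form the grammar; the paper's contribution is achieving near-optimal grammar size within O(g log g) working space, which this toy makes no attempt to do.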
Column with CNT/magnesium oxide composite for lead(II) removal from water.
Saleh, Tawfik A; Gupta, Vinod K
2012-05-01
In this study, a manganese dioxide-coated multiwall carbon nanotube (MnO2/CNT) nanocomposite has been successfully synthesized. The as-produced nanocomposite was characterized by different characterization tools, such as X-ray diffraction, SEM, and FTIR. The MnO2/CNT nanocomposite was utilized as a fixed bed in a column system for removal of lead(II) from water. The experimental conditions were investigated and optimized. The pH range between 3 and 7 was studied; the optimum removal was found when the pH was equal to 6 and 7. The thickness of the MnO2/CNT nanocomposite compact layer was also varied to find the optimum parameter for higher removal. It was observed that the slower the flow rate of the feed solution, the higher the removal, because of the larger contact time.
Characteristics and Energy Use of Volume Servers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchs, H.; Shehabi, A.; Ganeshalingam, M.
Servers’ field energy use remains poorly understood, given heterogeneous computing loads, configurable hardware and software, and operation over a wide range of management practices. This paper explores various characteristics of 1- and 2-socket volume servers that affect energy consumption, and quantifies the difference in power demand between higher-performing SPEC and ENERGY STAR servers and our best understanding of a typical server operating today. We first establish general characteristics of the U.S. installed base of volume servers from existing IDC data and the literature, before presenting information on server hardware configurations from data collection events at a major online retail website. We then compare cumulative distribution functions of server idle power across three separate datasets and explain the differences between them via examination of the hardware characteristics to which power draw is most sensitive. We find that idle server power demand is significantly higher than ENERGY STAR benchmarks and the industry-released energy use documented in SPEC, and that SPEC server configurations—and likely the associated power-scaling trends—are atypical of volume servers. Next, we examine recent trends in server power draw among high-performing servers across their full load range to consider how representative these trends are of all volume servers before inputting weighted average idle power load values into a recently published model of national server energy use. Finally, we present results from two surveys of IT managers (n=216) and IT vendors (n=178) that illustrate the prevalence of more-efficient equipment and operational practices in server rooms and closets; these findings highlight opportunities to improve the energy efficiency of the U.S. server stock.
Stability and Response of Polygenic Traits to Stabilizing Selection and Mutation
de Vladar, Harold P.; Barton, Nick
2014-01-01
When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation–selection balance is 2μ/S per locus, where μ is the mutation rate and S the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute 2μ/S each, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of 2μ/S remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits of equal effects, and the fitness equilibria are more similar. We find that if the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded by μS. Long-term response leads in general to well-adapted traits, unlike the case of equal effects that often end up at a suboptimal fitness peak. However, the particular peaks to which the populations converge are extremely sensitive to the initial states and to the speed of the shift of the optimum trait value. PMID:24709633
A new approach using coagulation rate constant for evaluation of turbidity removal
NASA Astrophysics Data System (ADS)
Al-Sameraiy, Mukheled
2017-06-01
Coagulation-flocculation-sedimentation processes for treating three levels of bentonite synthetic turbid water using date seeds (DS) and alum (A) coagulants were investigated in previous research. In the current research, the same experimental results were used to adopt a new approach based on the coagulation rate constant as the investigated parameter for identifying the optimum doses of these coagulants. Moreover, the performance of these coagulants against the WHO turbidity standard was assessed by introducing a new evaluation criterion, the critical coagulation rate constant (kc). Coagulation rate constants (k2) were calculated for each coagulant from the second-order form of the coagulation process. The doses corresponding to the maximum (k2) values were taken as the optimum doses. The proposed criterion for assessing the performance of the coagulation process was based on the mathematical representation of the WHO turbidity guidelines in second-order form, which states that (k2) for each coagulant should be ≥ (kc) for each level of synthetic turbid water. For all tested turbid waters, the DS coagulant could not satisfy this criterion, while the A coagulant could. The results obtained in the present research agree exactly with the previously published results in terms of finding the optimum doses for each coagulant and assessing their performance. On the whole, it is recommended to consider the coagulation rate constant as a new approach and indicator for investigating optimum doses, and the critical coagulation rate constant as a new evaluation criterion to assess coagulant performance.
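The computation behind (k2) is a one-line fit if one assumes second-order kinetics, 1/N(t) = 1/N0 + k2 t; the sketch below uses invented turbidity readings.

```python
# Second-order coagulation rate constant from settling data (readings are
# invented for illustration): k2 is the slope of 1/turbidity vs. time.
import numpy as np

t = np.array([0, 5, 10, 20, 30], dtype=float)     # settling time, minutes
ntu = np.array([100.0, 62.0, 45.0, 29.0, 21.0])   # residual turbidity, NTU

k2, intercept = np.polyfit(t, 1.0 / ntu, 1)       # linear fit of 1/N vs t
print(k2)   # larger k2 -> faster coagulation -> candidate optimum dose
```

Repeating this fit across the dose series and picking the dose with the largest k2 is exactly the paper's optimum-dose criterion; comparing that k2 against kc gives the pass/fail test against the WHO standard.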
NASA Astrophysics Data System (ADS)
Soomere, Tarmo; Berezovski, Mihhail; Quak, Ewald; Viikmäe, Bert
2011-10-01
We address possibilities of minimising environmental risks using statistical features of current-driven propagation of adverse impacts to the coast. The recently introduced method for finding the optimum locations of potentially dangerous activities (Soomere et al. in Proc Estonian Acad Sci 59:156-165, 2010) is expanded towards accounting for the spatial distributions of probabilities and times for reaching the coast for passively advecting particles released in different sea areas. These distributions are calculated using large sets of Lagrangian trajectories found from Eulerian velocity fields provided by the Rossby Centre Ocean Model with a horizontal resolution of 2 nautical miles for 1987-1991. The test area is the Gulf of Finland in the northeastern Baltic Sea. The potential gain from using the optimum fairways from the Baltic Proper to the eastern part of the gulf is a decrease of up to 44% in the probability of coastal pollution and a similar increase in the average time for reaching the coast. The optimum fairways are mostly located to the north of the gulf axis (by 2-8 km on average) and meander substantially in some sections. The robustness of this approach is quantified as the typical root mean square deviation (6-16 km) between the optimum fairways specified from different criteria. Drastic variations in the width of the 'corridors' for almost optimal fairways (2-30 km, with an average width of 15 km) signify that the sensitivity of the results with respect to small changes in the environmental criteria varies greatly in different parts of the gulf.
Nespolo, Roberto F; Figueroa, Julio; Solano-Iguaran, Jaiber J
2017-08-01
A fundamental problem in evolutionary biology is understanding the factors that promote or constrain adaptive evolution, and assessing the role of natural selection in this process. Here, comparative phylogenetics, that is, using phylogenetic information and traits to infer evolutionary processes, has been a major paradigm. In this study, we discuss Ornstein-Uhlenbeck (OU) models in the context of thermal adaptation in ectotherms. We specifically applied this approach to study amphibian evolution and energy metabolism. It has been hypothesized that amphibians exploit adaptive zones characterized by low energy expenditure, which generates specific predictions in terms of the patterns of diversification in standard metabolic rate (SMR). We compiled whole-animal metabolic rates for 122 species of amphibians and adjusted several models of diversification. According to the adaptive zone hypothesis, we expected: (1) to find "accelerated evolution" in SMR (i.e., diversification above Brownian Motion expectations, BM), (2) that a model assuming evolutionary optima (i.e., an OU model) fits better than a white-noise model, and (3) that a model assuming multiple optima (according to the three amphibian orders) fits better than a model assuming a single optimum. As predicted, we found that the diversification of SMR occurred, most of the time, above BM expectations. Also, we found that a model assuming an optimum explained the data better than a white-noise model. However, we did not find evidence that an OU model with multiple optima fits the data better, suggesting a single optimum in SMR for Anura, Caudata and Gymnophiona. These results show how comparative phylogenetics can be applied to test adaptive hypotheses regarding history and physiological performance in ectotherms.
Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen
2017-09-01
The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers and plastics, and in refrigeration. In the literature, many works approaching the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods, such as evolutionary algorithms (EA), which are based on natural phenomena, can overcome drawbacks such as the requirement for derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results showed that the presented numerical method could be applied to model the ammonia synthesis reactor. The optimum economic profit obtained from this study is also compared to results from the literature. It suggests that the process should be operated at a higher feed-gas temperature in the catalyst zone and with a slightly longer reactor.
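The solution strategy described here (RK4 integration inside a penalized objective searched by a GA) can be sketched on a toy stand-in problem; the kinetics, bounds and profit function below are invented for illustration and are not the paper's reactor model.

```python
import numpy as np

# Toy sketch of the described pipeline: decision variables x = (reactor
# length L, feed-gas temperature T0); the objective integrates a made-up
# conversion ODE with 4th-order Runge-Kutta and handles bounds by penalty.
rng = np.random.default_rng(1)

def rk4(f, y0, t0, t1, n=100):
    y, t, h = y0, t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return y

def profit(x):
    L, T0 = x
    conv = rk4(lambda z, c: (T0 / 700.0) * (1.0 - c), 0.0, 0.0, L)  # toy kinetics
    penalty = 1e3 * (max(0.0, 400.0 - T0) ** 2 + max(0.0, T0 - 700.0) ** 2
                     + max(0.0, 2.0 - L) ** 2 + max(0.0, L - 12.0) ** 2)
    return 100.0 * conv - 5.0 * L - penalty   # revenue minus reactor cost

# plain generational GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.uniform([2.0, 400.0], [12.0, 700.0], size=(40, 2))
for _ in range(200):
    fit = np.array([profit(p) for p in pop])
    a, b = rng.integers(0, 40, 40), rng.integers(0, 40, 40)
    parents = pop[np.where(fit[a] > fit[b], a, b)]       # binary tournaments
    w = rng.random((40, 1))
    pop = w * parents + (1 - w) * parents[::-1]          # blend crossover
    pop += rng.normal(0.0, [0.2, 5.0], pop.shape) * (rng.random((40, 1)) < 0.2)
best = pop[np.argmax([profit(p) for p in pop])]
print("best (L, T0):", np.round(best, 2))
```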
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems
Huang, Shuqiang; Tao, Ming
2017-01-01
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest), so it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from becoming trapped in a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms. PMID:28117735
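The pairwise competition that distinguishes CSO from PSO is compact enough to sketch. The update below follows the general CSO scheme (the winner of each comparison passes unchanged; the loser learns from the winner and the swarm mean) on a generic test function rather than the paper's K-center deployment problem; the adaptive opposition-based additions are omitted.

```python
import numpy as np

# Minimal competitive swarm optimizer: random pairing, loser-only updates.
def cso(f, dim=10, n=40, iters=500, phi=0.1, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    for _ in range(iters):
        order = rng.permutation(n)
        mean = x.mean(axis=0)                          # swarm mean position
        for a, b in zip(order[::2], order[1::2]):
            win, lose = (a, b) if f(x[a]) < f(x[b]) else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            v[lose] = (r1 * v[lose]
                       + r2 * (x[win] - x[lose])       # learn from the winner
                       + phi * r3 * (mean - x[lose]))  # pull toward swarm mean
            x[lose] = np.clip(x[lose] + v[lose], lo, hi)
    best = min(x, key=f)
    return best, f(best)

sphere = lambda p: float(np.sum(p ** 2))
best, val = cso(sphere)
print("best value:", val)
```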
Morin, B R; Kinzig, A P; Levin, S A; Perrings, C A
2017-09-29
Does society benefit from encouraging or discouraging private infectious disease-risk mitigation? Private individuals routinely mitigate infectious disease risks through the adoption of a range of precautions, from vaccination to changes in their contact with others. Such precautions have epidemiological consequences. Private disease-risk mitigation generally reduces both peak prevalence of symptomatic infection and the number of people who fall ill. At the same time, however, it can prolong an epidemic. A reduction in prevalence is socially beneficial. Prolongation of an epidemic is not. We find that for a large class of infectious diseases, private risk mitigation is socially suboptimal: either too low or too high. The social optimum requires either more or less private mitigation. Since private mitigation effort depends on the cost of mitigation and the cost of illness, interventions that change either of these costs may be used to alter mitigation decisions. We model the potential for instruments that affect the cost of illness to yield net social benefits. We find that where a disease is not very infectious or the duration of illness is short, it may be socially optimal to promote private mitigation effort by increasing the cost of illness. By contrast, where a disease is highly infectious or long-lasting, it may be optimal to discourage private mitigation by reducing the cost of disease. Society would prefer a shorter, more intense epidemic to a longer, less intense epidemic. There is, however, a region in parameter space where the relationship is more complicated. For moderately infectious diseases with medium infectious periods, the social optimum depends on interactions between prevalence and duration. Basic reproduction numbers are not sufficient to predict the social optimum.
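The prevalence-versus-duration trade-off at the core of the argument can be reproduced with a bare-bones SIR run (a sketch with invented parameters, not the authors' coupled economic-epidemiological model): lowering the contact rate cuts peak prevalence but stretches the epidemic out in time.

```python
import numpy as np

# Discrete-time SIR; mitigation is modeled as a lower contact rate beta.
def sir(beta, gamma=0.1, i0=1e-3, dt=0.1, t_max=600.0):
    s, i, t, peak, t_end = 1.0 - i0, i0, 0.0, 0.0, 0.0
    while i > 1e-4 and t < t_max:
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, t = s - new_inf, i + new_inf - new_rec, t + dt
        peak = max(peak, i)
        t_end = t
    return peak, t_end

for label, beta in [("no mitigation", 0.4), ("private mitigation", 0.25)]:
    peak, dur = sir(beta)
    print(f"{label:18s} peak prevalence = {peak:.3f}, epidemic length ~ {dur:.0f} days")
```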
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces a novel model for the design, visualization and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and the software application. It is built on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. Possible model implementation in FPGA chips and its management by object-oriented software in Java are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goebel, J
2004-02-27
Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen periodically each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We expanded the basic ideas to other evaluations such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges in quality, so a systematic method and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.
Weak measurements beyond the Aharonov-Albert-Vaidman formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Shengjun; Li Yang
2011-05-15
We extend the idea of weak measurements to the general case, provide a complete treatment, and obtain results both for the regime when the preselected and postselected states (PPS) are almost orthogonal and for the regime when they are exactly orthogonal. Surprisingly, we find that for a fixed interaction strength, there may exist a maximum signal amplification and a corresponding optimum overlap of the PPS to achieve it. For weak measurements in the orthogonal regime, we find interesting quantities that play the same role that weak values play in the nonorthogonal regime.
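For context, the standard Aharonov-Albert-Vaidman result that this work generalizes relates the pointer shift to the weak value (a textbook formula, reproduced here only as background):

```latex
A_w \;=\; \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```

The AAV expression formally diverges as the PPS overlap ⟨φ|ψ⟩ → 0, whereas the result above shows that at fixed, finite interaction strength the attainable amplification stays bounded and peaks at a nonzero optimum overlap.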
Stamp, Melanie E M; Jötten, Anna M; Kudella, Patrick W; Breyer, Dominik; Strobl, Florian G; Geislinger, Thomas M; Wixforth, Achim; Westerhausen, Christoph
2016-10-21
Cell adhesion processes are of ubiquitous importance for biomedical applications such as optimization of implant materials. Here, not only physiological conditions such as temperature or pH, but also topographical structures play crucial roles, as inflammatory reactions after surgery can diminish osseointegration. In this study, we systematically investigate cell adhesion under static, dynamic and physiologically relevant conditions employing a lab-on-a-chip system. We screen adhesion of the bone osteosarcoma cell line SaOs-2 on a titanium implant material for pH and temperature values in the physiological range and beyond, to explore the limits of cell adhesion, e.g., for feverish and acidic conditions. A detailed study of different surface roughness R_q gives insight into the correlation between the cells' abilities to adhere and withstand shear flow and the topography of the substrates, finding a local optimum at R_q = 22 nm. We use shear stress induced by acoustic streaming to determine a measure for the ability of cell adhesion under an external force for various conditions. We find an optimum of cell adhesion for T = 37 °C and pH = 7.4 with decreasing cell adhesion outside the physiological range, especially for high T and low pH. We find constant detachment rates in the physiological regime, but this behavior tends to collapse at the limits of 41 °C and pH 4.
Satellite Communication Hardware Emulation System (SCHES)
NASA Technical Reports Server (NTRS)
Kaplan, Ted
1993-01-01
Satellite Communication Hardware Emulator System (SCHES) is a powerful simulator that emulates the hardware used in TDRSS links. SCHES is a true bit-by-bit simulator that models communications hardware accurately enough to be used as a verification mechanism for actual hardware tests on user spacecraft. As a credit to its modular design, SCHES is easily configurable to model any user satellite communication link, though some development may be required to tailor existing software to user specific hardware.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... preliminarily found that Foshan Nanhai Jiujiang Quan Li Spring Hardware Factory's (``Quan Li'') sale was non-bona fide, and announced our preliminary intent to rescind Quan Li's NSR. For the final results of this review, we continue to find Quan Li's sale to be non-bona fide. Therefore, because there were no other...
ERIC Educational Resources Information Center
Jenson, Jennifer; Rose, Chloë Brushwood
2006-01-01
With the large-scale acquisition and installation of computer and networking hardware in schools across Canada, a major concern has been where to locate these new technologies and whether and how the structure of the school might itself be made to accommodate these new technologies. In this paper, we suggest that the physical location and…
A Proposal for the Creation of a Diagnostics and Power Port Standard
NASA Technical Reports Server (NTRS)
Willeke, Thomas
2005-01-01
This paper discusses plans for coping with communication failure due to lost hardware during Moon and Mars exploration missions. The author proposes the creation of a Diagnostics and Power Port (DPP) for use in the face of total communication failure. The DPP would have a number of different power channels to replicate computer diagnostic abilities and find the root cause of failure.
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on semirigid connection concept is developed and the colliding bodies optimization algorithm is employed to find optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to the conventional design methods to show the power or weakness of the algorithm. PMID:25202717
NASA Astrophysics Data System (ADS)
Kehoe, S.; Stokes, J.
2011-03-01
Physicochemical properties of hydroxyapatite (HAp) synthesized by the chemical precipitation method are heavily dependent on the chosen process parameters. A Box-Behnken three-level experimental design was therefore chosen to determine the optimum set of process parameters and their effect on various HAp characteristics. These effects were quantified using design of experiments (DoE) to develop mathematical models, based on the Box-Behnken design, in terms of the chemical precipitation process parameters. Findings from this research show that HAp possessing optimum powder characteristics for orthopedic application via a thermal spray technique can be prepared using the following chemical precipitation process parameters: reaction temperature 60 °C, ripening time 48 h, and stirring speed 1500 rpm with high reagent concentrations. Ripening time and stirring speed significantly affected the final phase purity within the experimental conditions of the Box-Behnken design. An increase in both the ripening time (36-48 h) and stirring speed (1200-1500 rpm) was found to result in an increase of phase purity from 47(±2)% to 85(±2)%. Crystallinity, crystallite size, lattice parameters, and mean particle size were also optimized within the research to find settings that achieve results suitable for FDA regulations.
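For readers unfamiliar with the design, a three-factor Box-Behnken matrix and the quadratic response-surface fit it supports look as follows; the design construction is standard, while the response values are invented placeholders, not the paper's data.

```python
import numpy as np
from itertools import combinations

# Three-factor Box-Behnken design in coded units (-1, 0, +1): the 12 edge
# midpoints of the cube plus a center point, then a quadratic model fit.
def box_behnken_3():
    runs = []
    for i, j in combinations(range(3), 2):     # vary two factors at a time
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0, 0, 0]
                row[i], row[j] = a, b
                runs.append(row)
    runs.append([0, 0, 0])                     # center point
    return np.array(runs, dtype=float)

X = box_behnken_3()                            # 13 runs, 3 coded factors
# factors stand for temperature, ripening time, stirring speed; fake purity %
y = np.array([47, 55, 60, 68, 50, 63, 58, 72, 52, 61, 66, 80, 85], float)

# quadratic model: intercept, linear, two-factor interaction and square terms
cols = [np.ones(len(X))] + [X[:, k] for k in range(3)]
cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
cols += [X[:, k] ** 2 for k in range(3)]
A = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(beta, 2))
```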
Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven
2010-05-01
Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water is the most important natural need for human life. Providing this need requires the design and construction of water distribution networks, which incur enormous costs on the country's budget. Any reduction in these costs enables more people in society to be served at the least cost. Municipal councils therefore need to maximize benefits or minimize expenditures, and to achieve this purpose the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (northwest Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined. The GA-based model could find optimum pipe diameters to reduce the design costs of the network. Results show that water distribution network design using a genetic algorithm could lead to a reduction of at least 7% in project costs in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID
2010-09-21
The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
Innovative Contamination Certification of Multi-Mission Flight Hardware
NASA Technical Reports Server (NTRS)
Hansen, Patricia A.; Hughes, David W.; Montt, Kristina M.; Triolo, Jack J.
1998-01-01
Maintaining contamination certification of multi-mission flight hardware is an innovative approach to controlling mission costs. Methods for assessing ground induced degradation between missions have been employed by the Hubble Space Telescope (HST) Project for the multi-mission (servicing) hardware. By maintaining the cleanliness of the hardware between missions, and by controlling the materials added to the hardware during modification and refurbishment both project funding for contamination recertification and schedule have been significantly reduced. These methods will be discussed and HST hardware data will be presented.
Hardware Removal in Craniomaxillofacial Trauma
Cahill, Thomas J.; Gandhi, Rikesh; Allori, Alexander C.; Marcus, Jeffrey R.; Powers, David; Erdmann, Detlev; Hollenbeck, Scott T.; Levinson, Howard
2015-01-01
Background Craniomaxillofacial (CMF) fractures are typically treated with open reduction and internal fixation. Open reduction and internal fixation can be complicated by hardware exposure or infection. The literature often does not differentiate between these 2 entities, so for this study we have considered all hardware exposures as hardware infections. Approximately 5% of adults with CMF trauma are thought to develop hardware infections. Management consists of either removing the hardware or leaving it in situ. The optimal approach has not been investigated. Thus, a systematic review of the literature was undertaken and a resultant evidence-based approach to the treatment and management of CMF hardware infections was devised. Materials and Methods A comprehensive search of journal articles was performed in parallel using MEDLINE, Web of Science, and ScienceDirect electronic databases. Keywords and phrases used were maxillofacial injuries; facial bones; wounds and injuries; fracture fixation, internal; wound infection; and infection. Our search yielded 529 articles. To focus on CMF fractures with hardware infections, the full text of English-language articles was reviewed to identify articles focusing on the evaluation and management of infected hardware in CMF trauma. Each article's reference list was manually reviewed and citation analysis was performed to identify articles missed by the search strategy. There were 259 articles that met the full inclusion criteria and form the basis of this systematic review. The articles were rated based on the level of evidence. There were 81 grade II articles included in the meta-analysis. Results Our meta-analysis revealed that 7503 patients were treated with hardware for CMF fractures in the 81 grade II articles. Hardware infection occurred in 510 (6.8%) of these patients. Of those infections, hardware removal occurred in 264 (51.8%) patients; hardware was left in place in 166 (32.6%) patients; and in 80 (15.6%) cases, there was no report as to hardware management. Finally, our review revealed that there were no reported differences in outcomes between groups. Conclusions Management of CMF hardware infections should be performed in a sequential and consistent manner to optimize outcome. An evidence-based algorithm for management of CMF hardware infections based on this critical review of the literature is presented and discussed. PMID:25393499
The Future of the Deep Space Network: Technology Development for Ka-Band Deep Space Communications
NASA Technical Reports Server (NTRS)
Bhanji, Alaudin M.
1999-01-01
Projections indicate that in the future the number of NASA's robotic deep space missions is likely to increase significantly. A launch rate of up to 4-6 launches per year is projected, with up to 25 simultaneous missions active [1]. Future high-resolution mapping missions to other planetary bodies as well as other experiments are likely to require increased downlink capacity. These future deep space communications requirements will, according to baseline loading analysis, exceed the capacity of NASA's Deep Space Network in its present form. There are essentially two approaches for increasing the channel capacity of the Deep Space Network. Given the near-optimum performance of the network at the two deep space communications bands, S-band (uplink 2.025-2.120 GHz, downlink 2.2-2.3 GHz) and X-band (uplink 7.145-7.190 GHz, downlink 8.4-8.5 GHz), additional improvements bring only marginal return for the investment. Thus the only way to increase channel capacity is simply to construct more antennas, receivers, transmitters and other hardware. This approach is relatively low-risk but involves increasing both the number of assets in the network and operational costs.
Test and Analysis Capabilities of the Space Environment Effects Team at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Finckenor, M. M.; Edwards, D. L.; Vaughn, J. A.; Schneider, T. A.; Hovater, M. A.; Hoppe, D. T.
2002-01-01
Marshall Space Flight Center has developed world-class space environmental effects testing facilities to simulate the space environment. The combined environmental effects test system exposes temperature-controlled samples to simultaneous protons, high- and low-energy electrons, vacuum ultraviolet (VUV) radiation, and near-ultraviolet (NUV) radiation. Separate chambers for studying the effects of NUV and VUV at elevated temperatures are also available. The Atomic Oxygen Beam Facility exposes samples to atomic oxygen of 5 eV energy to simulate low-Earth orbit (LEO). The LEO space plasma simulators are used to study current collection to biased spacecraft surfaces, arcing from insulators and electrical conductivity of materials. Plasma propulsion techniques are analyzed using the Marshall magnetic mirror system. The micro light gas gun simulates micrometeoroid and space debris impacts. Candidate materials and hardware for spacecraft can be evaluated for durability in the space environment with a variety of analytical techniques. Mass, solar absorptance, infrared emittance, transmission, reflectance, bidirectional reflectance distribution function, and surface morphology characterization can be performed. The data from the space environmental effects testing facilities, combined with analytical results from flight experiments, enable the Environmental Effects Group to determine optimum materials for use on spacecraft.
Some challenges in designing a lunar, Martian, or microgravity CELSS.
Salisbury, F B
1992-01-01
The design of a bioregenerative life-support system (a Controlled Ecological Life-Support System or CELSS) for long-duration stays on the moon, Mars, or in a space craft poses formidable problems in engineering and in theory. Technological (hardware) problems include: (1) Creation and control of gas composition and pressure, temperature, light, humidity, and air circulation, especially in microgravity to 1/3 xg and in the vacuum of space. Light (energy demanding), CO2 levels, and the rooting media are special problems for plants. (2) Developing specialized equipment for food preparation. (3) Equipment development for waste recycling. (4) Development of computer systems for environmental monitoring and control as well as several other functions. Problems of theory (software) include: (1) Determining crop species and cultivars (some bred especially for CELSS). (2) Optimum environments and growing and harvesting techniques for each crop. (3) Best and most efficient food-preparation techniques and required equipment. (4) Best and most efficient waste-recycling techniques and equipment. This topic includes questions about the extent of closure, resupply, and waste storage. (5) How to achieve long-term stability. (6) How to avoid catastrophic failures--and how to recover from near-catastrophic failures (for example, plant diseases). Many problems must be solved.
An operations manual for the digital data system
NASA Technical Reports Server (NTRS)
Jones, Michael G.
1988-01-01
The Digital Data System (DDS) was designed to incorporate the analog-to-digital conversion process into the initial data acquisition stage and to store the data in a digital format. This conversion is done as part of the acquisition process. Consequently, the data are ready to be analyzed as soon as the test is completed. This capability permits the researcher to alter test parameters during the course of the experiment based on the information acquired in a prior portion of the test. The DDS is currently able to simultaneously acquire up to 10 channels of data. The purpose of this document is fourfold: (1) to describe the capabilities of the hardware in sufficient detail to allow the reader to determine whether the DDS is the optimum system for a particular experiment; (2) to present some of the more significant software developed to provide analyses within a short time of the completion of data acquisition; (3) to provide the reader with sample runs of major software routines to demonstrate their convenience and simple usage; and (4) to describe software which uses an FFT-box to provide a means of comparison against which the DDS can be checked.
Gaseous Non-Premixed Flame Research Planned for the International Space Station
NASA Technical Reports Server (NTRS)
Stocker, Dennis P.; Takahashi, Fumiaki; Hickman, J. Mark; Suttles, Andrew C.
2014-01-01
Thus far, studies of gaseous diffusion flames on the International Space Station (ISS) have been limited to research conducted in the Microgravity Science Glovebox (MSG) in mid-2009 and early 2012. The research was performed with limited instrumentation, but novel techniques allowed for the determination of the soot temperature and volume fraction. Development is now underway for the next experiments of this type. The Advanced Combustion via Microgravity Experiments (ACME) project consists of five independent experiments that will be conducted with expanded instrumentation within the station's Combustion Integrated Rack (CIR). ACME's goals are to improve our understanding of flame stability and extinction limits, soot control and reduction, oxygen-enriched combustion which could enable practical carbon sequestration, combustion at fuel-lean conditions where both optimum performance and low emissions can be achieved, the use of electric fields for combustion control, and materials flammability. The microgravity environment provides longer residence times and larger length scales, yielding a broad range of flame conditions which are beneficial for simplified analysis, e.g., of limit behaviour where chemical kinetics are important. The detailed design of the modular ACME hardware, e.g., with exchangeable burners, is nearing completion, and it is expected that on-orbit testing will begin in 2016.
Waste heat recovery from adiabatic diesel engines by exhaust-driven Brayton cycles
NASA Technical Reports Server (NTRS)
Khalifa, H. E.
1983-01-01
An evaluation of Brayton Bottoming Systems (BBS) as waste heat recovery devices for future adiabatic diesel engines in heavy-duty trucks is presented. Parametric studies were performed to evaluate the influence of external and internal design parameters on BBS performance. Conceptual design and trade-off studies were undertaken to estimate the optimum configuration, size, and cost of major hardware components. The potential annual fuel savings of long-haul trucks equipped with BBS were estimated. The addition of a BBS to a turbocharged, non-aftercooled adiabatic engine would improve fuel economy by as much as 12%. In comparison with an aftercooled, turbocompound engine, the BBS-equipped turbocharged engine would offer a 4.4% fuel economy advantage. If installed in tandem with an aftercooled turbocompound engine, the BBS could effect a 7.2% fuel economy improvement. The cost of a mass-produced 38 Bhp BBS is estimated at about $6460, or $170/Bhp. Technical and economic barriers that hinder the commercial introduction of bottoming systems were identified. Related studies in the area of waste heat recovery from adiabatic diesel engines are reported in NASA-CR-168255 (steam Rankine) and NASA-CR-168256 (organic Rankine).
Simulation of single-molecule trapping in a nanochannel
Robinson, William Neil; Davis, Lloyd M.
2010-01-01
The detection and trapping of single fluorescent molecules in solution within a nanochannel is studied using numerical simulations. As optical forces are insufficient for trapping molecules much smaller than the optical wavelength, a means for sensing a molecule’s position along the nanochannel and adjusting electrokinetic motion to compensate diffusion is assessed. Fluorescence excitation is provided by two adjacently focused laser beams containing temporally interleaved laser pulses. Photon detection is time-gated, and the displacement of the molecule from the middle of the two foci alters the count rates collected in the two detection channels. An algorithm for feedback control of the electrokinetic motion in response to the timing of photons, to reposition the molecule back toward the middle for trapping and to rapidly reload the trap after a molecule photobleaches or escapes, is evaluated. While accommodating the limited electrokinetic speed and the finite latency of feedback imposed by experimental hardware, the algorithm is shown to be effective for trapping fast-diffusing single-chromophore molecules within a micron-sized confocal region. Studies show that there is an optimum laser power for which loss of molecules from the trap due to either photobleaching or shot-noise fluctuations is minimized. PMID:20799801
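The feedback loop described above can be caricatured in a few lines (a sketch with invented parameters, not the authors' simulation): two displaced excitation foci give two count channels whose imbalance estimates which side of the trap center the molecule is on, and that estimate sets a corrective electrokinetic velocity.

```python
import numpy as np

# 1D Brownian molecule plus photon-count feedback; all parameters illustrative.
rng = np.random.default_rng(2)
D = 50.0          # diffusion coefficient, um^2/s (fast, small molecule)
dt = 1e-4         # feedback update interval, s
d, w = 0.25, 0.3  # focus offset and effective focal radius, um
rate = 2e5        # peak detected count rate per focus, photons/s
v_max = 500.0     # electrokinetic speed limit, um/s
gain = 1000.0     # feedback gain, um/s per unit count imbalance

x, v = 0.0, 0.0
for step in range(20000):                       # 2 s of simulated trapping
    x += v * dt + rng.normal(0.0, np.sqrt(2 * D * dt))
    # expected counts in each time-gated channel from Gaussian excitation
    lam_plus = rate * dt * np.exp(-2 * (x - d) ** 2 / w ** 2)
    lam_minus = rate * dt * np.exp(-2 * (x + d) ** 2 / w ** 2)
    n_plus, n_minus = rng.poisson(lam_plus), rng.poisson(lam_minus)
    # imbalance > 0 means the molecule sits toward +d: push it back
    v = float(np.clip(-gain * (n_plus - n_minus), -v_max, v_max))
print("final position (um):", round(x, 3))
```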
Instrumentation and control system architecture of ECRH SST1
NASA Astrophysics Data System (ADS)
Patel, Harshida; Patel, Jatin; purohit, Dharmesh; Shukla, B. K.; Babu, Rajan; Mistry, Hardik
2017-07-01
The Electron Cyclotron Resonance Heating (ECRH) system is an important heating system for the reliable start-up of a tokamak. The 42 GHz and 82.6 GHz gyrotron-based ECRH systems are used in the tokamaks SST-1 and Aditya to carry out ECRH-related experiments. The gyrotrons are high-power microwave tubes used as sources for ECRH systems. A gyrotron needs to be handled with optimum care right from installation to full-parameter operation. The gyrotrons are associated with subsystems such as high-voltage power supplies (beam voltage and anode voltage), a dedicated crowbar system, magnet, filament and ion pump power supplies, and a cooling system. The other subsystems are the transmission line, the launcher and a dummy load. A dedicated VME-based data acquisition and control (DAC) system was developed to operate and control the gyrotron and its associated subsystems. For the safe operation of the gyrotron, two levels of interlocks with fail-safe logic were developed. Slow interlocks, operating on the millisecond scale, are implemented in software, while hardware interlocks operating on the microsecond scale were designed and developed indigenously. Water cooling and the associated interlocks are monitored and controlled by a data logger with an independent human-machine interface.
Postflight hardware evaluation 360T021 (RSRM-21, STS-45), revision A
NASA Technical Reports Server (NTRS)
Maccauly, Linda E.
1992-01-01
The Final Postflight Hardware Evaluation Report 360T021 (RSRM-21, STS-45) is included. All observed hardware conditions were documented on Postflight Observation Reports (PFOR's) and included in Appendices A through E. This report, along with the KSC Ten-Day Postflight Hardware Evaluation Report represents a summary of the 360T021 hardware evaluation.
Flight Avionics Hardware Roadmap
NASA Technical Reports Server (NTRS)
Some, Raphael; Goforth, Monte; Chen, Yuan; Powell, Wes; Paulick, Paul; Vitalpur, Sharada; Buscher, Deborah; Wade, Ray; West, John; Redifer, Matt;
2014-01-01
The Avionics Technology Roadmap takes an 80% approach to technology investment in spacecraft avionics. It delineates a suite of technologies covering foundational, component, and subsystem levels, which directly support 80% of future NASA space mission needs. The roadmap eschews high-cost, limited-utility technologies in favor of lower-cost, broadly applicable technologies with high return on investment. The roadmap is also phased to support future NASA mission needs and desires, with a view towards creating an optimized investment portfolio that matures specific, high-impact technologies on a schedule that matches optimum insertion points of these technologies into NASA missions. The roadmap looks out over 15+ years and covers some 114 technologies, 58 of which are targeted for TRL6 within 5 years, with 23 additional technologies to be at TRL6 by 2020. Of that number, only a few are recommended for near-term investment: (1) rad-hard high-performance computing; (2) extreme-temperature-capable electronics and packaging; (3) RFID/SAW-based spacecraft sensors and instruments; (4) lightweight, low-power 2D displays suitable for crewed missions; (5) a radiation-tolerant graphics processing unit to drive crew displays; (6) distributed/reconfigurable, extreme-temperature- and radiation-tolerant spacecraft sensor controllers and sensor modules; (7) spacecraft-to-spacecraft, long-link data communication protocols; and (8) a high-performance and extreme-temperature-capable C&DH subsystem. In addition, the roadmap team recommends several other activities that it believes are necessary to advance avionics technology across NASA: engage the OCT roadmap teams to coordinate avionics technology advances and infusion into these roadmaps and their mission set; charter a team to develop a set of use cases for future avionics capabilities in order to decouple this roadmap from specific missions; partner with the Software Steering Committee to coordinate computing hardware and software technology roadmaps and investment recommendations; and continue monitoring foundational technologies upon which future avionics technologies will depend, e.g., RHBD and COTS semiconductor technologies.
NASA Astrophysics Data System (ADS)
Escalona, Luis; Díaz-Montiel, Paulina; Venkataraman, Satchi
2016-04-01
Laminated carbon fiber reinforced polymer (CFRP) composite materials are increasingly used in aerospace structures due to their superior mechanical properties and reduced weight. Assessing the health and integrity of these structures requires non-destructive evaluation (NDE) techniques to detect and measure interlaminar delamination and intralaminar matrix cracking damage. The electrical resistance change (ERC) based NDE technique uses the inherent changes in conductive properties of the composite to characterize internal damage. Several works that have explored the ERC technique have been limited to thin cross-ply laminates with simple linear or circular electrode arrangements. This paper investigates a method for the optimum selection of electrode configurations for delamination detection in thick cross-ply laminates using ERC. Inverse identification of damage requires numerical optimization of the measured response against a model-predicted response. Here, the electrical voltage field in the CFRP composite laminate is calculated using finite element analysis (FEA) models for different specified delamination sizes and locations, and for different locations of the ground and current electrodes. Reducing the number of sensor locations and measurements is needed to reduce hardware requirements and the computational effort needed for inverse identification. This paper explores the use of the effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations of electrode pairs from the n electrodes. To enable the use of EI for ERC, this research proposes a singular value decomposition (SVD) to obtain a spectral representation of the resistance measurements in the laminate. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using both the full set of resistance measurements and the reduced set of measurements. The investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERC-based damage detection.
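The EI selection step can be sketched with the standard effective-independence formula from the sensor-placement literature; the candidate measurement matrix below is random, standing in for the FEA-computed sensitivities of each electrode pair to each damage case.

```python
import numpy as np

# Effective independence via SVD: rows of S are candidate electrode-pair
# measurements, columns are damage cases; rows contributing least to the
# independence of the spectral basis are dropped one at a time.
def ei_select(S, n_keep):
    keep = list(range(S.shape[0]))
    while len(keep) > n_keep:
        U, s, _ = np.linalg.svd(S[keep], full_matrices=False)
        r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank
        Ur = U[:, :r]
        ei = np.sum(Ur ** 2, axis=1)        # diag of the projector Ur Ur^T
        keep.pop(int(np.argmin(ei)))        # drop least-informative pair
    return keep

rng = np.random.default_rng(3)
S = rng.normal(size=(45, 6))                # e.g., C(10,2)=45 pairs, 6 damage cases
print("selected measurement rows:", ei_select(S, n_keep=10))
```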
Optimum projection pattern generation for grey-level coded structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-04-01
Structured light illumination (SLI) systems are well-established optical inspection techniques for noncontact 3D surface measurements. A common technique is multi-frequency sinusoidal SLI, which obtains the phase map at various fringe periods in order to estimate the absolute phase and, hence, the 3D surface information. Nevertheless, multi-frequency SLI systems employ multiple measurement planes (e.g. four phase-shifted frames) to obtain the phase at a given fringe period. It is therefore an age-old challenge to obtain the absolute surface information using fewer measurement frames. Grey-level (GL) coding techniques have been developed as an attempt to reduce the number of planes needed, because a spatio-temporal GL sequence employing p discrete grey levels and m frames has the potential to unwrap up to p^m fringes. Nevertheless, one major disadvantage of GL-based SLI techniques is that there are often errors near the border of each stripe, because an ideal stepwise intensity change cannot be measured. If the step change in intensity is a single discrete grey-level unit, this problem can usually be overcome by applying an appropriate threshold. However, severe errors occur if the intensity change at the border of the stripe exceeds several discrete grey-level units. In this work, an optimum GL-based technique is presented that generates a series of projection patterns with a minimal gradient in intensity. It is shown that when using this technique, the errors near the borders of the stripes can be significantly reduced. This improvement is achieved through the choice of generated patterns and does not involve additional hardware or special post-processing techniques. The performance of the method is validated using both simulations and experiments. The reported technique is generic, works with an arbitrary number of frames, and can employ an arbitrary number of grey levels.
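One standard construction that achieves the minimal-gradient property named above is a reflected p-ary Gray code, in which adjacent stripes differ in exactly one frame and only by a single grey level; this is offered as an illustrative realization, not necessarily the paper's exact generation rule.

```python
# Reflected p-ary Gray code: p**m stripes over m frames, with a one-level
# intensity step in a single frame at every stripe border.
def reflected_gray(m, p):
    """All p**m codewords, adjacent ones differing by +/-1 in one digit."""
    if m == 0:
        return [[]]
    prev = reflected_gray(m - 1, p)
    out = []
    for level in range(p):
        block = prev if level % 2 == 0 else prev[::-1]   # reflect every other block
        out.extend([level] + word for word in block)
    return out

codes = reflected_gray(m=3, p=4)      # 3 frames, 4 grey levels -> 64 stripes
# verify the minimal-gradient property at every stripe border
for a, b in zip(codes, codes[1:]):
    diffs = [abs(x - y) for x, y in zip(a, b)]
    assert sum(diffs) == 1, (a, b)
print(len(codes), "stripes; all adjacent codewords differ by one grey level in one frame")
```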
A Plug and Play GNC Architecture Using FPGA Components
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Kaneshige, J.; Waterman, R.; Pires, C.; Ippoloito, C.
2005-01-01
The goal of Plug and Play, or PnP, is to allow hardware and software components to work together automatically, without requiring manual setup procedures. As a result, new or replacement hardware can be plugged into a system and automatically configured with the appropriate resource assignments. However, in many cases it may not be practical or even feasible to physically replace hardware components. One method for handling these types of situations is through the incorporation of reconfigurable hardware such as Field Programmable Gate Arrays, or FPGAs. This paper describes a phased approach to developing a Guidance, Navigation, and Control (GNC) architecture that expands on the traditional concepts of PnP, in order to accommodate hardware reconfiguration without requiring detailed knowledge of the hardware. This is achieved by establishing a functional based interface that defines how the hardware will operate, and allow the hardware to reconfigure itself. The resulting system combines the flexibility of manipulating software components with the speed and efficiency of hardware.
Movable Ground Based Recovery System for Reuseable Space Flight Hardware
NASA Technical Reports Server (NTRS)
Sarver, George L. (Inventor)
2013-01-01
A reusable space flight launch system is configured to eliminate complex descent and landing systems from the space flight hardware and move them to maneuverable ground based systems. Precision landing of the reusable space flight hardware is enabled using a simple, light weight aerodynamic device on board the flight hardware such as a parachute, and one or more translating ground based vehicles such as a hovercraft that include active speed, orientation and directional control. The ground based vehicle maneuvers itself into position beneath the descending flight hardware, matching its speed and direction and captures the flight hardware. The ground based vehicle will contain propulsion, command and GN&C functionality as well as space flight hardware landing cushioning and retaining hardware. The ground based vehicle propulsion system enables longitudinal and transverse maneuverability independent of its physical heading.
Computing Generalized Matrix Inverse on Spiking Neural Substrate
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
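The flavor of the approach can be sketched with an iterative pseudoinverse computed under quantized arithmetic; plain NumPy stands in for the neurosynaptic substrate here, the initialization and iteration are the standard Ben-Israel-Greville scheme, and the quantizer is only a stand-in for the paper's weight-precision limits.

```python
import numpy as np

# Uniform quantizer mimicking a substrate with limited weight precision.
def quantize(M, levels=4096, span=4.0):
    step = 2.0 * span / levels                # uniform grid on [-span, span]
    return np.clip(np.round(M / step) * step, -span, span)

# Ben-Israel-Greville iteration X <- X(2I - AX) converges to pinv(A) from
# the safe start X0 = A^T / (||A||_1 * ||A||_inf); quantize every step.
def neural_pinv(A, iters=60, levels=4096):
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = quantize(X @ (2.0 * I - A @ X), levels)
    return X

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 4))
X = neural_pinv(A)
err = np.linalg.norm(X - np.linalg.pinv(A))
print("deviation from np.linalg.pinv:", f"{err:.2e}")   # limited by quantization
```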
Hardware in the Loop at Megawatt-Scale Power
Energy Systems Integration Facility, NREL
Hardware-in-the-loop simulation is not new, but the Energy Systems Integration Facility supports power hardware-in-the-loop co-simulation at megawatt scale. For more information, read the power hardware-in-the-loop factsheet.
Exercise Countermeasure Hardware Evolution on ISS: The First Decade.
Korth, Deborah W
2015-12-01
The hardware systems necessary to support exercise countermeasures to the deconditioning associated with microgravity exposure have evolved and improved significantly during the first decade of the International Space Station (ISS), resulting in both new types of hardware and enhanced performance capabilities for initial hardware items. The original suite of countermeasure hardware supported the first crews to arrive on the ISS and the improved countermeasure system delivered in later missions continues to serve the astronauts today with increased efficacy. Due to aggressive hardware development schedules and constrained budgets, the initial approach was to identify existing spaceflight-certified exercise countermeasure equipment, when available, and modify it for use on the ISS. Program management encouraged the use of commercial-off-the-shelf (COTS) hardware, or hardware previously developed (heritage hardware) for the Space Shuttle Program. However, in many cases the resultant hardware did not meet the additional requirements necessary to support crew health maintenance during long-duration missions (3 to 12 mo) and anticipated future utilization activities in support of biomedical research. Hardware development was further complicated by performance requirements that were not fully defined at the outset and tended to evolve over the course of design and fabrication. Modifications, ranging from simple to extensive, were necessary to meet these evolving requirements in each case where heritage hardware was proposed. Heritage hardware was anticipated to be inherently reliable without the need for extensive ground testing, due to its prior positive history during operational spaceflight utilization. As a result, developmental budgets were typically insufficient and schedules were too constrained to permit long-term evaluation of dedicated ground-test units ("fleet leader" type testing) to identify reliability issues when applied to long-duration use. In most cases, the exercise unit with the most operational history was the unit installed on the ISS.
Development and characterization of couscous-like product using bulgur flour as by-product.
Yuksel, Ayse Nur; Öner, Mehmet Durdu; Bayram, Mustafa
2017-12-01
Couscous is produced traditionally by agglomeration of Triticum durum semolina with water. The aims of this study were to produce a couscous-like product by substituting semolina with a bulgur by-product (undersize bulgur) and to find the optimum quantity of bulgur flour and the optimum processing conditions. In order to determine the optimum processing parameters and recipes, couscous-like samples containing 0, 25 and 50% bulgur were prepared. The color, yield, sensory properties, total phenol and flavonoid contents, bulk density, protein and ash content, and texture properties were determined. Two different types of dryer, packed bed and microwave, were used. Optimum parameters were predicted as 50% bulgur flour for packed bed (60 °C) and microwave (180 W) drying with 50% (w/w) water, according to yields, color (L*, a*, b*) values and sensory properties (color, odor, general appearance). For packed bed drying at 60 °C, yields were 54.28 ± 3.78, 47.70 ± 1.73 and 52.57 ± 7.04% for samples containing 0, 25 and 50% bulgur flour, respectively. Lightness (L*) values of the couscous-like samples decreased with increasing quantity of bulgur flour after both drying processes. Results of the sensory analysis revealed that the couscous-like bulgur products were preferable for consumers.
Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee
2014-02-01
Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because an identity threshold for sequence subgrouping may vary according to the given sequence set, it is highly desirable to construct a robust subgrouping algorithm which automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method named 'Subgrouping Automata' (SA) was constructed. First, the tree analysis module analyzes the structure of the tree and calculates all possible subgroups at each node. The sequence similarity analysis module calculates the average sequence similarity for all subgroups at each node. The representative sequence generation module finds a representative sequence for each subgroup using profile analysis and self-scoring. For all nodes, average sequence similarities are calculated, and 'Subgrouping Automata' searches for the node showing the statistically maximum increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined to be the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents both under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
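The node-selection criterion lends itself to a small sketch: given average subgroup similarities at successive tree-cut levels (the values below are made up), a Student's t-value between adjacent levels locates the most significant jump.

```python
import numpy as np

# Welch's t-statistic between two samples of per-subgroup average similarities.
def welch_t(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (b.mean() - a.mean()) / se

# made-up average similarities of subgroups at successive cut levels
levels = [
    [0.42, 0.45, 0.44],              # shallow cut: few, loose subgroups
    [0.47, 0.49, 0.46, 0.48],
    [0.71, 0.74, 0.69, 0.73, 0.70],  # big jump: candidate optimum cut
    [0.72, 0.75, 0.74, 0.73, 0.76, 0.74],
]
t_values = [welch_t(levels[i], levels[i + 1]) for i in range(len(levels) - 1)]
best = int(np.argmax(t_values))
print("t-values between adjacent levels:", [round(t, 1) for t in t_values])
print(f"optimum subgrouping cut: between level {best} and {best + 1}")
```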
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Oladzad, Sepideh; Fallah, Narges; Nasernejad, Bahram
2017-07-01
In the present study a combination of a novel coalescing oil water separator (COWS) and an electrocoagulation (EC) technique was used for the treatment of petroleum-product-contaminated groundwater. In the first phase, COWS was used as the primary treatment. Two different types of coalescing media and two levels of flow rate were examined in order to find the optimum conditions. The effluent of the COWS was collected under optimum conditions and was treated using an EC process in the second phase of the research. In this phase, preliminary experiments were conducted in order to investigate the effect of EC reaction time and sedimentation time on chemical oxygen demand (COD) removal efficiency. The best EC reaction time and sedimentation time were found to be 5 min and 30 min, respectively. Response surface methodology was applied to evaluate the effect of initial pH, current density and aeration rate on settling velocity (V_s) and effluent COD. The optimum conditions for achieving the maximum values of V_s, as well as the best effluent COD values within the range of results, were an initial pH of 7, a current density of 34 mA·cm^-2 and an aeration rate of 1.5 L·min^-1.
Veggie and the VEG-01 Hardware Validation Test
NASA Technical Reports Server (NTRS)
Massa, Gioia; wheeler, Ray; Smith, Trent
2015-01-01
This presentation presents a brief overview of KSC plant science hardware for space and then details the Veggie hardware and the VEG-01 hardware validation test. The test results and future plans are discussed.
System on chip module configured for event-driven architecture
Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.
2017-10-17
A system on chip (SoC) module is described herein, wherein the SoC module comprises a processor subsystem and a hardware logic subsystem. The processor subsystem and hardware logic subsystem are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors; the software actors and hardware actors conform to an event-driven architecture, such that both receive and generate event messages.
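The actor pattern described in the claims can be caricatured in a few lines, with a thread standing in for the hardware logic subsystem and queues carrying the event messages; this is an illustrative sketch, not the patent's implementation.

```python
import queue
import threading

# Actors share no state; they interact only via event messages in their inboxes.
class Actor:
    def __init__(self, name):
        self.name, self.inbox = name, queue.Queue()
    def post(self, event):
        self.inbox.put(event)

def hardware_actor(self_actor, software):
    # stand-in for dedicated logic: consume events, emit completion events
    while True:
        event = self_actor.inbox.get()
        if event == "stop":
            break
        software.post(f"done:{event}")

sw = Actor("software")
hw = Actor("hardware")
threading.Thread(target=hardware_actor, args=(hw, sw), daemon=True).start()

for job in ("filter", "encode"):
    hw.post(job)                                   # software actor emits events
    print("software received:", sw.inbox.get())   # ...and consumes responses
hw.post("stop")
```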
1979-03-28
TECHNICAL REPORT T-79-43, Tri-FAST Hardware-in-the-Loop Simulation. Volume 1: Tri-FAST Hardware-in-the-Loop Simulation at the Advanced Simulation Center. Keywords: Tri-FAST; hardware-in-the-loop simulation; ACSL; Advanced Simulation Center; RF target models. The purpose of this report is to document the Tri-FAST missile simulation development and the seeker hardware-in-the-loop simulation.
Olson, Eric J.
2013-06-11
An apparatus, program product, and method that run an algorithm on a hardware-based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware-based processor to heat to a degree that increases the likelihood that hardware errors will manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run and that might otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
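A minimal sketch of the scheme (illustrative workload and parameters, not the patent's embodiment): run a deterministic compute-heavy kernel that keeps the processor hot, hash the result, and compare against a reference captured on known-good hardware, so that an error anywhere in the run shows up at the end.

```python
import hashlib
import numpy as np

# Deterministic, compute-heavy kernel; sustained FP load heats the part.
def stress_run(seed=0, rounds=200, n=256):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    acc = np.zeros((n, n))
    for _ in range(rounds):
        acc = (acc + a @ b) % 1.0              # keep values bounded, deterministic
    return hashlib.sha256(acc.tobytes()).hexdigest()

reference = stress_run()          # captured once on trusted hardware
candidate = stress_run()          # re-run on the unit under test
print("hardware error detected" if candidate != reference else "run matched reference")
```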
Rail transit fare collection: Policy and technology assessment
NASA Technical Reports Server (NTRS)
Deshpande, G. K.; Cucchissi, J.; Heft, R. C.
1982-01-01
The impact of fare policies and fare structure on the selection of equipment was investigated; fare collection systems are described, hardware and technology related problems are documented, and the requirements of a fare collection simulation model are outlined. Major findings include: (1) a wide variation in the fare collection systems and equipment, caused primarily by historical precedence; (2) the reliability of AFC equipment used at BART and WMATA discouraged other properties from considering use of similar equipment; (3) existing equipment may not meet the fare collection needs of properties in the near future; (4) the cost of fare collection operation and maintenance is high; and (5) the relatively small market in fare collection equipment discourages new product development by suppliers. Recommendations for fare collection R&D programs include development of new hardware to meet rail transit needs, study of the impacts of alternate fare policies, increased communication among policymakers, and consensus on fare policy issues.
NASA Astrophysics Data System (ADS)
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
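One common way to hand a weighted k-clique instance to a quantum annealer is as a QUBO; the sketch below (an illustrative formulation, not necessarily the paper's, with brute force standing in for the annealer) builds such a matrix and verifies it on a toy graph.

```python
import itertools
import numpy as np

def kclique_qubo(weights, k, A=10.0, B=10.0):
    """QUBO for weighted k-clique: reward in-clique edge weight, penalize
    selections of size != k and pairs that are not edges.
    weights[i][j] > 0 iff edge (i, j) exists, with its weight."""
    n = len(weights)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] += A * (1 - 2 * k)           # linear part of A*(sum x - k)^2
        for j in range(i + 1, n):
            Q[i, j] += 2 * A                 # cross terms of the size penalty
            if weights[i][j] > 0:
                Q[i, j] -= weights[i][j]     # reward selected edges
            else:
                Q[i, j] += B                 # forbid non-edges inside the clique
    return Q

def brute_force(Q):
    """Stand-in for the quantum annealer: exhaustively minimize x^T Q x."""
    n = Q.shape[0]
    return min((float(np.array(b) @ Q @ np.array(b)), b)
               for b in itertools.product([0, 1], repeat=n))

w = np.zeros((4, 4)); w[0, 1] = w[0, 2] = w[1, 2] = 1.0  # triangle on {0, 1, 2}
energy, x = brute_force(kclique_qubo(w, k=3))
print(x)  # -> (1, 1, 1, 0)
```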
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lan, Jin; Yu, Weichao; Wu, Ruqian
A diode, a device allowing unidirectional signal transmission, is a fundamental element of logic structures, and it lies at the heart of modern information systems. The spin wave or magnon, representing a collective quasiparticle excitation of the magnetic order in magnetic materials, is a promising candidate for an information carrier for the next-generation energy-saving technologies. Here, we propose a scalable and reprogrammable pure spin-wave logic hardware architecture using domain walls and surface anisotropy stripes as waveguides on a single magnetic wafer. We demonstrate theoretically the design principle of the simplest logic component, a spin-wave diode, utilizing the chiral bound states in a magnetic domain wall with a Dzyaloshinskii-Moriya interaction, and confirm its performance through micromagnetic simulations. As a result, these findings open a new vista for realizing different types of pure spin-wave logic components and finally achieving an energy-efficient and hardware-reprogrammable spin-wave computer.
Overview of the Systems Special Investigation Group investigation
NASA Technical Reports Server (NTRS)
Mason, James B.; Dursch, Harry; Edelman, Joel
1993-01-01
The Long Duration Exposure Facility (LDEF) carried a remarkable variety of electrical, mechanical, thermal, and optical systems, subsystems, and components. Nineteen of the fifty-seven experiments flown on LDEF contained functional systems that were active on-orbit. Almost all of the other experiments possessed at least a few specific components of interest to the Systems Special Investigation Group (Systems SIG), such as adhesives, seals, fasteners, optical components, and thermal blankets. Almost all top level functional testing of the active LDEF and experiment systems has been completed. Failure analysis of both LDEF hardware and individual experiments that failed to perform as designed has also been completed. Testing of system components and experimenter hardware of interest to the Systems SIG is ongoing. All available testing and analysis results were collected and integrated by the Systems SIG. An overview of our findings is provided. An LDEF Optical Experiment Database containing information for all 29 optical related experiments is also discussed.
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)
Li, Isaac TS; Shum, Warren; Truong, Kevin
2007-01-01
Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching. PMID:17555593
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).
Li, Isaac T S; Shum, Warren; Truong, Kevin
2007-06-07
To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching.
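The unit of work both records describe is the score of one SW matrix cell; the reference sketch below (linear gap penalty for brevity; the papers' exact scoring scheme is not given) shows the recurrence that the FPGA replicates into a grid and evaluates along anti-diagonals.

```python
def sw_cell(diag, up, left, a, b, match=2, mismatch=-1, gap=-1):
    """Score of one SW matrix cell, the unit the FPGA design replicates
    into a grid (linear gap penalty here for brevity)."""
    return max(0,
               diag + (match if a == b else mismatch),
               up + gap,
               left + gap)

def smith_waterman(seq1, seq2):
    """Reference software fill of the whole matrix; the hardware computes
    anti-diagonals of cells in parallel instead of this double loop."""
    rows, cols = len(seq1) + 1, len(seq2) + 1
    H = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        for j in range(1, cols):
            H[i][j] = sw_cell(H[i-1][j-1], H[i-1][j], H[i][j-1],
                              seq1[i-1], seq2[j-1])
    return max(max(row) for row in H)  # best local alignment score

print(smith_waterman("ACACACTA", "AGCACACA"))  # -> 12 with these parameters
```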
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
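The costate trick is what makes the gradient cheap: one extra linear solve gives the sensitivity of the cost to all design variables at once. A hedged sketch on a linear toy problem (hypothetical matrices, not the flow equations) illustrates the mechanism.

```python
import numpy as np

# Toy "state equation" A x = B u: x is the state, u the design variables.
rng = np.random.default_rng(1)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
B = rng.standard_normal((5, 3))
x_target = rng.standard_normal(5)

def cost_and_gradient(u):
    """Cost J(u) = 0.5 * ||x(u) - x_target||^2 and its gradient via one costate
    (adjoint) solve, the mechanism the one-shot method exploits, shown here on
    a linear toy problem rather than the flow equations."""
    x = np.linalg.solve(A, B @ u)             # state solve
    residual = x - x_target
    lam = np.linalg.solve(A.T, residual)      # costate solve: A^T lam = dJ/dx
    grad = B.T @ lam                          # dJ/du from a single extra solve
    return 0.5 * residual @ residual, grad

u = np.zeros(3)
for _ in range(500):                          # plain gradient descent
    J, g = cost_and_gradient(u)
    u -= 0.05 * g
print(f"final cost {J:.2e}")  # residual least-squares cost; u has fewer dims than x
```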
Use of scan overlap redundancy to enhance multispectral aircraft scanner data
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Keat, J.
1973-01-01
Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
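Under the first criterion, equal-weight averaging of n lines shrinks the noise error roughly as sigma/sqrt(n) while the resolution error grows with n; a toy sketch (the resolution-error model below is an assumption, not the paper's) picks the n where the two error curves meet.

```python
import numpy as np

def noise_error(sigma, n):
    """Noise error after averaging n lines with equal weights: sigma / sqrt(n)."""
    return sigma / np.sqrt(n)

def resolution_error(n, growth=0.05):
    """Placeholder model (assumption, not from the paper): averaging n lines
    enlarges the effective resolution element roughly linearly."""
    return growth * (n - 1)

def first_criterion_n(sigma, n_max=20):
    """Criterion 1: pick n where the average resolution error ~ noise error."""
    ns = np.arange(1, n_max + 1)
    gap = np.abs(resolution_error(ns) - noise_error(sigma, ns))
    return int(ns[np.argmin(gap)])

print(first_criterion_n(sigma=0.3))  # -> 4 under these toy models
```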
NASA Technical Reports Server (NTRS)
Schroeder, Daniel J.
1992-01-01
The Optics Alignment Panel (OAP) was commissioned by the HST Science Working Group to determine the optimum alignment of the OTA optics. The goal was to find the position of the secondary mirror (SM) for which there is no coma or astigmatism in the camera images due to misaligned optics, either tilt or decenter. The despace position of the SM was reviewed and the optimum focus was sought. The results of these efforts are as follows: (1) the best estimate of the aligned position of the SM, in the notation of HDOS, is (DZ,DY,TZ,TY) = (+248 microns, +8 microns, +53 arcsec, -79 arcsec); and (2) the best focus, defined to be that despace which maximizes the fractional energy at 486 nm within a 0.1 arcsec radius of a stellar image, is 12.2 mm beyond paraxial focus. The data leading to these conclusions, and the estimated uncertainties in the final results, are presented.
Srivastava, Garima; Singh, Kritika; Talat, Mahe; Srivastava, Onkar Nath; Kayastha, Arvind M.
2014-01-01
β-Amylase finds application in food and pharmaceutical industries. Functionalized graphene sheets were customised as a matrix for covalent immobilization of Fenugreek β-amylase using glutaraldehyde as a cross-linker. The factors affecting the process were optimized using a Response Surface Methodology based Box-Behnken design of experiment, which resulted in 84% immobilization efficiency. Scanning and Transmission Electron Microscopy (SEM, TEM) and Fourier Transform Infrared (FTIR) spectroscopy were employed to characterize the attachment of the enzyme on the graphene. Enzyme kinetic studies were carried out to obtain the best catalytic performance and enhanced reusability. The optimum temperature remained unchanged, whereas the optimum pH showed a shift towards the acidic range for the immobilized enzyme. The increase in thermal stability of the immobilized enzyme and the non-toxic nature of functionalized graphene can be exploited for the production of maltose in food and pharmaceutical industries. PMID:25412079
Lavado Contador, J F; Maneta, M; Schnabel, S
2006-10-01
The capability of Artificial Neural Network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easy-to-obtain digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into soil moisture distribution factors, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest the efficiency of the methods in forecasting soil moisture, as a tool to assess the optimum number of field samples, and the importance of the variables selected in explaining the final map obtained.
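A minimal sketch of the sampling-size experiment using scikit-learn (synthetic stand-in data; the study's actual inputs were digital terrain and land cover layers): train the network on growing subsets and watch where the held-out score plateaus.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for the digital input layers (slope, curvature, land cover, ...)
X = rng.uniform(size=(800, 4))
moisture = (0.4 - 0.2 * X[:, 0] + 0.1 * X[:, 1] * X[:, 2]
            + 0.02 * rng.standard_normal(800))

X_train, X_test, y_train, y_test = train_test_split(X, moisture, random_state=0)
# Grow the training set and watch the test score: where it plateaus is an
# estimate of the optimum number of field samples.
for n in (25, 50, 100, 200, len(X_train)):
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(X_train[:n], y_train[:n])
    print(n, round(net.score(X_test, y_test), 3))
```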
Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load
NASA Astrophysics Data System (ADS)
Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.
2017-12-01
Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control, has previously been proposed in the literature for reducing vibration in wind turbines, among several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white noise spectrum. In this paper, a global direct search optimization algorithm for reducing the vibration of a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic load from different wind power spectral densities. A verification is made by considering the analytical solution of an undamped primary system under white noise excitation and comparing with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.
Lee, Sang Heon
2013-05-01
BiSrCaCuO superconductor thick films were prepared at several curing temperatures, and their electro-physical properties were determined to find optimum fabrication conditions. The critical temperatures of the superconductors decreased with increasing melting temperature, which was related to the amount of equilibrium phases of the superconducting materials with temperature. The critical temperatures of the BiSrCaCuO bulk and thick film superconductors were 107 K and 96 K, respectively. In the temperature-susceptibility curves, the superconductor thick film formed at 950 degrees C showed a multi-step-type curve for a 70 G externally applied field, whereas a superconductor thick film formed at 885 degrees C showed a single-step-type curve like a bulk BiSrCaCuO ceramic superconductor. A partial melting at 865 degrees C is one of the optimum conditions for making a superconductor thick film with a relatively homogeneous phase.
Comparison of citrus orchard inventory using LISS-III and LISS-IV data
NASA Astrophysics Data System (ADS)
Singh, Niti; Chaudhari, K. N.; Manjunath, K. R.
2016-04-01
In India, in terms of area under cultivation, citrus is the third most cultivated fruit crop after banana and mango. Among the citrus group, lime is one of the most important horticultural crops in India, as the demand for its consumption is very high. Hence, preparing citrus crop inventories using remote sensing techniques would help in maintaining a record of its area and production statistics. This study shows how accurately citrus orchards can be classified using both IRS Resourcesat-2 LISS-III and LISS-IV data, and identifies the optimum bio-window for procuring satellite data to achieve the high classification accuracy required for maintaining a crop inventory. Findings of the study show that classification accuracy increased from 55% (using LISS-III) to 77% (using LISS-IV). Also, according to the classified outputs and NDVI values obtained, April and May were identified as the optimum bio-window for citrus crop identification.
Airfoil optimization by the one-shot method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1994-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Attosecond control of electron beams at dielectric and absorbing membranes
NASA Astrophysics Data System (ADS)
Morimoto, Yuya; Baum, Peter
2018-03-01
Ultrashort electron pulses are crucial for time-resolved electron diffraction and microscopy of the fundamental light-matter interaction. In this work, we study experimentally and theoretically the generation and characterization of attosecond electron pulses by optical-field-driven compression and streaking at dielectric or absorbing interaction elements. The achievable acceleration and deflection gradient depends on the laser-electron angle, the laser's electric and magnetic field directions, and the foil orientation. Electric and magnetic fields have similar contributions to the final effect and both need to be considered. Experiments and theory agree well and reveal the optimum conditions for highly efficient, velocity-matched electron-field interactions in the longitudinal or transverse direction. We find that metallic membranes are optimum for light-electron control at mid-infrared or terahertz wavelengths, but dielectric membranes are excellent in the visible and near-infrared regimes and are therefore ideal for the formation of attosecond electron pulses.
Fast pyrolysis of oil palm shell (OPS)
NASA Astrophysics Data System (ADS)
Abdullah, Nurhayati; Sulaiman, Fauziah; Aliasak, Zalila
2015-04-01
Biomass is an important renewable source of energy. Residues obtained from harvesting and agricultural products can be utilised as fuel for energy generation through thermal energy conversion technologies. The conversion of biomass to bio-oil is one of the prospective alternative energy resources. Therefore, in this study fast pyrolysis of oil palm shell was conducted. The main objective of this study was to find the optimum condition for high-yield bio-oil production. The experiment was conducted using a fixed-bed fluidizing pyrolysis system. The biomass sample was pyrolysed at temperatures varying from 450°C to 650°C and at residence times varying from 0.9 s to 1.35 s. The results obtained are discussed further in this paper, and the basic characteristics of the biomass sample are also presented. The experiment shows that the optimum bio-oil yield was obtained at a temperature of 500°C and a residence time of 1.15 s.
Constraint-Based Local Search for Constrained Optimum Paths Problems
NASA Astrophysics Data System (ADS)
Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal
Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.
Evaluation of methods for determining hardware projected life
NASA Technical Reports Server (NTRS)
1971-01-01
An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life prediction techniques. The results indicate that there are no accurate quantitative means to predict hardware life for system level hardware. The effectiveness of test programs and the causes of hardware failures are considered.
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-10-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near-optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.
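For readers unfamiliar with the algorithm, a compact PSO in NumPy (generic form with standard inertia/cognitive/social terms; the paper's tuning and design objectives are not reproduced) shows why it needs no assumptions about the function being optimized.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: velocities blend inertia, pull toward
    each particle's own best, and pull toward the swarm's best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.uniform(size=(2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy stand-in for a design criterion (the paper optimizes optimal-design
# objectives; a sphere function keeps the sketch self-contained).
best_x, best_f = pso(lambda z: float(np.sum(z**2)),
                     (np.full(4, -5.0), np.full(4, 5.0)))
print(best_x.round(4), best_f)
```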
Evaluating WHO Healthy Cities in Europe: issues and perspectives.
de Leeuw, Evelyne
2013-10-01
In this introductory article, we situate the findings of the Phase IV evaluation effort of the WHO European Healthy Cities Network in its historic evolutionary development. We review each of the contributions to this supplement in terms of the theoretical and methodological frameworks applied. Although the findings of each are both relevant and generated with a scholarly rigor that is appropriate to the context in which the evaluation took place, we find that these contextual factors in particular have not contributed to optimum research quality. Any drawbacks in individual contributions cannot be attributed to their analysts and authors but relate to the complicated and evolving nature of the project. These factors are also reviewed.
Toward Evolvable Hardware Chips: Experiments with a Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Stoica, Adrian
1998-01-01
Evolvable Hardware is reconfigurable hardware that self-configures under the control of an evolutionary algorithm. The search for a hardware configuration can be performed using software models or, faster and more accurately, directly in reconfigurable hardware. Several experiments have demonstrated the possibility of automatically synthesizing both digital and analog circuits. The paper introduces an approach to automated synthesis of CMOS circuits, based on evolution on a Programmable Transistor Array (PTA). The approach is illustrated with a software experiment showing evolutionary synthesis of a circuit with a desired DC characteristic. A hardware implementation of a test PTA chip is then described, and the same evolutionary experiment is performed on the chip, demonstrating circuit synthesis/self-configuration directly in hardware.
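The evolutionary loop itself is simple; a hedged sketch (the genome-to-circuit mapping below is a placeholder, not the PTA's switch encoding) evolves three parameters so a model circuit's DC response matches a target curve, mirroring the software experiment described.

```python
import numpy as np

rng = np.random.default_rng(7)
v_in = np.linspace(0.0, 5.0, 50)
target_dc = 2.5 * np.tanh(v_in - 2.5) + 2.5       # desired DC characteristic

def dc_response(genome):
    """Placeholder circuit model (assumption; the real PTA maps the genome
    to transistor switch states): genome = (gain, threshold, offset)."""
    gain, thresh, offset = genome
    return gain * np.tanh(v_in - thresh) + offset

def fitness(genome):
    """Sum of squared errors between evolved and desired DC curves."""
    return float(np.sum((dc_response(genome) - target_dc) ** 2))

# (mu + lambda)-style evolution, mirroring the search the PTA chip runs.
pop = rng.uniform(0, 5, (40, 3))
for _ in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:10]]               # keep the best 10
    children = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.1, (40, 3))
    pop = np.vstack([parents, children])[:40]
print(pop[np.argsort([fitness(g) for g in pop])[0]])     # ~ [2.5, 2.5, 2.5]
```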
Technique of Automated Control Over Cardiopulmonary Resuscitation Procedures
NASA Astrophysics Data System (ADS)
Bureev, A. Sh; Kiseleva, E. Yu; Kutsov, M. S.; Zhdanov, D. S.
2016-01-01
The article describes a technique of automated control over cardiopulmonary resuscitation procedures on the basis of acoustic data. The research findings have made it possible to determine the most important characteristics of the acoustic signals (sounds of blood circulation in the carotid artery and respiratory sounds) and to propose a method to control the performance of resuscitation procedures. This method can be implemented as part of specialized hardware systems.
A Compatible Hardware/Software Reliability Prediction Model.
1981-07-22
machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed
Systolic Signal Processor/High Frequency Direction Finding
1990-10-01
MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid...computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC...MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description
Apollo/Skylab suit program-management systems study, volume 1
NASA Technical Reports Server (NTRS)
Mcniff, M.
1974-01-01
A management systems study for future spacesuit programs was conducted to assess past suit program requirements and management systems in addition to new and modified systems in order to identify the most cost effective methods for use during future spacesuit programs. The effort and its findings concerned the development and production of all hardware ranging from crew protective gear to total launch vehicles.
ERIC Educational Resources Information Center
Brusling, Christer; Tingsell, Jan-Gunnar
This new model for the supervision of student teachers utilizes videotaping hardware which allows the student teacher and his supervisor to evaluate teaching methods and behavior. Thus, the student teacher is better able to supervise himself. Employing Flanders Interaction Analysis, the student is able to interpret his teaching on closed-circuit…
Optimal glass-ceramic structures: Components of giant mirror telescopes
NASA Technical Reports Server (NTRS)
Eschenauer, Hans A.
1990-01-01
Detailed investigations are carried out on optimal glass-ceramic mirror structures of terrestrial space technology (optical telescopes). In order to find an optimum design, a nonlinear multi-criteria optimization problem is formulated. 'Minimum deformation' at 'minimum weight' are selected as contradictory objectives, and a set of further constraints (quilting effect, optical faults etc.) is defined and included. A special result of the investigations is described.
Research in Network Management Techniques for Tactical Data Communications Network.
1982-09-01
the control period. Research areas include Packet Network modelling, adaptive network routing, network design algorithms, network design techniques...controllers are designed to perform their limited tasks optimally. For the dynamic routing problem considered here, the local controllers are node...feedback to finding an optimum steady-state routing (static strategies) under non-congested control which can be easily implemented in real time.
3D Printed Composites for Topology Transforming Multifunctional Devices
2017-01-26
approach to find non-trivial designs. The comparison against experimental measurements motivates future research on improving the accuracy of the...new methodology for the fabrication and the design of new multifunctional composites and devices using 3D printing. The main accomplishments of this...design; 6) developing a finite element framework for the optimum design of PACS by topology optimization; 7) optimizing and experimentally
Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel
2008-01-01
Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
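The database-search stage that the FPGA engine parallelizes reduces, in its simplest form, to scoring shared peaks within a mass tolerance; a toy Python version (hypothetical mini-database, simple hit-count score rather than a probabilistic one) is sketched below.

```python
def pmf_search(measured_masses, database, tolerance=0.2):
    """Toy peptide-mass-fingerprint search (the FPGA engine parallelizes this
    scan): score each protein by how many measured masses match one of its
    theoretical peptide masses within the tolerance (in Da)."""
    def score(theoretical):
        return sum(any(abs(m - t) <= tolerance for t in theoretical)
                   for m in measured_masses)
    return max(database.items(), key=lambda item: score(item[1]))

# Hypothetical mini-database: protein name -> theoretical tryptic masses.
db = {
    "protein_A": [842.5, 1045.6, 1234.7, 2211.1],
    "protein_B": [905.4, 1350.2, 1790.8],
}
print(pmf_search([842.6, 1234.6, 2211.0], db)[0])  # -> protein_A
```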
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Computational study of energy filtering effects in one-dimensional composite nano-structures
NASA Astrophysics Data System (ADS)
Kim, Raseong; Lundstrom, Mark S.
2012-01-01
Possibilities to improve the Seebeck coefficient S versus electrical conductance G trade-off of diffusive composite nano-structures are explored using an electro-thermal simulation framework based on the non-equilibrium Green's function method for quantum electron transport and the lattice heat diffusion equation. We examine the role of the grain size d, potential barrier height ΦB, grain doping, and the lattice thermal conductivity κL using a one-dimensional model structure. For a uniform κL, simulation results show that the power factor of a composite structure may be improved over bulk with the optimum ΦB being about kBT, where kB and T are the Boltzmann constant and the temperature, respectively. An optimum ΦB occurs because the current flow near the Fermi level is not obstructed too much while S still improves due to barriers. The optimum grain size dopt is significantly longer than the momentum relaxation length λp so that G is not seriously degraded due to the barriers, and dopt is comparable to or somewhat larger than the energy relaxation length λE so that the carrier energy is not fully relaxed within the grain and |S| remains high. Simulation results also show that if κL in the barrier region is smaller than in the grain, S and power factor are further improved. In such cases, the optimum ΦB and dopt increase, and the power factor may improve even for ΦB (d) significantly higher (longer) than kBT (λE). We find that the results from this quantum mechanical approach are readily understood using a simple, semi-classical model.
Theory on the Coupled Stochastic Dynamics of Transcription and Splice-Site Recognition
Murugan, Rajamanickam; Kreiman, Gabriel
2012-01-01
Eukaryotic genes are typically split into exons that need to be spliced together to form the mature mRNA. The splicing process depends on the dynamics and interactions among transcription by the RNA polymerase II complex (RNAPII) and the spliceosomal complex consisting of multiple small nuclear ribonucleoproteins (snRNPs). Here we propose a biophysically plausible initial theory of splicing that aims to explain the effects of the stochastic dynamics of snRNPs on the splicing patterns of eukaryotic genes. We consider two different ways to model the dynamics of snRNPs: pure three-dimensional diffusion and a combination of three- and one-dimensional diffusion along the emerging pre-mRNA. Our theoretical analysis shows that there exists an optimum position of the splice sites on the growing pre-mRNA at which the time required for snRNPs to find the 5′ donor site is minimized. The minimization of the overall search time is achieved mainly via the increase in non-specific interactions between the snRNPs and the growing pre-mRNA. The theory further predicts that there exists an optimum transcript length that maximizes the probabilities for exons to interact with the snRNPs. We evaluate these theoretical predictions by considering human and mouse exon microarray data as well as RNAseq data from multiple different tissues. We observe that there is a broad optimum position of splice sites on the growing pre-mRNA and an optimum transcript length, which are roughly consistent with the theoretical predictions. The theoretical and experimental analyses suggest that there is a strong interaction between the dynamics of RNAPII and the stochastic nature of snRNP search for 5′ donor splicing sites. PMID:23133354
NASA Astrophysics Data System (ADS)
Prabhu, Vijendra; Rao, Bola Sadashiva S.; Mahato, Krishna Kishore
2014-02-01
Investigations on the use of Low Level Laser Therapy (LLLT) for wound healing, especially with red laser light, have demonstrated its pro-healing potential on a variety of pre-clinical and surgical wounds. However, the effect of multiple exposures of low-dose laser irradiation on acute wound healing has so far not been much explored in well-designed pre-clinical models. The present study aimed to investigate the effect of multiple exposures of a low-dose Helium-Neon laser on the healing progression of full-thickness excision wounds in Swiss albino mice. Further, the efficacy of multiple exposures of the low dose was compared with a single exposure of the optimum dose. Full-thickness circular excision wounds of 15 mm diameter were created and subsequently illuminated with multiple exposures (1, 2, 3, 4 and 5 exposures/week until healing) of a He-Ne laser (632.8 nm, 4.02 mW/cm2) at 0.5 J/cm2, along with a single exposure of the optimum laser dose (2 J/cm2) and un-illuminated controls. Classical biophysical parameters such as contraction kinetics, area under the curve and mean healing time were documented as assessment parameters to examine the efficacy of multiple exposures of the low laser dose. Experimental findings substantiated that either single or multiple exposures of 0.5 J/cm2 failed to produce any detectable alterations in wound contraction, area under the curve or mean healing time compared to the single exposure of the optimum dose (2 J/cm2) and un-illuminated controls. A single exposure of the optimum laser dose was found to be ideal for acute wound healing.
NASA-STD-(I)-6016, Standard Materials and Processes Requirements for Spacecraft
NASA Technical Reports Server (NTRS)
Pedley, Michael; Griffin, Dennis
2006-01-01
This document is directed toward Materials and Processes (M&P) used in the design, fabrication, and testing of flight components for all NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. All flight hardware is covered by the M&P requirements of this document, including vendor designed, off-the-shelf, and vendor furnished items. Materials and processes used in interfacing ground support equipment (GSE); test equipment; hardware processing equipment; hardware packaging; and hardware shipment shall be controlled to prevent damage to or contamination of flight hardware.
Hardware development process for Human Research facility applications
NASA Astrophysics Data System (ADS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.
Hardware Development Process for Human Research Facility Applications
NASA Technical Reports Server (NTRS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths. The source of hardware requirements is the science community and HRF program. The HRF Science Working Group, consisting of scientists from various medical disciplines, defined a basic set of equipment with functional requirements. This established the performance requirements of the hardware. HRF program requirements focus on making the hardware safe and operational in a space environment. This includes structural, thermal, human factors, and material requirements. Science and HRF program requirements are defined in a hardware requirements document which includes verification methods. Once the hardware is fabricated, requirements are verified by inspection, test, analysis, or demonstration. All data is compiled and reviewed to certify the hardware for flight. Obviously, the basis for all hardware development activities is requirement definition. Full and complete requirement definition is ideal prior to initiating the hardware development. However, this is generally not the case, but the hardware team typically has functional inputs as a guide. The first step is for engineers to conduct market research based on the functional inputs provided by scientists. Commercially available products are evaluated against the science requirements as well as modifications needed to meet program requirements. Options are consolidated and the hardware development team reaches a hardware development decision point. Within budget and schedule constraints, the team must decide whether to complete the hardware as an in-house development, a subcontract with a vendor, or a commercial-off-the-shelf (COTS) development. An in-house development indicates NASA personnel or a contractor builds the hardware at a NASA site. A subcontract development is completed off-site by a commercial company. A COTS item is a vendor product available by ordering a specific part number. The team evaluates the pros and cons of each development path. For example, in-house developments utilize existing corporate knowledge regarding how to build equipment for use in space. However, technical expertise would be required to fully understand the medical equipment capabilities, such as for an ultrasound system. It may require additional time and funding to gain the expertise that commercially exists. The major benefit of subcontracting a hardware development is that the product is delivered as an end-item and commercial expertise is utilized. On the other hand, NASA has limited control over schedule delays. The final option of COTS or modified COTS equipment is a compromise between in-house and subcontracts. A vendor product may exist that meets all functional requirements but requires in-house modifications for successful operation in a space environment.
The HRF utilizes equipment developed using all of the paths described: in-house, subcontract, and modified COTS.
Summary of materials and hardware performance on LDEF
NASA Technical Reports Server (NTRS)
Dursch, Harry; Pippin, Gary; Teichman, Lou
1993-01-01
A wide variety of materials and experiment support hardware were flown on the Long Duration Exposure Facility (LDEF). Postflight testing has determined the effects of almost 6 years of low-earth orbit (LEO) exposure on this hardware. An overview of the results is presented. Hardware discussed includes adhesives, fasteners, lubricants, data storage systems, solar cells, seals, and the LDEF structure. Lessons learned from the testing and analysis of LDEF hardware are also presented.
Ensuring a C2 Level of Trust and Interoperability in a Networked Windows NT Environment
1996-09-01
addition, it should be noted that the device drivers, microkernel, memory manager, and Hardware Abstraction Layer are all hardware dependent. a. The...Executive The executive is further divided into three conceptual layers which are referred to as the Hardware Abstraction Layer (HAL), the Microkernel, and...[Figure 3: layered architecture showing Executive subsystems, I/O Manager, Cache Manager, File Systems, Microkernel, Device Drivers, Hardware Abstraction Layer, and hardware.]
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization. PMID:27330230
Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C
2015-03-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization.
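The two-layer structure both records mention can be seen in a small sketch: for a fixed design, the inner layer fits the rival model as well as possible and the leftover lack of fit is the T-criterion value, which the outer layer (here just a comparison of two candidate designs, with an illustrative model pair) would maximize.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative model pair (an assumption, not from the papers): a quadratic
# "true" model eta1 and a straight-line rival model eta2.
eta1 = lambda x: 1.0 + 2.0 * x + 0.5 * x**2
eta2 = lambda x, th: th[0] + th[1] * x

def t_criterion(points, weights):
    """Inner layer of the T-optimality problem: fit the rival model as well
    as possible; the remaining weighted lack of fit is the criterion value.
    The outer layer (not shown) maximizes this over designs."""
    def lack_of_fit(th):
        r = eta1(np.asarray(points)) - eta2(np.asarray(points), th)
        return float(np.sum(weights * r**2))
    return minimize(lack_of_fit, x0=np.zeros(2), method="Nelder-Mead").fun

# Compare two candidate 3-point designs on [-1, 1] with equal weights.
w = np.full(3, 1 / 3)
print(t_criterion([-1.0, 0.0, 1.0], w))   # larger value: better discrimination
print(t_criterion([-0.2, 0.0, 0.2], w))   # clustered design: nearly zero
```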
Simulation and optimum design of hybrid solar-wind and solar-wind-diesel power generation systems
NASA Astrophysics Data System (ADS)
Zhou, Wei
Solar and wind energy systems are considered as promising power generating sources due to their availability and topological advantages in local power generation. However, a drawback, common to solar and wind options, is their unpredictable nature and dependence on weather changes; both of these energy systems would have to be oversized to make them completely reliable. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two resources in a proper combination to form a hybrid system. However, with the increased complexity in comparison with single energy systems, optimum design of a hybrid system becomes more complicated. In order to utilize renewable energy resources efficiently and economically, an optimal sizing method is necessary. This thesis developed an optimal sizing method to find the global optimum configuration of stand-alone hybrid (both solar-wind and solar-wind-diesel) power generation systems. Using a Genetic Algorithm (GA), the optimal sizing method calculates the system optimum configuration that guarantees the lowest investment with full use of the PV array, wind turbine and battery bank. For the hybrid solar-wind system, the optimal sizing method is developed based on the Loss of Power Supply Probability (LPSP) and the Annualized Cost of System (ACS) concepts. The optimization procedure aims to find the configuration that yields the best compromise between the two considered objectives: LPSP and ACS. The decision variables, which need to be optimized in the optimization process, are the PV module capacity, wind turbine capacity, battery capacity, PV module slope angle and wind turbine installation height. For the hybrid solar-wind-diesel system, minimization of the system cost is achieved not only by selecting an appropriate system configuration, but also by finding a suitable control strategy (starting and stopping point) for the diesel generator. The optimal sizing method was developed to find the system optimum configuration and settings that can achieve the custom-required Renewable Energy Fraction (fRE) of the system with minimum Annualized Cost of System (ACS). Due to the need for optimum design of the hybrid systems, an analysis of local weather conditions (solar radiation and wind speed) was carried out for the potential installation site, and mathematical simulation of the hybrid systems' components was also carried out, including the PV array, wind turbine and battery bank. By statistically analyzing the long-term hourly solar radiation and wind speed data, the Hong Kong area is found to have favorable solar and wind power resources compared with other areas, which supports practical applications in the Hong Kong and Guangdong areas. Simulation of PV array performance includes three main parts: modeling of the maximum power output of the PV array, calculation of the total solar radiation on any tilted surface with any orientation, and PV module temperature predictions. Five parameters are introduced to account for the complex dependence of PV array performance upon solar radiation intensities and PV module temperatures. The developed simulation model was validated using field-measured data from an existing building-integrated photovoltaic (BIPV) system in Hong Kong, and good simulation performance of the model was achieved.
Lead-acid batteries used in hybrid systems operate under very specific conditions, which often make it difficult to predict when energy will be extracted from or supplied to the battery. In this thesis, the lead-acid battery performance is simulated by three different characteristics: battery state of charge (SOC), battery floating charge voltage and the expected battery lifetime. Good agreement was found between the predicted values and the field-measured data of a hybrid solar-wind project. Finally, a 19.8 kW hybrid solar-wind power generation project, designed by the optimal sizing method and set up to supply power for a telecommunication relay station on a remote island of Guangdong province, was studied. Simulation and experimental results about the operating performance and characteristics of the hybrid solar-wind project have demonstrated the feasibility and accuracy of the recommended optimal sizing method developed in this thesis.
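A skeletal version of the GA sizing idea (placeholder LPSP and cost models with made-up coefficients; the thesis computes LPSP from hourly weather data) penalizes configurations that miss the reliability target and lets selection drive the annualized cost down.

```python
import numpy as np

rng = np.random.default_rng(3)
LPSP_MAX = 0.01          # required loss-of-power-supply probability

def lpsp(pv_kw, wt_kw, batt_kwh):
    """Placeholder LPSP model (assumption; the thesis derives it from hourly
    weather data): more capacity of any kind lowers the shortage probability."""
    return float(np.exp(-0.08 * (pv_kw + 1.2 * wt_kw) - 0.05 * batt_kwh))

def acs(pv_kw, wt_kw, batt_kwh):
    """Placeholder annualized cost of system, in arbitrary currency units."""
    return 300 * pv_kw + 450 * wt_kw + 120 * batt_kwh

def penalized_cost(cfg):
    """ACS plus a heavy penalty when the reliability requirement is missed."""
    pv, wt, batt = cfg
    penalty = 1e6 * max(0.0, lpsp(pv, wt, batt) - LPSP_MAX)
    return acs(pv, wt, batt) + penalty

# Plain genetic algorithm: truncation selection plus Gaussian mutation.
pop = rng.uniform(0, 60, (50, 3))
for _ in range(300):
    pop = pop[np.argsort([penalized_cost(c) for c in pop])]
    elite = pop[:10]
    pop = np.clip(np.vstack([elite,
                             np.repeat(elite, 4, axis=0)
                             + rng.normal(0, 1.0, (40, 3))]), 0, None)
best = min(pop, key=penalized_cost)
print(best.round(1), round(penalized_cost(best)), lpsp(*best) <= LPSP_MAX)
```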
Development and characteristics of the hardware for Skylab experiment S015
NASA Technical Reports Server (NTRS)
Thirolf, R. G.
1975-01-01
Details are given regarding the hardware for the Skylab S015 experiment, which was designed to detect the effects of zero gravity on cell growth rates. Experience gained in hardware-related considerations is presented for use of researchers concerned with future research of this type and further study of the S015 results. Brief descriptions are given of the experiment hardware, the hardware configuration for the critical design review, the major configuration changes, the final configuration, and the postflight review and analysis. An appendix describes pertinent documentation, film, and hardware that are available to qualified researchers; sources for additional or special information are given.
Use of CCSDS Packets Over SpaceWire to Control Hardware
NASA Technical Reports Server (NTRS)
Haddad, Omar; Blau, Michael; Haghani, Noosha; Yuknis, William; Albaijes, Dennis
2012-01-01
For the Lunar Reconnaissance Orbiter, the Command and Data Handling subsystem consisted of several electronic hardware assemblies that were connected with SpaceWire serial links. Electronic hardware would be commanded/controlled and telemetry data obtained using the SpaceWire links. Prior art focused on parallel data buses and other types of serial buses, which were not compatible with SpaceWire and the core flight executive (CFE) software bus. This innovation applies to anything that utilizes both SpaceWire networks and the CFE software. The CCSDS (Consultative Committee for Space Data Systems) packet contains predetermined values in its payload fields that electronic hardware attached at the terminus of the SpaceWire node would decode, interpret, and execute. The hardware's interpretation of the packet data would enable the hardware to change its state/configuration (command) or generate status (telemetry). The primary purpose is to provide an interface that is compatible with the hardware and the CFE software bus. By specifying the format of the CCSDS packet, it is possible to specify how the hardware is to be built (in terms of digital logic), resulting in a hardware design that can be controlled by the CFE software bus in the final application.
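For reference, the fixed part of a CCSDS space packet is the 6-byte primary header; a short sketch packs it in Python (the command payload bytes and APID are hypothetical; the actual field values the LRO hardware decodes are mission-specific).

```python
import struct

def ccsds_primary_header(apid, seq_count, payload_len,
                         version=0, pkt_type=1, sec_hdr=0, seq_flags=0b11):
    """Pack the 6-byte CCSDS Space Packet primary header. pkt_type=1 marks a
    telecommand here; the payload fields that the hardware decodes into
    commands (as in the abstract) follow this header."""
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    length = payload_len - 1          # CCSDS: data length field = octets - 1
    return struct.pack(">HHH", word1, word2, length)

# Hypothetical command packet: APID 0x123, 4-byte payload telling the
# SpaceWire-attached hardware to change state.
packet = ccsds_primary_header(0x123, seq_count=7, payload_len=4) + bytes([1, 0, 0, 2])
print(packet.hex())  # -> 1123c007000301000002
```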
Semiannual Technical Summary, 1 April-30 September 1993
1993-12-01
Outage log: hardware failure, 11 Jul 2200 to 12 Jul 0531; hardware failure, 12 Jul 0744 to 1307; hardware service, 10 Aug 0821 to 1514; line failure, 29 Aug 1000 to 30 Aug 1211; line failure, 08 Sep 1518 to 09 Sep 0428; line failure, 10 Sep 0821 to 1030; hardware failure, 18 Sep 0817 to ...repair. Between 8 September 1306 hrs and 9 September 0428 hrs all communications systems were affected (13.5 hrs). Reduced 01B performance started 10
Hardware and software reliability estimation using simulations
NASA Technical Reports Server (NTRS)
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
NASA Technical Reports Server (NTRS)
1999-01-01
The full complement of EDOMP investigations called for a broad spectrum of flight hardware ranging from commercial items, modified for spaceflight, to custom designed hardware made to meet the unique requirements of testing in the space environment. In addition, baseline data collection before and after spaceflight required numerous items of ground-based hardware. Two basic categories of ground-based hardware were used in EDOMP testing before and after flight: (1) hardware used for medical baseline testing and analysis, and (2) flight-like hardware used both for astronaut training and medical testing. To ensure post-landing data collection, hardware was required at both the Kennedy Space Center (KSC) and the Dryden Flight Research Center (DFRC) landing sites. Items that were very large or sensitive to the rigors of shipping were housed permanently at the landing site test facilities. Therefore, multiple sets of hardware were required to adequately support the prime and backup landing sites plus the Johnson Space Center (JSC) laboratories. Development of flight hardware was a major element of the EDOMP. The challenges included obtaining or developing equipment that met the following criteria: (1) compact (small size and light weight), (2) battery-operated or requiring minimal spacecraft power, (3) sturdy enough to survive the rigors of spaceflight, (4) quiet enough to pass acoustics limitations, (5) shielded and filtered adequately to assure electromagnetic compatibility with spacecraft systems, (6) user-friendly in a microgravity environment, and (7) accurate and efficient operation to meet medical investigative requirements.
Liquid Nitrogen Removal of Critical Aerospace Materials
NASA Technical Reports Server (NTRS)
Noah, Donald E.; Merrick, Jason; Hayes, Paul W.
2005-01-01
Identification of innovative solutions to unique materials problems is an everyday quest for members of the aerospace community. Finding a technique that minimizes costs, maximizes throughput, and generates quality results is always the target. United Space Alliance materials engineers recently conducted such a search in their drive to return the Space Shuttle fleet to operational status. The removal of high-performance thermal coatings from solid rocket motors represents a formidable task during post-flight disassembly of reusable expended hardware. Removing these coatings from unfired motors increases the complexity and safety requirements while reducing the available facilities and approved processes. A temporary solution to this problem was identified, tested, and approved during the Solid Rocket Booster (SRB) return-to-flight activities. Utilization of ultra-high-pressure liquid nitrogen (LN2) to strip the protective coating from assembled Space Shuttle hardware marked the first such use of the technology in the aerospace industry. This process provides a configurable stream of liquid nitrogen at pressures of up to 55,000 psig. The one-time certification for the removal of thermal ablatives from SRB hardware involved extensive testing to ensure adequate material removal without causing undesirable damage to the residual materials or aluminum substrates. Testing to establish appropriate process parameters, such as flow, temperature, and pressure of the liquid nitrogen stream, provided an initial benchmark for process testing. Equipped with these initial parameters, engineers were then able to establish more detailed test criteria that set the process limits. Quantifying the potential for aluminum hardware damage represented the greatest hurdle in satisfying engineers as to the safety of this process. Extensive testing for aluminum erosion, surface profiling, and substrate weight loss was performed. This successful project clearly demonstrated that the liquid nitrogen jet possesses unique strengths that align remarkably well with the unusual challenges that space hardware and missile manufacturers face on a regular basis. Performance of this task within the confines of a critical manufacturing facility marks a milestone in advanced processing.
Philp, Helen; Durand, Alexane; De Vicente, Felipe
2018-06-01
Objectives This study aimed to define a safe corridor for 2.7 mm cortical sacroiliac screw insertion in the dorsal plane (craniocaudal direction) using radiography and CT, and in the transverse plane (dorsoventral direction) using CT in feline cadavers. A further aim was to compare the values obtained by CT with those previously reported by radiography in the transverse plane. Methods Thirteen pelvises were retrieved from feline cadavers and dissected to expose one of the articular surfaces of the sacrum. A 2.7 mm screw was placed in the sacrum to a depth of approximately 1 cm in each exposed articular surface. Dorsoventral radiography and CT scanning of each specimen were performed. Multiplanar reconstructions were performed to allow CT evaluation in both the dorsal and transverse planes. Calculations were made to find the maximum, minimum and optimum angles for screw placement in craniocaudal (radiography and CT) and dorsoventral (CT) directions when using a 2.7 mm cortical screw. Results Radiographic measurement showed a mean optimum craniocaudal angle of 106° (range 97-112°). The mean minimum angle was 95° (range 87-107°), whereas the mean maximum angle was 117° (108-124°). Measurement of the dorsal CT scan images showed a mean optimum craniocaudal angle of 101° (range 94-110°). The mean minimum angle was 90° (range 83-99°), whereas the mean maximum angle was 113° (104-125°). The transverse CT scan images showed a mean dorsoventral minimum angle of 103° (range 95-113°), mean maximum angle of 115° (104-125°) and mean optimum dorsoventral angle of 111° (102-119°). Conclusions and relevance An optimum craniocaudal angle of 101° is recommended for 2.7 mm cortical screw placement in the feline sacral body, with a safety margin between 99° and 104°. No single angle can be recommended in the dorsoventral direction and therefore preoperative measuring on individual cats using CT images is recommended to establish the ideal individual angle in the transverse plane.
Hardware design for the Autonomous Visibility Monitoring (AVM) observatory
NASA Technical Reports Server (NTRS)
Cowles, K.
1993-01-01
The hardware for the three Autonomous Visibility Monitoring (AVM) observatories was redesigned. Changes in hardware design include electronics components, weather sensors, and the telescope drive system. Operation of the new hardware is discussed, as well as some of its features. The redesign will allow reliable automated operation.
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 4 2011-10-01 2011-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 4 2014-10-01 2014-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 4 2012-10-01 2012-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
49 CFR 238.105 - Train electronic hardware and software safety.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 4 2013-10-01 2013-10-01 false Train electronic hardware and software safety. 238... and General Requirements § 238.105 Train electronic hardware and software safety. The requirements of this section apply to electronic hardware and software used to control or monitor safety functions in...
Door Hardware and Installations; Carpentry: 901894.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The curriculum guide outlines a course designed to provide instruction in the selection, preparation, and installation of hardware for door assemblies. The course is divided into five blocks of instruction (introduction to doors and hardware, door hardware, exterior doors and jambs, interior doors and jambs, and a quinmester post-test) totaling…
Hardware acceleration and verification of systems designed with hardware description languages (HDL)
NASA Astrophysics Data System (ADS)
Wisniewski, Remigiusz; Wegrzyn, Marek
2005-02-01
Hardware description languages (HDLs) allow the creation of ever-larger designs; the size of prototyped systems now often exceeds a million gates, so the verification process for such designs can take several hours or even days. This problem can be addressed by hardware acceleration of simulation.
Meteorological Sensor Array (MSA)-Phase I. Volume 3 (Pre-Field Campaign Sensor Calibration)
2015-07-01
turbulence impact of the WSMR solar array. 4) Designing, developing, testing, and evaluating integrated Data Acquisition System (DAS) hardware and... ARL-TR-7362, July 2015, US Army Research Laboratory.
Effects on Fuel Consumption and Diesel Engine Deposits from Nano-Particle Oil Additive
2010-07-01
Precision Guided Munitions: Constructing a Bomb More Potent Than the A-Bomb
2002-06-01
prototypes, cannibalization for spare parts throughout testing made it increasingly difficult to assemble an entire set of working hardware by the...individual American city finds itself under sporadic attack by a lone urban guerilla. Indeed, such an individual might well feel invulnerable as a...sniper in a crowded urban environment. In fact, law enforcement officers, using advanced technologies such as Lawrence Livermore’s Lifeguard system
NASA Technical Reports Server (NTRS)
Ellenberger, Richard; Duvall, Laura; Dory, Jonathan
2016-01-01
The ISS Payload Human Factors Implementation Team (HFIT) is the Payload Developer's resource for Human Factors. HFIT is the interface between Payload Developers and the ISS Payload Human Factors requirements in SSP 57000. HFIT provides recommendations on how to meet the Human Factors requirements and guidelines early in the design process, and coordinates with the Payload Developer and the Astronaut Office to find low-cost solutions to Human Factors challenges for hardware operability issues.
N-body simulations of star clusters
NASA Astrophysics Data System (ADS)
Engle, Kimberly Anne
1999-10-01
We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package ``Starlab.'' The GRAPE-4 system is a massively-parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator ``kira'' employs a 4th- order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
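A compact sketch of the 4th-order Hermite predictor-corrector at the heart of such integrators, with a single shared time step standing in for the hierarchical block steps, and NumPy standing in for the GRAPE-4's direct-sum force pipeline; units are arbitrary with G = 1.

```python
# Minimal 4th-order Hermite scheme (Makino & Aarseth style), for clarity
# using one shared step; production codes assign each star a power-of-two
# "block" step instead.
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-4):
    """Direct-sum accelerations and jerks (the part GRAPE hardware computes)."""
    n = len(mass)
    acc = np.zeros((n, 3)); jerk = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]; dv = vel[j] - vel[i]
            r2 = dr @ dr + eps2
            r3 = r2 * np.sqrt(r2)
            acc[i] += mass[j] * dr / r3
            jerk[i] += mass[j] * (dv / r3 - 3 * (dr @ dv) * dr / (r3 * r2))
    return acc, jerk

def hermite_step(pos, vel, mass, dt):
    a0, j0 = acc_jerk(pos, vel, mass)
    # predictor: Taylor expansion to 3rd order in dt
    pos_p = pos + vel*dt + a0*dt**2/2 + j0*dt**3/6
    vel_p = vel + a0*dt + j0*dt**2/2
    a1, j1 = acc_jerk(pos_p, vel_p, mass)
    # corrector: 4th-order update using both force evaluations
    vel_c = vel + (a0 + a1)*dt/2 + (j0 - j1)*dt**2/12
    pos_c = pos + (vel + vel_c)*dt/2 + (a0 - a1)*dt**2/12
    return pos_c, vel_c
```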
Speed challenge: a case for hardware implementation in soft-computing
NASA Technical Reports Server (NTRS)
Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.
2000-01-01
For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all of our research activities, beyond the promise of an enabling technology, has been the creation of a niche that imparts an orders-of-magnitude speed advantage through implementation in parallel processing hardware, with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware, with selected application examples requiring real-time response capabilities.
Open-source hardware for medical devices
2016-01-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device. PMID:27158528
Open-source hardware for medical devices.
Niezen, Gerrit; Eslambolchilar, Parisa; Thimbleby, Harold
2016-04-01
Open-source hardware is hardware whose design is made publicly available so anyone can study, modify, distribute, make and sell the design or the hardware based on that design. Some open-source hardware projects can potentially be used as active medical devices. The open-source approach offers a unique combination of advantages, including reduced costs and faster innovation. This article compares 10 open-source healthcare projects in terms of how easy it is to obtain the required components and build the device.
Measuring the Impact of Business Rules on Inventory Balancing
2013-09-01
The Navy ERP system enables inventory to be redistributed across sites to help maintain optimum inventory levels. Holding too much inventory is...not unique to the Navy. In fact, the complexity of this problem is only magnified for competitive firms that are hesitant to share sensitive data with...lateral transshipment problems makes finding an analytical solution extremely difficult. The strength of simulation models lies within their ability
Use of eQTL Analysis for the Discovery of Target Genes Identified by GWAS
2014-04-01
technology. Cases having a RIN number of 7.0 or greater were considered good quality. Once completed, the optimum set of 500 samples was then selected for... Award Number: W81XWH-11-1-0261. Title: Use of eQTL Analysis for the Discovery... The views, opinions and/or findings contained in this report are those of the author(s) and
Morphology and Performance of Polymer Solar Cell Characterized by DPD Simulation and Graph Theory.
Du, Chunmiao; Ji, Yujin; Xue, Junwei; Hou, Tingjun; Tang, Jianxin; Lee, Shuit-Tong; Li, Youyong
2015-11-19
The morphology of the active layers in bulk heterojunction (BHJ) solar cells is critical to the performance of organic photovoltaics (OPV). Currently, transmission electron microscopy (TEM) techniques provide only limited information about the morphology, and there are few approaches for predicting the morphology/efficiency of OPV. Here we use Dissipative Particle Dynamics (DPD) to determine the 3D morphology of BHJ solar cells and show DPD to be an efficient approach for predicting 3D morphology. Based on the 3D morphology, we estimate the performance indicator of BHJ solar cells by using graph theory. Specifically, we study poly(3-hexylthiophene)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM) BHJ solar cells. We find that, when the volume fraction of PCBM is in the range 0.4-0.5, P3HT/PCBM shows bi-continuous morphology and optimum performance, consistent with experimental results. Further, the optimum temperature (413 K) for the morphology and performance of P3HT/PCBM is in accord with annealing results. We find that solvent additive plays a critical role in the desolvation process of the P3HT/PCBM BHJ solar cell. Our approach provides a direct method to predict dynamic 3D morphology and a performance indicator for BHJ solar cells.
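One graph-theoretic indicator of a bi-continuous morphology is whether each phase percolates between the electrodes. A toy illustration of such a test (not the paper's code) on a voxelized morphology, using breadth-first search on the voxel adjacency graph:

```python
# Treat the DPD morphology as a 3D occupancy grid and test whether a phase
# connects the top face (z = 0) to the bottom face (z = nz - 1), i.e. whether
# a charge-transport path exists between the electrodes.
from collections import deque
import numpy as np

def percolates_z(phase: np.ndarray) -> bool:
    """phase: boolean (nx, ny, nz) array, True where the phase is present."""
    nx, ny, nz = phase.shape
    seen = np.zeros_like(phase, dtype=bool)
    queue = deque((x, y, 0) for x in range(nx) for y in range(ny)
                  if phase[x, y, 0])
    for c in queue:
        seen[c] = True
    while queue:
        x, y, z = queue.popleft()
        if z == nz - 1:
            return True          # reached the opposite electrode
        for dx, dy, dz in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            i, j, k = x + dx, y + dy, z + dz
            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz \
               and phase[i, j, k] and not seen[i, j, k]:
                seen[i, j, k] = True
                queue.append((i, j, k))
    return False
```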
Optimizing Street Canyon Orientation for Rajarhat Newtown, Kolkata, India
NASA Astrophysics Data System (ADS)
De, Bhaskar; Mukherjee, Mahua
2017-12-01
Air temperature in urban street canyons is elevated by the morphed urban geometry, increased surface area, decreased long-wave radiation and evapotranspiration, the different thermo-physical properties of surface materials, and anthropogenic heat, all of which result in thermal discomfort. Outdoor thermal stress can be mitigated substantially by properly orienting the canyons, so it is crucial for urban planners and designers to orient street canyons optimally for the local climatic context. This is especially important for cities in a warm, humid climate, which receive high insolation with high relative humidity and low-level macro wind flow. This paper examines the influence of canyon orientation on outdoor thermal comfort and proposes the optimum canyon orientation for Rajarhat Newtown, Kolkata, a city in the warm, humid climate zone. Scenarios are generated with different orientations, and the changes in air temperature, wind speed, Mean Radiant Temperature (MRT), and Physiological Equivalent Temperature (PET) across scenarios are compared through parametric simulation in ENVI_met to find the optimum orientation. Analysis of the simulation results shows that orientation angles between 30° and 60° to north perform best for the study area of Rajarhat Newtown. The findings of this research will help planners orient street canyons optimally for future development and extension of Rajarhat Newtown, Kolkata.
Optimization of diesel engine performance by the Bees Algorithm
NASA Astrophysics Data System (ADS)
Azfanizam Ahmad, Siti; Sunthiram, Devaraj
2018-03-01
Biodiesel has recently been receiving great attention in the world market due to the depletion of existing fossil fuels. Biodiesel is also an alternative to diesel No. 2 fuel, with characteristics such as being biodegradable and oxygenated. However, there is evidence that biodiesel does not perform identically to diesel No. 2 fuel; in particular, its use has been reported to increase brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC, in order to optimize the operating condition of a diesel engine running on biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) over the biodiesel percentage in the fuel mixture, engine speed, and engine load. The result showed that a brake power of 58.33 kW, a brake torque of 310.33 N·m, and a BSFC of 200.29 g/(kW·h) were the optimum values. Compared with the values obtained by other algorithms, the BA produced comparable brake power and better brake torque and BSFC. This finding shows that the BA can be used to optimize diesel engine performance based on the optimum values of brake power, brake torque, and BSFC.
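For illustration, a minimal Bees Algorithm sketch under stated assumptions: the surrogate objective `bsfc` and the bounds on (blend %, speed, load) are invented placeholders standing in for the study's engine model, not its actual response surface.

```python
# Toy Bees Algorithm: scout bees sample the search space, the best sites
# recruit forager bees for a local neighbourhood search, and the remaining
# scouts keep exploring at random.
import random

def bsfc(x):  # hypothetical surrogate to minimize, not the study's model
    b, n, l = x
    return 180 + 0.02*(b - 20)**2 + 1e-5*(n - 2200)**2 + 0.005*(l - 80)**2

BOUNDS = [(0, 100), (1200, 3000), (20, 100)]  # blend %, rpm, load % (assumed)

def random_site():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def neighbour(site, radius=0.05):
    return [min(hi, max(lo, v + random.uniform(-1, 1)*radius*(hi - lo)))
            for v, (lo, hi) in zip(site, BOUNDS)]

def bees_algorithm(n_scouts=30, n_best=5, n_elite=2,
                   foragers_best=10, foragers_rest=4, iters=200):
    sites = [random_site() for _ in range(n_scouts)]
    for _ in range(iters):
        sites.sort(key=bsfc)
        new = []
        for rank, site in enumerate(sites[:n_best]):
            n_foragers = foragers_best if rank < n_elite else foragers_rest
            local = [neighbour(site) for _ in range(n_foragers)] + [site]
            new.append(min(local, key=bsfc))        # best bee per site
        new += [random_site() for _ in range(n_scouts - n_best)]
        sites = new
    return min(sites, key=bsfc)

best = bees_algorithm()
print(best, bsfc(best))
```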
Reilly, Danielle; Kamineni, Srinath
2016-01-01
Bursitis is a common medical condition, and of all the bursae in the body, the olecranon bursa is one of the most frequently affected. Bursitis at this location can be acute or chronic in timing and septic or aseptic. Distinguishing between septic and aseptic bursitis can be difficult, and the current literature is not clear on the optimum length or route of antibiotic treatment for septic cases. The current literature was reviewed to clarify these points. The reported data for olecranon bursitis were compiled from the current literature. The most common physical examination findings were tenderness (88% septic, 36% aseptic), erythema/cellulitis (83% septic, 27% aseptic), warmth (84% septic, 56% aseptic), report of trauma or evidence of a skin lesion (50% septic, 25% aseptic), and fever (38% septic, 0% aseptic). General laboratory data ranges were also summarized. Distinguishing between septic and aseptic olecranon bursitis can be difficult because the physical and laboratory data overlap. Evidence for the optimum length and route of antibiotic treatment for septic cases also differs. In this review we have presented the current data of offending bacteria, frequency of key physical examination findings, ranges of reported laboratory data, and treatment practices so that clinicians might have a better guide for treatment. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Finding stable cellulase and xylanase: evaluation of the synergistic effect of pH and temperature.
Farinas, Cristiane S; Loyo, Marcel Moitas; Baraldo, Anderson; Tardioli, Paulo W; Neto, Victor Bertucci; Couri, Sonia
2010-12-31
Ethanol from lignocellulosic biomass has been recognized as one of the most promising alternatives for the production of renewable and sustainable energy. However, one of the major bottlenecks holding back its commercialization is the high cost of the enzymes needed for biomass conversion. In this work, we studied the enzymes produced by a selected strain of Aspergillus niger under solid-state fermentation. The cellulase and xylanase enzymatic cocktail was characterized in terms of pH and temperature by using response surface methodology. Thermostability and kinetic parameters were also determined. The statistical analysis of pH and temperature effects on enzymatic activity showed a synergistic interaction of these two variables, thus enabling identification of a pH and temperature range in which the enzymes have higher activity. The results obtained allowed the construction of mathematical models used to predict endoglucanase, β-glucosidase, and xylanase activities under different pH and temperature conditions. Optimum temperature values for all three enzymes were found to be in the range between 35°C and 60°C, and the optimum pH range was between 4 and 5.5. The methodology employed here was very effective in estimating enzyme behavior under different process conditions. Copyright © 2010 Elsevier B.V. All rights reserved.
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators and local search methods for TSPs are first analyzed in terms of their solution quality and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed, which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the number of cities ranges from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can robustly find tours with a fixed small population and a limited family-competition length in reasonable time when used to solve large TSPs.
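A skeleton of this kind of memetic EA on a toy random instance, with order crossover and randomized 2-opt swapped in as much simpler stand-ins for EAX and Lin-Kernighan (the paper's operators are far stronger in practice):

```python
# Steady-state memetic EA for a toy Euclidean TSP: crossover two parents,
# polish the child with 2-opt local search, replace the worst tour.
import random, math

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    n = len(p1); a, b = sorted(random.sample(range(n), 2))
    child = [None]*n; child[a:b] = p1[a:b]
    kept = set(p1[a:b])
    fill = [c for c in p2 if c not in kept]
    for i in list(range(b, n)) + list(range(a)):
        child[i] = fill.pop(0)
    return child

def two_opt(tour, pts, tries=200):
    """Randomized 2-opt: accept segment reversals that shorten the tour."""
    n = len(tour)
    for _ in range(tries):
        i, j = sorted(random.sample(range(n), 2))
        if j - i < 2:
            continue
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
        if tour_len(cand, pts) < tour_len(tour, pts):
            tour = cand
    return tour

pts = [(random.random(), random.random()) for _ in range(60)]
pop = [two_opt(random.sample(range(60), 60), pts) for _ in range(20)]
for _ in range(50):
    p1, p2 = random.sample(pop, 2)
    child = two_opt(order_crossover(p1, p2), pts)
    worst = max(range(len(pop)), key=lambda k: tour_len(pop[k], pts))
    if tour_len(child, pts) < tour_len(pop[worst], pts):
        pop[worst] = child   # steady-state replacement
best = min(pop, key=lambda t: tour_len(t, pts))
print(round(tour_len(best, pts), 3))
```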
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2009-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.
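The simulator-independent language these two records describe is illustrated below in the style of the PyNN API (assuming PyNN >= 0.8 and an installed NEST backend); only the import line selects the backend, so the same script could target NEURON or a neuromorphic hardware backend where one is available.

```python
# Sketch of a backend-agnostic PyNN experiment description.
import pyNN.nest as sim   # swap this import for a hardware backend module

sim.setup(timestep=0.1)   # ms; hardware backends may ignore or quantize this

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))
neurons = sim.Population(100, sim.IF_cond_exp())
sim.Projection(stim, neurons, sim.FixedProbabilityConnector(0.2),
               synapse_type=sim.StaticSynapse(weight=0.004, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)           # biological milliseconds

spikes = neurons.get_data().segments[0].spiketrains
print(sum(len(st) for st in spikes), "spikes recorded")
sim.end()
```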
Data to hardware binding with physical unclonable functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamlet, Jason
The various technologies presented herein relate to binding data (e.g., software) to hardware, wherein the hardware is to utilize the data. The generated binding can be utilized to detect whether at least one of the hardware or the data has been modified between an initial moment (enrollment) and a later moment (authentication). During enrollment, an enrollment value is generated that includes a signature of the data, a first response from a PUF located on the hardware, and a code word. During authentication, a second response from the PUF is utilized to authenticate any of the content in the enrollment value, and based upon the authentication, a determination can be made regarding whether the hardware and/or the data have been modified. If modification is detected, then a mitigating operation can be performed, e.g., the hardware is prevented from utilizing the data. If no modification is detected, the data can be utilized.
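A toy fuzzy-commitment-style sketch of the enroll/authenticate flow the abstract describes; the 5x repetition code, the 80-bit PUF response, and the HMAC "signature" are illustrative stand-ins, not the patented construction.

```python
# Enrollment hides a random code word under the PUF response and binds the
# data to it with an HMAC tag; authentication re-derives the code word from
# a noisy second PUF response and verifies the tag.
import hmac, hashlib, secrets

REP = 5  # repetition-code factor (toy error correction)

def rep_encode(bits):  return [b for b in bits for _ in range(REP)]
def rep_decode(bits):  return [int(sum(bits[i:i+REP]) > REP//2)
                               for i in range(0, len(bits), REP)]
def xor(a, b):         return [x ^ y for x, y in zip(a, b)]

def enroll(data: bytes, puf_response: list):
    key_bits = [secrets.randbits(1) for _ in range(len(puf_response)//REP)]
    helper = xor(rep_encode(key_bits), puf_response)   # hides the code word
    tag = hmac.new(bytes(key_bits), data, hashlib.sha256).digest()
    return helper, tag                                 # the "enrollment value"

def authenticate(data: bytes, puf_response2: list, helper, tag) -> bool:
    key_bits = rep_decode(xor(helper, puf_response2))  # corrects PUF noise
    cand = hmac.new(bytes(key_bits), data, hashlib.sha256).digest()
    return hmac.compare_digest(cand, tag)

# usage: a re-measurement with few bit flips per code word still verifies
resp = [secrets.randbits(1) for _ in range(80)]
helper, tag = enroll(b"firmware image", resp)
noisy = list(resp); noisy[3] ^= 1; noisy[40] ^= 1
print(authenticate(b"firmware image", noisy, helper, tag))   # True
print(authenticate(b"tampered image", noisy, helper, tag))   # False
```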
Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator
Asaad, Sameth W.; Kapur, Mohit
2016-01-05
A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.
26 CFR 1.6050S-4 - Information reporting for payments of interest on qualified education loans.
Code of Federal Regulations, 2011 CFR
2011-04-01
... withdrawal of consent. (iii) Change in hardware or software requirements. If a change in the hardware or... access the statement, the furnisher must, prior to changing the hardware or software, provide the... inform the recipient of any change in the furnisher's contact information. (viii) Hardware and software...
Renewable Energy Generation and Storage Models | Grid Modernization | NREL
Power Hardware-in-the-Loop Testing: NREL researchers are developing combined software-and-hardware simulation testing methods known as power hardware-in-the-loop testing. (Related topics: generator, plant, and storage modeling, simulation, and validation.)
Code of Federal Regulations, 2010 CFR
2010-10-01
... customer bases for new commercial space hardware or services. 1812.7000 Section 1812.7000 Federal... PLANNING ACQUISITION OF COMMERCIAL ITEMS Commercial Space Hardware or Services 1812.7000 Prohibition on guaranteed customer bases for new commercial space hardware or services. Public Law 102-139, title III...
Issues Related to Large Flight Hardware Acoustic Qualification Testing
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Perry, Douglas C.; Kern, Dennis L.
2011-01-01
The characteristics of the acoustic test volume generated by a reverberant chamber or a circle of loudspeakers differ significantly depending on whether large flight hardware is present within the volume. The parameters responsible for these differences are normally not accounted for through analysis or acoustic tests conducted prior to qualification testing without the test hardware present. In most cases the control microphones are kept at least 2 ft away from hardware surfaces, chamber walls, and speaker surfaces to minimize the impact of the hardware on control of the sound field. However, acoustic absorption and radiation of sound by hardware surfaces may significantly alter the sound pressure field controlled within the chamber/speaker volume to a given specification. These parameters often result in an acoustic field that may under- or over-test the flight hardware. In this paper the acoustic absorption by hardware surfaces is discussed in some detail. A simple model is provided to account for some of the observations made on the Mars Science Laboratory spacecraft, which recently underwent acoustic qualification tests in a reverberant chamber.
NASA Technical Reports Server (NTRS)
Kirkpatrick, Paul D.; Trinchero, Jean-Pierre
2005-01-01
In order to support the International Space Station, as well as any future long-term human missions, vast amounts of logistical-type hardware are required to be processed through the various launch sites. This category consists of hardware such as spare parts, replacement items, and upgraded hardware, as well as samples for experiments and consumables. One attribute all these items share is that they are generally non-hazardous, at least to ground personnel. Even though the items are non-hazardous, launch site ground safety has a responsibility for the protection of personnel, the flight hardware, and launch site resources. To fulfill this responsibility, the safety organization must have knowledge of the hardware and its operations. Conversely, the hardware providers are entitled to a process that is commensurate with the hazard. Additionally, a common system should be in place that is flexible enough to account for the requirements at all launch sites, so that the hardware provider need only complete one ground safety process regardless of the launch site.
NASA Technical Reports Server (NTRS)
Brown, K. L.; Bertsch, P. J.
1986-01-01
Results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Generation (EPG)/Fuel Cell Powerplant (FCP) hardware. The EPG/FCP hardware is required for performing functions of electrical power generation and product water distribution in the Orbiter. Specifically, the EPG/FCP hardware consists of the following divisions: (1) Power Section Assembly (PSA); (2) Reactant Control Subsystem (RCS); (3) Thermal Control Subsystem (TCS); and (4) Water Removal Subsystem (WRS). The IOA analysis process utilized available EPG/FCP hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
The optimum spanning catenary cable
NASA Astrophysics Data System (ADS)
Wang, C. Y.
2015-03-01
A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
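For the level-support case, the quoted ratio can be reproduced from the standard catenary relations; the derivation sketched below is the textbook one, not taken from the paper's text.

```latex
% Cable of weight w per unit length, supports a horizontal distance d apart
% at equal height, catenary parameter a:
%   y(x) = a\cosh(x/a),  T(x) = w\,a\cosh(x/a),
%   L = 2a\sinh(d/2a),   T_{\max} = w\,a\cosh(d/2a).
% Minimizing T_{\max} over a, with u \equiv d/(2a):
\frac{dT_{\max}}{da} = w\left[\cosh u - u\sinh u\right] = 0
\;\;\Rightarrow\;\; u\tanh u = 1,\quad u^{*}\approx 1.1997,
\qquad \frac{L}{d} = \frac{\sinh u^{*}}{u^{*}} \approx 1.258 .
```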
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
2015-01-01
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al., J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost of reaching a converged path that is an optimum under DFT by severalfold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
Protocol Independent Adaptive Route Update for VANET
Rasheed, Asim; Qayyum, Amir
2014-01-01
High relative node velocity and high active node density present challenges to existing routing approaches in highly scaled ad hoc wireless networks, such as Vehicular Ad hoc Networks (VANET). Efficient routing requires finding an optimum route with minimum delay, updating it when a better one becomes available, and repairing it on link breakage. Current routing protocols generally focus on finding and maintaining an efficient route, with much less emphasis on route updates. Adaptive route updating usually becomes impractical for dense networks due to large routing overheads. This paper presents an adaptive route update approach that can provide a solution for any baseline routing protocol. The proposed adaptation eliminates the classification into reactive and proactive protocols by treating route finding and route updating as logical conditions. PMID:24723807
NASA Technical Reports Server (NTRS)
Little, Alan; Bose, Deepak; Karlgaard, Chris; Munk, Michelle; Kuhl, Chris; Schoenenberger, Mark; Antill, Chuck; Verhappen, Ron; Kutty, Prasad; White, Todd
2013-01-01
The Mars Science Laboratory (MSL) Entry, Descent and Landing Instrumentation (MEDLI) hardware was a first-of-its-kind sensor system that gathered temperature and pressure readings on the MSL heatshield during Mars entry on August 6, 2012. MEDLI began as a challenging instrumentation problem and has been a model of collaboration across multiple NASA organizations. After the culmination of almost 6 years of effort, the sensors performed extremely well, collecting data from before atmospheric interface through parachute deploy. This paper summarizes the history of the MEDLI project and hardware development, including key lessons learned that can apply to future instrumentation efforts. MEDLI returned an unprecedented amount of high-quality engineering data from a Mars entry vehicle. We present the performance of the 3 sensor types: pressure, temperature, and isotherm tracking, as well as the performance of the custom-built sensor support electronics. A key component throughout the MEDLI project has been the ground testing and analysis effort required to understand the returned flight data. Although data analysis is ongoing through 2013, this paper reveals some of the early findings on the aerothermodynamic environment that MSL encountered at Mars, the response of the heatshield material to that heating environment, and the aerodynamic performance of the entry vehicle. The MEDLI results promise to challenge our engineering assumptions and revolutionize the way we account for margins in entry vehicle design.
NASA Technical Reports Server (NTRS)
Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don
2015-01-01
Water entered the Extravehicular Mobility Unit (EMU) helmet during extravehicular activity (EVA) no. 23 aboard the International Space Station on July 16, 2013, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 liters of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based test, tear-down, and evaluation of the affected EMU hardware components determined that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator degassing function, which resulted in EMU cooling water spilling into the ventilation loop, migrating around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the Airlock Cooling Loop Recovery (ALCLR) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.
NASA Technical Reports Server (NTRS)
Steele, John; Metselaar, Carol; Peyton, Barbara; Rector, Tony; Rossato, Robert; Macias, Brian; Weigel, Dana; Holder, Don
2015-01-01
During EVA (Extravehicular Activity) No. 23 aboard the ISS (International Space Station) on July 16, 2013, water entered the EMU (Extravehicular Mobility Unit) helmet, resulting in the termination of the EVA approximately 1 hour after it began. It was estimated that 1.5 L of water had migrated up the ventilation loop into the helmet, adversely impacting the astronaut's hearing, vision, and verbal communication. Subsequent on-board testing and ground-based TT&E (Test, Tear-down, and Evaluation) of the affected EMU hardware components led to the determination that the proximate cause of the mishap was blockage of all water separator drum holes with a mixture of silica and silicates. The blockages caused a failure of the water separator function, which resulted in EMU cooling water spilling into the ventilation loop, around the circulating fan, and ultimately pushing into the helmet. The root cause of the failure was determined to be ground-processing shortcomings of the ALCLR (Airlock Cooling Loop Recovery) Ion Filter Beds, which led to various levels of contaminants being introduced into the filters before they left the ground. Those contaminants were thereafter introduced into the EMU hardware on-orbit during ALCLR scrubbing operations. This paper summarizes the failure analysis results along with identified process, hardware, and operational corrective actions that were implemented as a result of findings from this investigation.
When enough is enough: The worth of monitoring data in aquifer remediation design
NASA Astrophysics Data System (ADS)
James, Bruce R.; Gorelick, Steven M.
1994-12-01
Given the high cost of data collection at groundwater contamination remediation sites, it is becoming increasingly important to make data collection as cost-effective as possible. A Bayesian data worth framework is developed in an attempt to carry out this task for remediation programs in which a groundwater contaminant plume must be located and then hydraulically contained. The framework is applied to a hypothetical contamination problem where uncertainty in plume location and extent are caused by uncertainty in source location, source loading time, and aquifer heterogeneity. The goal is to find the optimum number and the best locations for a sequence of observation wells that minimize the expected cost of remediation plus sampling. Simplifying assumptions include steady state heads, advective transport, simple retardation, and remediation costs as a linear function of discharge rate. In the case here, an average of six observation wells was needed. Results indicate that this optimum number was particularly sensitive to the mean hydraulic conductivity. The optimum number was also sensitive to the variance of the hydraulic conductivity, annual discount rate, operating cost, and sample unit cost. It was relatively insensitive to the correlation length of hydraulic conductivity. For the case here, points of greatest uncertainty in plume presence were on average poor candidates for sample locations, and randomly located samples were not cost-effective.
Use of geostatistics in planning optimum drilling program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghose S.
1989-08-01
Application of geostatistics in the natural resources industry is well established. In a typical estimation process, statistically dependent geological data are used to predict the characteristics of a deposit. The estimator used is the best linear unbiased estimator (BLUE), and a numerical factor of confidence is also provided. The natural inhomogeneity and anisotropy of a deposit are also quantified precisely. Drilling is the most reliable way of obtaining data for mining and related industries. However, it is often difficult to decide what the optimum number of drill holes necessary for evaluation is. In this paper, sequential measures of percent variation, at the 95% confidence level, of a geological variable have been used to determine an economically optimum drilling density. A coal reserve model is used to illustrate the method and findings. Fictitious drilling data were added (within the domain of the population characteristics) in stages to obtain a point of stability, beyond which the gain was insignificant (diminishing marginal benefit). The final relations are established by graphically projecting and comparing two variables: cost and precision. By mapping the percent variation at each stage, localized areas of discrepancy can be identified. These are the locations where additional drilling is needed. The process can be controlled if performed in progressive stages and the preciseness toward stability is monitored.
Parameter optimization and stretch enhancement of AISI 316 sheet using rapid prototyping technique
NASA Astrophysics Data System (ADS)
Moayedfar, M.; Rani, A. M.; Hanaei, H.; Ahmad, A.; Tale, A.
2017-10-01
Incremental sheet forming is a flexible manufacturing process that uses point-to-point indenter force to shape a sheet metal workpiece into manufactured parts in batch production. However, a problem sometimes arising in this process is the material's low plastic point in the stress-strain diagram, which limits the amount of stretching achievable before the ultimate tensile strain point. Hence, a set of experiments was designed to find the optimum forming parameters for optimum sheet thickness distribution, with both sides of the sheet considered for surface quality improvement. A five-axis high-speed CNC milling machine was employed to deliver the proper motion based on the programming system, while the clamping system holding the sheet metal was a blank mould. Finally, an electron microscope and a roughness machine were utilized to evaluate the surface structure of the final parts, reveal any defects caused during the forming process, and measure the roughness of the final part surface. The best interaction between parameters is obtained with the optimum values, which yield a maximum sheet thickness distribution of 4.211e-01 logarithmic elongation at a depth of 24 mm with respect to the design. This study demonstrates that this rapid forming method offers an alternative route to a surface quality improvement of 65%, with a low probability of cracks and of crystal structure changes.
NASA Astrophysics Data System (ADS)
Moon, Chang-Uk; Choi, Kwang-Hwan; Yoon, Jung-In; Kim, Young-Bok; Son, Chang-Hyo; Ha, Soo-Jung; Jeon, Min-Ju; An, Sang-Young; Lee, Joon-Hyuk
2018-04-01
In this study, to investigate the performance characteristics of a vapor injection refrigeration system with an economizer at an intermediate pressure, the system was analyzed under various experimental conditions, and optimum design data for the system were obtained. The findings can be summarized as follows. The mass flow rate through the compressor increases with intermediate pressure, and the compression power input showed an increasing trend under all test conditions. The evaporation capacity increased and then decreased with intermediate pressure, reaching a maximum at a given intermediate pressure: the increased mass flow rate of the bypassed refrigerant enhanced the evaporation capacity in the low intermediate-pressure range, but as the intermediate pressure kept rising, the increased saturation temperature limited the subcooling of the liquid refrigerant after the economizer and degraded the evaporation capacity. The coefficient of performance (COP) likewise increased and then decreased with intermediate pressure under all experimental conditions, so there was an optimum intermediate pressure for maximum COP under each condition. The optimum intermediate pressure in this study was found at -99.08 kPa, the theoretical standard intermediate pressure, under all test conditions.
Emittance measurements for optimum operation of the J-PARC RF-driven H{sup −} ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, A., E-mail: akira.ueno@j-parc.jp; Ohkoshi, K.; Ikegami, K.
2015-04-08
In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H− ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz), and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H− ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). The transverse emittances of the source were measured under various conditions to find the optimum operating conditions minimizing the horizontal and vertical rms normalized emittances. The transverse emittances were most effectively reduced by operating the source with the plasma electrode temperature lower than 70°C. The optimum value of the cesium (Cs) density around the beam hole of the plasma electrode appears to be proportional to the plasma electrode temperature. Fine control of the Cs density is indispensable, since the emittances appear to increase in proportion to the excess of Cs density. Furthermore, the source should be operated with the Cs density beyond a threshold value, since the plasma meniscus shape and the ellipse parameters of the transverse emittances appear to change step-function-like at the threshold Cs value.
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
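For illustration, a minimal D-optimality evaluation under stated assumptions: an IV-bolus one-compartment model C(t) = (D/V) exp(-(CL/V) t) with additive normal error stands in for the study's model, and the dose, nominal CL and V, and error variance are invented placeholders.

```python
# D-optimality: maximize det(Fisher information) over candidate sampling
# times, evaluated at nominal parameter values (the "local" design problem).
import numpy as np

D, CL, V, sigma2 = 100.0, 4.0, 20.0, 0.05  # nominal values (assumed)

def grad_conc(t):
    """Analytic dC/dCL and dC/dV for C = (D/V) exp(-(CL/V) t)."""
    c = (D / V) * np.exp(-(CL / V) * t)
    dC_dCL = -c * t / V
    dC_dV = c * (CL * t / V**2 - 1.0 / V)
    return np.array([dC_dCL, dC_dV])

def log_det_fim(times):
    F = sum(np.outer(g := grad_conc(t), g) for t in times) / sigma2
    sign, logdet = np.linalg.slogdet(F)
    return logdet if sign > 0 else -np.inf

# crude search: compare a few two-point designs over 0-24 h
designs = [(0.5, t2) for t2 in np.linspace(2, 24, 12)]
best = max(designs, key=log_det_fim)
print("best two-sample design (h):", best)
```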
Pratt and Whitney Overview and Advanced Health Management Program
NASA Technical Reports Server (NTRS)
Inabinett, Calvin
2008-01-01
Hardware Development Activity: Design and Test Custom Multi-layer Circuit Boards for use in the Fault Emulation Unit; Logic design performed using VHDL; Layout power system for lab hardware; Work lab issues with software developers and software testers; Interface with Engine Systems personnel with performance of Engine hardware components; Perform off nominal testing with new engine hardware.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... Hardware and Software Components Thereof; Notice of Investigation AGENCY: U.S. International Trade... boxes, and hardware and software components thereof by reason of infringement of certain claims of U.S... after importation of certain set-top boxes, and hardware and software components thereof that infringe...
Energy Systems Integration Facility to Transform U.S. Energy Infrastructure
Fully integrated with hardware-in-the-loop at power capabilities, the facility offers an experimental hardware- and systems-in-the-loop capability (ESIF snapshot: cost $135M, 2013). Hardware-in-the-loop simulation is not a new concept, but adding megawatt-scale power takes...
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results of the simulation verification techniques study are summarized. The study consisted of two tasks: developing techniques for simulator hardware checkout and developing techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), a survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of the verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Independent Orbiter Assessment (IOA): Analysis of the pyrotechnics subsystem
NASA Technical Reports Server (NTRS)
Robinson, W. W.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Pyrotechnics hardware. The IOA analysis process utilized available pyrotechnics hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. The study is directed toward monochrome (black and white) picture communication, with special emphasis on the feasibility of hardware implementation. The primary factors for such a communication system in space flight application are picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations for the study were performed on the improved LIM video controller, which is computer-controlled by the META-4 CPU.
An evaluation of Skylab habitability hardware
NASA Technical Reports Server (NTRS)
Stokes, J.
1974-01-01
For effective mission performance, participants in space missions lasting 30-60 days or longer must be provided with hardware to accommodate their personal needs. Such habitability hardware was provided on Skylab. Equipment defined as habitability hardware comprised the food system, water system, sleep system, waste management system, personal hygiene system, trash management system, and entertainment equipment. Equipment not specifically defined as habitability hardware but which served that function included the Wardroom window, the exercise equipment, and the intercom system, which was occasionally used for private communications. All Skylab habitability hardware generally functioned as intended for the three missions, and most items can be considered adequate concepts for future flights of similar duration. Specific components were criticized for their shortcomings.
NASA Astrophysics Data System (ADS)
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form, taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed, in advance, using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The calculated difference in Ct values between the initial submarine and the optimum submarine is around 0.26%, with the Ct of the initial and optimum submarines being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a larger nose radius (rn) and higher L/H than the initial submarine shape, while the tail radius (rt) is smaller than that of the initial shape.
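A minimal sketch of the cubic Bezier building block such a hull-form parameterization relies on: evaluating B(t) from four control points, here for a hypothetical nose profile driven by a nose radius r_n. The control-point choices are illustrative only, not the paper's parameterization.

```python
# Sample a cubic Bezier curve B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1
#                                  + 3(1-t) t^2 P2 + t^3 P3, t in [0, 1].
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Return n points along the cubic Bezier curve through the 4 controls."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t)**3 * p0 + 3*(1 - t)**2 * t * p1
            + 3*(1 - t) * t**2 * p2 + t**3 * p3)

# hypothetical nose profile: vertical tangent at the tip, radius-controlled
r_n = 1.2
tip      = np.array([0.0, 0.0])
ctrl_1   = np.array([0.0, 0.55 * r_n])   # sets curvature near the tip
ctrl_2   = np.array([0.8, r_n])
shoulder = np.array([2.0, r_n])          # blends into the parallel mid-body
print(cubic_bezier(tip, ctrl_1, ctrl_2, shoulder)[:3])
```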
Venugopal, Paramaguru; Kasimani, Ramesh; Chinnasamy, Suresh
2018-06-21
The transportation demand in India is increasing tremendously, raising energy consumption by 4.1 to 6.1% each year from 2010 to 2050. In addition, private vehicle ownership has grown almost 10% per year over the last decade, and oil consumption reached 213 million tons in 2016, making India the third largest importer of crude oil in the world. Because of this, there is a need to promote alternative fuels (biodiesel) derived from different feedstocks for transportation. These alternative fuels have better emission characteristics than neat diesel; hence, biodiesel can be used as a direct alternative for diesel, and it can also be blended with diesel for better performance. However, the compression ratio, injection timing, injection pressure, blend ratio, air-fuel ratio, and the shape of the cylinder may all affect the performance and emission characteristics of a diesel engine. This article examines the effect of compression ratio on engine performance using a Honne oil-diesel blend and identifies the optimum compression ratio. Experiments were conducted on a Honne oil-diesel blend-fueled CI engine at variable load conditions and constant speed. To find the optimum compression ratio, experiments were carried out on a single-cylinder, four-stroke, variable compression ratio diesel engine; a compression ratio of 18:1 gave better performance than lower compression ratios. Engine performance tests were carried out at different compression ratio values. Using the experimental data, a regression model was developed and values were predicted using response surface methodology (RSM). The predicted values were validated against the experimental results, yielding a maximum error of 6.057% and an average error of 3.57%. The optimum numeric factors for the different responses were also selected using RSM.
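The response-surface step above can be illustrated with a small sketch: fit a second-order polynomial to (compression ratio, response) observations and compare predictions with measurements. The numbers below are made up for illustration; the paper's experimental data are not reproduced.

```python
import numpy as np

# Hypothetical (compression ratio, brake thermal efficiency) observations.
cr = np.array([14.0, 15.0, 16.0, 17.0, 18.0])
bte = np.array([24.1, 25.3, 26.2, 26.9, 27.4])  # efficiency, %

# Second-order response surface in one factor: y = b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(cr), cr, cr ** 2])
coef, *_ = np.linalg.lstsq(X, bte, rcond=None)

# Validate the fitted surface against the "measurements", as the paper does
# with its regression model.
pred = X @ coef
err_pct = np.abs(pred - bte) / bte * 100.0
print("max error %.3f%%, mean error %.3f%%" % (err_pct.max(), err_pct.mean()))
```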
Aerospace Safety Advisory Panel
NASA Technical Reports Server (NTRS)
1992-01-01
The results of the Panel's activities are presented in a set of findings and recommendations. Highlighted here are both improvements in NASA's safety and reliability activities and specific areas where additional gains might be realized. One area of particular concern involves the curtailment or elimination of Space Shuttle safety and reliability enhancements. Several findings and recommendations address this area of concern, reflecting the opinion that safety and reliability enhancements are essential to the continued successful operation of the Space Shuttle. It is recommended that a comprehensive and continuing program of safety and reliability improvements in all areas of Space Shuttle hardware/software be considered an inherent component of ongoing Space Shuttle operations.
2014-11-19
John Mather Maniac Lecture, November 19, 2014. Nobel Laureate John Mather presented a Maniac Talk entitled "Creating the Future: Building JWST, what it may find, and what comes next?" In this lecture, John takes a retrospective look at how the James Webb Space Telescope was started, what it can see, and what it might discover. He describes the hardware, what it was designed to observe, and speculates about the surprises it might uncover. He also outlines a possible future of space observatories: what astronomers want to build, what needs to be invented, and what they might find, even the chance of discovering life on planets around other stars.
LETTER TO THE EDITOR: Optimization of partial search
NASA Astrophysics Data System (ADS)
Korepin, Vladimir E.
2005-11-01
A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
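For scale, the well-known query count of full Grover search can be computed as below; partial search improves on it by a term that grows with the block size b = N/K (the coefficient is derived in the letter). This is a textbook reference point, not the letter's optimized partial-search count.

```python
import math

def grover_full_search_queries(N):
    """Full Grover search: about (pi/4)*sqrt(N) oracle queries to find one
    marked item among N (standard result)."""
    return math.floor((math.pi / 4.0) * math.sqrt(N))

N, K = 2 ** 20, 4
b = N // K  # block size; partial search need only locate the target's block
print(grover_full_search_queries(N), "queries for full search over", N, "items")
```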
A Gradient Taguchi Method for Engineering Optimization
NASA Astrophysics Data System (ADS)
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance robustness and convergence speed in optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. The proposed algorithm therefore offers good robustness and fast convergence compared to some hybrid genetic algorithms.
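The local-refinement half of the hybrid can be sketched as plain steepest descent; in the paper's algorithm, a Taguchi orthogonal-array screen would supply the starting point, and the quadratic objective below merely stands in for the elastic-constant misfit function.

```python
import numpy as np

def steepest_descent(f, grad, x0, step=0.05, tol=1e-8, max_iter=1000):
    """Generic steepest descent from a starting point x0 (which, in the
    hybrid described above, a Taguchi screen would provide)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Toy quadratic objective standing in for the elastic-constant misfit.
f = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])
print(steepest_descent(f, grad, [0.0, 0.0]))  # converges near (3, -1)
```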
Explore or Exploit? A Generic Model and an Exactly Solvable Case
NASA Astrophysics Data System (ADS)
Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe
2014-02-01
Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.
CHeCS: International Space Station Medical Hardware Catalog
NASA Technical Reports Server (NTRS)
2008-01-01
The purpose of this catalog is to provide a detailed description of each piece of hardware in the Crew Health Care System (CHeCS), including subpacks associated with the hardware, and to briefly describe the interfaces between the hardware and the ISS. The primary user of this document is the Space Medicine/Medical Operations ISS Biomedical Flight Controllers (ISS BMEs).
Systems Performance Laboratory | Energy Systems Integration Facility | NREL
Small Commercial Power Hardware-in-the-Loop: a test bay dedicated to small-scale power-hardware-in-the-loop (PHIL) studies of inverters and other hardware. Multi-Inverter Power Hardware-in-the-Loop: a test bay dedicated to multi-inverter PHIL studies.
2007-12-01
Keywords: Hardware-in-Loop, Piccolo, UAV, Unmanned Aerial Vehicle. The control law proposed in equation (23) is capable of tracking a maneuvering target, as examined through hardware-in-the-loop (HIL) simulation.
EVA Training and Development Facilities
NASA Technical Reports Server (NTRS)
Cupples, Scott
2016-01-01
Overview: The vast majority of US EVA (ExtraVehicular Activity) training and EVA hardware development occurs at JSC. EVA training facilities are used to develop and refine procedures and improve skills; EVA hardware development facilities test hardware to evaluate performance and certify requirement compliance. Environmental chambers enable thermal vacuum testing of hardware ranging from items as large as suits down to individual components.
Independent Orbiter Assessment (IOA): Analysis of the DPS subsystem
NASA Technical Reports Server (NTRS)
Lowery, H. J.; Haufler, W. A.; Pietz, K. C.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis/Critical Items List (FMEA/CIL) are presented. The IOA approach features a top-down analysis of the hardware to independently determine failure modes, criticality, and potential critical items. The independent analysis results corresponding to the Orbiter Data Processing System (DPS) hardware are documented. The DPS hardware is required for performing critical functions of data acquisition, data manipulation, data display, and data transfer throughout the Orbiter. Specifically, the DPS hardware consists of the following components: Multiplexer/Demultiplexer (MDM); General Purpose Computer (GPC); Multifunction CRT Display System (MCDS); Data Buses and Data Bus Couplers (DBC); Data Bus Isolation Amplifiers (DBIA); Mass Memory Unit (MMU); and Engine Interface Unit (EIU). The IOA analysis process utilized available DPS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Due to the extensive redundancy built into the DPS, the number of critical items is small. Those identified resulted from premature operation and erroneous output of the GPCs.
Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Alewine, Neal Jon
1993-01-01
Multiple instruction rollback (MIR) is a technique for rapid recovery from transient processor failures that has been implemented in hardware by researchers and in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data-flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy, while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard-removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.
NASA Technical Reports Server (NTRS)
Patton, Jeff A.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Precision Approach Radar Training System (PARTS) Training Effectiveness Evaluation
1980-08-01
as complex as PARTS, the many interactions between hardware and software could lead to such intermittent problems. Finding the sources of these...provided with the opportunity to exercise some training options such as selecting practice or review when they feel it necessary. This is not to suggest...Replay had a fast-forward". Because "Replay with Errors" is important for learning, students should not be discouraged from selecting it by the
NASA Technical Reports Server (NTRS)
Foster, Richard W.
1992-01-01
Axisymmetric and non-axisymmetric Single Stage To Orbit (SSTO) vehicles are considered extensively. The information is presented in viewgraph form and the following topics are covered: payload comparisons; payload as a percent of dry weight - a system hardware cost indicator; life cycle cost estimation; operations and support cost estimation; selected engine type; and rocket engine specific impulse calculation.
Space telescope neutral buoyancy simulations: The first two years
NASA Technical Reports Server (NTRS)
Sanders, F. G.
1982-01-01
Neutral buoyancy simulations conducted to validate the crew systems interface as it relates to space telescope on-orbit maintenance and contingency operations are discussed. The initial concept validation tests using low-fidelity mockups are described. The entire spectrum of proposed space telescope refurbishment and selected contingencies, exercised with upgraded mockups that reflect flight hardware, is reported. Findings which may be applicable to future efforts of a similar nature are presented.
Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem
NASA Technical Reports Server (NTRS)
Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanthorn, H.E.; Jaech, J.L.
Results are given of a study to determine the optimum testing scheme consisting of drawing a group of optimum size from the population being tested and retesting it, if required, in subgroups of optimum size. An exact computation of optimum grouping and subgrouping was made. Results are also given to indicate how much loss in efficiency occurs when physical limitations restrict the size of the original group. (J.R.D.)
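The flavor of this optimization can be seen in the classic Dorfman two-stage scheme, a simpler relative of the group/subgroup scheme studied here. The sketch below finds the group size minimizing the expected number of tests per item, assuming items are independently defective with probability p (an assumption of this sketch, not a statement about the report's model).

```python
def tests_per_item(k, p):
    """Expected tests per item in two-stage group testing: one pooled test
    per group of k, plus k individual retests when the pool is positive."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimum_group_size(p, k_max=100):
    """Exhaustively search group sizes 2..k_max for the minimizer."""
    return min(range(2, k_max + 1), key=lambda k: tests_per_item(k, p))

for p in (0.01, 0.05, 0.10):
    k = optimum_group_size(p)
    print(f"p={p:.2f}: optimum group size {k}, "
          f"{tests_per_item(k, p):.3f} tests/item")
```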
FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine
NASA Astrophysics Data System (ADS)
Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo
Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware to maintain the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption, to 62% and 52% of TCAM's, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
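For context, the function a forwarding engine must compute is longest-prefix match. The binary-trie sketch below shows that function in software; it is not the FPS-RAM algorithm itself, just the lookup it accelerates.

```python
class TrieNode:
    """One node of a binary trie over address bits."""
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    """Store a route: walk/extend the trie along the prefix bits."""
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def longest_prefix_match(root, addr_bits):
    """Walk the address bits, remembering the deepest route seen."""
    node, best = root, None
    for b in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children[b]
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, [1, 0], "port A")        # prefix 10/2
insert(root, [1, 0, 1, 1], "port B")  # prefix 1011/4
print(longest_prefix_match(root, [1, 0, 1, 1, 0, 0]))  # -> port B
```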
Independent Orbiter Assessment (IOA): Analysis of the communication and tracking subsystem
NASA Technical Reports Server (NTRS)
Gardner, J. R.; Robinson, W. M.; Trahan, W. H.; Daley, E. S.; Long, W. C.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Communication and Tracking hardware. The IOA analysis process utilized available Communication and Tracking hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali
Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel's second-generation Xeon Phi architecture, code-named Knights Landing (KNL), for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
NASA Astrophysics Data System (ADS)
Tsai, Chun-Wei; Wang, Chen; Lyu, Bo-Han; Chu, Chen-Hsien
2017-08-01
The Digital Electro-optics Platform is the core concept of Jasper Display Corp. (JDC) for developing various applications. These applications are based on our X-on-Silicon technologies; for example, X-on-Silicon technologies can be used in Liquid Crystal on Silicon (LCoS), Micro Light-Emitting Diode on Silicon (μLEDoS), Organic Light-Emitting Diode on Silicon (OLEDoS), and Cell on Silicon (CELLoS), etc. LCoS technology is applied to Spatial Light Modulators (SLM), Dynamic Optics, Wavelength Selective Switches (WSS), Holographic Display, Microscopy, Bio-tech, 3D Printing, and Adaptive Optics. In addition, μLEDoS technology is applied to Augmented Reality (AR), Head-Up Displays (HUD), Head-Mounted Displays (HMD), and Wearable Devices. Liquid Crystal on Silicon Spatial Light Modulators (LCoS-SLMs) based on JDC's On-Silicon technology, offering both amplitude and phase modulation, have an expanding role in several optical areas where light control on a pixel-by-pixel basis is critical for optimum system performance. By combining the advantages of hardware and software, we can establish "dynamic optics" for the above applications and more. Moreover, through software control, we can manipulate the light more flexibly and easily, as a programmable light processor.
Use of high-radiant flux, high-resolution DMD light engines in industrial applications
NASA Astrophysics Data System (ADS)
Müller, Alexandra; Ram, Surinder
2014-03-01
The field of application of industrial projectors is growing day by day. New Digital Micromirror Device (DMD)-based applications like 3D printing, 3D scanning, Printed Circuit Board (PCB) printing, and others are getting more and more sophisticated. The technical demands on the projection system are rising as new and more stringent requirements appear. The specifications for industrial projection systems differ substantially from those of business and home beamers. Beamers are designed to please the human eye: bright colors and image enhancement are far more important than uniformity of the illumination or image distortion. The human eye, aided by the brain's processing, can tolerate quite high intensity variations on the screen and image distortion. On the other hand, a projector designed for use in a specialized field has to be tailored to its unique requirements in order to make no quality compromises. For instance, when the image is projected onto a light-sensitive resin, good uniformity of the illumination is crucial for good material hardening (curing) results. The demands on the hardware and software are often very challenging. In the following, we review some parameters that have to be considered carefully in the design of industrial projectors in order to get the optimum result without compromises.
APNEA list mode data acquisition and real-time event processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogle, R.A.; Miller, P.; Bramblett, R.L.
1997-11-01
The LMSC Active Passive Neutron Examinations and Assay (APNEA) Data Logger is a VME-based data acquisition system using commercial off-the-shelf hardware with application-specific software. It receives TTL inputs from eighty-eight ³He detector tubes and eight timing signals. Two data sets are generated concurrently for each acquisition session: (1) List Mode recording of all detector and timing signals, timestamped to 3 microsecond resolution; (2) Event Accumulations generated in real time by counting events into short (tens of microseconds) and long (seconds) time bins following repetitive triggers. List Mode data sets can be post-processed to: (1) determine the optimum time bins for TRU assay of waste drums, (2) analyze a given data set in several ways to match different assay requirements and conditions, and (3) confirm assay results by examining details of the raw data. Data Logger events are processed and timestamped by an array of 15 TMS320C40 DSPs and delivered to an embedded controller (PowerPC604) for interim disk storage. Three acquisition modes, corresponding to different trigger sources, are provided. A standard network interface to a remote host system (Windows NT or SunOS) provides for system control, status, and transfer of previously acquired data. 6 figs.
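The post-processing of List Mode data into trigger-relative time bins can be sketched as follows; the timestamps, trigger times, and bin widths below are illustrative, not APNEA's actual parameters.

```python
import numpy as np

# Synthetic list-mode data: event timestamps in microseconds.
rng = np.random.default_rng(0)
triggers = np.array([0.0, 50_000.0, 100_000.0])        # trigger times, us
events = np.sort(rng.uniform(0.0, 150_000.0, 10_000))  # event times, us

# Accumulate events into short bins relative to each trigger, as in the
# Event Accumulation data set (assumed 10 us bins over a 1 ms window).
bin_width, n_bins = 10.0, 100
hist = np.zeros(n_bins)
for t0 in triggers:
    window = n_bins * bin_width
    dt = events[(events >= t0) & (events < t0 + window)] - t0
    hist += np.histogram(dt, bins=n_bins, range=(0.0, window))[0]

print("counts in first 5 bins after trigger:", hist[:5])
```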
System analysis of vehicle active safety problem
NASA Astrophysics Data System (ADS)
Buznikov, S. E.
2018-02-01
The problem of road transport safety affects the vital interests of most of the population and is of global significance. The system analysis of the problem of creating competitive active vehicle safety systems is presented as an interrelated complex of tasks of multi-criterion optimization and dynamic stabilization of the state variables of a controlled object. Solving them requires generating all possible variants of technical solutions within the software and hardware domains and synthesizing a control that is close to optimal. For implementing the system analysis, the Zwicky "morphological box" method is used. Creating comprehensive active safety systems involves solving the problem of preventing typical collisions. To this end, a structured set of collisions is introduced, with its elements also generated using the Zwicky "morphological box" method; see the sketch after this paragraph. The obstacle speed, the longitudinal acceleration of the controlled object, unpredictable changes in its direction of movement due to certain faults, the road surface condition, and the control errors are taken as the structure variables that characterize the conditions of collisions. The conditions for preventing typical collisions are presented as inequalities over the physical variables that define the state vector of the object and its dynamic limits.
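Computationally, a morphological box is just the Cartesian product of the structure variables' value sets; here is a minimal sketch, with illustrative variable values rather than the paper's exact ones.

```python
from itertools import product

# Assumed, simplified value sets for three of the structure variables.
obstacle_speed = ["stationary", "slower", "oncoming"]
road_surface = ["dry", "wet", "icy"]
control_error = ["none", "steering", "braking"]

# Every combination is one cell of the morphological box, i.e. one
# candidate collision condition to analyze.
for variant in product(obstacle_speed, road_surface, control_error):
    print(variant)
```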
A cost assessment of reliability requirements for shuttle-recoverable experiments
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1975-01-01
The relaunching of unsuccessful experiments or satellites will become a real option with the advent of the space shuttle. An examination was made of the cost effectiveness of relaxing reliability requirements for experiment hardware by allowing more than one flight of an experiment in the event of its failure. Any desired overall reliability or probability of mission success can be acquired by launching an experiment with less reliability two or more times if necessary. Although this procedure leads to uncertainty in total cost projections, because the number of flights is not known in advance, a considerable cost reduction can sometimes be achieved. In cases where reflight costs are low relative to the experiment's cost, three flights with overall reliability 0.9 can be made for less than half the cost of one flight with a reliability of 0.9. An example typical of shuttle payload cost projections is cited where three low reliability flights would cost less than $50 million and a single high reliability flight would cost over $100 million. The ratio of reflight cost to experiment cost is varied and its effect on the range in total cost is observed. An optimum design reliability selection criterion to minimize expected cost is proposed, and a simple graphical method of determining this reliability is demonstrated.
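The trade can be made concrete with a small sketch: solve 1 - (1 - r)^n = P for the per-flight reliability r that yields overall mission reliability P in at most n flights, and compute the expected number of flights actually flown. The cost figures from the paper are not reproduced; only the reliability arithmetic is shown.

```python
def required_reliability(P, n):
    """Per-flight reliability r such that 1 - (1 - r)**n = P."""
    return 1.0 - (1.0 - P) ** (1.0 / n)

def expected_flights(r, n):
    """Expected attempts when each flight succeeds with probability r,
    stopping at the first success or after n attempts."""
    return sum(k * r * (1.0 - r) ** (k - 1) for k in range(1, n)) \
        + n * (1.0 - r) ** (n - 1)

P = 0.9  # desired overall probability of mission success
for n in (1, 2, 3):
    r = required_reliability(P, n)
    print(f"n={n}: per-flight reliability {r:.3f}, "
          f"expected flights {expected_flights(r, n):.2f}")
```

For n = 3 this gives r of roughly 0.54, illustrating how much the per-flight (and hence hardware) reliability requirement relaxes when reflights are allowed.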
NASA Astrophysics Data System (ADS)
Leewe, R.; Shahriari, Z.; Moallem, M.
2017-10-01
Control of the natural resonance frequency of an RF cavity is essential for accelerator structures due to the high sensitivity of the cavity to internal and external vibrations and the dependence of the resonant frequency on temperature changes. Due to the relatively high radio frequencies involved (MHz to GHz), direct measurement of the resonant frequency for real-time control is not possible with conventional microcontroller hardware. So far, all operational cavities are tuned using phase comparison techniques. The temperature-dependent phase measurements render this technique labor- and time-intensive. To eliminate the phase measurement, reduce man-hours, and speed up cavity start-up time, this paper presents a control scheme that relies solely on the reflected power measurement. The control algorithm for the nonlinear system is developed through Lyapunov's method. The controller stabilizes the resonance frequency of the cavity using a nonlinear control algorithm in combination with a gradient estimation method. Experimental results of the proposed system on a test cavity show that the resonance frequency can be tuned to its optimum operating point while the start-up time of a single cavity and the accompanying man-hours are significantly decreased. A test result of the fully commissioned control system on one of TRIUMF's DTL tanks verifies its performance under real environmental conditions.
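A heavily simplified sketch of the core idea, tuning on reflected power alone: estimate the gradient of the measured reflected power with respect to the tuner position and descend on it. The Lorentzian plant model, gains, and step sizes below are assumptions of this sketch; the paper's actual controller is a Lyapunov-based nonlinear design.

```python
def reflected_power(x, detune):
    """Toy cavity model: reflected power grows with detuning (assumed)."""
    u = x - detune
    return u * u / (1.0 + u * u)

def tune(detune, x0=0.0, delta=1e-3, gain=0.2, steps=300):
    """Finite-difference gradient estimate on the measured reflected power,
    followed by a descent step -- no phase measurement involved."""
    x = x0
    for _ in range(steps):
        grad = (reflected_power(x + delta, detune)
                - reflected_power(x - delta, detune)) / (2.0 * delta)
        x -= gain * grad
    return x

print(tune(detune=0.8))  # converges near the resonance offset 0.8
```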
Design and development of data acquisition system based on WeChat hardware
NASA Astrophysics Data System (ADS)
Wang, Zhitao; Ding, Lei
2018-06-01
A data acquisition system based on WeChat hardware offers a practical, accessible approach to data acquisition. The whole system is built on the WeChat hardware platform, where the hardware part is developed on the DA14580 development board and the software part is based on Alibaba Cloud. We designed a service module, logic processing module, data processing module, and database module. The communication between hardware and software uses the AirSync protocol. We tested the system by collecting temperature and humidity data; the results show that the system can acquire temperature and humidity in real time according to the settings.
New Approaches in Force-Limited Vibration Testing of Flight Hardware
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Kern, Dennis L.
2012-01-01
To qualify flight hardware for random vibration environments, the following methods are used in the aerospace industry to limit the loads: (1) response limiting and notching, (2) simple TDOF model, (3) semi-empirical force limits, (4) apparent mass, etc., and (5) impedance method. All of these methods attempt to remove the conservatism caused by the mismatch in impedance between the test and flight configurations of the hardware being qualified, under the assumption that the hardware interfaces have correlated responses. A new method that takes into account uncorrelated hardware interface responses is described in this presentation.
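For reference, a common form of the semi-empirical force limit (method 3 in the list above), following the usual NASA handbook formulation; the values of C, M0, f0, and the input spectrum below are assumed examples, not a qualification case.

```python
import numpy as np

def semi_empirical_force_limit(freq, S_aa, C=1.4, M0=50.0, f0=80.0, n=1.0):
    """Force spectral density limit in the common semi-empirical form:
       S_FF = C^2 * M0^2 * S_AA                  for f <= f0
       S_FF = C^2 * M0^2 * S_AA * (f0/f)^(2n)    for f >  f0
    where M0 is the total mass of the test item and f0 its fundamental
    resonance on the shaker. With M0 in kg and S_AA in (m/s^2)^2/Hz,
    S_FF is in N^2/Hz."""
    roll_off = np.where(freq <= f0, 1.0, (f0 / freq) ** (2.0 * n))
    return C ** 2 * M0 ** 2 * S_aa * roll_off

freq = np.array([20.0, 50.0, 80.0, 160.0, 320.0])  # Hz
S_aa = np.full_like(freq, 0.04)                    # flat input spectrum (assumed)
print(semi_empirical_force_limit(freq, S_aa))
```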
The Art of Space Flight Exercise Hardware: Design and Implementation
NASA Technical Reports Server (NTRS)
Beyene, Nahom M.
2004-01-01
The design of space flight exercise hardware depends on experience with crew health maintenance in a microgravity environment, a history of developing flight-quality exercise hardware, and a foundation for certifying proper project management and design methodology. Developed over the past 40 years, the expertise in designing exercise countermeasures hardware at the Johnson Space Center stems from these three aspects of design. The medical community has steadily pursued an understanding of physiological changes in humans in a weightless environment and methods of counteracting negative effects on the cardiovascular and musculoskeletal systems. The effects of weightlessness extend to the pulmonary and neurovestibular systems as well, with conditions ranging from motion sickness to loss of bone density. Results have shown losses in water weight and in muscle mass in antigravity muscle groups. With the support of university-based research groups and partner space agencies, NASA has identified exercise as the primary countermeasure for long-duration space flight. The history of exercise hardware began during the Apollo Era and leads directly to the present hardware on the International Space Station. Under the classifications of aerobic and resistive exercise, there is a clear line of development from the early devices to the countermeasures hardware used today. In support of all engineering projects, the engineering directorate has created a structured framework for project management. Engineers have identified standards and "best practices" to promote efficient and elegant design of space exercise hardware. The quality of space exercise hardware depends on how well hardware requirements are justified by exercise performance guidelines and crew health indicators. When considering the microgravity environment of the device, designers must consider the performance of the hardware separately from the combined human-in-hardware system. Astronauts are the caretakers of the hardware while it is deployed and conduct all sanitization, calibration, and maintenance for the devices. Thus, hardware designs must account for these issues with a goal of minimizing the crew time on orbit required to complete these tasks. In the future, humans will venture to Mars, and exercise countermeasures will play a critical role in allowing us to continue in our spirit of exploration. NASA will benefit from further experimentation on Earth, through the International Space Station, and with advanced biomechanical models to quantify how each device counteracts specific symptoms of weightlessness. With the continued support of international space agencies and the academic research community, we will usher in the next frontier of human space exploration.
Independent Orbiter Assessment (IOA): FMEA/CIL assessment
NASA Technical Reports Server (NTRS)
Saiidi, Mo J.; Swain, L. J.; Compton, J. M.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. Direction was given by the Orbiter and GFE Projects Office to perform the hardware analysis and assessment using the instructions and ground rules defined in NSTS 22206. The IOA analysis features a top-down approach to determine hardware failure modes, criticality, and potential critical items. To preserve independence, the analysis was accomplished without reliance upon the results contained within the NASA and prime contractor FMEA/CIL documentation. The assessment process compares the independently derived failure modes and criticality assignments to the proposed NASA Post 51-L FMEA/CIL documentation. When possible, assessment issues are discussed and resolved with the NASA subsystem managers. The assessment results for each subsystem are summarized. The most important Orbiter assessment finding was the previously unknown stuck autopilot push-button criticality 1/1 failure mode, with a worst-case effect of loss of crew/vehicle when a microwave landing system is not active.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
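The fault model can be illustrated by flipping one bit of an IEEE 754 double and observing the error: tiny for low mantissa bits, enormous for exponent bits, matching the observation above that the absolute error is either small or very large. This is a sketch of the fault model, not the authors' code.

```python
import struct

def flip_bit(x, bit):
    """Flip bit `bit` (0 = least significant) of a float64's 64-bit pattern."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

x = 1.0
# Bits 0 and 30 lie in the mantissa; bits 52 and 62 lie in the exponent.
for bit in (0, 30, 52, 62):
    print(f"bit {bit:2d}: {x} -> {flip_bit(x, bit)!r}")
```

Flipping bit 62 of 1.0 yields infinity, while flipping bit 0 changes the value by one unit in the last place, which is the dichotomy the paper exploits for detection.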
ICE System: Interruptible control expert system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Vezina, James M.
1990-01-01
The Interruptible Control Expert (ICE) System is based on an architecture designed to provide a strong foundation for real-time production-rule expert systems. Three principles guided the development of ICE. First, a practical delivery platform must be provided; no specialized hardware can be used to compensate for deficiencies in the software design. Second, knowledge of the environment and the rule base is exploited to improve the performance of a delivered system. The third principle of ICE is to respond to the most critical event, at the expense of more trivial tasks. Minimal time is spent on classifying the potential importance of environmental events, with the majority of the time used for finding the responses. A feature of the system, derived from all three principles, is the lack of working memory. By using a priori information, a fixed amount of memory can be specified for the hardware platform. The absence of working memory removes the dangers of garbage collection during the continuous operation of the controller.
Using and Distributing Spaceflight Data: The Johnson Space Center Life Sciences Data Archive
NASA Technical Reports Server (NTRS)
Cardenas, J. A.; Buckey, J. C.; Turner, J. N.; White, T. S.; Havelka,J. A.
1995-01-01
Life sciences data collected before, during, and after spaceflight are valuable, often irreplaceable, and frequently hard to find. The Johnson Space Center Life Sciences Data Archive has been designed to provide researchers, engineers, managers, and educators interactive access to information about and data from human spaceflight experiments. The archive system consists of a Data Acquisition System, Database Management System, CD-ROM Mastering System, and Catalog Information System (CIS). The catalog information system is the heart of the archive. The CIS provides detailed experiment descriptions (both written and as QuickTime movies), hardware descriptions, hardware images, documents, and data. An initial evaluation of the archive at a scientific meeting showed that 88% of those who evaluated the catalog want to use the system when completed. The majority of the evaluators found the archive flexible, satisfying, and easy to use. We conclude that the data archive effectively provides key life sciences data to interested users.