Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)
2007-04-01
Weight will be reduced by replacing heavy harness assemblies and Full Authority Digital Electronic Controls (FADECs) with distributed processing elements interconnected through a serial bus. This paper reviews ... Efficient data flow throughout the ... because intelligence is embedded in components while overall control is maintained in the FADEC. The need for Distributed Control Systems in ...
Space vehicle electrical power processing distribution and control study. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Krausz, A.
1972-01-01
A concept for the processing, distribution, and control of electric power for manned space vehicles and future aircraft is presented. Emphasis is placed on the requirements of the space station and space shuttle configurations. The systems involved are referred to as the processing distribution and control system (PDCS), electrical power system (EPS), and electric power generation system (EPGS).
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The paper considers the important problem of designing distributed control systems for hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extraction wells. The article presents a method for defining approximation links to describe the dynamic characteristics of hydrolithosphere processes, along with the structure of the distributed regulators used in the control systems for these processes. The paper analyses the results of synthesising the distributed control system and of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
NASA Astrophysics Data System (ADS)
Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.
2017-10-01
The article considers the important issue of designing distributed control systems for hydrolithosphere processes. Control actions on the hydrolithosphere processes are implemented by a set of extraction wells. The article shows how to determine the optimal number of extraction wells that provide a distributed control action on the managed object.
Economic design of control charts considering process shift distributions
NASA Astrophysics Data System (ADS)
Vommi, Vijayababu; Kasarapu, Rukmini V.
2014-09-01
Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs assumed that a given assignable cause produces a constant shift in the process mean. This assumption has been criticized by many researchers, since an assignable cause may not produce the same shift every time it occurs. To overcome this difficulty, the present work considers a distribution for the shift parameter, instead of a single value, for a given assignable cause. Duncan's economic design model for the X̄ chart has been extended to incorporate the distribution of the process shift parameter, and the control chart parameters are obtained by minimizing the total expected loss-cost. Further, three types of process shift distributions, namely positively skewed, uniform and negatively skewed, are considered, and the situations where it is appropriate to use the suggested methodology are recommended.
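The shift-distribution idea can be sketched numerically: instead of evaluating a loss-cost at one fixed shift, the expected loss-cost is a probability-weighted sum over a discretized shift distribution. The toy loss function below (out-of-control exposure proportional to the sampling interval divided by detection power, plus a sampling cost) is a deliberately simplified stand-in for Duncan's full model; the cost coefficients and the shift grid are illustrative assumptions, not values from the paper.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def detection_power(n, k, delta):
    """P(an X-bar point falls outside k-sigma limits) when the mean shifts by delta sigma."""
    root_n = math.sqrt(n)
    return 1.0 - phi(k - delta * root_n) + phi(-k - delta * root_n)

def loss_cost(n, h, k, delta, c_oc=100.0, c_sample=1.0):
    """Simplified hourly loss-cost: expected out-of-control exposure (about h / power)
    weighted by c_oc, plus a per-hour sampling cost. A toy stand-in for Duncan's model."""
    power = max(detection_power(n, k, delta), 1e-9)
    return c_oc * h / power + c_sample * n / h

def expected_loss_cost(n, h, k, shifts, probs, **kw):
    """Loss-cost averaged over a discretized shift distribution (the paper's extension)."""
    return sum(p * loss_cost(n, h, k, d, **kw) for d, p in zip(shifts, probs))

# A positively skewed discrete shift distribution: small shifts are most likely.
shifts = [0.5, 1.0, 1.5, 2.0, 3.0]
probs = [0.40, 0.30, 0.15, 0.10, 0.05]

fixed = loss_cost(5, 1.0, 3.0, 1.0)                     # classical single-shift design point
mixed = expected_loss_cost(5, 1.0, 3.0, shifts, probs)  # shift-distribution design point
```

Minimizing `expected_loss_cost` over (n, h, k), for instance by grid search, then yields design parameters tuned to the whole shift distribution rather than to one assumed shift.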
Electric power processing, distribution and control for advanced aerospace vehicles.
NASA Technical Reports Server (NTRS)
Krausz, A.; Felch, J. L.
1972-01-01
The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on the normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with that of various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S) and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detection is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes n. With regard to the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion, in terms of both Type I error and probability of detecting a shift.
Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.
The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervision of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is realised after this screening of input and output variables. Once the system decomposition is finished, online subsystem modelling is carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
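The clustering step can be sketched with a minimal, from-scratch affinity propagation (responsibility/availability message passing with damping); in practice one would use a library implementation such as scikit-learn's. The one-dimensional "controlled variables" and the preference value below are illustrative assumptions, not data from the paper.

```python
def affinity_propagation(S, damping=0.7, iters=200):
    """Minimal affinity propagation over a similarity matrix S (list of lists).
    Returns a cluster label (the exemplar's index) for each point."""
    n = len(S)
    R = [[0.0] * n for _ in range(n)]  # responsibilities r(i, k)
    A = [[0.0] * n for _ in range(n)]  # availabilities a(i, k)
    for _ in range(iters):
        for i in range(n):  # r(i,k) = s(i,k) - max_{k' != k}(a(i,k') + s(i,k'))
            vals = [A[i][k] + S[i][k] for k in range(n)]
            best = max(range(n), key=vals.__getitem__)
            first = vals[best]
            second = max(v for j, v in enumerate(vals) if j != best)
            for k in range(n):
                competitor = second if k == best else first
                R[i][k] = damping * R[i][k] + (1 - damping) * (S[i][k] - competitor)
        for k in range(n):  # availability updates, damped as well
            pos = [max(0.0, R[i][k]) for i in range(n)]
            total = sum(pos)
            for i in range(n):
                if i == k:
                    new = total - pos[k]
                else:
                    new = min(0.0, R[k][k] + total - pos[k] - pos[i])
                A[i][k] = damping * A[i][k] + (1 - damping) * new
    exemplars = [k for k in range(n) if R[k][k] + A[k][k] > 0] or [0]
    labels = [max(exemplars, key=lambda k: S[i][k]) for i in range(n)]
    for k in exemplars:
        labels[k] = k  # an exemplar labels itself
    return labels

# Toy "key controlled variables" summarised as 1-D features; two obvious groups.
x = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
pref = -20.0  # self-similarity (preference); controls how many exemplars emerge
S = [[pref if i == k else -(x[i] - x[k]) ** 2 for k in range(len(x))]
     for i in range(len(x))]
labels = affinity_propagation(S)
```

Each resulting cluster would then be treated as one subsystem whose inputs are screened separately, as the abstract describes.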
NASA Astrophysics Data System (ADS)
El Labban, A.; Mousseau, P.; Bailleul, J. L.; Deterre, R.
2007-04-01
Although numerical simulation has proved to be a useful tool for predicting the rubber vulcanization process, few applications to process control have been reported. Because the end-use rubber properties depend on the distribution of the state of cure through the part's thickness, predicting the optimal distribution remains a challenge for the rubber industry. The analysis of the vulcanization process requires the determination of the thermal behavior of the material and of the cure kinetics. A nonisothermal vulcanization model with nonisothermal induction time is used in this numerical study, and numerical results are obtained for the curing of a thick-section natural rubber (NR) part. A controlled gradient of the state of cure through the part's thickness is obtained by a curing process that consists not only of a mold-heating phase but also of a forced-convection mold-cooling phase, used to stop the vulcanization process and to control the vulcanization distribution. The mold design that allows this control is described. In the heating phase, the state of cure is mainly controlled by the chemical kinetics (the induction time), but in the cooling phase it is heat diffusion that controls the state of cure distribution. A comparison among different cooling conditions is shown, and good control of the state of cure gradient is obtained.
Cetinceviz, Yucel; Bayindir, Ramazan
2012-05-01
The network requirements of control systems in industrial applications increase day by day. Internet-based control systems and various fieldbus systems have been designed to meet these requirements. This paper describes an Internet-based control system with wireless fieldbus communication designed for distributed processes; the system was implemented as an experimental setup in a laboratory. In industrial facilities, the process control layer and the remote connection of distributed control devices at the lowest levels of the industrial production environment are provided by fieldbus networks. The Internet-based control system presented here meets these requirements with a new-generation wired/wireless hybrid communication structure at the field level, covering all sectors of distributed automation, from process control to distributed input/output (I/O). The hardware structure comprises a programmable logic controller (PLC), a communication processor (CP) module, two industrial wireless modules, a distributed I/O module and a Motor Protection Package (MPP). The software structure uses the WinCC flexible program for the SCADA (Supervisory Control and Data Acquisition) screens and the SIMATIC MANAGER package ("STEP7") for the hardware and network configuration and for downloading the control program to the PLC.
NASA Technical Reports Server (NTRS)
Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.
1990-01-01
Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.
Suboptimal distributed control and estimation: application to a four coupled tanks system
NASA Astrophysics Data System (ADS)
Orihuela, Luis; Millán, Pablo; Vivas, Carlos; Rubio, Francisco R.
2016-06-01
The paper proposes an innovative estimation and control scheme that enables distributed monitoring and control of large-scale processes. The proposed approach considers a discrete linear time-invariant process controlled by a network of agents that may both collect information about the evolution of the plant and apply control actions to drive its behaviour. The problem is most relevant when local observability/controllability is not assumed and communication between agents can be exploited to reach system-wide goals. Additionally, to reduce the agents' bandwidth requirements and power consumption, an event-based communication policy is studied. The design procedure guarantees system stability, allowing the designer to trade off performance, control effort and communication requirements. The obtained controllers and observers are implemented in a fully distributed fashion. To illustrate the performance of the proposed technique, experimental results on a quadruple-tank process are provided.
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD-interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that uses the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers; the CC is also responsible for process control decisions, and algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers, independent of hardware or software platform.
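The message layer described, TCP/IP sockets carrying a predefined message set, can be sketched with length-prefixed JSON framing. The message names and status fields here are invented for illustration; the abstract does not specify the MIT system's actual message catalogue.

```python
import json
import socket
import struct
import threading

def send_msg(sock, obj):
    """Frame a JSON message with a 4-byte big-endian length prefix."""
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may deliver them in pieces)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))

def cell_controller(server_sock):
    """Toy cell controller: answers one status request, then exits."""
    conn, _ = server_sock.accept()
    with conn:
        msg = recv_msg(conn)
        if msg.get("type") == "GET_STATUS":  # hypothetical message name
            send_msg(conn, {"type": "STATUS", "etch_rate_ok": True, "step": 3})

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port on localhost
server.listen(1)
t = threading.Thread(target=cell_controller, args=(server,))
t.start()

client = socket.create_connection(server.getsockname())
send_msg(client, {"type": "GET_STATUS"})
reply = recv_msg(client)
client.close()
t.join()
server.close()
```

Length-prefixed framing is one common way to put "a predefined set of TCP/IP socket based messages" on the wire; it keeps message boundaries explicit regardless of how TCP fragments the stream.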
Distributed Aerodynamic Sensing and Processing Toolbox
NASA Technical Reports Server (NTRS)
Brenner, Martin; Jutte, Christine; Mangalam, Arun
2011-01-01
A Distributed Aerodynamic Sensing and Processing (DASP) toolbox was designed and fabricated for flight test applications with an Aerostructures Test Wing (ATW) mounted under the fuselage of an F-15B on the Flight Test Fixture (FTF). DASP monitors and processes the aerodynamics with the structural dynamics using nonintrusive, surface-mounted, hot-film sensing. This aerodynamic measurement tool benefits programs devoted to static/dynamic load alleviation, body freedom flutter suppression, buffet control, improvement of aerodynamic efficiency through cruise control, supersonic wave drag reduction through shock control, etc. This DASP toolbox measures local and global unsteady aerodynamic load distribution with distributed sensing. It determines correlation between aerodynamic observables (aero forces) and structural dynamics, and allows control authority increase through aeroelastic shaping and active flow control. It offers improvements in flutter suppression and, in particular, body freedom flutter suppression, as well as aerodynamic performance of wings for increased range/endurance of manned/ unmanned flight vehicles. Other improvements include inlet performance with closed-loop active flow control, and development and validation of advanced analytical and computational tools for unsteady aerodynamics.
A comparison of decentralized, distributed, and centralized vibro-acoustic control.
Frampton, Kenneth D; Baumann, Oliver N; Gardonio, Paolo
2010-11-01
Direct velocity feedback control of structures is well known to increase structural damping and thus reduce vibration. In multi-channel systems the way in which the velocity signals are used to inform the actuators ranges from decentralized control, through distributed or clustered control to fully centralized control. The objective of distributed controllers is to exploit the anticipated performance advantage of the centralized control while maintaining the scalability, ease of implementation, and robustness of decentralized control. However, and in seeming contradiction, some investigations have concluded that decentralized control performs as well as distributed and centralized control, while other results have indicated that distributed control has significant performance advantages over decentralized control. The purpose of this work is to explain this seeming contradiction in results, to explore the effectiveness of decentralized, distributed, and centralized vibro-acoustic control, and to expand the concept of distributed control to include the distribution of the optimization process and the cost function employed.
NASA Astrophysics Data System (ADS)
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
NASA Astrophysics Data System (ADS)
Musdalifah, N.; Handajani, S. S.; Zukhronah, E.
2017-06-01
Competition between homogeneous companies forces each company to maintain production quality. To this end, a company controls production using statistical quality control, in particular control charts. The Shewhart control chart is used for normally distributed data, but production data often follow a non-normal distribution and exhibit small process shifts. The grand median control chart is a control chart for non-normally distributed data, while the cumulative sum (cusum) control chart is sensitive to small process shifts. The purpose of this research is to compare grand median and cusum control charts on the shuttlecock weight variable at CV Marjoko Kompas dan Domas, by generating data that follow the actual distribution. The generated data are used to simulate the standard deviation multiplier for the grand median and cusum control charts. The simulation is carried out to obtain an average run length (ARL) of 370. The grand median control chart detects ten out-of-control points, while the cusum control chart detects one out-of-control point. It is concluded that the grand median control chart performs better than the cusum control chart.
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
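The hybrid model, client-server queries plus fan-out distribution of processed telemetry, can be illustrated with a toy in-process broker. A real control-center implementation would run over a network protocol; the telemetry parameter name below is invented for illustration.

```python
import queue
import threading

class TelemetryBroker:
    """Toy hybrid distribution model: request/reply access to the latest value
    (client-server style) plus publish/subscribe fan-out (peer-style sharing)."""

    def __init__(self):
        self._latest = {}
        self._subscribers = []
        self._lock = threading.Lock()

    def publish(self, parameter, value):
        """Record the newest value and fan the update out to every subscriber."""
        with self._lock:
            self._latest[parameter] = value
            subscribers = list(self._subscribers)
        for q in subscribers:
            q.put((parameter, value))

    def request(self, parameter):
        """Client-server style query for the most recent value (None if unseen)."""
        with self._lock:
            return self._latest.get(parameter)

    def subscribe(self):
        """Return a queue that will receive every subsequent published update."""
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

broker = TelemetryBroker()
console_a = broker.subscribe()   # e.g. a flight-controller display
console_b = broker.subscribe()   # e.g. a playback/training consumer
broker.publish("cabin_pressure_kpa", 101.3)  # hypothetical telemetry parameter
```

The request path serves one-shot lookups (monitoring displays, verification tools), while the subscription path serves continuous consumers, mirroring the abstract's mix of real-time monitoring with playback and training applications.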
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2000-01-01
The p2d2 project at NAS has built a debugger for applications running on heterogeneous computational grids. It employs a client-server architecture to simplify the implementation. Its user interface has been designed to provide process control and state examination functions on a computation containing a large number of processes. It can find processes participating in distributed computations even when those processes were not created under debugger control. These process identification techniques work both on conventional distributed executions as well as those on a computational grid.
NASA Astrophysics Data System (ADS)
Zabala, M. E.; Manzano, M.; Vives, L.
2016-10-01
Groundwater in the upper 50 m of the Pampeano Aquifer in the Del Azul Creek basin (Argentina) has F and As contents above the WHO safe drinking levels. The basin is situated to the SE of the Chaco-Pampean plain, in Buenos Aires Province, and the Pampeano Aquifer is a major water source for all uses. The aim of the study is to assess the primary processes controlling the regional distribution of F and As in the most exploited part of the aquifer. The study involved sampling for chemical and isotopic analyses, interpretation of the data with different methods (diagrams, bivariate analyses, mineral saturation states, Principal Component Analysis) and deduction of the leading processes. Information about aquifer mineralogy and the hydrogeochemical processes involved in F and As solubilization in the aquifer has been taken from previous works by the same and other authors. Groundwater salinity increases to the NE, in the direction of the regional groundwater flow. Chemical types evolve from Ca/Mg-HCO3 in the upper part of the basin, to Na-HCO3 in the middle part, and to Na-Cl-SO4 and Na-Cl in the lower part. The regional distribution of F is controlled by hydrogeochemical processes. The distribution of As is controlled by two types of processes dominating in different areas: hydrogeochemical controls prevail in the low to moderately mineralized groundwater of the middle and lower parts of the basin, while hydrogeological controls dominate to the NE of the lower basin and beyond it. In the latter zone there are abundant lagoons and seasonal flooding is frequent, making evapoconcentration an important process for groundwater mineralization. The main hydrogeochemical processes involved in the distribution of both F and As are cation exchange (with Na release and Ca uptake), carbonate dissolution and pH increase. Arsenic release induced by redox processes may operate to the NE, but its effects would be masked by evaporation.
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of variation in a process. Various regulatory authorities, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of the normal probability distribution, and assessment of the statistical stability and capability of the production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles makes the achievement of six sigma-capable processes possible. Statistical process control is a most advantageous tool for determining the quality of any production process, and it is new to the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment enables the selection of critical quality attributes among the quality control parameters.
Sequential application of normal probability distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process capability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable processes, which are candidates for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
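The capability analysis mentioned above can be sketched with the standard Cp/Cpk indices. The tablet-weight numbers and specification limits below are hypothetical illustrations, not data from the study.

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Cp and Cpk from a sample; assumes approximate normality and a stable process.
    Cp compares spec width to process spread; Cpk also penalises off-centring."""
    mu = statistics.fmean(samples)
    s = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk

# Illustrative tablet-weight data (mg); the limits are hypothetical.
weights = [249.8, 250.1, 250.3, 249.9, 250.0, 250.2, 249.7, 250.1, 250.0, 249.9]
cp, cpk = capability_indices(weights, lsl=248.0, usl=252.0)
```

A Cpk of at least 2 is conventionally described as six-sigma capable; here the toy data happen to be exactly centred between the limits, so Cp and Cpk coincide.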
Caballero Morales, Santiago Omar
2013-01-01
The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices for achieving high product quality, a small frequency of failures, and cost reduction in a production process. However, some points about their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider only the economic aspect, whereas statistical restrictions must also be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, together with reductions in the sampling frequency of units for testing under SPC. PMID:23527082
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
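In the known, uniform-interval case that the abstract uses as its benchmark, minimizing the expected quadratic cost reduces to the standard backward Riccati recursion; a minimal sketch (deterministic sampling assumed, not the paper's stochastic-interval derivation):

```python
import numpy as np

def dlqr_gains(A, B, Q, R, N):
    """Finite-horizon discrete LQR: backward Riccati recursion yielding the
    time-ordered feedback gains K_k for the control law u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]
```

In the stochastic-sampling setting of the paper, the matrices entering each recursion step would themselves be expectations over the random update interval.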
Distributed systems status and control
NASA Technical Reports Server (NTRS)
Kreidler, David; Vickers, David
1990-01-01
Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.
Hierarchical charge distribution controls self-assembly process of silk in vitro
NASA Astrophysics Data System (ADS)
Zhang, Yi; Zhang, Cencen; Liu, Lijie; Kaplan, David L.; Zhu, Hesun; Lu, Qiang
2015-12-01
Silk materials with different nanostructures have been developed without an understanding of the inherent transformation mechanism. Here we attempt to reveal the conversion pathway among the various nanostructures and determine the critical regulating factors. The regulating conversion processes influenced by a hierarchical charge distribution were investigated, showing different transformations between molecules, nanoparticles, and nanofibers. Various repulsive and compressive forces existed among silk fibroin molecules and aggregates due to the exterior and interior distribution of charge, which further controlled their aggregating and deaggregating behaviors and finally formed nanofibers of different sizes. Synergistic action derived from molecular mobility and concentration could also tune the assembly process and the final nanostructures. It is suggested that the complicated silk fibroin assembly processes follow the same rule based on charge distribution, offering a promising way to develop silk-based materials with designed nanostructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos... the manufacture, import, processing, or distribution in commerce of asbestos-containing products in...
Diffusion of Siderophile Elements in Fe Metal: Application to Zoned Metal Grains in Chondrites
NASA Technical Reports Server (NTRS)
Righter, K.; Campbell, A. J.; Humayun, M.
2003-01-01
The distribution of highly siderophile elements (HSE) in planetary materials is controlled mainly by metal. Diffusion processes can control the distribution or re-distribution of these elements within metals, yet there is little systematic or appropriate diffusion data that can be used to interpret HSE concentrations in such metals. Because our understanding of isotope chronometry, redox processes, kamacite/taenite-based cooling rates, and metal grain zoning would be enhanced with diffusion data, we have measured diffusion coefficients for Ni, Co, Ga, Ge, Ru, Pd, Ir and Au in Fe metal from 1200 to 1400 C and 1 bar and 10 kbar. These new data on refractory and volatile siderophile elements are used to evaluate the role of diffusional processes in controlling zoning patterns in metal-rich chondrites.
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. To reduce this burden, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power while retaining the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested using the same basic software structure.
NASA Astrophysics Data System (ADS)
Chen, Ruey-Shun; Tsai, Yung-Shun; Tu, Arthur
In this study we propose a manufacturing control framework based on radio-frequency identification (RFID) technology and a distributed information system to construct a mass-customization production process in a loosely coupled shop-floor control environment. On the basis of this framework, we developed RFID middleware and an integrated information system for tracking and controlling the manufacturing process flow. A bicycle manufacturer was used to demonstrate the prototype system. The findings of this study were that the proposed framework can improve the visibility and traceability of the manufacturing process as well as enhance process quality control and real-time production pedigree access. Using this framework, an enterprise can easily integrate an RFID-based system into its manufacturing environment to facilitate mass customization and a just-in-time production model.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos..., importation, processing, and distribution in commerce of the asbestos-containing products identified and at...
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, namely the middleware elements that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data-flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is such a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
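The idea of connecting processing functions as interchangeable objects in a data-flow graph can be illustrated with a minimal sketch (the `Component` class and the toy radar-style stages are hypothetical, not the actual Pyre API):

```python
from typing import Any, Callable

class Component:
    """A processing stage that can be chained into a simple data-flow graph."""
    def __init__(self, name: str, func: Callable[[Any], Any]):
        self.name, self.func, self.next = name, func, None

    def connect(self, other: "Component") -> "Component":
        self.next = other
        return other  # allows a.connect(b).connect(c) chaining

    def run(self, data: Any) -> Any:
        out = self.func(data)
        return self.next.run(out) if self.next else out

# Hypothetical chain: ingest raw samples, then keep detections above a threshold
ingest = Component("ingest", lambda xs: [float(x) for x in xs])
detect = Component("detect", lambda xs: [x for x in xs if x > 0.5])
ingest.connect(detect)
```

Because each stage exposes the same interface, any stage can be swapped for another without changing the rest of the graph, which is the interchangeability property the abstract describes.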
Fault-Tolerant Signal Processing Architectures with Distributed Error Control.
1985-01-01
Zm, Revisited," Information and Control, Vol. 37, pp. 100-104, 1978. 13. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications ... However, the newer results concerning applications of real codes are still in the publication process. Hence, two very detailed appendices are included to ... significant entities to be protected. While the distributed finite field approach afforded adequate protection, its applicability was restricted and
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process noise is developed.
On the use of distributed sensing in control of large flexible spacecraft
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave
1990-01-01
Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The required computation of the distributed operators given the conventional Kalman filter and regulator is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.
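The conventional Kalman filter that the distributed operators emulate consists of a predict step and a measurement update; a minimal single-step sketch in its generic textbook form (not the Mini-Mast implementation):

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a conventional Kalman filter.
    x, P: state estimate and covariance; z: measurement;
    A: state transition; H: measurement map; Q, R: noise covariances."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The scheme in the record distributes the operator applications (the matrix-vector products above) across processors co-located with the sensors.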
39 CFR 501.14 - Postage Evidencing System inventory control processes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 39 Postal Service 1 2011-07-01 2011-07-01 false Postage Evidencing System inventory control... control processes. (a) Each authorized provider of Postage Evidencing Systems must permanently hold title... sufficient facilities for and records of the distribution, control, storage, maintenance, repair, replacement...
Audit of the internal controls over the processing of oil overcharge refunds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-03-31
This report is on internal controls over the processing of oil overcharge refunds. The Office of Hearings and Appeals administers the distribution of refunds to parties that were overcharged during the period of petroleum price controls. The refund process was initiated in 1979. As of September 30, 1991, Hearings and Appeals had received over 200,000 applications for refunds. It had granted refunds with a total value of more than $600 million on about 160,000 applications, with 26,636 applications pending. The objective of the audit was to evaluate the adequacy of Hearings and Appeals' internal controls over refund practices and procedures, specifically those used to ensure that claims approved were complete, systematically processed, and properly distributed.
PILOT: An intelligent distributed operations support system
NASA Technical Reports Server (NTRS)
Rasmussen, Arthur N.
1993-01-01
The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.
2003-06-27
KENNEDY SPACE CENTER, FLA. - At Vandenberg Air Force Base, Calif., the Pegasus launch vehicle is moved toward its hangar. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-27
KENNEDY SPACE CENTER, FLA. - The Pegasus launch vehicle is moved back to its hangar at Vandenberg Air Force Base, Calif. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - The SciSat-1 spacecraft is uncrated at Vandenberg Air Force Base, Calif. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - The SciSat-1 spacecraft is revealed after being uncrated at Vandenberg Air Force Base, Calif. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-26
KENNEDY SPACE CENTER, FLA. - Workers at Vandenberg Air Force Base, Calif., prepare to move the SciSat-1 spacecraft. SciSat-1 weighs approximately 330 pounds and will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-06-27
KENNEDY SPACE CENTER, FLA. - At Vandenberg Air Force Base, Calif., the Pegasus launch vehicle is moved into its hangar. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth’s atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
Automated Power-Distribution System
NASA Technical Reports Server (NTRS)
Thomason, Cindy; Anderson, Paul M.; Martin, James A.
1990-01-01
Automated power-distribution system monitors and controls electrical power to modules in network. Handles both 208-V, 20-kHz single-phase alternating current and 120- to 150-V direct current. Power distributed to load modules from power-distribution control units (PDCU's) via subsystem distributors. Ring busses carry power to PDCU's from power source. Needs minimal attention. Detects faults and also protects against them. Potential applications include autonomous land vehicles and automated industrial process systems.
Intelligent Control of Micro Grid: A Big Data-Based Control Center
NASA Astrophysics Data System (ADS)
Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng
2018-01-01
In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage, and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized under a form of comprehensive management. A framework of real-time data collection, data processing, and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Developing methods for distributed computing has received much attention, and one such method is the use of multi-agent systems. The organization of distributed computing on conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with network PCs used as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to organize the processing power of the computers of any existing network to solve a large task through distributed computing. Agents on a computer network can configure a distributed computing system, distribute the computational load among the agent-operated computers, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting new computers to the system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
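The falsification detection described above can be illustrated by redundant dispatch with majority voting (a hedged sketch; the `Worker` class and `run_with_verification` function are hypothetical, not the authors' agent algorithm):

```python
import random
from collections import Counter

class Worker:
    """A compute node; a faulty one falsifies its result."""
    def __init__(self, name, faulty=False):
        self.name, self.faulty = name, faulty

    def compute(self, task):
        result = sum(task)                  # the distributed sub-task
        return result + 1 if self.faulty else result

def run_with_verification(task, workers, redundancy=3):
    """Dispatch the same task to several workers and majority-vote the answer;
    workers that disagree with the majority are flagged as possible falsifiers."""
    chosen = random.sample(workers, redundancy)
    results = {w.name: w.compute(task) for w in chosen}
    majority, _ = Counter(results.values()).most_common(1)[0]
    suspects = [name for name, r in results.items() if r != majority]
    return majority, suspects
```

Redundant execution trades throughput for integrity: the same sub-task costs `redundancy` times as much, but a minority of falsifying nodes can no longer corrupt the accepted result.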
Design, implementation and application of distributed order PI control.
Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan
2013-05-01
In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency criteria, which can be commonly used in many fields of fractional-order systems, control, and signal processing. Besides, an auto-tuning strategy and the genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguished features of the discussed methods in fairways. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Red mud flocculation process in alumina production
NASA Astrophysics Data System (ADS)
Fedorova, E. R.; Firsov, A. Yu
2018-05-01
The process of thickening and washing red mud is a bottleneck of alumina production. Existing automated systems for thickening process control involve stabilizing the parameters of the primary technological circuits of the thickener. A current direction of scientific research is the creation and improvement of models and systems for model-based control of the thickening process. However, the known models do not fully consider the presence of perturbing effects, in particular the particle size distribution in the process feed and the size distribution of floccules after aggregation in the feed barrel. The article covers the basic concepts and terms used in formulating the population balance algorithm. The population balance model is implemented in the MatLab environment. The result of the simulation is the particle size distribution after the flocculation process. This model makes it possible to predict the size distribution of floccules after the aggregation of red mud in the feed barrel. Mud from Jamaican bauxite served as the industrial sample of red mud, and a Cytec Industries HX-3000 series flocculant at a concentration of 0.5% was used. In the simulation, model constants obtained in a tubular tank in the laboratories of CSIRO (Australia) were used.
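A discrete population balance of the Smoluchowski type, as commonly used for flocculation modelling, can be sketched in a few lines (an illustrative explicit-Euler step with a constant collision kernel `beta`, not the authors' MatLab model):

```python
def aggregation_step(n, beta, dt):
    """One explicit-Euler step of a discrete Smoluchowski population balance.
    n[k] is the number density of aggregates of size k+1; class k is born
    from collisions of sizes summing to k+1 and dies by colliding further."""
    K = len(n)
    dn = [0.0] * K
    for i in range(K):
        for j in range(K):
            rate = beta * n[i] * n[j]
            dn[i] -= rate                    # class i consumed by collision
            if i + j + 1 < K:
                dn[i + j + 1] += 0.5 * rate  # birth (each pair counted twice)
    return [n[k] + dt * dn[k] for k in range(K)]
```

Iterating this step evolves an initial monomer distribution toward larger floccule sizes; a realistic flocculant model would replace the constant `beta` with a size-dependent kernel and add breakage terms.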
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With the continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations, whereas fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is transformed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
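The core of a dynamic matrix controller built from step-response coefficients can be sketched as follows (a simplified single-loop sketch with a steady free response assumed; the distributed PID-type and Nash-optimisation extensions of the paper are omitted):

```python
import numpy as np

def dmc_move(step_resp, y_meas, setpoint, P, M, lam):
    """First control move of dynamic matrix control from step-response
    coefficients a_1..a_N, prediction horizon P, control horizon M and
    move-suppression weight lam."""
    a = np.asarray(step_resp, dtype=float)
    A = np.zeros((P, M))                 # dynamic matrix: A[i, j] = a_{i-j+1}
    for i in range(P):
        for j in range(min(i + 1, M)):
            A[i, j] = a[i - j]
    free = np.full(P, y_meas)            # free response (steady past assumed)
    e = np.full(P, setpoint) - free      # predicted error over the horizon
    # Least-squares control moves with move suppression
    du = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ e)
    return du[0]                         # only the first move is applied
```

In the fractional-order setting of the paper, `step_resp` would come from the Oustaloup high-order approximation rather than from an integer-order plant model.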
Distributed control network for optogenetic experiments
NASA Astrophysics Data System (ADS)
Kasprowicz, G.; Juszczyk, B.; Mankiewicz, L.
2014-11-01
Optogenetic experiments are now being constructed to examine social behavioural relations in groups of animals. A novel concept of an implantable device with a distributed control network and advanced positioning capabilities is proposed. It is based on wireless energy transfer technology, a micro-power radio interface, and advanced signal processing.
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
Distributed digital signal processors for multi-body flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.
1992-01-01
Multi-body flexible structures, such as those currently under investigation in spacecraft design, are large scale (high-order) dimensional systems. Controlling and filtering such structures is a computationally complex problem. This is particularly important when many sensors and actuators are located along the structure and need to be processed in real time. This report summarizes research activity focused on solving the signal processing (that is, information processing) issues of multi-body structures. A distributed architecture is developed in which single loop processors are employed for local filtering and control. By implementing such a philosophy with an embedded controller configuration, a supervising controller may be used to process global data and make global decisions as the local devices are processing local information. A hardware testbed, a position controller system for a servo motor, is employed to illustrate the capabilities of the embedded controller structure. Several filtering and control structures which can be modeled as rational functions can be implemented on the system developed in this research effort. Thus the results of the study provide a support tool for many Control/Structure Interaction (CSI) NASA testbeds such as the Evolutionary model and the nine-bay truss structure.
High-performance data processing using distributed computing on the SOLIS project
NASA Astrophysics Data System (ADS)
Wampler, Stephen
2002-12-01
The SOLIS solar telescope collects data at a high rate, resulting in 500 GB of raw data each day. The SOLIS Data Handling System (DHS) has been designed to quickly process this data down to 156 GB of reduced data. The DHS design uses pools of distributed reduction processes that are allocated to different observations as needed. A farm of 10 dual-cpu Linux boxes contains the pools of reduction processes. Control is through CORBA and data is stored on a fibre channel storage area network (SAN). Three other Linux boxes are responsible for pulling data from the instruments using SAN-based ringbuffers. Control applications are Java-based while the reduction processes are written in C++. This paper presents the overall design of the SOLIS DHS and provides details on the approach used to control the pooled reduction processes. The various strategies used to manage the high data rates are also covered.
2003-06-27
KENNEDY SPACE CENTER, FLA. - Inside the hangar at Vandenberg Air Force Base, Calif., workers wait for the Pegasus launch vehicle to be moved inside. The Pegasus will carry the SciSat-1 spacecraft into a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The scientific mission of SciSat-1 is to measure and understand the chemical processes that control the distribution of ozone in the Earth's atmosphere, particularly at high altitudes. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures to improve the health of our atmosphere and prevent further ozone depletion. The mission is designed to last two years.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Snipes, J. S.; Galgalikar, R.; Ramaswami, S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2014-09-01
In our recent work, a multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process was introduced. The model is of a modular type and comprises five modules, each designed to handle a specific aspect of the GMAW process, i.e.: (i) electro-dynamics of the welding-gun; (ii) radiation-/convection-controlled heat transfer from the electric-arc to the workpiece and mass transfer from the filler-metal consumable electrode to the weld; (iii) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (iv) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; and (v) spatial distribution of the as-welded material mechanical properties. In the present work, the GMAW process model has been upgraded with respect to its predictive capabilities regarding the spatial distribution of the mechanical properties controlling the ballistic-limit (i.e., penetration-resistance) of the weld. The model is upgraded through the introduction of the sixth module in the present work in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance of the armor may become controlled by the (often inferior) ballistic limits of its weld (fusion and heat-affected) zones. To demonstrate the utility of the upgraded GMAW process model, it is next applied to the case of butt-welding of a prototypical high-hardness armor-grade martensitic steel, MIL A46100. The model predictions concerning the spatial distribution of the material microstructure and ballistic-limit-controlling mechanical properties within the MIL A46100 butt-weld are found to be consistent with prior observations and general expectations.
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitor and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Automated Power Systems Management (APSM)
NASA Technical Reports Server (NTRS)
Bridgeforth, A. O.
1981-01-01
A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
40 CFR 750.30 - Applicability.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Processing and Distribution in Commerce Exemptions § 750.30 Applicability. Sections 750.30-750.41 apply to all rulemakings under authority of section 6(e)(3)(B) of the Toxic Substances Control Act (TSCA), 15 U.S.C. 2605(e)(3)(B) with respect to petitions for PCB processing and distribution in commerce...
Environmental Control and Life Support Systems
NASA Technical Reports Server (NTRS)
Engel, Joshua Allen
2017-01-01
The Environmental Control System provides a controlled air purge to Orion and SLS. The ECS performs this function by processing 100% ambient air while simultaneously controlling temperature, pressure, humidity, cleanliness and purge distribution.
Electric power processing, distribution, management and energy storage
NASA Astrophysics Data System (ADS)
Giudici, R. J.
1980-07-01
Power distribution subsystems are required for three elements of the SPS program: (1) the orbiting satellite, (2) the ground rectenna, and (3) the Electric Orbiting Transfer Vehicle (EOTV). Power distribution subsystems receive electrical power from the energy conversion subsystem and provide the power busses, rotary power transfer devices, switchgear, power processing, energy storage, and power management required to deliver power. Control of high-voltage plasma interactions, electric thruster interactions, and spacecraft charging of the SPS and the EOTV is also included as part of the power distribution subsystem design.
Electric power processing, distribution, management and energy storage
NASA Technical Reports Server (NTRS)
Giudici, R. J.
1980-01-01
Power distribution subsystems are required for three elements of the SPS program: (1) the orbiting satellite, (2) the ground rectenna, and (3) the Electric Orbiting Transfer Vehicle (EOTV). Power distribution subsystems receive electrical power from the energy conversion subsystem and provide the power busses, rotary power transfer devices, switchgear, power processing, energy storage, and power management required to deliver power. Control of high-voltage plasma interactions, electric thruster interactions, and spacecraft charging of the SPS and the EOTV is also included as part of the power distribution subsystem design.
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller with a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one related node (an ancestor node and/or at least one child node). The node controller sends its operating parameters to the nodes in that set; receives operating parameters from those nodes; calculates a plurality of updated node operating parameters using an iterative process over the operating parameters of the node and the set, where the iterative process involves evaluation of a closed-form solution; and adjusts its node operating parameters accordingly.
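The exchange-and-update cycle the abstract describes can be sketched as follows. This is a toy illustration of the message pattern only: the patent evaluates a closed-form optimal-power-flow update, whereas plain neighbor averaging is used here as a stand-in, and all names and values are invented.

```python
# Toy sketch: each node controller on a radial (tree) network exchanges
# operating parameters with its ancestor and child nodes, then updates its
# local state iteratively. Neighbor averaging stands in for the patent's
# closed-form update; names and values are invented.

class NodeController:
    def __init__(self, name, value):
        self.name = name
        self.value = value      # stand-in for the node operating parameters
        self.neighbors = []     # ancestor and child node controllers

    def step(self):
        # "Receive" neighbor parameters and compute the local update.
        received = [n.value for n in self.neighbors]
        return sum([self.value] + received) / (len(received) + 1)

def iterate(nodes, rounds=60):
    for _ in range(rounds):
        updates = {n.name: n.step() for n in nodes}   # exchange phase
        for n in nodes:                               # adjust phase
            n.value = updates[n.name]

# Radial network: root is the ancestor of child nodes a and b.
root = NodeController("root", 1.0)
a = NodeController("a", 0.0)
b = NodeController("b", 0.5)
root.neighbors = [a, b]
a.neighbors = [root]
b.neighbors = [root]
iterate([root, a, b])
print(round(root.value, 3), round(a.value, 3), round(b.value, 3))
```

After enough rounds the three controllers agree on a common operating point, which is the behavior the iterative exchange is meant to produce.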
Detection of global state predicates
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Neiger, Gil
1991-01-01
The problem addressed here arises in the context of Meta: how can a set of processes monitor the state of a distributed application in a consistent manner? For example, consider the simple distributed application as shown here. Each of the three processes in the application has a light, and the control processes would each like to take an action when some specified subset of the lights are on. The application processes are instrumented with stubs that determine when the process turns its lights on or off. This information is disseminated to the control processes, each of which then determines when its condition of interest is met. Meta is built on top of the ISIS toolkit, and so we first built the sensor dissemination mechanism using atomic broadcast. Atomic broadcast guarantees that all recipients receive the messages in the same order and that this order is consistent with causality. Unfortunately, the control processes are somewhat limited in what they can deduce when they find that their condition of interest holds.
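The role of atomic broadcast here can be illustrated with a toy sequencer (not the ISIS toolkit API): because every control process replays the same totally ordered event log, all of them evaluate their condition of interest over identical intermediate states.

```python
# Toy total-order broadcast: all light on/off events pass through one
# sequencer, so every control process sees the same event order.

class Sequencer:
    def __init__(self):
        self.log = []

    def broadcast(self, event):      # event = (process_id, light_is_on)
        self.log.append(event)       # single total order for all receivers

def predicate_held_ever(log, subset):
    """Were all lights in `subset` ever on simultaneously in this order?"""
    state = {}
    for pid, is_on in log:
        state[pid] = is_on
        if all(state.get(p, False) for p in subset):
            return True
    return False

seq = Sequencer()
for event in [("p1", True), ("p2", True), ("p1", False), ("p3", True)]:
    seq.broadcast(event)

print(predicate_held_ever(seq.log, {"p1", "p2"}))   # True: both on after event 2
print(predicate_held_ever(seq.log, {"p1", "p3"}))   # False: p1 off before p3 on
```

The limitation the abstract mentions also shows up here: a control process can only reason about the states that appear in the delivered order, not about every global state the application may actually have passed through.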
NASA Technical Reports Server (NTRS)
Williams, G. M.; Fraser, J. C.
1991-01-01
The objective was to examine state-of-the-art optical sensing and processing technology applied to control the motion of flexible spacecraft. Proposed large flexible space systems, such an optical telescopes and antennas, will require control over vast surfaces. Most likely distributed control will be necessary involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The used technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used for designing both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that presents a logical, effective way to develop and integrate advancing technologies.
Distributed automatic control of technological processes in conditions of weightlessness
NASA Technical Reports Server (NTRS)
Kukhtenko, A. I.; Merkulov, V. I.; Samoylenko, Y. I.; Ladikov-Royev, Y. P.
1986-01-01
Some problems associated with the automatic control of liquid metal and plasma systems under conditions of weightlessness are examined, with particular reference to the problem of stability of liquid equilibrium configurations. The theoretical fundamentals of automatic control of processes in electrically conducting continuous media are outlined, and means of using electromagnetic fields for simulating technological processes in a space environment are discussed.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Gao, Changwei; Liu, Xiaoming; Chen, Hai
2017-08-22
This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during the transition process. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. The adjustable parameter of the compensation section can be modified to reduce the order of the system, effectively suppressing the power fluctuations of the VSG in the transient process. To verify the effectiveness of the proposed control strategy for distributed energy resource inverters, a simulation model was set up on the MATLAB/Simulink platform and a physical experiment platform was established. Simulation and experiment results demonstrate the effectiveness of the proposed IVSG control strategy.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... proposes to revise its standards to reflect changes and updates for Express Mail[supreg] Open and Distribute and Priority Mail[supreg] Open and Distribute to improve efficiencies in processing and to control...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-24
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... to reflect changes and updates for Express Mail[supreg] Open and Distribute and Priority Mail[supreg] Open and Distribute to improve efficiencies in processing and to control costs. DATES: Effective Date...
Methods and tools for profiling and control of distributed systems
NASA Astrophysics Data System (ADS)
Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.
2018-02-01
This article is devoted to the topic of profiling and control of distributed systems. Distributed systems have a complex architecture: applications are distributed among various computing nodes, and many network operations are performed. It is therefore important to develop methods and tools for profiling distributed systems. The article analyzes and standardizes methods for profiling distributed systems that rely on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems that receive and process user requests. To automate this profiling method, a software application was developed with a modular structure similar to that of a SCADA system.
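The queueing-network modelling step might look like the following sketch, which simulates a single computing node as an M/M/1 queue and compares the simulated mean response time with the analytic value 1/(mu - lam); the arrival and service rates are invented.

```python
# Invented parameters: a node serving requests at rate mu, with Poisson
# arrivals at rate lam, is an M/M/1 queue whose mean response time is
# 1/(mu - lam). The simulation uses Lindley's recursion for waiting times.
import random

def simulate_mm1(lam, mu, n=200_000, seed=1):
    rng = random.Random(seed)
    wait = 0.0              # waiting time of the current request
    total_response = 0.0
    for _ in range(n):
        service = rng.expovariate(mu)
        total_response += wait + service           # response = wait + service
        gap = rng.expovariate(lam)                 # time to the next arrival
        wait = max(0.0, wait + service - gap)      # Lindley's recursion
    return total_response / n

lam, mu = 0.5, 1.0
print(round(simulate_mm1(lam, mu), 2), 1.0 / (mu - lam))
```

A profiling tool would fit such per-node models from measured request traces and then compose them into a queueing network over the system's graph model.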
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGoldrick, P.R.
1981-01-01
The Mirror Fusion Test Facility (MFTF) is a complex facility requiring a highly computerized Supervisory Control and Diagnostics System (SCDS) to monitor and provide control over ten subsystems, three of which require true process control. SCDS will provide physicists with a method of studying machine and plasma behavior by acquiring and processing up to four megabytes of plasma diagnostic information every five minutes. A high degree of availability and throughput is provided by a distributed computer system (nine 32-bit minicomputers on shared memory). Data distributed across SCDS is managed by a high-bandwidth Distributed Database Management System. The MFTF operators' control room consoles use color television monitors with touch-sensitive screens, a totally new approach. The method of handling deviations from normal machine operation, and how the operator should be notified and assisted in the resolution of problems, has been studied and a system designed.
System and method for secure group transactions
Goldsmith, Steven Y [Rochester, MN
2006-04-25
A method and a secure system, processing on one or more computers, provides a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.
NASA Technical Reports Server (NTRS)
Pordes, Ruth (Editor)
1989-01-01
Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert-system tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.
Montana Curriculum Guidelines for Distributive Education. Revised.
ERIC Educational Resources Information Center
Harris, Ron, Ed.
These distributive education curriculum guidelines are intended to provide Montana teachers with teaching information for 11 units. Units cover introduction to marketing and distributive education, human relations and communications, operations and control, processes involved in buying for resale, merchandise handling, sales promotion, sales and…
Resource depletion promotes automatic processing: implications for distribution of practice.
Scheel, Matthew H
2010-12-01
Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wang, Xiaorui; Zhe Zhang, Yun
2018-07-01
By employing the different topological charges of a Laguerre–Gaussian beam as a qubit, we experimentally demonstrate a controlled-NOT (CNOT) gate with light beams carrying orbital angular momentum via a photonic band gap structure in a hot atomic ensemble. Through a degenerate four-wave mixing process, the spatial distribution of the CNOT gate including splitting and spatial shift can be affected by the Kerr nonlinear effect in multilevel atomic systems. Moreover, the intensity variations of the CNOT gate can be controlled by the relative phase modulation. This research can be useful for applications in quantum information processing.
Computer-Controlled System for Plasma Ion Energy Auto-Analyzer
NASA Astrophysics Data System (ADS)
Wu, Xian-qiu; Chen, Jun-fang; Jiang, Zhen-mei; Zhong, Qing-hua; Xiong, Yu-ying; Wu, Kai-hua
2003-02-01
A computer-controlled system for a plasma ion energy auto-analyzer was developed for rapid, online measurement of plasma ion energy distribution. The system intelligently controls all the equipment via an RS-232 port, a printer port, and a home-built circuit. The software, designed in the LabVIEW G language, automatically performs all of the tasks, such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing, and graphic export. Using the system, only a few minutes are needed to acquire the whole ion energy distribution, rapidly providing important parameters for plasma process techniques used in semiconductor devices and microelectronics.
The Control of Welding Deformation of the Three-Section Arm of Placing Boom of HB48B Pump Truck
NASA Astrophysics Data System (ADS)
Wang, Zhi-ling
2018-02-01
The concrete pump truck is construction equipment that conveys concrete using a self-contained base plate and a distributing boom. It integrates the pump transport mechanism of the concrete pump, the hydraulic roll-folding distributing boom used to place material, and the supporting mechanism on an automobile chassis, providing highly efficient concrete conveying with combined driving, pumping, and placing functions. The placing boom is the main load-bearing member of the pumping assembly; it carries great pressure and its stress condition is complex. Taking the HB48B placing boom as an example, this paper analyzes the deformation produced in the placing boom and identifies the main factors affecting welding deformation. Welding deformation was then prevented, controlled, and reduced by managing the riveted joint dimensions, the welding process parameters, and post-weld processing. These measures have practical significance for preventing, controlling, and reducing welding deformation.
Reducing lumber thickness variation using real-time statistical process control
Thomas M. Young; Brian H. Bond; Jan Wiedenbeck
2002-01-01
A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...
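An X-bar control chart of the kind such an SPC system distributes to sawing centers can be sketched as follows; the target thickness, process sigma, and sample values are invented for illustration.

```python
# Minimal X-bar chart sketch (not the Wonderware HMI): a sawing center is
# flagged when a subgroup mean drifts beyond the 3-sigma control limits
# estimated from in-control data. Target and sigma values are invented.

def xbar_limits(mean, sigma, subgroup_size):
    half = 3 * sigma / subgroup_size ** 0.5   # 3-sigma limits on the mean
    return mean - half, mean + half

def check_subgroup(samples, lcl, ucl):
    xbar = sum(samples) / len(samples)
    return "in control" if lcl <= xbar <= ucl else "out of control"

# Target 1.00 in. thickness, process sigma 0.02 in., subgroups of 5 boards.
lcl, ucl = xbar_limits(1.00, 0.02, 5)
print(check_subgroup([1.01, 0.99, 1.00, 1.02, 0.98], lcl, ucl))  # in control
print(check_subgroup([1.04, 1.05, 1.03, 1.06, 1.04], lcl, ucl))  # out of control
```

In a real-time deployment each sawing center streams subgroup means to such a chart continuously, so drifting setworks are caught before much over- or under-sized lumber is produced.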
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
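The decompose/dispatch/combine cycle described above can be sketched as a scatter-gather over threads; this is a toy stand-in for the request-broker middleware, and the server names and data are invented.

```python
# Scatter-gather sketch: a client query is decomposed into subqueries,
# dispatched to independent "servers" (here, dict partitions queried in a
# thread pool), and the partial results are combined into one response.
from concurrent.futures import ThreadPoolExecutor

SERVERS = {                       # hypothetical partitioned databases
    "east": {"alice": 3, "bob": 1},
    "west": {"carol": 7},
}

def query_server(region, min_count):
    # Each server answers only for its own partition.
    return {k: v for k, v in SERVERS[region].items() if v >= min_count}

def group_query(min_count):
    with ThreadPoolExecutor() as pool:                        # dispatch
        parts = pool.map(lambda r: query_server(r, min_count), SERVERS)
    combined = {}
    for part in parts:                                        # combine
        combined.update(part)
    return combined                                           # one response

print(group_query(2))   # {'alice': 3, 'carol': 7}
```

The point of the coordination model is that the client sees a single request and a single reply; the decomposition, replication, and recombination are transparent.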
Research and design of intelligent distributed traffic signal light control system based on CAN bus
NASA Astrophysics Data System (ADS)
Chen, Yu
2007-12-01
An intelligent distributed traffic signal light control system was designed based on infrared sensing, CAN bus, and single-chip microcomputer (SCM) technologies. The traffic flow signal is processed by an AT89C51 SCM at the core of the system. The SCM also controls the CAN bus controller SJA1000 and transceiver PCA82C250 to build a CAN bus communication system for data transmission, and the host PC connects and communicates with the SCM through the PDIUSBD12 USB interface chip. The resulting distributed traffic signal light control system offers three control modes: traffic-flow-driven, remote, and PC control. This paper introduces the system composition and parts of the hardware/software design in detail.
Cognitive process modelling of controllers in en route air traffic control.
Inoue, Satoru; Furuta, Kazuo; Nakata, Keiichi; Kanno, Taro; Aoyama, Hisae; Brown, Mark
2012-01-01
In recent years, various efforts have been made in air traffic control (ATC) to maintain traffic safety and efficiency in the face of increasing air traffic demands. ATC is a complex process that depends to a large degree on human capabilities, and so understanding how controllers carry out their tasks is an important issue in the design and development of ATC systems. In particular, the human factor is considered to be a serious problem in ATC safety and has been identified as a causal factor in both major and minor incidents. There is, therefore, a need to analyse the mechanisms by which errors occur due to complex factors and to develop systems that can deal with these errors. From the cognitive process perspective, it is essential that system developers understand the more complex working processes that involve the cooperative work of multiple controllers. Distributed cognition is a methodological framework for analysing cognitive processes that span multiple actors mediated by technology. In this research, we analyse and model the interactions that take place in en route ATC systems based on distributed cognition, examine the functional problems in an ATC system from a human factors perspective, and conclude by identifying measures to address these problems. The research centres on an experimental study of controllers' tasks in en route ATC, undertaken to gain a better understanding of their cognitive processes: we conducted ethnographic observations and then analysed the data to develop a model of controllers' cognitive processes. This analysis revealed that strategic routines are applicable to decision making.
40 CFR 763.169 - Distribution in commerce prohibitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements § 763.169 Distribution in commerce... States or for export, any of the asbestos-containing products listed at § 763.165(a). (b) After August 25...
Wang, Xi-fen; Zhou, Huai-chun
2005-01-01
The control of the 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic, and clean operation of a pc-fired furnace with a multi-burner system. The development of visualization of 3-D temperature distributions in pc-fired furnaces makes possible a new combustion control strategy that takes the furnace temperature directly as its goal, improving control quality for the combustion process. Studied in this paper is such a strategy: the furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center of every cross-section are extracted from the visualized 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code calculated the 3-D temperature distributions in a furnace, and a linear model was then set up to relate the features of the temperature distributions to the inputs of the combustion process, such as the flow rates of fuel and air fed into the furnace through each burner. An adaptive genetic algorithm was adopted to find the combination of input parameters that forms the optimal 3-D temperature field desired for boiler operation. Simulation results showed that the strategy could quickly identify the factors driving the temperature distribution away from the optimal state and give correct adjustment suggestions.
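The optimization loop the abstract outlines, a linear model from burner inputs to section temperatures searched by a genetic algorithm, might be sketched as follows; the model matrix, target profile, and GA settings are all invented for illustration.

```python
# Toy sketch: a linear model maps burner inputs (fuel, air flow rates) to
# section temperatures, and a simple genetic algorithm searches for inputs
# whose predicted temperatures match the desired profile. Matrix A, the
# target, and all GA parameters are invented.
import random

A = [[2.0, 0.5], [0.5, 2.0]]          # inputs -> section temperatures
TARGET = [900.0, 950.0]               # desired temperature profile

def predict(x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def fitness(x):                        # smaller = closer to the target
    return sum((t - p) ** 2 for t, p in zip(TARGET, predict(x)))

def ga(pop_size=40, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 600), rng.uniform(0, 600)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]         # keep the better half
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = rng.sample(parents, 2)
            # Averaging crossover plus Gaussian mutation.
            children.append([(u + v) / 2 + rng.gauss(0, 5)
                             for u, v in zip(p1, p2)])
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print([round(v, 1) for v in predict(best)])   # close to [900.0, 950.0]
```

In the paper's setting the role of `predict` is played by the linear model fitted against CFD results, and the decision vector covers the fuel and air flows of every burner rather than two invented inputs.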
Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2005-01-01
In extensive form noncooperative game theory, at each instant t, each agent i sets its state x(sub i) independently of the other agents, by sampling an associated distribution, q(sub i)(x(sub i)). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q(sub i) of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x(sub 1),x(sub 2), ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G), and the convergence of the q(sub i) to the most likely set of distributions consistent with that value. After this the target value for E(sub q)(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x(sub i) all have a finite number of possible values and G does not extend to multiple instants in time. That work also is based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions, broadening its applicability to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
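The annealed iterative process described above can be sketched for two agents with binary moves: each q(sub i) is repeatedly reset to a Boltzmann distribution over its expected cost given the other agent's current distribution, while a temperature parameter is lowered (which tightens the target E(G)). The cost function G and the cooling schedule here are invented.

```python
# Two agents, binary moves. G is an invented coordination cost (agents
# prefer matching moves). Each agent's distribution q_i is set to the
# Boltzmann distribution over its expected cost, with annealing.
import math

G = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): 1.0, (1, 0): 1.0}

def expected_cost(move, other_q):
    return sum(other_q[o] * G[(move, o)] for o in (0, 1))

def boltzmann(costs, T):
    weights = [math.exp(-c / T) for c in costs]
    s = sum(weights)
    return [w / s for w in weights]

q1, q2 = [0.6, 0.4], [0.3, 0.7]       # initial move distributions
T = 1.0
for _ in range(60):
    q1 = boltzmann([expected_cost(m, q2) for m in (0, 1)], T)
    q2 = boltzmann([expected_cost(m, q1) for m in (0, 1)], T)
    T *= 0.9                           # anneal: lower the target E(G)

print(round(q1[1], 3), round(q2[1], 3))
```

Because q2 starts with more mass on move 1, both distributions sharpen toward move 1 as the temperature drops, illustrating how the coupled evolution of independent per-agent distributions settles on a coordinated joint state.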
DOE Office of Scientific and Technical Information (OSTI.GOV)
VOLTTRON is an agent execution platform providing services to its agents that allow them to easily communicate with physical devices and other resources. VOLTTRON delivers an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions. VOLTTRON can independently manage a wide range of applications, such as HVAC systems, electric vehicles, distributed energy or entire building loads, leading to improved operational efficiency.
1979-12-01
Links between processes can be allocated strictly to control functions. In fact, the degree of separation of control and data is an important research issue...delays or loss of control messages. Cognoscenti agree that message-passing IPC schemes are equivalent in "power" to schemes which employ shared...THEORETICAL WORK Page 55 SECTION 6 THEORETICAL WORK 6.1 Working Group Interim Report. Structure of discussion: distributed system without central (or any) control
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of Bayesian networks to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, a Bayesian network of impact factors on quality is built from prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. The proposed model is also shown to be feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
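The map/reduce split for accumulating the counts behind a conditional probability table can be sketched in miniature. Here plain Python functions stand in for Hadoop jobs, and the records (a machine setting paired with an inspection result) are fabricated for the example; a real deployment would distribute `map_phase` across nodes.

```python
from collections import Counter
from itertools import chain

# Illustrative records: (machine_setting, inspection_result) pairs from a
# manufacturing line, split into partitions as a cluster would hold them.
partitions = [
    [("high", "defect"), ("high", "ok"), ("low", "ok")],
    [("low", "ok"), ("high", "defect"), ("low", "defect")],
    [("high", "ok"), ("low", "ok"), ("low", "ok")],
]

def map_phase(records):
    # Emit one key-count pair per observation, plus a marginal count per parent.
    out = []
    for parent, child in records:
        out.append(((parent, child), 1))
        out.append(((parent, "_total"), 1))
    return out

def reduce_phase(mapped):
    # Aggregate the emitted pairs from all partitions into global counts.
    counts = Counter()
    for key, n in chain.from_iterable(mapped):
        counts[key] += n
    return counts

counts = reduce_phase(map_phase(p) for p in partitions)

def cond_prob(child, parent):
    # P(result = child | setting = parent) from the aggregated counts.
    return counts[(parent, child)] / counts[(parent, "_total")]
```

Because each partition is mapped independently and the reduce step only sums counts, the computation parallelizes in the way the abstract describes.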
Enhanced High Performance Power Compensation Methodology by IPFC Using PIGBT-IDVR
Arumugom, Subramanian; Rajaram, Marimuthu
2015-01-01
Currently, power systems are controlled without high-speed control and are frequently re-initiated, resulting in a slow process compared with static electronic devices. Among the various power interruptions in power supply systems, voltage dips play a central role in causing disruption. The dynamic voltage restorer (DVR) is a voltage-control-based process that compensates for line transients in the distributed system. To overcome these issues and to achieve higher speed, a new methodology called the Parallel IGBT-Based Interline Dynamic Voltage Restorer (PIGBT-IDVR) is proposed, which focuses on the dynamic processing of energy reloads in common dc-linked energy storage with less adaptive transition. The interline power flow controller (IPFC) scheme is employed to manage power transmission between the lines, and the restorer method to control the reactive power in the individual lines. The proposed methodology avoids failure of the distributed system and provides better performance than existing methodologies. PMID:26613101
Analysis Using Bi-Spectral Related Technique
1993-11-17
filtering is employed as the data is processed (equation 1). Earlier results have shown that in contrast to the Wigner-Ville Distribution (WVD) no spectral...Technique, by Ralph Hippenstiel, November 17, 1993. Approved for public release; distribution unlimited. Prepared for: Naval Command, Control...
Mathematical model of whole-process calculation for bottom-blowing copper smelting
NASA Astrophysics Data System (ADS)
Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song
2017-11-01
The distribution law of materials in smelting products is key to cost accounting and contaminant control. Regardless, the distribution law is difficult to determine quickly and accurately by mere sampling and analysis. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control index constraint in copper bottom-blowing smelting. Simulation of the entire process of bottom-blowing copper smelting was established using a self-developed MetCal software platform. A whole-process simulation for an enterprise in China was then conducted. Results indicated that the quantity and composition information of unknown materials, as well as heat balance information, can be quickly calculated using the model. Comparison of production data revealed that the model can basically reflect the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.
Controls on the distribution of alkylphenols and BTEX in oilfield waters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dale, J.D.; Aplin, A.C.; Larter, S.R.
1996-10-01
Controls on the abundance of alkylphenols and BTEX in oilfield waters are poorly understood, but are important because these species are the main dissolved pollutants in produced waters and may also be used as indicators of both the proximity and migration range of petroleum. Using (a) measurements of alkylphenols and BTEX in oilfield waters and associated petroleums, and (b) oil/water partition coefficients under subsurface conditions, we conclude that: (1) The distribution of alkylphenols and BTEX in formation waters is controlled by partition equilibrium with petroleum. Phenol and benzene typically account for 50% of total phenols and total BTEX, respectively. (2) The concentrations of alkylphenols and BTEX in produced waters equilibrated with oil in reservoirs or in separator systems vary predictably as a function of pressure, temperature and salinity. This suggests that oil/water partition is the primary control influencing the distribution of alkylphenols and BTEX in oilfield waters and that other processes, such as hydrolysis at the oil-water contact, are secondary.
Hierarchical Process Composition: Dynamic Maintenance of Structure in a Distributed Environment
1988-01-01
One prominent line of research stresses the independence of address space and thread of control, and the resulting efficiencies due to shared memory...cooperating processes. StarOS focuses on ease of use and a general capability mechanism, while Medusa stresses the effect of distributed hardware on system...process structure and the asynchrony among agents and between agents and sources of failure. By stressing dynamic structure, we are led to adopt an
The distribution of hillslope-channel interactions in a rangeland watershed
Leslie M. Reid
1998-01-01
The distribution of erosion and deposition in a basin--and thus of the major controls on basin evolution--is dependent upon the local balance between sediment transport and sediment supply. This balance, in turn, reflects the nature, strength, and distribution of interactions between hillslope and channel processes.
Combustion distribution control using the extremum seeking algorithm
NASA Astrophysics Data System (ADS)
Marjanovic, A.; Krstic, M.; Djurovic, Z.; Kvascev, G.; Papic, V.
2014-12-01
Quality regulation of the combustion process inside the furnace is fundamental to the high demands for increased robustness, safety and efficiency of thermal power plants. The paper considers the possibility of controlling the spatial temperature distribution inside the boiler by correcting the distribution of coal over the mills. Such a control system keeps the flame focus away from the walls of the boiler, thus preserving the equipment and reducing the possibility of ash slagging. At the same time, uniform heat dissipation over the mills enhances the energy efficiency of the boiler while reducing pollution from the system. A constrained multivariable extremum seeking algorithm is proposed as a tool for combustion process optimization, with the main objective of centering the flame in the furnace. Simulations are conducted on a model corresponding to the 350 MW boiler of the Nikola Tesla Power Plant in Obrenovac, Serbia.
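The perturbation-based idea behind extremum seeking can be shown on a scalar toy problem. The quadratic map below stands in for the unknown flame-position cost, and the dither amplitude, frequency and integration gain are arbitrary choices for this sketch, not the constrained multivariable scheme of the paper.

```python
import math

# Scalar extremum seeking: climb an unknown map J by injecting a sinusoidal
# dither and demodulating the measured output (all gains are assumptions).
def J(u):
    return -(u - 2.0) ** 2          # unknown map with its maximum at u = 2

a, w, k, dt = 0.2, 5.0, 0.8, 0.01   # dither amplitude/frequency, gain, step
u_hat = 0.0                         # parameter estimate, far from optimum
for n in range(100000):
    t = n * dt
    y = J(u_hat + a * math.sin(w * t))
    # The product y*sin(wt), averaged over a dither period, is proportional
    # to the local gradient of J, so integrating it climbs the map.
    u_hat += k * y * math.sin(w * t) * dt
```

Averaging theory gives an effective dynamic du/dt ≈ k(a/2)J'(u), so the estimate settles at the maximizer with a residual ripple set by the dither amplitude.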
Shope, William G.; ,
1987-01-01
The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.
1983-11-01
transmission, FM(R) will only have to hold one message. 3. Program Control Block (PCB) The PCB [Deitel 82] will be maintained by the Executive in...and Use of Kernel to Process Interrupts 35 10. Layered Operating System Design 38 11. Program Control Block Table 43 12. Ready List Data Structure 45 13...examples of fully distributed systems in operation. An objective of the NPS research program for SPLICE is to advance our knowledge of distributed
NASA Technical Reports Server (NTRS)
Watson, R. T.; Geller, M. A.; Stolarski, R. S.; Hampson, R. F.
1986-01-01
The state of knowledge of the upper atmosphere was assessed as of January 1986. The physical, chemical, and radiative processes which control the spatial and temporal distribution of ozone in the atmosphere; the predicted magnitude of ozone perturbations and climate changes for a variety of trace gas scenarios; and the ozone and temperature data used to detect the presence or absence of a long term trend were discussed. This assessment report was written by a small group of NASA scientists, was peer reviewed, and is based primarily on the comprehensive international assessment document entitled Atmospheric Ozone 1985: Assessment of Our Understanding of the Processes Controlling Its Present Distribution and Change, to be published as the World Meteorological Organization Global Ozone Research and Monitoring Project Report No. 16.
Distributed Secure Coordinated Control for Multiagent Systems Under Strategic Attacks.
Feng, Zhi; Wen, Guanghui; Hu, Guoqiang
2017-05-01
This paper studies a distributed secure consensus tracking control problem for multiagent systems subject to strategic cyber attacks modeled by a random Markov process. A hybrid stochastic secure control framework is established for designing a distributed secure control law such that mean-square exponential consensus tracking is achieved. A connectivity restoration mechanism is considered, and the properties of attack frequency and attack length rate are investigated, respectively. Based on the solutions of an algebraic Riccati equation and an algebraic Riccati inequality, a procedure for selecting the control gains is provided, and stability is analyzed using Lyapunov's method. The effect of strategic attacks on discrete-time systems is also investigated. Finally, numerical examples are provided to illustrate the effectiveness of the theoretical analysis.
Cloud-based distributed control of unmanned systems
NASA Astrophysics Data System (ADS)
Nguyen, Kim B.; Powell, Darren N.; Yetman, Charles; August, Michael; Alderson, Susan L.; Raney, Christopher J.
2015-05-01
By enabling warfighters to efficiently and safely execute dangerous missions, unmanned systems have become an increasingly valuable component of modern warfare. The evolving use of unmanned systems leads to vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the necessary tools to perform one of the following functions: controlling the unmanned vehicle, or analyzing and processing the sensory data from unmanned vehicles. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three-year effort is to provide three major capabilities: 1) unmanned vehicle control using an open service oriented architecture; 2) data distribution utilizing cloud technologies; 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and Ozone Widget Framework (OWF).
Optical distributed sensors for feedback control: Characterization of photorefractive resonator
NASA Technical Reports Server (NTRS)
Indebetouw, Guy; Lindner, D. K.
1992-01-01
The aim of the project was to explore, define, and assess the possibilities of optical distributed sensing for feedback control. This type of sensing, which may have an impact on the dynamic control of deformable structures and the monitoring of small displacements, can be divided into data acquisition, data processing, and control design. Analogue optical techniques, because they are noninvasive and afford massive parallelism, may play a significant role in the acquisition and preprocessing of the data for such a sensor. Assessing these possibilities was the aim of the first stage of this project. The scope of the proposed research was limited to: (1) the characterization of photorefractive resonators and the assessment of their possible use as a distributed optical processing element; and (2) the design of a control system utilizing signals from distributed sensors. The results include a numerical and experimental study of the resonator below threshold, an experimental study of the effect of the resonator's transverse confinement on its dynamics above threshold, a numerical study of the resonator above threshold using a modal expansion approach, and an experimental test of this model. A detailed account of each investigation, including methodology and analysis of the results, is included along with reprints of published and submitted papers.
Stability and performance analysis of a jump linear control system subject to digital upsets
NASA Astrophysics Data System (ADS)
Wang, Rui; Sun, Hui; Ma, Zhen-Yang
2015-04-01
This paper focuses on the methodology for analyzing the stability and corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and by independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. Specific examples are proposed to verify the methodology of the tracking performance analysis. General analyses for different configurations are also proposed, and comparisons among configurations are conducted to demonstrate the availability and characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).
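The flavor of such an analysis can be illustrated with a scalar jump linear system that switches between a nominal and an upset dynamic according to a two-state Markov chain. All numbers below are invented; the empirical second moment simply demonstrates mean-square stability for this particular mode pair and transition matrix.

```python
import random

random.seed(42)

# Two modes: a "nominal" dynamic and an "upset" dynamic (values illustrative).
a = {0: 0.5, 1: 1.2}           # mode 0 stable, mode 1 unstable on its own
P = {0: [0.9, 0.1],            # Markov transition probabilities between modes
     1: [0.7, 0.3]}

def run(steps=200):
    # Simulate x_{k+1} = a[mode_k] * x_k with Markov-switched mode.
    x, mode = 1.0, 0
    for _ in range(steps):
        x = a[mode] * x
        mode = 0 if random.random() < P[mode][0] else 1
    return x

# Empirical mean-square state over many runs; a value near zero indicates
# mean-square stability despite the intermittent unstable (upset) mode.
ms = sum(run() ** 2 for _ in range(2000)) / 2000
```

For this example the second-moment transition matrix with entries a_i^2 * P[i][j] has spectral radius below one, so the state decays in mean square even though mode 1 alone is unstable.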
NASA Astrophysics Data System (ADS)
Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.
2017-09-01
Quality control is crucial in a wide variety of fields, as it can help to satisfy customers' needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA \bar{X} chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption, compared to the traditional \bar{X}-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation depending on the ARL alone can be misleading, because the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
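Percentiles of the run-length distribution can be estimated by straightforward Monte Carlo simulation of the chart. The smoothing constant, control limit and subgroup size below are arbitrary illustrative choices, not the optimized parameters the paper derives.

```python
import random
import statistics

random.seed(7)

# Monte Carlo run-length study for an EWMA chart of subgroup medians.
# lam (smoothing), h (control limit) and n (subgroup size) are illustrative.
lam, h, n = 0.2, 0.25, 5

def run_length(shift=0.0, max_len=10000):
    z = 0.0
    for k in range(1, max_len + 1):
        m = statistics.median(random.gauss(shift, 1.0) for _ in range(n))
        z = lam * m + (1 - lam) * z        # EWMA of the subgroup median
        if abs(z) > h:
            return k                       # first out-of-control signal
    return max_len

rls = sorted(run_length() for _ in range(2000))
arl = sum(rls) / len(rls)
pct = {p: rls[int(p / 100 * len(rls))] for p in (10, 50, 90)}
```

Comparing `arl` with `pct[10]`, `pct[50]` and `pct[90]` makes the paper's point concrete: for a right-skewed run-length distribution the ARL alone hides how often very short and very long runs occur.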
Neural networks for continuous online learning and control.
Choy, Min Chee; Srinivasan, Dipti; Cheu, Ruey Long
2006-11-01
This paper proposes a new hybrid neural network (NN) model that employs a multistage online learning process to solve the distributed control problem with an infinite horizon. Various techniques such as reinforcement learning and evolutionary algorithm are used to design the multistage online learning process. For this paper, the infinite horizon distributed control problem is implemented in the form of real-time distributed traffic signal control for intersections in a large-scale traffic network. The hybrid neural network model is used to design each of the local traffic signal controllers at the respective intersections. As the state of the traffic network changes due to random fluctuation of traffic volumes, the NN-based local controllers will need to adapt to the changing dynamics in order to provide effective traffic signal control and to prevent the traffic network from becoming overcongested. Such a problem is especially challenging if the local controllers are used for an infinite horizon problem where online learning has to take place continuously once the controllers are implemented into the traffic network. A comprehensive simulation model of a section of the Central Business District (CBD) of Singapore has been developed using PARAMICS microscopic simulation program. As the complexity of the simulation increases, results show that the hybrid NN model provides significant improvement in traffic conditions when evaluated against an existing traffic signal control algorithm as well as a new, continuously updated simultaneous perturbation stochastic approximation-based neural network (SPSA-NN). Using the hybrid NN model, the total mean delay of each vehicle has been reduced by 78% and the total mean stoppage time of each vehicle has been reduced by 84% compared to the existing traffic signal control algorithm. This shows the efficacy of the hybrid NN model in solving large-scale traffic signal control problem in a distributed manner. Also, it indicates the possibility of using the hybrid NN model for other applications that are similar in nature as the infinite horizon distributed control problem.
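The simultaneous perturbation stochastic approximation (SPSA) update used by the baseline controller mentioned above can be sketched on a toy quadratic loss. The gain-sequence exponents follow the commonly cited SPSA defaults, while the objective and all constants are invented for the example.

```python
import random

random.seed(3)

# SPSA on a toy 2-parameter objective (a stand-in for a delay cost; the
# traffic objective itself is hypothetical here).
def loss(theta):
    return (theta[0] - 1.0) ** 2 + (theta[1] + 0.5) ** 2

theta = [0.0, 0.0]
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602              # step-size gain sequence
    c_k = 0.1 / k ** 0.101              # perturbation-size gain sequence
    delta = [random.choice((-1, 1)) for _ in theta]
    plus = [t + c_k * d for t, d in zip(theta, delta)]
    minus = [t - c_k * d for t, d in zip(theta, delta)]
    # One common gradient estimate for ALL coordinates from only two
    # loss evaluations, regardless of the parameter dimension.
    g = (loss(plus) - loss(minus)) / (2 * c_k)
    theta = [t - a_k * g / d for t, d in zip(theta, delta)]
```

The appeal for online control is that each update needs only two evaluations of the (possibly noisy, simulation-based) loss, however many parameters the controller has.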
Effects of a chirped bias voltage on ion energy distributions in inductively coupled plasma reactors
NASA Astrophysics Data System (ADS)
Lanham, Steven J.; Kushner, Mark J.
2017-08-01
The metrics for controlling reactive fluxes to wafers for microelectronics processing are becoming more stringent as feature sizes continue to shrink. Recent strategies for controlling ion energy distributions to the wafer involve using several different frequencies and/or pulsed powers. Although effective, these strategies are often costly or present challenges in impedance matching. With the advent of matching schemes for wide band amplifiers, other strategies to customize ion energy distributions become available. In this paper, we discuss results from a computational investigation of biasing substrates using chirped frequencies in high density, electronegative inductively coupled plasmas. Depending on the frequency range and chirp duration, the resulting ion energy distributions exhibit components sampled from the entire frequency range. However, the chirping process also produces transient shifts in the self-generated dc bias due to the reapportionment of displacement and conduction with frequency to balance the current in the system. The dynamics of the dc bias can also be leveraged towards customizing ion energy distributions.
NASA Astrophysics Data System (ADS)
Kodama, Yu; Hamagami, Tomoki
A distributed processing system for restoration of an electric power distribution network using a two-layered contract net protocol (CNP) is proposed. The goal of this study is to develop a restoration system suited to future power networks with distributed generators. The state of the art of this study is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid conflicts of tasks, an operating agent controls the privilege for managers to send task announcement messages in the CNP. This technique realizes coordination between agents which work asynchronously in parallel with others. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment Framework). This study conducts simulation experiments of power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
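The announce-bid-award cycle of a contract net, with an operating agent serializing announcement privileges, can be sketched as follows. All class names, the capacity-based bidding rule, and the agent data are invented for illustration; a real system (e.g., on JADE) would exchange these as asynchronous messages.

```python
# Minimal contract-net sketch: an operating agent serializes task
# announcements; field agents bid and the tightest feasible bid wins.

class FieldAgent:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity

    def bid(self, task_load):
        # Bid only if the agent can carry the load; less spare capacity
        # after taking the task is treated as a better (tighter) bid.
        if self.capacity >= task_load:
            return (self.capacity - task_load, self.name)
        return None

class OperatingAgent:
    def __init__(self):
        self.busy = False

    def grant_privilege(self):
        # Only one manager may announce at a time, avoiding task conflicts.
        if self.busy:
            return False
        self.busy = True
        return True

    def release(self):
        self.busy = False

def announce(operating, agents, task_load):
    if not operating.grant_privilege():
        return None                        # privilege denied: another task runs
    bids = [b for b in (a.bid(task_load) for a in agents) if b is not None]
    operating.release()
    return min(bids)[1] if bids else None  # award the tightest bid

agents = [FieldAgent("feeder-A", 40), FieldAgent("feeder-B", 60),
          FieldAgent("feeder-C", 25)]
winner = announce(OperatingAgent(), agents, 30)
```

Serializing announcements through the operating agent is the sketch's analogue of the paper's conflict-avoidance layer above the ordinary CNP bidding.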
NASA Astrophysics Data System (ADS)
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2015-01-01
Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.
On some control problems of dynamic of reactor
NASA Astrophysics Data System (ADS)
Baskakov, A. V.; Volkov, N. P.
2017-12-01
The paper analyzes controllability of transient processes in some problems of nuclear reactor dynamics. The mathematical model of nuclear reactor dynamics is described by a system of integro-differential equations consisting of the non-stationary anisotropic multi-velocity kinetic equation of neutron transport and the balance equation of delayed neutrons. The paper formulates the linear problem of controlling transient processes in nuclear reactors through spatially distributed actions on internal neutron sources, and the nonlinear problems of controlling transient processes through spatially distributed actions on the neutron absorption coefficient and the neutron scattering indicatrix. The required control actions depend on the spatial and velocity coordinates. Theorems on the existence and uniqueness of these control actions are proved. To do this, the control problems mentioned above are reduced to equivalent systems of integral equations; existence and uniqueness of the solution of these systems is proved by the method of successive approximations, which makes it possible to construct an iterative scheme for numerical analysis of transient processes in a given nuclear reactor using the developed mathematical model. Sufficient conditions for controllability of transient processes are also obtained. In conclusion, a connection is made between the control problems and the observation problems which, from the given information, allow us to reconstruct either the function of internal neutron sources, the neutron absorption coefficient, or the neutron scattering indicatrix.
Centralized and distributed control architectures under Foundation Fieldbus network.
Persechini, Maria Auxiliadora Muanis; Jota, Fábio Gonçalves
2013-01-01
This paper aims at discussing possible automation and control system architectures based on fieldbus networks in which the controllers can be implemented either in a centralized or in a distributed form. An experimental setup is used to demonstrate some of the addressed issues. The control and automation architecture is composed of a supervisory system, a programmable logic controller and various other devices connected to a Foundation Fieldbus H1 network. The procedures used in the network configuration, in the process modelling and in the design and implementation of controllers are described. The specificities of each one of the considered logical organizations are also discussed. Finally, experimental results are analysed using an algorithm for the assessment of control loops to compare the performances between the centralized and the distributed implementations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
2008-07-01
generation of process partitioning, a thread pipelining becomes possible. In this paper we briefly summarize the requirements and trends for FADEC-based... FADEC environment, presenting a hypothetical realization of an example application. Finally we discuss the application of Time-Triggered...based control applications of the future. 15. SUBJECT TERMS: Gas turbine, FADEC, Multi-core processing technology, distributed based control
Proceedings of the 3rd Annual SCOLE Workshop
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1987-01-01
Topics addressed include: modeling and controlling the Spacecraft Control Laboratory Experiment (SCOLE) configurations; slewing maneuvers; mathematical models; vibration damping; gravitational effects; structural dynamics; finite element method; distributed parameter system; on-line pulse control; stability augmentation; and stochastic processes.
Design distributed simulation platform for vehicle management system
NASA Astrophysics Data System (ADS)
Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua
2006-11-01
Next generation military aircraft require high performance from the airborne management system. General modules, data integration, high-speed data buses and so on are needed to share and manage information from the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and so on. The unattached or mixed architecture is replaced by an integrated architecture: the whole airborne system is treated as one system to manage, so the physical devices are distributed but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared. On the other hand, this is a foundation for shared power. A distributed vehicle management system is established using a 1553B bus and distributed processors, providing a validation platform for research on airborne system integrated management. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.
1979-12-01
The Marine Corps Tactical Command and Control System (MTACCS) is expected to provide increased decision making speed and power through automated ... processing and display of data which previously was processed manually. The landing Force Organizational Systems Study (LFOSS) has challenged Marines to
NASA Technical Reports Server (NTRS)
Jenkins, George
1986-01-01
Prelaunch, launch, mission, and landing distribution of RF and hardline uplink/downlink information between Space Shuttle Orbiter/cargo elements, tracking antennas, and control centers at JSC, KSC, MSFC, GSFC, ESMC/RCC, and Sunnyvale are presented as functional block diagrams. Typical mismatch problems encountered during spacecraft-to-project control center telemetry transmissions are listed along with new items for future support enhancement.
Control of Groundwater Remediation Process as Distributed Parameter System
NASA Astrophysics Data System (ADS)
Mendel, M.; Kovács, T.; Hulkó, G.
2014-12-01
Pollution of groundwater requires the implementation of appropriate solutions, which may need to be deployed for several years. Local groundwater contamination and its subsequent spread may result in contamination of drinking-water sources or other disasters. This publication aims to design and demonstrate control of pumping wells for a model groundwater remediation task. The task consists of an appropriately discretized soil domain with input parameters, pumping wells, and a control system. The model of the controlled system is built in MODFLOW using the finite-difference method, as a distributed parameter system. The control problem is solved with the DPS Blockset for MATLAB & Simulink.
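The finite-difference formulation that MODFLOW applies can be illustrated with a minimal one-dimensional steady-state sketch (all parameter values below are assumed for illustration only; MODFLOW itself solves the full three-dimensional problem):

```python
import numpy as np

def steady_head_1d(n, dx, T, q_well, well_idx, h_left, h_right):
    """Solve steady 1-D groundwater flow T*h'' + q = 0 by finite differences.

    n        -- number of interior nodes
    dx       -- grid spacing (m)
    T        -- transmissivity (m^2/day)
    q_well   -- pumping rate per unit width (negative = extraction)
    well_idx -- interior node index of the well
    h_left, h_right -- fixed-head boundary conditions (m)
    """
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = -2.0 * T / dx**2
        if i > 0:
            A[i, i - 1] = T / dx**2
        if i < n - 1:
            A[i, i + 1] = T / dx**2
    # fold the fixed-head boundaries into the right-hand side
    b[0] -= T / dx**2 * h_left
    b[-1] -= T / dx**2 * h_right
    # source term at the well node (negative q_well = extraction)
    b[well_idx] -= q_well / dx
    return np.linalg.solve(A, b)

heads = steady_head_1d(n=9, dx=10.0, T=50.0, q_well=-5.0,
                       well_idx=4, h_left=20.0, h_right=20.0)
```

Extraction at the well node produces the expected drawdown, which is the distributed quantity the pumping-well controllers would regulate.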
Pessi, Jenni; Lassila, Ilkka; Meriläinen, Antti; Räikkönen, Heikki; Hæggström, Edward; Yliruusi, Jouko
2016-08-01
We introduce a robust, stable, and reproducible method to produce nanoparticles based on expansion of supercritical solutions using carbon dioxide as a solvent. The method, controlled expansion of supercritical solution (CESS), uses controlled mass transfer, flow, pressure reduction, and particle collection in dry ice. CESS offers control over the crystallization process as the pressure in the system is reduced according to a specific profile. Particle formation takes place before the exit nozzle, and condensation is the main mechanism for postnucleation particle growth. A 2-step gradient pressure reduction is used to prevent Mach disk formation and particle growth by coagulation. Controlled particle growth keeps the production process stable. With CESS, we produced piroxicam nanoparticles, 60 mg/h, featuring narrow size distribution (176 ± 53 nm).
Microelectromechanical Systems
NASA Technical Reports Server (NTRS)
Gabriel, Kaigham J.
1995-01-01
Micro-electromechanical systems (MEMS) is an enabling technology that merges computation and communication with sensing and actuation to change the way people and machines interact with the physical world. MEMS is a manufacturing technology that will impact widespread applications including: miniature inertial measurement units for competent munitions and personal navigation; distributed unattended sensors; mass data storage devices; miniature analytical instruments; embedded pressure sensors; non-invasive biomedical sensors; fiber-optics components and networks; distributed aerodynamic control; and on-demand structural strength. The long term goal of ARPA's MEMS program is to merge information processing with sensing and actuation to realize new systems and strategies for both perceiving and controlling systems, processes, and the environment. The MEMS program has three major thrusts: advanced devices and processes, system design, and infrastructure.
Intercommunications in Real Time, Redundant, Distributed Computer System
NASA Technical Reports Server (NTRS)
Zanger, H.
1980-01-01
An investigation into the applicability of fiber optic communication techniques to real time avionic control systems, in particular the total automatic flight control system used for the VSTOL aircraft is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.
Defining and Enabling Resiliency of Electric Distribution Systems With Multiple Microgrids
Chanda, Sayonsom; Srivastava, Anurag K.
2016-05-02
This paper presents a method for quantifying and enabling the resiliency of a power distribution system (PDS) using an analytical hierarchical process and percolation theory. Using this metric, quantitative analysis can be done to analyze the impact of possible control decisions to pro-actively enable the resilient operation of a distribution system with multiple microgrids and other resources. The developed resiliency metric can also be used in short-term distribution system planning. Being able to quantify resiliency can help distribution system planning engineers and operators justify control actions, compare different reconfiguration algorithms, and develop proactive control actions to avert power system outages due to impending catastrophic weather or other adverse events. Validation of the proposed method is done using modified CERTS microgrids and a modified industrial distribution system. Furthermore, simulation results show topological and composite metrics considering power system characteristics to quantify the resiliency of a distribution system with the proposed methodology, and improvements in resiliency using a two-stage reconfiguration algorithm and multiple microgrids.
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
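The quantization effect noted above can be illustrated with a toy smart-transducer model (a Python sketch with an assumed time constant and bit depth; the actual models live in a Simulink library and follow IEEE 1451):

```python
class SmartSensor:
    """Toy smart-transducer model: first-order lag plus ADC quantization.

    Illustrative sketch only, not the C-MAPSS40k library; the time
    constant, sample period, and bit depth are assumed values.
    """
    def __init__(self, tau, dt, bits, full_scale):
        self.alpha = dt / (tau + dt)          # discrete first-order filter gain
        self.lsb = full_scale / (2 ** bits)   # quantization step of the ADC
        self.state = 0.0

    def sample(self, physical_value):
        # transducer dynamics: first-order lag toward the true value
        self.state += self.alpha * (physical_value - self.state)
        # data processing: quantize to the ADC's least significant bit
        return round(self.state / self.lsb) * self.lsb

sensor = SmartSensor(tau=0.05, dt=0.015, bits=12, full_scale=100.0)
readings = [sensor.sample(42.7) for _ in range(50)]
```

Feedback taken from `readings` rather than the true value carries exactly the kind of quantization error that produced the small tracking differences reported above.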
New Process Controls for the Hera Cryogenic Plant
NASA Astrophysics Data System (ADS)
Böckmann, T.; Clausen, M.; Gerke, Chr.; Prüß, K.; Schoeneburg, B.; Urbschat, P.
2010-04-01
The cryogenic plant built for the HERA accelerator at DESY in Hamburg (Germany) has now been in operation for more than two decades, as has its commercial process control system. Over that time the operator stations, the control network, and the CPU boards in the process controllers have gone through several upgrade stages; only the centralized input/output system was kept unchanged, with many components running beyond their expected lifetime. The control system for one of the three parts of the cryogenic plant has recently been replaced by a distributed I/O system. The I/O nodes are connected to several Profibus-DP field busses. Profibus provides the infrastructure to attach intelligent sensors and actuators directly to the process controllers, which run the open-source process control software EPICS. This paper describes the modification process on all levels, from cabling through I/O configuration and the process control software up to the operator displays.
Space Power Management and Distribution Status and Trends
NASA Technical Reports Server (NTRS)
Reppucci, G. M.; Biess, J. J.; Inouye, L.
1984-01-01
An overview of space power management and distribution (PMAD) is provided which encompasses historical and current technology trends. The PMAD components discussed include power source control, energy storage control, and load power processing electronic equipment. The status of distribution equipment comprised of rotary joints and power switchgear is evaluated based on power level trends in the public, military, and commercial sectors. Component level technology thrusts, as driven by perceived system level trends, are compared to technology status of piece-parts such as power semiconductors, capacitors, and magnetics to determine critical barriers.
Distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A real-time multi-tasking digital control system with rapid recovery capability is disclosed. The control system includes a plurality of computing units comprising a plurality of redundant processing units, with each of the processing units configured to generate one or more redundant control commands. One or more internal monitors are employed for detecting data errors in the control commands. One or more recovery triggers are provided for initiating rapid recovery of a processing unit if data errors are detected. The control system also includes a plurality of actuator control units each in operative communication with the computing units. The actuator control units are configured to initiate a rapid recovery if data errors are detected in one or more of the processing units. A plurality of smart actuators communicates with the actuator control units, and a plurality of redundant sensors communicates with the computing units.
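The monitoring idea described above, redundant command channels compared and a recovery trigger raised for a miscomparing unit, can be sketched as follows (a simplified mid-value-select illustration, not the patented recovery mechanism; the tolerance is an assumed value):

```python
def vote_and_flag(commands, tol=1e-6):
    """Mid-value select across redundant command channels, flagging outliers.

    The voted output is the median of the redundant control commands;
    any channel deviating from it by more than `tol` raises a recovery
    trigger for that processing unit.
    """
    ordered = sorted(commands)
    voted = ordered[len(ordered) // 2]     # mid-value select
    triggers = [i for i, c in enumerate(commands) if abs(c - voted) > tol]
    return voted, triggers

# channel 2 has miscompared and would be sent for rapid recovery
voted, triggers = vote_and_flag([10.01, 10.02, 87.5], tol=0.5)
```

The actuator control units in the patent perform an analogous comparison on the command streams they receive, initiating rapid recovery of the offending processing unit rather than simply discarding its output.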
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
40 CFR 763.167 - Processing prohibitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements § 763.167 Processing prohibitions. (a..., any of the asbestos-containing products listed at § 763.165(a). (b) After August 26, 1996, no person...
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention.
Reprographics Career Ladder AFSC 703X0.
1988-02-01
[Garbled task-inventory excerpt. Recoverable items for the Production Control Personnel Cluster (STG033, N=38): coordinate work requests; notify customer of completed work; verify duplicating requests; maintain logs of jobs processed (47%); distribute completed products (47%); maintain job logs manually (43%); process incoming distribution. Two jobs were identified within this cluster.]
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures for manufacturing systems have been proposed, aiming at more flexible control structures. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
Flight deck benefits of integrated data link communication
NASA Technical Reports Server (NTRS)
Waller, Marvin C.
1992-01-01
A fixed-base, piloted simulation study was conducted to determine the operational benefits that result when air traffic control (ATC) instructions are transmitted to the deck of a transport aircraft over a digital data link. The ATC instructions include altitude, airspeed, heading, radio frequency, and route assignment data. The interface between the flight deck and the data link was integrated with other subsystems of the airplane to facilitate data management. Data from the ATC instructions were distributed to the flight guidance and control system, the navigation system, and an automatically tuned communication radio. The co-pilot initiated the automation-assisted data distribution process. Digital communications and automated data distribution were compared with conventional voice radio communication and manual input of data into other subsystems of the simulated aircraft. Less time was required in the combined communication and data management process when data link ATC communication was integrated with the other subsystems. The test subjects, commercial airline pilots, provided favorable evaluations of both the digital communication and data management processes.
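The automation-assisted distribution step described above can be sketched as a routing table from uplink fields to aircraft subsystems (all field and subsystem names below are hypothetical; the study does not publish a message format):

```python
# Hypothetical mapping of ATC uplink fields to consuming subsystems.
ATC_ROUTING = {
    "altitude_ft":   "flight_guidance",
    "airspeed_kt":   "flight_guidance",
    "heading_deg":   "flight_guidance",
    "frequency_mhz": "comm_radio",
    "route":         "navigation",
}

def distribute(uplink):
    """Fan an ATC uplink message out to aircraft subsystems.

    Returns {subsystem: {field: value}}. In the simulator the co-pilot
    initiated this step, so a real implementation would gate the call on
    an explicit crew action.
    """
    out = {}
    for field, value in uplink.items():
        subsystem = ATC_ROUTING.get(field)
        if subsystem is None:
            continue  # unknown fields stay on the display for manual handling
        out.setdefault(subsystem, {})[field] = value
    return out

routed = distribute({"altitude_ft": 11000, "frequency_mhz": 124.35})
```

A single automated fan-out like this replaces the repeated manual data entry that the voice-radio baseline required, which is where the measured time saving came from.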
Pareja, Lucía; Colazzo, Marcos; Pérez-Parada, Andrés; Besil, Natalia; Heinzen, Horacio; Böcking, Bernardo; Cesio, Verónica; Fernández-Alba, Amadeo R
2012-05-09
The results of an experiment to study the occurrence and distribution of pesticide residues during rice cropping and processing are reported. Four herbicides, nine fungicides, and two insecticides (azoxystrobin, bispyribac-sodium, carbendazim, clomazone, difenoconazole, epoxiconazole, isoprothiolane, kresoxim-methyl, propanil, quinclorac, tebuconazole, thiamethoxam, tricyclazole, trifloxystrobin, λ-cyhalothrin) were applied to an isolated rice-crop plot under controlled conditions, during the 2009-2010 cropping season in Uruguay. Paddy rice was harvested and industrially processed to brown rice, white rice, and rice bran, which were analyzed for pesticide residues using the original QuEChERS methodology and its citrate variation by LC-MS/MS and GC-MS. The distribution of pesticide residues was uneven among the different matrices. Ten different pesticide residues were found in paddy rice, seven in brown rice, and eight in rice bran. The highest concentrations were detected in paddy rice. These results provide information regarding the fate of pesticides in the rice food chain and its safety for consumers.
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues, either at controlled intersections or on highway segments, reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are governed by the traffic signal control settings, and effective traffic light control would both smooth traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. Queue formation can be modeled as a stochastic process in which the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we have proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, making the traffic queueing length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor of concern.
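The queueing recursion underlying such a model can be sketched as follows (an illustrative Bernoulli-arrival simulation showing how the signal setting shapes the queue-length distribution; this is not the paper's B-spline PDF model, and all rates are assumed):

```python
import random

def queue_pmf(green_ratio, steps=5000, arrival_rate=0.6, seed=1):
    """Simulate q[k+1] = max(q[k] + a[k] - d[k], 0) at one signalized
    intersection and return the empirical queue-length distribution.

    a[k] is a Bernoulli(arrival_rate) arrival and d[k] a
    Bernoulli(green_ratio) departure, so the green-time setting shapes
    the queue-length PMF -- the distribution a stochastic distribution
    controller would steer toward a target.
    """
    rng = random.Random(seed)
    q, counts = 0, {}
    for _ in range(steps):
        a = 1 if rng.random() < arrival_rate else 0
        d = 1 if rng.random() < green_ratio else 0
        q = max(q + a - d, 0)
        counts[q] = counts.get(q, 0) + 1
    return {k: v / steps for k, v in sorted(counts.items())}

short_green = queue_pmf(green_ratio=0.5)   # undersized green: queue grows
long_green = queue_pmf(green_ratio=0.9)    # ample green: queue stays short
```

A closed-loop stochastic distribution controller would adjust `green_ratio` online so that the empirical PMF tracks a target PDF, rather than comparing two fixed settings as this sketch does.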
Transitioning from Distributed and Traditional to Distributed and Agile: An Experience Report
NASA Astrophysics Data System (ADS)
Wildt, Daniel; Prikladnicki, Rafael
Global companies that experienced extensive waterfall phased plans are trying to improve their existing processes to expedite team engagement. Agile methodologies have become an acceptable path to follow because they include project management among their practices. Agile practices have been used with the objective of simplifying project control through simple processes, easily updated documentation, and greater team interaction rather than exhaustive documentation, focusing on continuous team improvement and aiming to add value to business processes. The purpose of this chapter is to describe the experience of a global multinational company in transitioning from distributed and traditional to distributed and agile. This company has development centers across North America, South America and Asia. This chapter covers challenges faced by the project teams of two pilot projects, including strengths of using agile practices in a globally distributed environment and practical recommendations for similar endeavors.
Cardea: Dynamic Access Control in Distributed Systems
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2004-01-01
Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an interoperable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are described. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.
7 CFR 160.43 - Licensed inspector to be disinterested.
Code of Federal Regulations, 2010 CFR
2010-01-01
... inspector to be disinterested. No person who determines or controls sales policies or methods of distribution of an eligible processing plant, or the selling prices of the naval stores processed at such plant...
Advanced Manufacturing Systems in Food Processing and Packaging Industry
NASA Astrophysics Data System (ADS)
Shafie Sani, Mohd; Aziz, Faieza Abdul
2013-06-01
In this paper, several advanced manufacturing systems in the food processing and packaging industry are reviewed, including biodegradable smart packaging and nanocomposites, and advanced automation and control systems comprising fieldbus technology, distributed control systems, and food safety inspection features. The main purpose of current technology in the food processing and packaging industry is discussed in light of major concerns about plant process efficiency, productivity, quality, and safety. These applications were chosen because they are robust, flexible, and reconfigurable, preserve the quality of the food, and are efficient.
Technologies for network-centric C4ISR
NASA Astrophysics Data System (ADS)
Dunkelberger, Kirk A.
2003-07-01
Three technologies form the heart of any network-centric command, control, communication, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities, creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management) and other component technologies, forms the interconnect fabric for fault-tolerant interprocessor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning and other applications, provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, effective approach for near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) into significant increases in connection reliability through forward prediction.
Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.
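The auction-algorithm skeleton behind the distributed asynchronous variant mentioned above can be sketched as follows (a synchronous, centralized toy version for a square assignment problem; the distributed and asynchronous bookkeeping that makes it network-centric is omitted):

```python
def auction_assign(values, eps=0.01):
    """Assign agents to tasks with the classic auction-algorithm loop.

    values[i][j] is agent i's value for task j. Each unassigned agent
    bids on its best task at current prices; the bidder takes the task,
    evicting any previous owner, and the price rises by the bid margin
    plus eps. With eps small enough the result is an optimal assignment.
    """
    n = len(values)
    prices = [0.0] * n
    owner = [None] * n        # owner[j] = agent currently holding task j
    assigned = [None] * n     # assigned[i] = task currently held by agent i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop(0)
        # net payoff of each task for agent i at current prices
        net = [values[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=net.__getitem__)
        second = max(net[j] for j in range(n) if j != best) if n > 1 else 0.0
        prices[best] += net[best] - second + eps   # raise price by the margin
        if owner[best] is not None:                # evict the previous owner
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assigned[i] = best
    return assigned

# agent 0 values tasks [10, 5]; agent 1 values tasks [8, 1]
pairs = auction_assign([[10.0, 5.0], [8.0, 1.0]])
```

In the distributed form each agent runs its bidding step locally and prices propagate over the network, which is what makes the method attractive for ad-hoc command and control and weapon/target assignment.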
Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement.
The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
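The two constraint types can be illustrated with a minimal runtime monitor (a Python sketch of the enforcement idea only; the paper compiles the constraints into WS-BPEL rather than checking them in-process, and the task and subject names are hypothetical):

```python
class EntailmentMonitor:
    """Runtime check of mutual-exclusion and binding entailment constraints.

    mutex_pairs: tasks that must NOT be performed by the same subject.
    bound_pairs: tasks that MUST be performed by the same subject.
    """
    def __init__(self, mutex_pairs, bound_pairs):
        self.mutex = set(mutex_pairs) | {(b, a) for a, b in mutex_pairs}
        self.bound = set(bound_pairs) | {(b, a) for a, b in bound_pairs}
        self.log = {}                  # task -> subject who performed it

    def perform(self, subject, task):
        for a, b in self.mutex:
            if a == task and self.log.get(b) == subject:
                raise PermissionError(
                    f"{subject} already performed {b} (mutual exclusion)")
        for a, b in self.bound:
            if a == task and b in self.log and self.log[b] != subject:
                raise PermissionError(
                    f"{task} is bound to {b}, performed by {self.log[b]}")
        self.log[task] = subject

m = EntailmentMonitor(mutex_pairs=[("approve_order", "place_order")],
                      bound_pairs=[("sign_contract", "negotiate_contract")])
m.perform("alice", "place_order")
# m.perform("alice", "approve_order") would raise: mutually exclusive
m.perform("bob", "negotiate_contract")
m.perform("bob", "sign_contract")     # allowed: bound tasks, same subject
```

The model-driven approach in the paper generates equivalent checks from DSL annotations, so the process definition itself stays free of enforcement code.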
On the problem of modeling for parameter identification in distributed structures
NASA Technical Reports Server (NTRS)
Norris, Mark A.; Meirovitch, Leonard
1988-01-01
Structures are often characterized by parameters, such as mass and stiffness, that are spatially distributed. Parameter identification of distributed structures is subject to many of the difficulties involved in the modeling problem, and the choice of the model can greatly affect the results of the parameter identification process. Analogously to control spillover in the control of distributed-parameter systems, identification spillover is shown to exist as well and its effect is to degrade the parameter estimates. Moreover, as in modeling by the Rayleigh-Ritz method, it is shown that, for a Rayleigh-Ritz type identification algorithm, an inclusion principle exists in the identification of distributed-parameter systems as well, so that the identified natural frequencies approach the actual natural frequencies monotonically from above.
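The inclusion principle mentioned above, Ritz estimates approaching the true values monotonically from above as the basis is enriched, can be demonstrated on a small model eigenvalue problem (an illustrative computation, not the paper's identification algorithm; the basis and quadrature are assumed choices):

```python
import numpy as np

def ritz_lowest(n, npts=2001):
    """Rayleigh-Ritz estimate of the lowest eigenvalue of -u'' on (0, pi)
    with u(0) = u(pi) = 0 (exact value 1), using the polynomial basis
    phi_k(x) = x**k * (pi - x), k = 1..n. Enlarging the basis never
    raises the estimate -- the inclusion principle.
    """
    x = np.linspace(0.0, np.pi, npts)
    w = np.full(npts, x[1] - x[0])          # trapezoid quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    phi = np.array([x**k * (np.pi - x) for k in range(1, n + 1)])
    dphi = np.gradient(phi, x, axis=1)
    K = (dphi * w) @ dphi.T                 # stiffness (energy) matrix
    M = (phi * w) @ phi.T                   # mass matrix
    # reduce the generalized problem K v = lambda M v to standard form
    Linv = np.linalg.inv(np.linalg.cholesky(M))
    return float(np.min(np.linalg.eigvalsh(Linv @ K @ Linv.T)))

# each estimate upper-bounds the exact eigenvalue 1; enrichment never hurts
estimates = [ritz_lowest(n) for n in (1, 2, 4)]
```

In the identification setting the same mechanism means identified natural frequencies over-estimate the true ones, converging downward as the identification basis grows.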
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and region-based multi-level encryption. Combining these three parts, the same multi-level-encrypted remote sensing image can be distributed to users of different permission levels through multicast, yet each user recovers only the degree of information allowed by his own decryption keys. The method meets the access control and security needs of high resolution remote sensing image distribution well. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the Internet.
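One common way to realize hierarchical key generation of this kind is hash chaining, sketched below; the `derive_key` function and three-level layout are illustrative assumptions, not the paper's scheme:

```python
# Hash-chained keys: a higher-permission key can reproduce every lower
# level's key for its region, but not the reverse (one-wayness of SHA-256).
import hashlib

def derive_key(master, region, level, max_level=3):
    """Key for (region, level); level == max_level is most privileged."""
    k = hashlib.sha256(master + region.encode()).digest()
    for _ in range(max_level - level):
        k = hashlib.sha256(k).digest()   # one hash step down per level
    return k

master = b"master-secret"
k2 = derive_key(master, "region-A", 2)
k1 = derive_key(master, "region-A", 1)
print(hashlib.sha256(k2).digest() == k1)  # True: level 2 derives level 1
```

Under such a scheme the same multicast ciphertext can serve all users, with each permission level decrypting only the layers its chain can reach.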
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiangqi; Wang, Jiyu; Mulcahy, David
This paper presents a voltage-load sensitivity matrix (VLSM) based voltage control method to deploy demand response resources for controlling voltage in high solar penetration distribution feeders. The IEEE 123-bus system in OpenDSS is used for testing the performance of the preliminary VLSM-based voltage control approach. A load disaggregation process is applied to disaggregate the total load profile at the feeder head to each load node along the feeder so that loads are modeled at the residential house level. Measured solar generation profiles are used in the simulation to model the impact of solar power on distribution feeder voltage profiles. Different case studies involving various PV penetration levels and installation locations have been performed. Simulation results show that the VLSM algorithm meets the voltage control requirements and is an effective voltage control strategy.
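The core VLSM computation can be sketched as a small linear solve; the sensitivity values below are invented for illustration and are not the IEEE 123-bus data:

```python
import numpy as np

# Assumed sensitivities dV/dP in p.u. per kW of added load (negative:
# adding load lowers voltage); not real feeder data.
S = np.array([[-0.004, -0.001],
              [-0.001, -0.005]])
v = np.array([1.06, 1.07])          # measured voltages, p.u. (too high)
dv_needed = 1.0 - v                 # change required to reach 1.0 p.u.
dp = np.linalg.solve(S, dv_needed)  # kW of demand response per node
print(np.round(dp, 1))              # positive values: add load at both nodes
```

With more controllable loads than monitored nodes, the square solve would be replaced by a least-squares or optimization step subject to demand-response limits.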
Resilient distributed control in the presence of misbehaving agents in networked control systems.
Zeng, Wente; Chow, Mo-Yuen
2014-11-01
In this paper, we study the problem of reaching a consensus among all the agents in networked control systems (NCS) in the presence of misbehaving agents. A reputation-based resilient distributed control algorithm is first proposed for the leader-follower consensus network. The proposed algorithm embeds a resilience mechanism that includes four phases (detection, mitigation, identification, and update) into the control process in a distributed manner. At each phase, every agent uses only local and one-hop neighbors' information to identify and isolate the misbehaving agents and compensate for their effect on the system. We then extend the proposed algorithm to the leaderless consensus network by introducing two recovery schemes (rollback and excitation recovery) into the current framework to guarantee the accurate convergence of the well-behaving agents in NCS. The effectiveness of the proposed method is demonstrated through case studies in multirobot formation control and wireless sensor networks.
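A toy stand-in for the detection/mitigation idea (not the paper's reputation algorithm) is to zero the weight of any neighbor whose value deviates far from the local median:

```python
import statistics

def resilient_step(x, neighbors, eps=0.3, thresh=5.0):
    """One consensus step; neighbors far from the local median are
    treated as misbehaving and ignored (weight zero)."""
    new = x[:]
    for i, nbrs in neighbors.items():
        med = statistics.median([x[j] for j in nbrs] + [x[i]])
        trusted = [j for j in nbrs if abs(x[j] - med) <= thresh]
        new[i] = x[i] + eps * sum(x[j] - x[i] for j in trusted)
    return new

# Agent 3 misbehaves: it broadcasts a constant outlier and never updates.
x = [0.0, 1.0, 2.0, 100.0]
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3]}
for _ in range(50):
    x = resilient_step(x, neighbors)
    x[3] = 100.0
print([round(v, 2) for v in x[:3]])  # → [1.0, 1.0, 1.0]
```

The well-behaving agents converge to the average of their own initial values (1.0), with the faulty agent's influence filtered out at every step, which is the qualitative behavior the resilience mechanism is designed to guarantee.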
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1993-01-01
The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems; in particular, we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including organizing effective interaction between the agents located at the nodes of the system, configuring each node of the system to perform a certain task, effectively distributing the system's available information and computational resources, and controlling the multithreading that implements the logic of the research problems being solved. The article describes a method of computing load balancing in distributed automatic systems oriented toward multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices is offered, providing effective dynamic scaling of computing power under peak load. The results of model experiments with the developed load scheduling algorithm are set out. These results show that the algorithm remains effective even as the number of connected nodes and the scale of the distributed computing system's architecture grow significantly.
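One simple balancing policy in this spirit is greedy least-loaded assignment with a heap; the request costs and the `schedule` function below are illustrative assumptions, not the article's algorithm:

```python
import heapq

def schedule(requests, n_nodes):
    """Assign each (request id, cost) to the currently least-loaded node."""
    heap = [(0.0, node) for node in range(n_nodes)]  # (load, node id)
    heapq.heapify(heap)
    assignment = {}
    for req_id, cost in requests:
        load, node = heapq.heappop(heap)   # least-loaded node
        assignment[req_id] = node
        heapq.heappush(heap, (load + cost, node))
    return assignment

reqs = [("r1", 5.0), ("r2", 3.0), ("r3", 4.0), ("r4", 2.0)]
print(schedule(reqs, 2))  # → {'r1': 0, 'r2': 1, 'r3': 1, 'r4': 0}
```

Because the heap operations are O(log n) per request, such a scheduler scales gracefully as nodes are added, which matches the scaling behavior the article reports.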
Preference for Modes of Dispute Resolution as a Function of Process and Decision Control
ERIC Educational Resources Information Center
Houlden, Pauline; And Others
1978-01-01
Research on procedural justice has suggested that the distribution of control among participants can be used to classify dispute-resolution procedures and may be an important determinant of preference for such procedures. This experiment demonstrates that control can be meaningfully divided into two components: control over the presentation of…
Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2003-01-01
The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.
Control structures for high speed processors
NASA Technical Reports Server (NTRS)
Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.
1982-01-01
A special processor was designed to function as a Reed Solomon decoder with throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented by the hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second, achieved through the use of pipelined and distributed processing. This data rate can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed Solomon encoder.
Real time quantitative imaging for semiconductor crystal growth, control and characterization
NASA Technical Reports Server (NTRS)
Wargo, Michael J.
1991-01-01
A quantitative real time image processing system has been developed which can be software-reconfigured for semiconductor processing and characterization tasks. In thermal imager mode, 2D temperature distributions of semiconductor melt surfaces (900-1600 C) can be obtained with temperature and spatial resolutions better than 0.5 C and 0.5 mm, respectively, as demonstrated by analysis of melt surface thermal distributions. Temporal and spatial image processing techniques and multitasking computational capabilities convert such thermal imaging into a multimode sensor for crystal growth control. A second configuration of the image processing engine in conjunction with bright and dark field transmission optics is used to nonintrusively determine the microdistribution of free charge carriers and submicron sized crystalline defects in semiconductors. The IR absorption characteristics of wafers are determined with 10-micron spatial resolution and, after calibration, are converted into charge carrier density.
2017-09-01
Approved for public release; distribution is unlimited. Test and...ambiguities and identify high-value decision points? This thesis explores how formalization of these experience-based decisions as a process model...representing a T&E event may reveal high-value decision nodes where certain decisions carry more weight or potential for impacts to a successful test.
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
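The rollback mechanism at the heart of Time Warp can be caricatured in a few lines; this toy only restores saved state on a straggler message and omits anti-messages and re-execution of rolled-back events:

```python
class TimeWarpProcess:
    """Toy process: executes messages optimistically in arrival order and
    rolls back when a straggler (timestamp in its past) arrives. Real
    Time Warp also re-executes rolled-back events and sends anti-messages;
    both are omitted here."""

    def __init__(self):
        self.lvt = 0                  # local virtual time
        self.state = 0
        self.snapshots = [(0, 0)]     # saved (lvt, state) pairs

    def receive(self, ts, value):
        if ts < self.lvt:             # straggler: roll back past ts
            while self.snapshots and self.snapshots[-1][0] >= ts:
                self.snapshots.pop()
            self.lvt, self.state = self.snapshots[-1]
        self.state += value
        self.lvt = ts
        self.snapshots.append((self.lvt, self.state))

p = TimeWarpProcess()
p.receive(10, 1)
p.receive(20, 2)     # state 3 at virtual time 20
p.receive(15, 5)     # straggler: state restored to time 10, then applied
print(p.lvt, p.state)  # → 15 6
```

The periodic state saving and restore-on-straggler pattern is what lets each node run optimistically ahead of its peers, which is the property that makes virtual time attractive for distributed simulation and concurrency control.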
Intelligent Systems for Power Management and Distribution
NASA Technical Reports Server (NTRS)
Button, Robert M.
2002-01-01
The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.
77 FR 23618 - Authority To Manufacture and Distribute Postage Evidencing Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-20
... revision of the rules concerning inventory controls for Postage Evidencing Systems (PES). These changes are... System inventory control processes. (a) Each authorized provider of Postage Evidencing Systems must... custody and control of Postage Evidencing Systems and must specifically authorize in writing the proposed...
A distributed Petri Net controller for a dual arm testbed
NASA Technical Reports Server (NTRS)
Bjanes, Atle
1991-01-01
This thesis describes the design and functionality of a Distributed Petri Net Controller (DPNC). The controller runs under X Windows to provide a graphical interface. The DPNC allows users to distribute a Petri Net across several host computers linked together via a TCP/IP interface. A sub-net executes on each host, interacting with the other sub-nets by passing a token vector from host to host. One host has a command window which monitors and controls the distributed controller. The input to the DPNC is a net definition file generated by Great SPN. Thus, a net may be designed, analyzed and verified using this package before implementation. The net is distributed to the hosts by tagging transitions that are host-critical with the appropriate host number. The controller will then distribute the remaining places and transitions to the hosts by generating the local nets, the local marking vectors and the global marking vector. Each transition can have one or more preconditions which must be fulfilled before the transition can fire, as well as one or more post-processes to be executed after the transition fires. These implement the actual input/output to the environment (machines, signals, etc.). The DPNC may also be used to simulate a Great SPN net since stochastic and deterministic firing rates are implemented in the controller for timed transitions.
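A minimal Petri net executor conveys the kind of net the DPNC distributes; the places, transitions, and `fire_enabled` helper below are illustrative, and the host tagging and token-vector exchange are omitted:

```python
def fire_enabled(marking, transitions):
    """transitions: list of (inputs, outputs) dicts mapping place -> token
    count. Fire the first enabled transition; return False if none fire."""
    for pre, post in transitions:
        if all(marking.get(p, 0) >= n for p, n in pre.items()):
            for p, n in pre.items():
                marking[p] -= n                       # consume input tokens
            for p, n in post.items():
                marking[p] = marking.get(p, 0) + n    # produce output tokens
            return True
    return False

# A two-transition net: start a job when a machine is idle, then finish it.
marking = {"idle": 1, "job": 1}
transitions = [({"idle": 1, "job": 1}, {"busy": 1}),
               ({"busy": 1}, {"idle": 1, "done": 1})]
while fire_enabled(marking, transitions):
    pass
print(marking)  # → {'idle': 1, 'job': 0, 'busy': 0, 'done': 1}
```

In the DPNC, each transition additionally carries preconditions and post-processes that perform the actual I/O, and the marking vector is exchanged among hosts rather than held in one dictionary.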
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network
NASA Astrophysics Data System (ADS)
Xu, Xiao-Feng
2018-03-01
Wireless communication networks have such features as limited bandwidth, changing channels, and dynamic topology. Ad hoc networks face many difficulties in access control, bandwidth distribution, resource assignment, and congestion control. Therefore, a wireless packets-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. Firstly, the impact of parameters on network performance is analyzed and researched to obtain an effective network performance function. Secondly, the improved PSO evolutionary algorithm is used to solve the optimization problem from local to global in the process of distributing network packets. The simulation results show that the algorithm can ensure fairness and timeliness of network transmission, as well as improve the integrated utilization efficiency of ad hoc network resources.
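A standard inertia-weight PSO (not the paper's improved variant) can be sketched on a toy cost function standing in for the network performance function:

```python
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimize f over R^dim with n particles; standard velocity update."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # each particle's best position
    gbest = min(pbest, key=f)[:]         # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

cost = lambda x: sum(v * v for v in x)   # stand-in for the network cost
best = pso(cost)
print(cost(best) < 1e-2)                 # near the optimum at the origin
```

In the paper's setting, `cost` would be the derived network performance function over packet-distribution parameters, and the "improved" variant would modify this baseline update rule.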
NASA Astrophysics Data System (ADS)
Eleiwi, Fadi; Laleg-Kirati, Taous Meriem
2018-06-01
An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
NASA Astrophysics Data System (ADS)
Essa, Mohammed Sh.; Chiad, Bahaa T.; Shafeeq, Omer Sh.
2017-09-01
Thin films of a copper oxide (CuO) absorption layer have been deposited on glass substrates using a home-made Fully Computerized Spray Pyrolysis Deposition (FCSPD) system, at nozzle-to-substrate distances of 20 and 35 cm and with computerized spray modes (continuous spray and macro-controlled spray). The substrate temperature was kept at 450 °C, with an optional user-entered temperature tolerance of ±5 °C, at a fixed molar concentration of 0.1 M and a 2D platform (deposition) speed of 4 mm/s. A control program of more than 1000 instructions, with a purpose-built graphical user interface (GUI), fully controls the deposition process, monitoring and regulating the deposition temperature in real time every 200 ms. The temperature changes during the deposition processes were recorded, along with all deposition parameters. The films were characterized to evaluate the thermal distribution over the X-Y movable hot plate, the structure, and the optical energy gap. The thermal and temperature distribution exhibited good uniformity over the 20 cm2 hot plate area. X-ray diffraction (XRD) measurements revealed that the films are polycrystalline in nature and can be assigned to the monoclinic CuO structure. The optical band gap varies from 1.5 to 1.66 eV depending on the deposition parameters.
NASA Technical Reports Server (NTRS)
Nagaraja, K. S.; Kraft, R. H.
1999-01-01
The HSCT Flight Controls Group has developed longitudinal control laws, utilizing PTC aeroelastic flexible models to minimize aeroservoelastic interaction effects, for a number of flight conditions. The control law design process resulted in a higher-order controller and utilized a large number of sensors distributed along the body for minimizing the flexibility effects. Processes were developed to implement these higher-order control laws for performing the dynamic gust loads and flutter analyses. The processes and their validation were documented in Reference 2 for a selected flight condition. The analytical results for additional flight conditions are presented in this document for further validation.
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Software for Demonstration of Features of Chain Polymerization Processes
ERIC Educational Resources Information Center
Sosnowski, Stanislaw
2013-01-01
Free software for the demonstration of the features of homo- and copolymerization processes (free radical, controlled radical, and living) is described. The software is based on the Monte Carlo algorithms and offers insight into the kinetics, molecular weight distribution, and microstructure of the macromolecules formed in those processes. It also…
Control of Bethlehem's coke-oven battery A at Sparrow Point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, A.
1984-02-01
A new 6 m 80-oven compound-fired coke battery capable of producing in excess of 850,000 ton/year began production at Sparrow Point, Maryland, in 1982. The electrical, fuel distribution and control systems are described, together with the computer process control and monitoring systems.
Statistical transformation and the interpretation of inpatient glucose control data.
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-03-01
To introduce a statistical method of assessing hospital-based non-intensive care unit (non-ICU) inpatient glucose control. Point-of-care blood glucose (POC-BG) data from hospital non-ICUs were extracted for January 1 through December 31, 2011. Glucose data distribution was examined before and after Box-Cox transformations and compared to normality. Different subsets of data were used to establish upper and lower control limits, and exponentially weighted moving average (EWMA) control charts were constructed from June, July, and October data as examples to determine if out-of-control events were identified differently in nontransformed versus transformed data. A total of 36,381 POC-BG values were analyzed. In all 3 monthly test samples, glucose distributions in nontransformed data were skewed but approached a normal distribution once transformed. Interpretation of out-of-control events from EWMA control chart analyses also revealed differences. In the June test data, an out-of-control process was identified at sample 53 with nontransformed data, whereas the transformed data remained in control for the duration of the observed period. Analysis of July data demonstrated an out-of-control process sooner in the transformed (sample 55) than nontransformed (sample 111) data, whereas for October, transformed data remained in control longer than nontransformed data. Statistical transformations increase the normal behavior of inpatient non-ICU glycemic data sets. The decision to transform glucose data could influence the interpretation and conclusions about the status of inpatient glycemic control. Further study is required to determine whether transformed versus nontransformed data influence clinical decisions or evaluation of interventions.
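The two steps described, a Box-Cox transformation followed by EWMA monitoring, can be sketched in pure Python; the fixed λ = 0 (log transform) and the synthetic lognormal data are assumptions for illustration, whereas the paper estimates λ from the data:

```python
import math
import random

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 is the log transform."""
    return (x ** lam - 1) / lam if lam != 0 else math.log(x)

def ewma_chart(values, lam=0.2, L=3.0):
    """Return one out-of-control flag per sample (EWMA vs. its limits)."""
    mu = sum(values) / len(values)
    sd = (sum((v - mu) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    z, flags = mu, []
    for i, v in enumerate(values, 1):
        z = lam * v + (1 - lam) * z          # EWMA recursion
        width = L * sd * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        flags.append(abs(z - mu) > width)
    return flags

random.seed(1)
glucose = [random.lognormvariate(4.9, 0.25) for _ in range(100)]  # skewed values
transformed = [box_cox(g, 0.0) for g in glucose]
flags = ewma_chart(transformed)
print(len(flags))  # → 100
```

Because the EWMA limits assume approximately normal inputs, running the chart on transformed rather than raw values is exactly where the paper finds the interpretation of out-of-control events can change.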
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
Discrete epidemic models with arbitrary stage distributions and applications to disease control.
Hernandez-Ceron, Nancy; Feng, Zhilan; Castillo-Chavez, Carlos
2013-10-01
W.O. Kermack and A.G. McKendrick introduced in their fundamental paper, A Contribution to the Mathematical Theory of Epidemics, published in 1927, a deterministic model that captured the qualitative dynamic behavior of single infectious disease outbreaks. A Kermack–McKendrick discrete-time general framework, motivated by the emergence of a multitude of models used to forecast the dynamics of epidemics, is introduced in this manuscript. Results that allow us to measure quantitatively the role of classical and general distributions on disease dynamics are presented. The case of the geometric distribution is used to evaluate the impact of waiting-time distributions on epidemiological processes or public health interventions. In short, the geometric distribution is used to set up the baseline or null epidemiological model used to test the relevance of realistic stage-period distribution on the dynamics of single epidemic outbreaks. A final size relationship involving the control reproduction number, a function of transmission parameters and the means of distributions used to model disease or intervention control measures, is computed. Model results and simulations highlight the inconsistencies in forecasting that emerge from the use of specific parametric distributions. Examples, using the geometric, Poisson and binomial distributions, are used to highlight the impact of the choices made in quantifying the risk posed by single outbreaks and the relative importance of various control measures.
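The geometric-distribution baseline can be illustrated with a minimal discrete-time model in which each infective recovers with probability 1 - p per step (all parameter values below are invented for illustration):

```python
import math

def discrete_sir(beta=0.5, p=0.8, n=1000, i0=1, steps=200):
    """Discrete-time SIR; each infective remains infectious with
    probability p per step, so the infectious period is geometric with
    mean 1/(1-p)."""
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        new_inf = s * (1 - math.exp(-beta * i / n))  # new infections
        new_rec = (1 - p) * i                        # geometric recovery
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = discrete_sir()
R0 = 0.5 / (1 - 0.8)  # transmission rate x mean infectious period = 2.5
print(round(R0, 2), round(r))  # most of the population eventually infected
```

Swapping the geometric recovery term for a Poisson or binomial stage distribution with the same mean changes the final size, which is the forecasting inconsistency the paper quantifies.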
Pianigiani, Elisa; Ierardi, Francesca; Fimiani, Michele
2013-12-01
Skin allografts represent an important therapeutic resource in the treatment of severe skin loss. The risk associated with application of processed tissues in humans is very low, however, human material always carries the risk of disease transmission. To minimise the risk of contamination of grafts, processing is carried out in clean rooms where air quality is monitored. Procedures and quality control tests are performed to standardise the production process and to guarantee the final product for human use. Since we only validate and distribute aseptic tissues, we conducted a study to determine what type of quality controls for skin processing are the most suitable for detecting processing errors and intercurrent contamination, and for faithfully mapping the process without unduly increasing production costs. Two different methods for quality control were statistically compared using the Fisher exact test. On the basis of the current study we selected our quality control procedure based on pre- and post-processing tissue controls, operator and environmental controls. Evaluation of the predictability of our control methods showed that tissue control was the most reliable method of revealing microbial contamination of grafts. We obtained 100 % sensitivity by doubling tissue controls, while maintaining high specificity (77 %).
USDA-ARS?s Scientific Manuscript database
Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...
Automated Power-Distribution System
NASA Technical Reports Server (NTRS)
Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.
1992-01-01
Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.
NASA Technical Reports Server (NTRS)
Forgoston, Eric; Tumin, Anatoli; Ashpis, David E.
2005-01-01
An analysis of the optimal control by blowing and suction in order to generate streamwise velocity streaks is presented. The problem is examined using an iterative process that employs the Parabolized Stability Equations for an incompressible fluid along with its adjoint equations. In particular, distributions of blowing and suction are computed for both the normal and tangential velocity perturbations for various choices of parameters.
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Distributed Environment Control Using Wireless Sensor/Actuator Networks for Lighting Applications
Nakamura, Masayuki; Sakurai, Atsushi; Nakamura, Jiro
2009-01-01
We propose a decentralized algorithm to calculate the control signals for lights in wireless sensor/actuator networks. This algorithm uses an appropriate step size in the iterative process used for quickly computing the control signals. We demonstrate the accuracy and efficiency of this approach compared with the penalty method by using Mote-based mesh sensor networks. The estimation error of the new approach is one-eighth as large as that of the penalty method with one-fifth of its computation time. In addition, we describe our sensor/actuator node for distributed lighting control based on the decentralized algorithm and demonstrate its practical efficacy. PMID:22291525
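A toy decentralized iteration in this spirit has each light adjust its own output from its local sensor error only, with a step size small enough for convergence; the gain matrix and illuminance targets are illustrative assumptions, not the authors' algorithm:

```python
def decentralized_lighting(A, target, steps=200, alpha=0.05):
    """A[i][j]: illuminance at sensor i per unit output of light j.
    Each light i updates from its own sensor's error only."""
    n = len(A)
    u = [0.0] * n                      # dimming levels
    for _ in range(steps):
        e = [sum(A[i][j] * u[j] for j in range(n)) - target[i]
             for i in range(n)]
        u = [max(0.0, u[i] - alpha * e[i]) for i in range(n)]  # local step
    return u

A = [[1.0, 0.3], [0.3, 1.0]]           # assumed light-to-sensor gains
target = [2.0, 1.5]                    # desired illuminance at each sensor
u = decentralized_lighting(A, target)
print([round(x, 2) for x in u])        # → [1.7, 0.99]
```

The choice of `alpha` mirrors the paper's point about step size: too large and the iteration oscillates, too small and the control signals converge slowly.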
7 CFR 252.7 - OMB control number.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION NATIONAL COMMODITY PROCESSING PROGRAM § 252.7 OMB... approved by the Office of Management and Budget under control number 0584-0325. ...
Proceedings of the 3rd Annual Conference on Aerospace Computational Control, volume 2
NASA Technical Reports Server (NTRS)
Bernard, Douglas E. (Editor); Man, Guy K. (Editor)
1989-01-01
This volume of the conference proceedings contains papers and discussions in the following topical areas: Parallel processing; Emerging integrated capabilities; Low order controllers; Real time simulation; Multibody component representation; User environment; and Distributed parameter techniques.
Magnetic agglomeration method for size control in the synthesis of magnetic nanoparticles
Huber, Dale L [Albuquerque, NM
2011-07-05
A method for controlling the size of chemically synthesized magnetic nanoparticles that employs magnetic interaction between particles to control particle size and does not rely on conventional kinetic control of the reaction to control particle size. The particles are caused to reversibly agglomerate and precipitate from solution; the size at which this occurs can be well controlled to provide a very narrow particle size distribution. The size of particles is controllable by the size of the surfactant employed in the process; controlling the size of the surfactant allows magnetic control of the agglomeration and precipitation processes. Agglomeration is used to effectively stop particle growth to provide a very narrow range of particle sizes.
NASA Astrophysics Data System (ADS)
Endrawati, Titin; Siregar, M. Tirtana
2018-03-01
PT Mentari Trans Nusantara is a company engaged in the distribution of goods from the manufacturer of a product to the distributor branches of its customers, so product distribution must be controlled directly from the PT Mentari Trans Nusantara center for a faster delivery process. Problems often occur at the expedition company in charge of sending the goods, even though it has quite an extensive network: the company has too little control over its logistics management. Meanwhile, the logistics distribution management control policy affects the company's performance in distributing products to customer distributor branches and managing product inventory in the distribution center. PT Mentari Trans Nusantara is an expedition company engaged in goods delivery, including in Jakarta. Logistics management performance is very important because it is related to the supply of goods from central activities to the branches based on customer demand. Supply chain management performance obviously depends on the locations of both the distribution center and the branches, the smoothness of transportation in the distribution, and the availability of product in the distribution center to meet demand and avoid lost sales. This study concludes that the company could be more efficient and effective in minimizing the risk of losses by improving its logistics management.
Launch Processing System. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.
1976-01-01
This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.
NASA Astrophysics Data System (ADS)
Zaharov, A. A.; Nissenbaum, O. V.; Ponomaryov, K. Y.; Nesgovorov, E. S.
2018-01-01
In this paper we study the application of the Internet of Things (IoT) concept and devices to securing automated process control systems. We review different approaches to IoT architecture and design and propose them for several applications in the security of automated process control systems. We consider attribute-based encryption in the context of access control mechanism implementation and propose a secret key distribution scheme between attribute authorities and end devices.
NASA Astrophysics Data System (ADS)
Jamshidieini, Bahman; Fazaee, Reza
2016-05-01
Distribution network components connect machines and other loads to electrical sources. If the resistance or current of any component exceeds its specified range, its temperature may exceed the operational limit, which can cause major problems. Therefore, these defects should be found and eliminated according to their severity. Although infra-red cameras have been used for inspection of electrical components, maintenance prioritization of distribution cubicles is mostly based on personal perception, and the lack of training data prevents engineers from developing image processing methods. New research on the spatial control chart encouraged us to use statistical approaches instead of pattern recognition for the image processing. In the present study, a new scanning pattern that can tolerate heavy autocorrelation among adjacent pixels within an infra-red image was developed, and for the first time a combination of kernel smoothing, spatial control charts and local robust regression was used for finding defects within heterogeneous infra-red images of old distribution cubicles. This method does not need training data, an advantage that is crucially important when training data are not available.
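The training-free idea described in the abstract can be sketched as a control chart on residuals: estimate a local background by kernel smoothing, then flag pixels whose residuals fall outside robust limits. The box-kernel smoother, window size, and median/MAD control limits below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def detect_hot_spots(image, win=5, k=6.0):
    """Flag pixels whose residual from a local background estimate
    exceeds robust control limits (median +/- k * MAD).
    Sketch only: box-kernel smoothing stands in for kernel smoothing,
    and no training data is required."""
    pad = win // 2
    padded = np.pad(image, pad, mode="edge")
    # Box-kernel smooth as the background estimate.
    background = np.zeros_like(image, dtype=float)
    for dy in range(win):
        for dx in range(win):
            background += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    background /= win * win
    residual = image - background
    med = np.median(residual)
    mad = np.median(np.abs(residual - med)) + 1e-12  # avoid zero limits
    return np.abs(residual - med) > k * mad

# A flat cubicle image with one hot pixel: the defect is flagged.
img = np.full((20, 20), 30.0)
img[10, 10] = 90.0
mask = detect_hot_spots(img)
```

In a real image the MAD would be nonzero, so the limits adapt to the scene's own noise level rather than to training data.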
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with a non-uniform gain. This algorithm can balance the light distribution while keeping high contrast in the image. For graph image processing, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the method applicable to real-time systems.
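As a rough illustration of the text-image branch (background estimation followed by a non-uniform per-pixel gain), here is a minimal line-based sketch; the moving-maximum background estimator and the `target` level are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def balance_lines(image, target=200.0, win=15):
    """Line-based light balancing: estimate a per-line background level
    and apply a non-uniform gain so the background becomes flat.
    Sketch only: background of a text line is approximated by a local
    maximum (paper is brighter than ink)."""
    out = np.empty_like(image, dtype=float)
    pad = win // 2
    for y, line in enumerate(image):
        padded = np.pad(line, pad, mode="edge")
        background = np.array([padded[i:i + win].max() for i in range(line.size)])
        gain = target / np.maximum(background, 1.0)   # non-uniform, per-pixel gain
        out[y] = np.clip(line * gain, 0, 255)
    return out

# A uniformly dark page is lifted to the target background level.
flat = np.full((2, 16), 100.0)
balanced = balance_lines(flat)
```

Processing one line at a time keeps the memory footprint small, in the spirit of the paper's line-based design.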
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
NASA Astrophysics Data System (ADS)
Romanosky, Robert R.
2017-05-01
The National Energy Technology Laboratory (NETL), under the Department of Energy (DOE) Fossil Energy (FE) Program, is leading the effort not only to develop near-zero-emission power generation systems, but to increase the efficiency and availability of current power systems. The overarching goal of the program is to provide clean, affordable power using domestic resources. Highly efficient, low-emission power systems can involve extreme conditions of high temperatures up to 1600 °C, high pressures up to 600 psi, high particulate loadings, and corrosive atmospheres that require monitoring. Sensing in these harsh environments can provide key information that directly impacts process control and system reliability. The lack of suitable measurement technology serves as a driver for innovations in harsh-environment sensor development. Advancements in sensing using optical fibers are key efforts within NETL's sensor development program, as these approaches offer the potential to survive and provide critical information about these processes. An overview of the sensor development supported by NETL will be given, including research in the areas of sensor materials, designs, and measurement types. New approaches to intelligent sensing, sensor placement and process control using networked sensors will be discussed, as will novel approaches to fiber device design concurrent with materials development research in modified and coated silica and sapphire fiber based sensors. The use of these sensors for both single-point and distributed measurements of temperature, pressure, strain, and a select suite of gases will be addressed. Additional areas of research include novel control architectures and communication frameworks, device integration for distributed sensing, and imaging and other novel approaches to monitoring and controlling advanced processes.
The close coupling of the sensor program with process modeling and control will be discussed for the overarching goal of clean power production.
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
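Failure detection of the kind WorkPlace automates is often built on heartbeats: each node's last-heard time is tracked, and nodes silent for longer than a timeout are reported failed so routing can be reconfigured. The sketch below is illustrative only; WorkPlace's actual interface is not described in the abstract.

```python
import time

class Heartbeat:
    """Minimal heartbeat-based failure detector in the spirit of
    WorkPlace's failure detection and reconfiguration (assumed design,
    not the Goddard Code 522 implementation)."""
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, node, now=None):
        """Record that 'node' was heard from."""
        self.last_seen[node] = time.monotonic() if now is None else now

    def failed(self, now=None):
        """Return nodes silent for longer than the timeout."""
        t = time.monotonic() if now is None else now
        return sorted(n for n, seen in self.last_seen.items()
                      if t - seen > self.timeout)

hb = Heartbeat(timeout=2.0)
hb.beat("a", now=0.0)
hb.beat("b", now=1.5)
# At t=2.5, node "a" has been silent 2.5 s > timeout; "b" only 1.0 s.
dead = hb.failed(now=2.5)   # -> ["a"]
```

A routing layer would react to `failed()` by removing the node from its message routes, which is the reconfiguration step the abstract mentions.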
Electric field control in DC cable test termination by nano silicone rubber composite
NASA Astrophysics Data System (ADS)
Song, Shu-Wei; Li, Zhongyuan; Zhao, Hong; Zhang, Peihong; Han, Baozhong; Fu, Mingli; Hou, Shuai
2017-07-01
The electric field distributions in a high-voltage direct-current cable termination are investigated with a silicone rubber nanocomposite as the electric stress control insulator. The nanocomposite is composed of silicone rubber, nanoscale carbon black and graphitic carbon. The experimental results show that the physical parameters of the nanocomposite, such as the thermal activation energy and the nonlinearity-relevant coefficient, can be manipulated by varying the proportion of the nanoscale fillers. The numerical simulation shows that a safe electric field distribution calls for a certain parametric region of the thermal activation energy and nonlinearity-relevant coefficient. Outside the safe parametric region, a local maximum of the electric field strength appears around the stress cone in the termination insulator, promoting breakdown of the cable termination. In the presence of a temperature gradient, the thermal activation energy and the nonlinearity-relevant coefficient work as complementary factors to produce a reasonable electric field distribution. The field maximum in the termination insulator shows complicated variation during transient processes. The stationary field distribution favors an increase of the nonlinearity-relevant coefficient; for the transient field distribution during a negative lightning impulse, however, an optimized value of the nonlinearity-relevant coefficient is necessary to equalize the electric field in the termination.
Distributed cooperating processes in a mobile robot control system
NASA Technical Reports Server (NTRS)
Skillman, Thomas L., Jr.
1988-01-01
A mobile inspection robot has been proposed for the NASA Space Station. It will be a free-flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires the integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real-time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system's operation and structure are discussed.
The Particle Distribution in Liquid Metal with Ceramic Particles Mould Filling Process
NASA Astrophysics Data System (ADS)
Dong, Qi; Xing, Shu-ming
2017-09-01
Adding ceramic particles to the plate hammer is an effective method to increase the wear resistance of the hammer. The liquid-phase method is based on flow-mixed liquid forging for the preparation of ZTA ceramic particle reinforced high-chromium cast iron hammers. For this preparation method, CFD simulation is used to analyse the particle distribution during the flow-mixing and mould-filling process. Taking a high-chromium cast iron hammer with a 30% volume fraction of ZTA ceramic composite as an example, the particle distribution before solidification is controlled and reasonably predicted by changing the speed and viscosity of the liquid metal.
NASA Astrophysics Data System (ADS)
Maaß, Heiko; Cakmak, Hüseyin Kemal; Bach, Felix; Mikut, Ralf; Harrabi, Aymen; Süß, Wolfgang; Jakob, Wilfried; Stucky, Karl-Uwe; Kühnapfel, Uwe G.; Hagenmeyer, Veit
2015-12-01
Power networks will change from a rigid hierarchic architecture to dynamic interconnected smart grids. In traditional power grids, the frequency is the controlled quantity used to maintain the balance between supply and load power, and the high inertia of rotating masses ensures stability. In the future, system stability will have to rely more on real-time measurements and sophisticated control, especially when integrating fluctuating renewable power sources or high-load consumers such as electric vehicles into the low-voltage distribution grid.
Automated Subsystem Control for Life Support System (ASCLSS)
NASA Technical Reports Server (NTRS)
Block, Roger F.
1987-01-01
The Automated Subsystem Control for Life Support Systems (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of space station subsystems. The automation system features a hierarchical and distributed real-time control architecture which places maximum control authority at the lowest, or process control, level, which enhances system autonomy. The ASCLSS demonstration system pioneered many automation and control concepts currently being considered in the space station data management system (DMS). Heavy emphasis is placed on controls hardware and software commonality implemented in accepted standards. The approach successfully demonstrates real-time process control and places accountability with the subsystem or process developer. The ASCLSS system completely automates a space station subsystem (the air revitalization group of the ASCLSS), which moves the crew/operator into a role of supervisory control authority. The ASCLSS program developed over 50 lessons learned which will aid future space station developers in the area of automation and controls.
Development of extended release dosage forms using non-uniform drug distribution techniques.
Huang, Kuo-Kuang; Wang, Da-Peng; Meng, Chung-Ling
2002-05-01
Development of an extended release oral dosage form for nifedipine using the non-uniform drug distribution matrix method was conducted. The process conducted in a fluid bed processing unit was optimized by controlling the concentration gradient of nifedipine in the coating solution and the spray rate applied to the non-pareil beads. The concentration of nifedipine in the coating was controlled by instantaneous dilutions of coating solution with polymer dispersion transported from another reservoir into the coating solution at a controlled rate. The USP dissolution method equipped with paddles at 100 rpm in 0.1 N hydrochloric acid solution maintained at 37 degrees C was used for the evaluation of release rate characteristics. Results indicated that (1) an increase in the ethyl cellulose content in the coated beads decreased the nifedipine release rate, (2) incorporation of water-soluble sucrose into the formulation increased the release rate of nifedipine, and (3) adjustment of the spray coating solution and the transport rate of polymer dispersion could achieve a dosage form with a zero-order release rate. Since zero-order release rate and constant plasma concentration were achieved in this study using the non-uniform drug distribution technique, further studies to determine in vivo/in vitro correlation with various non-uniform drug distribution dosage forms will be conducted.
Computer-Mediated Group Processes in Distributed Command and Control Systems
1988-06-01
Linville, Michael J. Liebhaber, and Richard W. Obermayer, Vreuls Corporation; Jon J. Fallesen, Army Research Institute ...control staffs who will operate in a computer-mediated environment. The Army Research Institute has initiated research to examine selected issues...computer-mediated group processes is needed. Procedure: The identification and selection of key research issues followed a three-step procedure. Previous
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROOT, R.W.
1999-05-18
This guide provides the Tank Waste Remediation System Privatization Infrastructure Program management with processes and requirements to appropriately control information and documents in accordance with the Tank Waste Remediation System Configuration Management Plan (Vann 1998b). This includes documents and information created by the program, as well as non-program generated materials submitted to the project. It provides appropriate approval/control, distribution and filing systems.
NASA Technical Reports Server (NTRS)
1985-01-01
Topics addressed include: assessment models; model predictions of ozone changes; ozone and temperature trends; trace gas effects on climate; kinetics and photochemical data base; spectroscopic data base (infrared to microwave); instrument intercomparisons and assessments; and monthly mean distributions of ozone and temperature.
NASA Technical Reports Server (NTRS)
Kipling, Zak; Stier, Philip; Johnson, Colin E.; Mann, Graham W.; Bellouin, Nicolas; Bauer, Susanne E.; Bergman, Tommi; Chin, Mian; Diehl, Thomas; Ghan, Steven J.;
2016-01-01
The vertical profile of aerosol is important for its radiative effects, but weakly constrained by observations on the global scale, and highly variable among different models. To investigate the controlling factors in one particular model, we examine the effects of individual processes in HadGEM3-UKCA and compare the resulting diversity of aerosol vertical profiles with the inter-model diversity from the AeroCom Phase II control experiment. In this way we show that (in this model at least) the vertical profile is controlled by a relatively small number of processes, although these vary among aerosol components and particle sizes. We also show that sufficiently coarse variations in these processes can produce a similar diversity to that among different models in terms of the global-mean profile and, to a lesser extent, the zonal-mean vertical position. However, there are features of certain models' profiles that cannot be reproduced, suggesting the influence of further structural differences between models. In HadGEM3-UKCA, convective transport is found to be very important in controlling the vertical profile of all aerosol components by mass. In-cloud scavenging is very important for all except mineral dust. Growth by condensation is important for sulfate and carbonaceous aerosol (along with aqueous oxidation for the former and ageing by soluble material for the latter). The vertical extent of biomass-burning emissions into the free troposphere is also important for the profile of carbonaceous aerosol. Boundary-layer mixing plays a dominant role for sea salt and mineral dust, which are emitted only from the surface. Dry deposition and below-cloud scavenging are important for the profile of mineral dust only. In this model, the microphysical processes of nucleation, condensation and coagulation dominate the vertical profile of the smallest particles by number (e.g. total CN > 3 nm), while the profiles of larger particles (e.g. CN > 100 nm) are controlled by the same processes as the component mass profiles, plus the size distribution of primary emissions. We also show that the processes that affect the AOD-normalised radiative forcing in the model are predominantly those that affect the vertical mass distribution, in particular convective transport, in-cloud scavenging, aqueous oxidation, ageing and the vertical extent of biomass-burning emissions.
Off-the-shelf Control of Data Analysis Software
NASA Astrophysics Data System (ADS)
Wampler, S.
The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.
Distribution of iron, copper and manganese in the Arabian Sea
NASA Astrophysics Data System (ADS)
Moffett, James
2014-05-01
The distribution of iron, copper and manganese was studied on a zonal transect of the Arabian Sea during the SW monsoon in 2007. The distributions of metals at the eastern and western ends of the transect are completely different, with concentrations of Fe and Mn higher in the east but copper much higher in the west. Redox cycling in the east and enhanced ventilation in the west contribute to these differences. It seems likely that blooms of Phaeocystis sp. contribute to the pronounced surface depletion and oxicline regeneration we observe, particularly for copper. The results are very different from those of similar surveys in the Peru upwelling, indicating control by very different processes. These results have important implications for carbon and nitrogen cycling, particularly for processes mediated by key Cu and Fe metalloenzymes.
Dynamic measurement of fluorescent proteins spectral distribution on virus infected cells
NASA Astrophysics Data System (ADS)
Lee, Ja-Yun; Wu, Ming-Xiu; Kao, Chia-Yun; Wu, Tzong-Yuan; Hsu, I.-Jen
2006-09-01
We constructed a dynamic spectroscopy system that can simultaneously measure the intensity and spectral distributions of samples with multiple fluorophores in a single scan. The system was used to monitor the fluorescence distribution of cells infected by a recombinant baculovirus, vAcD-Rhir-E, containing the red and green fluorescent protein genes, which simultaneously produces dual fluorescence in recombinant-virus-infected Spodoptera frugiperda 21 (Sf21) cells under the control of a polyhedrin promoter. The system was composed of an excitation light source, a scanning system and a spectrometer. We also developed an algorithm and fitting process to analyze the pattern of the dual fluorescence produced in the recombinant-virus-infected cells. All algorithms and calculations are processed automatically in a visualized scanning program, which can monitor a specific region of the sample by calculating its intensity distribution. The spectral measurement of each pixel was performed in the millisecond range, and the two-dimensional distribution of the full spectrum was recorded within several seconds. With this dynamic spectroscopy system we monitored the process of virus infection of cells; the distributions of the dual fluorescence were simultaneously measured at micrometer resolution.
Trickling Filters. Student Manual. Biological Treatment Process Control.
ERIC Educational Resources Information Center
Richwine, Reynold D.
The textual material for a unit on trickling filters is presented in this student manual. Topic areas discussed include: (1) trickling filter process components (preliminary treatment, media, underdrain system, distribution system, ventilation, and secondary clarifier); (2) operational modes (standard rate filters, high rate filters, roughing…
Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation
NASA Astrophysics Data System (ADS)
Stephenson, Jerry L.; Kapraun, Chris
Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.
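The population-balance core of such a precipitation model can be illustrated by a single explicit upwind step of the size-independent growth term, dn/dt + G·dn/dL = 0; the nucleation, agglomeration and attrition kinetics of the actual Alcoa model are omitted, and the scheme below is a generic textbook discretization, not the ACSL implementation.

```python
import numpy as np

def step_psd(n, G, dL, dt):
    """One explicit upwind step of the growth-only particle population
    balance dn/dt + G * dn/dL = 0 on uniform size classes of width dL.
    Toy sketch: no nucleation inflow, outflow at the largest class."""
    flux = G * n                      # number flux across size-class edges
    out = n.copy()
    out[1:] -= dt / dL * (flux[1:] - flux[:-1])
    out[0] -= dt / dL * flux[0]      # no nucleation source in this sketch
    return out

# All particles start in class 2; half advance one class per half-step.
n = np.zeros(10)
n[2] = 1.0
n1 = step_psd(n, G=1.0, dL=1.0, dt=0.5)
```

Stability requires the CFL condition G·dt/dL ≤ 1; in a full model this step would be combined with source terms for nucleation and agglomeration inside the dynamic simulation loop.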
Using fuzzy rule-based knowledge model for optimum plating conditions search
NASA Astrophysics Data System (ADS)
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at decreasing the unevenness of the coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings is proposed, which uses a rule-based knowledge model and allows one to reduce the unevenness of the product's thickness distribution. Block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
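A fuzzy production rule base of the kind described can be sketched in a few lines: triangular membership functions grade the inputs, each rule fires with a weight, and a weighted average defuzzifies the output. The rules, variables and numbers below are invented for illustration; they are not the authors' knowledge model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend_current(thickness_error):
    """Toy Mamdani-style rule base (assumed rules, not the paper's):
    IF coating is too thin THEN raise current density;
    IF on target THEN hold; IF too thick THEN lower it.
    Defuzzification by weighted average of rule outputs."""
    rules = [
        (tri(thickness_error, -2.0, -1.0, 0.0), +0.5),   # too thin  -> more current
        (tri(thickness_error, -0.5,  0.0, 0.5),  0.0),   # on target -> hold
        (tri(thickness_error,  0.0,  1.0, 2.0), -0.5),   # too thick -> less current
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

adjust = recommend_current(-1.0)   # clearly too thin -> +0.5
hold = recommend_current(0.0)      # on target -> 0.0
```

Between rule peaks the output interpolates smoothly, which is what lets such a controller encode an operator's graded intuition rather than hard thresholds.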
Nadgorny, Milena; Gentekos, Dillon T; Xiao, Zeyun; Singleton, S Parker; Fors, Brett P; Connal, Luke A
2017-10-01
Molecular weight and dispersity (Ð) influence the physical and rheological properties of polymers, which are of significant importance in polymer processing technologies. However, these parameters provide only partial information about the precise composition of polymers, which is reflected by the shape and symmetry of the molecular weight distribution (MWD). In this work, the effect of MWD symmetry on the thermal and rheological properties of polymers with identical molecular weights and Ð is demonstrated. Remarkably, when the MWD is skewed toward higher molecular weight, a higher glass transition temperature (Tg), increased stiffness, increased thermal stability, and higher apparent viscosities are observed. These differences are attributed to the chain length composition of the polymers, easily controlled by the synthetic strategy. This work demonstrates a versatile approach to engineering the properties of polymers by using controlled synthesis to skew the shape of the MWD. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Distributed Scene Analysis For Autonomous Road Vehicle Guidance
NASA Astrophysics Data System (ADS)
Mysliwetz, Birger D.; Dickmanns, E. D.
1987-01-01
An efficient distributed processing scheme has been developed for visual road boundary tracking by 'VaMoRs', a testbed vehicle for autonomous mobility and computer vision. Ongoing work described here is directed at improving the robustness of the road boundary detection process in the presence of shadows, ill-defined edges and other disturbing real-world effects. The system structure and the techniques applied for real-time scene analysis are presented along with experimental results. All subfunctions of road boundary detection for vehicle guidance, such as edge extraction, feature aggregation and camera pointing control, are executed in parallel by an onboard multiprocessor system. On the image processing level, local oriented edge extraction is performed in multiple 'windows', tightly controlled from a hierarchically higher, model-based level. The interpretation process, involving a geometric road model and the observer's position relative to the road boundaries, is capable of coping with ambiguity in measurement data. By using only selected measurements to update the model parameters, even high noise levels can be dealt with and misleading edges rejected.
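The "use only selected measurements" idea is essentially measurement gating: edge positions far from the model's prediction are rejected as misleading, and only accepted ones update the state. The gate width, blending factor and scalar state below are illustrative assumptions, not the VaMoRs implementation.

```python
def gated_update(predicted, measurements, gate=5.0, alpha=0.3):
    """Model-based measurement selection sketch: only edge positions
    within 'gate' pixels of the predicted boundary update the road
    model; outliers (shadow edges etc.) are rejected. If nothing is
    accepted, the tracker coasts on the model prediction."""
    accepted = [m for m in measurements if abs(m - predicted) <= gate]
    if not accepted:
        return predicted, []             # coast on the model
    innovation = sum(accepted) / len(accepted) - predicted
    return predicted + alpha * innovation, accepted

# Predicted boundary at x=100; a shadow edge at 140 is rejected.
state, accepted = gated_update(100.0, [102.0, 140.0])
```

In the full system each measurement window would apply such a gate, so a single strong but misleading edge cannot drag the road model off the true boundary.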
The BaBar Data Reconstruction Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceseracciu, A
2005-04-20
The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.
The BaBar Data Reconstruction Control System
NASA Astrophysics Data System (ADS)
Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.
2005-08-01
The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
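A finite-state-machine framework for describing processing models can be reduced to a transition table mapping (state, event) pairs to next states. The states and events below are an assumed farm-processing model for illustration, not BaBar's actual language or model.

```python
class FSM:
    """Tiny finite-state-machine framework of the kind described:
    a processing model is a table of (state, event) -> next state.
    Unknown events leave the state unchanged."""
    def __init__(self, start, table):
        self.state, self.table = start, table

    def fire(self, event):
        self.state = self.table.get((self.state, event), self.state)
        return self.state

# An assumed farm-processing model (illustrative states/events only):
farm = FSM("idle", {
    ("idle", "start"):    "running",
    ("running", "done"):  "merging",
    ("running", "error"): "failed",
    ("merging", "done"):  "idle",
})
farm.fire("start")   # -> "running"
farm.fire("done")    # -> "merging"
```

Because the model is pure data, many farms can share one FSM engine while each runs its own custom processing model, which is the kind of flexibility the abstract attributes to the framework.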
Zhang, Jianyi; Pei, Chunlei; Schiano, Serena; Heaps, David; Wu, Chuan-Yu
2016-09-01
Roll compaction is a commonly used dry granulation process in the pharmaceutical, fine chemical and agrochemical industries for materials sensitive to heat or moisture. The ribbon density distribution plays an important role in controlling the properties of granules (e.g. granule size distribution, porosity and strength). Accurate characterisation of the ribbon density distribution is critical in process control and quality assurance. The terahertz imaging system has great application potential in achieving this, as terahertz radiation can penetrate most pharmaceutical excipients and the refractive index reflects variations in density and chemical composition. The aim of this study is to explore whether terahertz pulse imaging is a feasible technique for quantifying ribbon density distribution. Ribbons were made of two grades of microcrystalline cellulose (MCC), Avicel PH102 and DG, using a roll compactor at various process conditions, and the ribbon density variation was investigated using terahertz imaging and section methods. The density variations obtained from both methods were compared to explore the reliability and accuracy of the terahertz imaging system. An average refractive index is calculated from the refractive index values in the frequency range between 0.5 and 1.5 THz. It is shown that the refractive index gradually decreases from the middle of the ribbon towards the edges. Variations of the density distribution across the width of the ribbons are also obtained using both the section method and the terahertz imaging system. It is found that the terahertz imaging results are in excellent agreement with those obtained using the section method, demonstrating that terahertz imaging is a feasible and rapid tool to characterise ribbon density distributions. Copyright © 2016 Elsevier B.V. All rights reserved.
Gundersen, H J; Seefeldt, T; Osterby, R
1980-01-01
The width of individual glomerular epithelial foot processes appears very different on electron micrographs. A method for obtaining distributions of the true width of foot processes from that of their apparent width on electron micrographs has been developed, based on geometric probability theory pertaining to a specific geometric model. Analyses of foot process width in humans and rats show a remarkable interindividual invariance, implying rigid control and therefore great biological significance of foot process width or a derivative thereof. The very low interindividual variation of the true width, shown in the present paper, makes it possible to demonstrate slight changes in rather small groups of patients or experimental animals.
ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,
INVENTORY CONTROL, REPLACEMENT THEORY, MATHEMATICAL MODELS, LEAD TIME, MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS
A Framework for WWW Query Processing
NASA Technical Reports Server (NTRS)
Wu, Binghui Helen; Wharton, Stephen (Technical Monitor)
2000-01-01
Query processing is the most common operation in a DBMS. Sophisticated query processing has mainly targeted single-enterprise environments providing centralized control over data and metadata. Queries submitted by anonymous users on the web differ in that load balancing and DBMS access control become the key issues. This paper provides a solution by introducing a framework for WWW query processing. The success of this framework lies in the utilization of query optimization techniques and an ontological approach. This methodology has proved to be cost effective at the NASA Goddard Space Flight Center Distributed Active Archive Center (GDAAC).
A network control concept for the 30/20 GHz communication system baseband processor
NASA Technical Reports Server (NTRS)
Sabourin, D. J.; Hay, R. E.
1982-01-01
The architecture and system design for a satellite-switched TDMA communication system employing on-board processing was developed by Motorola for NASA's Lewis Research Center. The system design is based on distributed processing techniques that provide extreme flexibility in the selection of a network control protocol without impacting the satellite or ground terminal hardware. A network control concept that includes system synchronization and allows burst synchronization to occur within the system operational requirement is described. This concept integrates the tracking and control links with the communication links via the baseband processor, resulting in an autonomous system operational approach.
NASA Astrophysics Data System (ADS)
Peng, Yong; Li, Hongqiang; Shen, Chunlong; Guo, Shun; Zhou, Qi; Wang, Kehong
2017-06-01
The power density distribution of electron beam welding (EBW) is a key indicator of beam quality. A beam quality test system was designed to measure the actual beam power density distribution in high-voltage EBW. After analyzing the characteristics and phase relationship between the deflection control signal and the acquisition signal, a Post-Trigger acquisition mode was adopted, with the control signal and the sampling clock sharing the same external clock source. The power density distribution of the beam cross-section was reconstructed from a one-dimensional signal processed by median filtering, two-stage signal segmentation, and spatial scale calibration. The diameter of the beam cross-section was defined by an amplitude method and by an integral method. The measured diameter under the integral definition is larger than that under the amplitude definition, although for the ideal distribution the former is smaller than the latter. The measured distribution lacks a symmetrical shape and is less concentrated than a Gaussian distribution.
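The abstract does not spell out the exact amplitude and integral diameter definitions, so the sketch below uses two common stand-ins, chosen only to illustrate that different definitions yield different measured diameters on the same profile: an amplitude-style full width at half maximum (FWHM) and an integral-style second-moment width (D4σ), both applied to a synthetic 1-D Gaussian.

```python
import math

def gaussian_profile(w, n=20001, span=5.0):
    """Sampled I(x) = exp(-2 x^2 / w^2) on [-span*w, span*w]."""
    xs = [(-span * w) + 2 * span * w * i / (n - 1) for i in range(n)]
    ys = [math.exp(-2 * x * x / (w * w)) for x in xs]
    return xs, ys

def fwhm(xs, ys):
    # Amplitude-style definition: full width at half the peak amplitude.
    half = max(ys) / 2
    inside = [x for x, y in zip(xs, ys) if y >= half]
    return max(inside) - min(inside)

def d4sigma(xs, ys):
    # Integral-style definition: 4x the intensity-weighted standard deviation.
    total = sum(ys)
    mean = sum(x * y for x, y in zip(xs, ys)) / total
    var = sum((x - mean) ** 2 * y for x, y in zip(xs, ys)) / total
    return 4 * math.sqrt(var)

w = 1.0
xs, ys = gaussian_profile(w)
print(f"FWHM    = {fwhm(xs, ys):.4f}")    # ~1.177 for w = 1
print(f"D4sigma = {d4sigma(xs, ys):.4f}")  # ~2.000 for w = 1
```

For this Gaussian the two definitions differ by almost a factor of two, which is why a diameter quoted without its definition is ambiguous.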
Adopting Industry Standards for Control Systems Within Advanced Life Support
NASA Technical Reports Server (NTRS)
Young, James Scott; Boulanger, Richard
2002-01-01
This paper describes the OPC (Object Linking and Embedding for Process Control) standards for process control and outlines experiences at JSC in using these standards to interface with I/O hardware from three independent vendors. The I/O hardware was integrated with a commercially available SCADA/HMI software package to make up the control and monitoring system for the Environmental Systems Test Stand (ESTS). OPC standards were used for communicating with the I/O hardware, and the software implemented monitoring, PC-based distributed control, and redundant data storage over an Ethernet physical layer using an embedded DIN-rail-mounted PC.
NASA Astrophysics Data System (ADS)
Kondo, Takahiro; Ohta, Masayuki; Ito, Tsuyohito; Okada, Shigefumi
2013-09-01
Effects of a rotating magnetic field (RMF) on the electron energy distribution function (EEDF) and on the electron density are investigated with the aim of controlling the radical composition of inductively coupled plasmas. By adjusting the RMF frequency and generation power, the desired electron density and electron energy shift are obtained. Consequently, the amount and fraction of high-energy electrons, which are mostly responsible for direct dissociation processes of raw molecules, will be controlled externally. This controllability, with no electrode exposed to plasma, will enable us to control radical components and their flux during plasma processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Alfonsi; C. Rabiti; D. Mandelli
The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that dispatches several functionalities: deriving and actuating the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring and control in the phase space; performing both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and facilitating input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.
Distributed control using linear momentum exchange devices
NASA Technical Reports Server (NTRS)
Sharkey, J. P.; Waites, Henry; Doane, G. B., III
1987-01-01
MSFC has successfully employed the use of the Vibrational Control of Space Structures (VCOSS) Linear Momentum Exchange Devices (LMEDs), which was an outgrowth of the Air Force Wright Aeronautical Laboratory (AFWAL) program, in a distributed control experiment. The control experiment was conducted in MSFC's Ground Facility for Large Space Structures Control Verification (GF/LSSCV). The GF/LSSCV's test article was well suited for this experiment in that the LMED could be judiciously placed on the ASTROMAST. The LMED placements were such that vibrational mode information could be extracted from the accelerometers on the LMED. The LMED accelerometer information was processed by the control algorithms so that the LMED masses could be accelerated to produce forces which would dampen the vibrational modes of interest. Experimental results are presented showing the LMED's capabilities.
Kakish, Hanan F; Tashtoush, Bassam; Ibrahim, Hussein G; Najib, Naji M
2002-07-01
In this investigation, modified-release dosage forms of diltiazem HCl (DT) and diclofenac sodium (DS) were prepared. The development work comprised two main parts: (a) loading the drug into ethylene vinyl acetate (EVA) polymer, and (b) generating a non-uniform concentration distribution of the drug within the polymer matrix. A phase separation technique was successfully used to load DT and DS into the polymer at significantly high levels, up to 81 and 76%, respectively. The diameter of the resultant microspheres was between 1.6 and 2.0 mm. Controlled extraction of loaded microspheres and high-vacuum freeze-drying were used to generate the non-uniform concentration distribution and to immobilize the new drug distribution within the matrix. Parameters controlling the different processes were investigated, and optimal processing conditions were used to prepare the dosage forms. Rates of drug release from the two dosage forms in water and in media of different pH were found to be constant for an appreciable length of time (>8 h) followed by a slow decline, a characteristic of a non-Fickian diffusion process. Scanning electron microscopy studies suggested that the resultant release behavior was the outcome of the combined effects of the non-uniform distribution of the drug in the matrix and the apparent changes in the pores and surface characteristics of the microspheres. Comparison of release rate-time plots of dissolution data from marketed products with those of the newly developed dosage forms indicated the ability of the latter to better sustain zero-order release.
Factors controlling the regional distribution of vanadium in ground water
Wright, Michael T.; Belitz, Kenneth
2010-01-01
Although the ingestion of vanadium (V) in drinking water may have possible adverse health effects, there have been relatively few studies of V in groundwater. Given the importance of groundwater as a source of drinking water in many areas of the world, this study examines the potential sources and geochemical processes that control the distribution of V in groundwater on a regional scale. Potential sources of V to groundwater include dissolution of V-rich rocks and waste streams from industrial processes. Geochemical processes such as adsorption/desorption, precipitation/dissolution, and chemical transformations control V concentrations in groundwater. Based on thermodynamic data and laboratory studies, V concentrations are expected to be highest in samples collected from oxic and alkaline groundwater. However, the extent to which thermodynamic data and laboratory results apply to the actual distribution of V in groundwater is not well understood. More than 8400 groundwater samples collected in California were used in this study. Of these samples, high (≥50 μg/L) and moderate (25 to 49 μg/L) V concentrations were most frequently detected in regions where both source rock and favorable geochemical conditions occurred. The distribution of V concentrations in groundwater samples suggests that significant sources of V are mafic and andesitic rock. Anthropogenic activities do not appear to be a significant contributor of V to groundwater in this study. High V concentrations in groundwater samples analyzed in this study were almost always associated with oxic and alkaline groundwater conditions, which is consistent with predictions based on thermodynamic data.
Using OPC technology to support the study of advanced process control.
Mahmoud, Magdi S; Sabih, Muhammad; Elshafei, Moustafa
2015-03-01
OPC, originally the Object Linking and Embedding (OLE) for Process Control, provides broad communication opportunities between different kinds of control systems. This paper investigates the use of OPC technology for the study of distributed control systems (DCS) as a cost-effective and flexible research tool for the development and testing of advanced process control (APC) techniques in university research centers. A co-simulation environment based on MATLAB, LabVIEW, and a TCP/IP network is presented. Several implementation issues and an OPC-based client/server control application are addressed for the TCP/IP network. A nonlinear boiler model is simulated as an OPC server, and an OPC client is used for closed-loop model identification and to design a Model Predictive Controller (MPC). The MPC is able to control NOx emissions in addition to drum water level and steam pressure.
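The closed-loop structure described above can be sketched in miniature. The paper runs a nonlinear boiler as an OPC server regulated by an MPC client; as a stand-in, the sketch below closes a simple PI loop around a hypothetical first-order drum-level model. The plant gain, time constant, and PI tuning are illustrative values only, not anything from the paper.

```python
def simulate(setpoint=1.0, kp=2.0, ki=0.5, dt=0.1, steps=600):
    """PI regulation of a first-order plant: tau * dy/dt = -y + gain * u."""
    level, integral = 0.0, 0.0
    tau, gain = 5.0, 1.0            # illustrative plant parameters
    for _ in range(steps):
        error = setpoint - level
        integral += error * dt
        u = kp * error + ki * integral      # PI control law
        level += dt * (-level + gain * u) / tau  # Euler step of the plant
    return level

print(f"final level: {simulate():.3f}")  # settles at the setpoint 1.0
```

In the paper's setup the controller and plant would live in separate processes exchanging tag values over OPC; here both sides run in one loop purely to show the control structure.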
Control of galactosylated glycoforms distribution in cell culture system.
McCracken, Neil A; Kowle, Ronald; Ouyang, Anli
2014-01-01
Cell culture process conditions including media components and bioreactor operation conditions have a profound impact on recombinant protein quality attributes. Considerable changes in the distribution of galactosylated glycoforms (G0F, G1F, and G2F) were observed across multiple CHO derived recombinant proteins in development at Eli Lilly and Company when switching to a new chemically defined (CD) media platform condition. In the new CD platform, significantly lower G0F percentages and higher G1F and G2F were observed. These changes were of interest as glycosylation heterogeneity can impact the effectiveness of a protein. A systematic investigation was done to understand the root cause of the change and to establish a control strategy for the distribution of galactosylated glycoforms. It was found that changes in asparagine concentration could result in a corresponding change in G0F, G1F, and G2F distribution. A follow-up study examined a wider range of asparagine concentrations and found that G0F, G1F, and G2F percentages could be titrated by adjusting asparagine concentration. The observed changes in heterogeneity from changing asparagine concentration are due to resulting changes in ammonium metabolism. Further study ascertained that different integrated ammonium levels during the cell culture process could control G0F, G1F, and G2F percentage distribution. A mechanism hypothesis is proposed that integrated ammonium level impacts intracellular pH, which further regulates β-1,4-galactosyltransferase activity.
One Way of Testing a Distributed Processor
NASA Technical Reports Server (NTRS)
Edstrom, R.; Kleckner, D.
1982-01-01
Launch processing for Space Shuttle is checked out, controlled, and monitored with new system. Entire system can be exercised by two computer programs--one in master console and other in each of operations consoles. Control program in each operations console detects change in status and begins task initiation. All of front-end processors are exercised from consoles through common data buffer, and all data are logged to processed-data recorder for posttest analysis.
NASA Astrophysics Data System (ADS)
Esquivel-Gómez, Jose de Jesus; Barajas-Ramírez, Juan Gonzalo
2018-01-01
One of the most effective mechanisms to contain the spread of an infectious disease through a population is the implementation of quarantine policies. However, its efficiency is affected by several factors, for example, the structure of the underlying social network, where highly connected individuals are more likely to become infected; the speed of transmission of the disease is therefore directly determined by the degree distribution of the network. Another aspect that influences the effectiveness of quarantine is the self-protection behavior of individuals in the population, that is, their attempts to avoid contact with potentially infected individuals. In this paper, we investigate the efficiency of quarantine and self-protection processes in preventing the spreading of infectious diseases over complex networks with a power-law degree distribution P(k) ~ k^(-ν) for different ν values. We propose two alternative scale-free models that result in power-law degree distributions above and below the exponent ν = 3 associated with the conventional Barabási-Albert model. Our results show that the exponent ν determines the effectiveness of these policies in controlling the spreading process. More precisely, we show that for ν below three, the quarantine mechanism loses effectiveness. However, the efficiency is improved if quarantine is implemented jointly with a self-protection process, driving the number of infected individuals significantly lower.
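The setup can be sketched as an SIR-with-quarantine simulation on a preferential-attachment network (Barabási-Albert style, whose degree exponent is near ν = 3). The network sizes, infection/recovery/quarantine rates, and the quarantine rule below are illustrative assumptions, not the paper's models or parameters.

```python
import random

def preferential_attachment(n, m=2, rng=random):
    """Each new node attaches to up to m targets chosen by degree."""
    adj = {i: set() for i in range(n)}
    targets = list(range(m))
    repeated = []        # each node appears once per edge endpoint
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t); adj[t].add(new)
            repeated += [new, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def simulate(adj, beta=0.2, gamma=0.1, q=0.3, steps=100, rng=random):
    """S/I/R states plus Q: each step an infected node is quarantined
    with probability q, otherwise it may infect neighbours and recover."""
    state = {i: "S" for i in adj}
    state[0] = "I"                       # seed infection
    for _ in range(steps):
        new_state = dict(state)
        for i, s in state.items():
            if s == "I":
                if rng.random() < q:     # detected -> quarantined
                    new_state[i] = "Q"
                    continue
                for j in adj[i]:         # infect susceptible neighbours
                    if state[j] == "S" and rng.random() < beta:
                        new_state[j] = "I"
                if rng.random() < gamma:  # recovery
                    new_state[i] = "R"
        state = new_state
    return state

rng = random.Random(42)
net = preferential_attachment(200, rng=rng)
final = simulate(net, rng=rng)
ever_infected = sum(s != "S" for s in final.values())
print(f"nodes ever infected: {ever_infected} / {len(net)}")
```

Sweeping q (and regenerating networks with different degree exponents) over many runs is the kind of experiment the paper performs to compare quarantine effectiveness above and below ν = 3.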
Multi-kW dc power distribution system study program
NASA Technical Reports Server (NTRS)
Berkery, E. A.; Krausz, A.
1974-01-01
The first phase of the Multi-kW dc Power Distribution Technology Program is reported, involving the test and evaluation of a technology breadboard in a specifically designed test facility, according to design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high-voltage distribution systems were studied in order to build a technology base for the use of high-voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented, including data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer-controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.
A new taxonomy for distributed computer systems based upon operating system structure
NASA Technical Reports Server (NTRS)
Foudriat, E. C.
1985-01-01
Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.
NASA Technical Reports Server (NTRS)
2001-01-01
REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for the entire SBIR program processes at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol-based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.
Saulnier, George E; Castro, Janna C; Cook, Curtiss B
2014-05-01
Glucose control can be problematic in critically ill patients. We evaluated the impact of statistical transformation on interpretation of intensive care unit inpatient glucose control data. Point-of-care blood glucose (POC-BG) data derived from patients in the intensive care unit for 2011 were obtained. Box-Cox transformation of POC-BG measurements was performed, and the distribution of data was determined before and after transformation. Different data subsets were used to establish statistical upper and lower control limits. Exponentially weighted moving average (EWMA) control charts constructed from April, October, and November data determined whether out-of-control events could be identified differently in transformed versus nontransformed data. A total of 8679 POC-BG values were analyzed. POC-BG distributions in nontransformed data were skewed but approached normality after transformation. EWMA control charts revealed differences in projected detection of out-of-control events. In April, an out-of-control process resulting in the lower control limit being exceeded was identified at sample 116 in nontransformed data but not in transformed data. October transformed data detected an out-of-control process exceeding the upper control limit at sample 27 that was not detected in nontransformed data. Nontransformed November results remained in control, but transformation identified an out-of-control event less than 10 samples into the observation period. Using statistical methods to assess population-based glucose control in the intensive care unit could alter conclusions about the effectiveness of care processes for managing hyperglycemia. Further study is required to determine whether transformed versus nontransformed data change clinical decisions about the interpretation of care or intervention results.
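The two-step procedure described above, a Box-Cox transform to pull skewed readings toward normality followed by an EWMA control chart, can be sketched as follows. The glucose values, the Box-Cox λ, the baseline subset, and the chart parameters (λ = 0.2, L = 3) are illustrative choices, not values from the study; the time-varying control limits use the standard exact EWMA variance formula.

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform (log when lambda = 0)."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def ewma_chart(values, mu, sigma, lam=0.2, L=3.0):
    """Return 1-based sample indices where the EWMA exceeds its limits."""
    z, signals = mu, []
    for t, v in enumerate(values, start=1):
        z = lam * v + (1 - lam) * z
        # Exact time-varying control-limit half-width.
        width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu) > width:
            signals.append(t)
    return signals

# Toy POC-BG series: in-control readings, then a sustained upward shift.
glucose = [110, 142, 95, 180, 130, 105, 155, 240, 250, 260, 255, 265]
transformed = [box_cox(g, lam=-0.5) for g in glucose]

baseline = transformed[:6]       # in-control subset used to set the limits
mu = sum(baseline) / len(baseline)
sigma = math.sqrt(sum((v - mu) ** 2 for v in baseline) / (len(baseline) - 1))

signals = ewma_chart(transformed, mu, sigma)
print("out-of-control at samples:", signals)  # [9, 10, 11, 12]
```

Estimating μ and σ from an in-control subset rather than from all data mirrors the study's use of different data subsets to establish control limits; limits estimated from data that include the shift would be inflated and could mask it.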
76 FR 74753 - Authority To Manufacture and Distribute Postage Evidencing Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
... revision of the rules governing the inventory control processes of Postage Evidencing Systems (PES... destruction or disposal of all Postage Evidencing Systems and their components to enable accurate accounting...) Postage Evidencing System repair process--any physical or electronic access to the internal components of...
An Industry Viewpoint on Electron Energy Distribution Function Control
NASA Astrophysics Data System (ADS)
Ventzek, Peter
2011-10-01
It is trite to note that plasmas play a key role in industrial technology. Lighting, laser, film-coating, and now medical technology require plasma science for their sustenance. One field stands out by virtue of its economic girth and impact: semiconductor manufacturing and the process science enabling its decades of innovation owe a significant debt to progress in low temperature plasma science. Today, technology requires atomic-level control from plasmas; mere layers of atoms delineate good and bad device performance. While plasma sources meet nanoscale specifications over dimensions of hundreds of centimeters, achieving atomic-level control from plasmas is hindered by the absence of direct control of species velocity distribution functions. Electron energy distribution function (EEDF) control translates to precise control of species flux and velocities at surfaces adjacent to the plasma, and is a challenge that, if successfully met, will have a huge impact on nanoscale device manufacturing. This lunchtime talk will attempt to provide context for the research advances presented at this Workshop, touching on areas of new opportunity and the risks associated with missing these opportunities.
Industrial application of thermal image processing and thermal control
NASA Astrophysics Data System (ADS)
Kong, Lingxue
2001-09-01
Industrial application of infrared thermography is virtually boundless, as it can be used in any situation where temperature differences exist. This technology has been widely used in the automotive industry in particular, for process evaluation and system design. In this work, a thermal image processing technique will be introduced to quantitatively calculate the heat stored in a warm or hot object and, consequently, a thermal control system will be proposed to accurately and actively manage the thermal distribution within the object in accordance with the heat calculated from the thermal images.
Developing an Integration Infrastructure for Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan
2014-01-01
Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.
Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E
2018-02-01
In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
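The upper-sided CUSUM scheme for detecting an increase in the mean of a count process can be sketched as follows. The zero-inflation and Poisson parameters, the reference value k, and the decision interval h below are ad-hoc illustrative choices (a simple ZIP rather than the paper's two-parameter geometrically inflated extension), not the paper's design values.

```python
import math, random

def zip_sample(p_zero, lam, rng):
    """Draw from a zero-inflated Poisson: extra zeros with prob p_zero."""
    if rng.random() < p_zero:
        return 0
    # Knuth's multiplicative algorithm for Poisson sampling.
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

def cusum_upper(xs, k=1.5, h=5.0):
    """Upper-sided CUSUM: C_t = max(0, C_{t-1} + x_t - k), alarm at C_t >= h."""
    c, alarms = 0.0, []
    for t, x in enumerate(xs):
        c = max(0.0, c + x - k)
        if c >= h:
            alarms.append(t)
            c = 0.0              # restart the statistic after each alarm
    return alarms

rng = random.Random(7)
in_control  = [zip_sample(0.5, 2.0, rng) for _ in range(50)]  # mean ~1.0
out_control = [zip_sample(0.3, 5.0, rng) for _ in range(50)]  # mean ~3.5
alarms = cusum_upper(in_control + out_control)
print("alarms at samples:", alarms)
```

With the shift starting at sample 50, the statistic drifts upward at roughly (3.5 − k) per sample, so alarms cluster shortly after the change point; tuning k and h trades false-alarm rate against detection delay, which is the statistical-design question the paper studies.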
Decentralized Adaptive Control For Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Precise knowledge of dynamics not required. Proposed scheme for control of multijointed robotic manipulator calls for independent control subsystem for each joint, consisting of proportional/integral/derivative feedback controller and position/velocity/acceleration feedforward controller, both with adjustable gains. Independent joint controller compensates for unpredictable effects, gravitation, and dynamic coupling between motions of joints, while forcing joints to track reference trajectories. Scheme amenable to parallel processing in distributed computing system wherein each joint controlled by relatively simple algorithm on dedicated microprocessor.
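The per-joint structure described above, PID feedback plus position/velocity/acceleration feedforward with adjustable gains, can be sketched as follows. The gains and the toy unit-inertia joint dynamics are illustrative assumptions, not values from the scheme itself.

```python
import math

class JointController:
    """Independent controller for one joint: PID feedback + feedforward."""

    def __init__(self, kp, ki, kd, f0, f1, f2, dt):
        self.kp, self.ki, self.kd = kp, ki, kd    # PID feedback gains
        self.f0, self.f1, self.f2 = f0, f1, f2    # pos/vel/acc feedforward gains
        self.dt, self.integral, self.prev_err = dt, 0.0, 0.0

    def torque(self, ref_pos, ref_vel, ref_acc, pos):
        err = ref_pos - pos
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        feedback = self.kp * err + self.ki * self.integral + self.kd * deriv
        feedforward = self.f0 * ref_pos + self.f1 * ref_vel + self.f2 * ref_acc
        return feedback + feedforward

# Single-joint demo: track sin(t) on a unit-inertia joint (pos'' = torque).
dt = 0.001
ctrl = JointController(kp=20.0, ki=0.0, kd=9.0, f0=0.0, f1=0.0, f2=1.0, dt=dt)
pos, vel = 0.0, 0.0
for step in range(5000):
    t = step * dt
    u = ctrl.torque(math.sin(t), math.cos(t), -math.sin(t), pos)
    vel += u * dt
    pos += vel * dt
print(f"tracking error at t=5 s: {pos - math.sin(5.0):+.5f}")
```

Because each joint runs its own small instance of this loop, treating coupling torques from the other joints as disturbances to be rejected by the feedback terms, the scheme parallelizes naturally, one controller per dedicated processor.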
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2013-03-01
We present a statistical model for the distribution of Chinese names. Both family names and given names are studied on the same basis. Naively, one might expect the distribution of family names to be very different from that of given names: one is shaped mostly by genealogy, while the other can be dominated by cultural effects. However, we find that both distributions can be well described by the same model. Various scaling behaviors can be understood as the result of stochastic processes. The exponents of the different power-law distributions are controlled by a single parameter. We also comment on the significance of full-name repetition in the Chinese population.
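A standard single-parameter stochastic process that produces power-law name distributions is Simon's model, sketched below as an illustration of the general mechanism (the paper's specific model may differ): each newborn receives a brand-new name with probability alpha, and otherwise copies the name of a uniformly chosen existing person, so popular names grow preferentially. Simon's classic result gives a tail exponent of 1 + 1/(1 − alpha), i.e. alpha alone controls the exponent.

```python
import random

def simon_names(n, alpha, rng):
    """Simulate Simon's model; return {name_id: number of bearers}."""
    people = [0]                 # name id per person; name 0 seeds the process
    next_name = 1
    for _ in range(n - 1):
        if rng.random() < alpha:
            people.append(next_name)           # invent a brand-new name
            next_name += 1
        else:
            people.append(rng.choice(people))  # copy an existing person's name
    counts = {}
    for name in people:
        counts[name] = counts.get(name, 0) + 1
    return counts

rng = random.Random(3)
counts = simon_names(20000, alpha=0.3, rng=rng)
top = sorted(counts.values(), reverse=True)[:5]
print("distinct names:", len(counts), "| top-5 counts:", top)
```

Lowering alpha makes copying dominate, concentrating the population on a few heavy-tailed family-name-like distributions; raising it produces the much flatter, more diverse distribution typical of given names, which is how one parameter can cover both cases.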
Coordinating complex problem-solving among distributed intelligent agents
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1992-01-01
A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributing computing. Complex sequences of distributed tasks are specified in terms of high level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.
Integrating security in a group oriented distributed system
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth; Gong, LI
1992-01-01
A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.
Software techniques for a distributed real-time processing system. [for spacecraft
NASA Technical Reports Server (NTRS)
Lesh, F.; Lecoq, P.
1976-01-01
The paper describes software techniques developed for the Unified Data System (UDS), a distributed processor network for control and data handling onboard a planetary spacecraft. These techniques include a structured language for specifying the programs contained in each module, and a small executive program in each module which performs scheduling and implements the module task.
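The idea of a small per-module executive that schedules tasks can be illustrated with a cooperative round-robin scheduler built on Python generators. This is a toy sketch of the concept only; the task names and structure are invented, and the actual UDS executive is a far more constrained flight implementation.

```python
from collections import deque

def task(name, steps, log):
    """A cooperative task: does one unit of work, then yields control."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                      # hand control back to the executive

def executive(tasks):
    """Round-robin executive: run each ready task to its next yield."""
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)                # resume the task until its next yield
            ready.append(t)        # still alive: back of the queue
        except StopIteration:
            pass                   # task finished: drop it

log = []
executive([task("telemetry", 2, log), task("thermal", 3, log)])
print(log)  # ['telemetry:0', 'thermal:0', 'telemetry:1', 'thermal:1', 'thermal:2']
```

Because each task yields voluntarily at well-defined points, the executive needs no preemption or locking, which is what keeps such per-module schedulers small.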
Interdependent networks: the fragility of control
Morris, Richard G.; Barthelemy, Marc
2013-01-01
Recent work in the area of interdependent networks has focused on interactions between two systems of the same type. However, an important and ubiquitous class of systems is that involving monitoring and control, an example of interdependence between processes that are very different. In this Article, we introduce a framework for modelling 'distributed supervisory control' in the guise of an electrical network supervised by a distributed system of control devices. The system is characterised by degrees of freedom salient to real-world systems, namely the number of control devices, their inherent reliability, and the topology of the control network. Surprisingly, the behavior of the system depends crucially on the reliability of control devices. When devices are completely reliable, cascade sizes are percolation controlled, the number of devices being the relevant parameter. For unreliable devices, the topology of the control network is important and can dramatically reduce the resilience of the system. PMID:24067404
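The reliability effect described in this abstract can be sketched with a toy simulation; this is not the authors' model, and the graph parameters, supervision rule, and reliability values below are illustrative assumptions. Nodes of a random network each depend on one of a set of control devices; when a device fails, its nodes drop out, and the size of the giant component measures the surviving system:

```python
import numpy as np

def giant_component_fraction(n, edges, alive):
    # Union-find over the subgraph induced by the `alive` nodes.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in edges:
        if alive[u] and alive[v]:
            parent[find(u)] = find(v)
    roots = [find(i) for i in range(n) if alive[i]]
    if not roots:
        return 0.0
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n

rng = np.random.default_rng(1)
n, m = 500, 50                               # nodes and control devices (assumed sizes)
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if rng.random() < 4.0 / n]          # Erdos-Renyi graph, mean degree ~4
device = rng.integers(0, m, size=n)          # which device supervises each node

def run(reliability):
    device_ok = rng.random(m) < reliability  # each device works with prob `reliability`
    alive = device_ok[device]                # a node survives iff its device works
    return giant_component_fraction(n, edges, alive)

frac_reliable, frac_unreliable = run(1.0), run(0.5)
```

With fully reliable devices the giant component spans nearly the whole network; halving device reliability removes roughly half the nodes and disproportionately shrinks the connected remainder, echoing the percolation-controlled behaviour the abstract reports.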
A social-level macro-governance mode for collaborative manufacturing processes
NASA Astrophysics Data System (ADS)
Gao, Ji; Lv, Hexin; Jin, Zhiyong; Xu, Ping
2017-08-01
This paper proposes a social-level macro-governance mode as an alternative to the prevailing centralized control of CoM (Collaborative Manufacturing) processes. The mode rests on three standalone but complementary technologies: social-level CoM process norms, a CoM process supervision system, and rational agents acting as brokers for enterprises. The close coupling of these technologies effectively removes the uncontrollability obstacle faced by cross-management-domain CoM processes. As a result, the mode enables CoM applications to be implemented by uniting the centralized control that each CoM partner exercises over its own activities, and therefore provides a new distributed CoM process control mode that promotes the convenient development and large-scale deployment of SME-oriented CoM applications.
Influence of the baking process for chemically amplified resist on CD performance
NASA Astrophysics Data System (ADS)
Sasaki, Shiho; Ohfuji, Takeshi; Kurihara, Masa-aki; Inomata, Hiroyuki; Jackson, Curt A.; Murata, Yoshio; Totsukawa, Daisuke; Tsugama, Naoko; Kitano, Naoki; Hayashi, Naoya; Hwang, David H.
2002-12-01
CD uniformity and MTT (Mean to Target) control are very important in mask production for the 90nm node and beyond. Although it is well known that baking temperatures influence CD control in the CAR (chemically amplified resist) process for mask patterning, we found that 2 other process factors, which are related to acid diffusion and CA-reaction, greatly affect CD performance. We used a commercially available, negative CAR material and a 50kV exposure tool. We focused on the baking process for both PB (Pre Baking) and PEB (Post Exposure Bake). Film densification strength was evaluated from film thickness loss during PB. Plate temperature distribution was monitored with a thermocouple plate and IR camera. CA-reactions were also monitored with in-situ FTIR during PEB. CD uniformity was used to define the process influence. In conclusion, we found that airflow control and ramping temperature control in the baking process are very important factors to control CD in addition to conventional temperature control. These improvements contributed to a 30% reduction in CD variation.
McDonald, Carrie R.; Thesen, Thomas; Hagler, Donald J.; Carlson, Chad; Devinsky, Orrin; Kuzniecky, Rubin; Barr, William; Gharapetian, Lusineh; Trongnetrpunya, Amy; Dale, Anders M.; Halgren, Eric
2009-01-01
Purpose To examine distributed patterns of language processing in healthy controls and patients with epilepsy using magnetoencephalography (MEG), and to evaluate the concordance between laterality of distributed MEG sources and language laterality as determined by the intracarotid amobarbital procedure (IAP). Methods MEG was performed in ten healthy controls using an anatomically-constrained, noise-normalized distributed source solution (dSPM). Distributed source modeling of language was then applied to eight patients with intractable epilepsy. Average source strengths within temporoparietal and frontal lobe regions of interest (ROIs) were calculated and the laterality of activity within ROIs during discrete time windows was compared to results from the IAP. Results In healthy controls, dSPM revealed activity in visual cortex bilaterally from ~80-120ms in response to novel words and sensory control stimuli (i.e., false fonts). Activity then spread to fusiform cortex ~160-200ms, and was dominated by left hemisphere activity in response to novel words. From ~240-450ms, novel words produced activity that was left-lateralized in frontal and temporal lobe regions, including anterior and inferior temporal, temporal pole, and pars opercularis, as well as bilaterally in posterior superior temporal cortex. Analysis of patient data with dSPM demonstrated that from 350-450ms, laterality of temporoparietal sources agreed with the IAP 75% of the time, whereas laterality of frontal MEG sources agreed with the IAP in all eight patients. Discussion Our results reveal that dSPM can unveil the timing and spatial extent of language processes in patients with epilepsy and may enhance knowledge of language lateralization and localization for use in preoperative planning. PMID:19552656
NASA Astrophysics Data System (ADS)
Gedalin, M.; Liverts, M.; Balikhin, M. A.
2008-05-01
Field-aligned and gyrophase-bunched ion beams are observed in the foreshock of the Earth's bow shock. One of the mechanisms proposed for their production is non-specular reflection at the shock front. We study the distributions formed at the stationary quasi-perpendicular shock front within the same process that is responsible for the generation of reflected ions and transmitted gyrating ions. Test-particle motion analysis in a model shock allows one to identify the parameters which control the efficiency of the process and the features of the escaping ion distribution. These parameters are: the angle between the shock normal and the upstream magnetic field, the ratio of the ion thermal velocity to the upstream flow velocity, and the cross-shock potential. A typical distribution of escaping ions exhibits a bimodal pitch angle distribution (in the plasma rest frame).
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Computer Sciences and Data Systems, volume 2
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
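The speed-gradient ascent on the Rényi entropy described above can be sketched numerically. The following is a minimal illustration, not the authors' derivation: it assumes only the mass-conservation constraint (the probabilities sum to 1), under which the limit distribution for 0 < α < 1 is uniform, and the step size and starting point are arbitrary choices:

```python
import numpy as np

def renyi_entropy(p, alpha):
    # H_alpha(p) = ln(sum_i p_i^alpha) / (1 - alpha), for alpha != 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def speed_gradient_step(p, alpha, gamma=0.02):
    # Speed-gradient ascent on H_alpha, projected onto the plane sum(p) = 1.
    grad = alpha * p ** (alpha - 1.0) / ((1.0 - alpha) * np.sum(p ** alpha))
    grad -= grad.mean()          # projection keeps the total mass constant
    return p + gamma * grad

p = np.array([0.4, 0.3, 0.2, 0.1])
h0 = renyi_entropy(p, 0.5)
for _ in range(5000):
    p = speed_gradient_step(p, 0.5)
# p converges toward the uniform distribution [0.25, 0.25, 0.25, 0.25],
# the maximum-Renyi-entropy limit under mass conservation alone
```

Each component moves monotonically toward 1/n from this interior starting point, so no clipping is needed; with an additional energy-conservation constraint the projection step would change and the limit distribution would no longer be uniform.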
NASA Technical Reports Server (NTRS)
Koeberlein, Ernest, III; Pender, Shaw Exum
1994-01-01
This paper describes the Multimission Telemetry Visualization (MTV) data acquisition/distribution system. MTV was developed by JPL's Multimedia Communications Laboratory (MCL) and designed to process and display digital, real-time science and engineering data from JPL's Mission Control Center. The MTV system can be accessed using UNIX workstations and PCs over common datacom and telecom networks from worldwide locations. It is designed to lower data distribution costs while increasing data analysis functionality by integrating low-cost, off-the-shelf desktop hardware and software. MTV is expected to significantly lower the cost of real-time data display, processing, and distribution, and to allow for greater spacecraft safety and mission data access.
242A Distributed Control System Year 2000 Acceptance Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
TEATS, M.C.
1999-08-31
This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. It documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure will document the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic such that it may be performed on any D/3 (GSE Process Solutions, Inc.) distributed control system that runs with the VMS (Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.
State estimation for distributed systems with sensing delay
NASA Astrophysics Data System (ADS)
Alexander, Harold L.
1991-08-01
Control of complex systems such as remote robotic vehicles requires combining data from many sensors where the data may often be delayed by sensory processing requirements. The number and variety of sensors make it desirable to distribute the computational burden of sensing and estimation among multiple processors. Classic Kalman filters do not lend themselves to distributed implementations or delayed measurement data. The alternative Kalman filter designs presented in this paper are adapted for delays in sensor data generation and for distribution of computation for sensing and estimation over a set of networked processors.
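The rewind-and-replay idea behind filtering with delayed sensor data can be sketched for a scalar random-walk state. This is an illustrative sketch, not the paper's filter designs: the class name and parameters are invented, and it assumes no other measurements were fused between the delayed measurement's time and the present:

```python
class DelayedKalman1D:
    """Scalar Kalman filter (random-walk state) that accepts late measurements
    by rewinding to the measurement's time step and replaying the predictions."""

    def __init__(self, x0, p0, q, r):
        self.q, self.r = q, r          # process and measurement noise variances
        self.hist = [(x0, p0)]         # (estimate, variance) at each time step

    def predict(self):
        x, p = self.hist[-1]
        self.hist.append((x, p + self.q))

    def update(self, z, lag=0):
        # Fuse measurement z taken `lag` steps ago, then re-run the predictions.
        i = len(self.hist) - 1 - lag
        x, p = self.hist[i]
        k = p / (p + self.r)           # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p
        self.hist[i] = (x, p)
        for j in range(i + 1, len(self.hist)):
            p += self.q                # random walk: mean unchanged, variance grows
            self.hist[j] = (x, p)

# A delayed update yields the same posterior as an on-time one:
on_time = DelayedKalman1D(0.0, 1.0, q=0.1, r=0.5)
on_time.predict(); on_time.update(1.0); on_time.predict(); on_time.predict()

late = DelayedKalman1D(0.0, 1.0, q=0.1, r=0.5)
late.predict(); late.predict(); late.predict(); late.update(1.0, lag=2)
# on_time.hist[-1] == late.hist[-1]
```

Storing the (estimate, variance) history is what makes the filter distributable: a slow sensing processor can deliver its measurement with a time stamp, and the estimator folds it in without having blocked on it.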
Mercury's complex exosphere: results from MESSENGER's third flyby.
Vervack, Ronald J; McClintock, William E; Killen, Rosemary M; Sprague, Ann L; Anderson, Brian J; Burger, Matthew H; Bradley, E Todd; Mouawad, Nelly; Solomon, Sean C; Izenberg, Noam R
2010-08-06
During MESSENGER's third flyby of Mercury, the Mercury Atmospheric and Surface Composition Spectrometer detected emission from ionized calcium concentrated 1 to 2 Mercury radii tailward of the planet. This measurement provides evidence for tailward magnetospheric convection of photoions produced inside the magnetosphere. Observations of neutral sodium, calcium, and magnesium above the planet's north and south poles reveal altitude distributions that are distinct for each species. A two-component sodium distribution and markedly different magnesium distributions above the two poles are direct indications that multiple processes control the distribution of even single species in Mercury's exosphere.
Bober, David B.; Kumar, Mukul; Rupert, Timothy J.; ...
2015-12-28
Nanocrystalline materials are defined by their fine grain size, but details of the grain boundary character distribution should also be important. Grain boundary character distributions are reported for ball-milled, sputter-deposited, and electrodeposited Ni and Ni-based alloys, all with average grain sizes of ~20 nm, to study the influence of processing route. The two deposited materials had nearly identical grain boundary character distributions, both marked by a Σ3 length percentage of 23 to 25 pct. In contrast, the ball-milled material had only 3 pct Σ3-type grain boundaries and a large fraction of low-angle boundaries (16 pct), with the remainder being predominantly random high angle (73 pct). Furthermore, these grain boundary character measurements are connected to the physical events that control their respective processing routes. Consequences for material properties are also discussed with a focus on nanocrystalline corrosion. As a whole, the results presented here show that grain boundary character distribution, which has often been overlooked in nanocrystalline metals, can vary significantly and influence material properties in profound ways.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
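The traditional SCS-CN relation that the method builds on, together with one common reading of the runoff-area fraction used to distribute source areas by topographic index, can be sketched as follows. This is a simplified illustration, not the GWLF/CN-VSA code itself; the runoff-area fraction as dQ/dPe and the quantile-based cell selection are assumptions of this sketch:

```python
import numpy as np

def scs_runoff(p, cn):
    # Classic SCS-CN runoff depth (inches): Q = (P - Ia)^2 / (P - Ia + S),
    # with maximum retention S = 1000/CN - 10 and initial abstraction Ia = 0.2*S.
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p <= ia:
        return 0.0
    return (p - ia) ** 2 / (p - ia + s)

def saturated_fraction(p, cn):
    # Fraction of the watershed generating saturation-excess runoff, read here
    # as dQ/dPe = 1 - S^2 / (Pe + S)^2 with effective precipitation Pe = P - Ia.
    s = 1000.0 / cn - 10.0
    pe = max(p - 0.2 * s, 0.0)
    return 1.0 - s ** 2 / (pe + s) ** 2

def source_area_cells(topo_index, p, cn):
    # Flag the wettest cells (highest topographic index) up to the saturated fraction.
    af = saturated_fraction(p, cn)
    threshold = np.quantile(topo_index, 1.0 - af)
    return topo_index >= threshold

q = scs_runoff(3.0, 75)   # about 0.96 inches of runoff for P = 3 in, CN = 75
```

Distributing the saturated fraction over the highest-wetness cells is what ties the lumped CN equation to mappable runoff source areas for planning.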
21 CFR 172.177 - Sodium nitrite used in processing smoked chub.
Code of Federal Regulations, 2011 CFR
2011-04-01
... be heated by a controlled heat process which provides a monitoring system positioned in as many... subsequent storage and distribution. All shipping containers, retail packages, and shipping records shall...) The label and labeling of the additive container shall bear, in addition to the other information...
21 CFR 172.177 - Sodium nitrite used in processing smoked chub.
Code of Federal Regulations, 2012 CFR
2012-04-01
... be heated by a controlled heat process which provides a monitoring system positioned in as many... subsequent storage and distribution. All shipping containers, retail packages, and shipping records shall...) The label and labeling of the additive container shall bear, in addition to the other information...
21 CFR 172.177 - Sodium nitrite used in processing smoked chub.
Code of Federal Regulations, 2013 CFR
2013-04-01
... be heated by a controlled heat process which provides a monitoring system positioned in as many... subsequent storage and distribution. All shipping containers, retail packages, and shipping records shall...) The label and labeling of the additive container shall bear, in addition to the other information...
Development of a process control computer device for the adaptation of flexible wind tunnel walls
NASA Technical Reports Server (NTRS)
Barg, J.
1982-01-01
In wind tunnel tests, the problems arise of determining the wall pressure distribution, calculating the wall contour, and controlling adjustment of the walls. This report shows how these problems have been solved for the high speed wind tunnel of the Technical University of Berlin.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Indiva: a middleware for managing distributed media environment
NASA Astrophysics Data System (ADS)
Ooi, Wei-Tsang; Pletcher, Peter; Rowe, Lawrence A.
2003-12-01
This paper presents a unified set of abstractions and operations for hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.
Correlation signatures of wet soils and snows. [algorithm development and computer programming
NASA Technical Reports Server (NTRS)
Phillips, M. R.
1972-01-01
Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description is also provided of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control.
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Distributed computing testbed for a remote experimental environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butner, D.N.; Casper, T.A.; Howard, B.C.
1995-09-18
Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.
Public authority control strategy for opinion evolution in social networks
NASA Astrophysics Data System (ADS)
Chen, Xi; Xiong, Xi; Zhang, Minghong; Li, Wei
2016-08-01
This paper addresses the need to deal with and control public opinion and rumors. Existing strategies to control public opinion include degree, random, and adaptive bridge control strategies. In this paper, we use the HK model to present a public opinion control strategy based on public authority (PA). This means utilizing the influence of expert or high authority individuals whose opinions we control to obtain the optimum effect in the shortest time possible and thus reach a consensus of public opinion. Public authority (PA) is only influenced by individuals' attributes (age, economic status, and education level) and not their degree distribution; hence, in this paper, we assume that PA complies with two types of public authority distribution (normal and power-law). According to the proposed control strategy, our experiment is based on random, degree, and public authority control strategies in three different social networks (small-world, scale-free, and random) and we compare and analyze the strategies in terms of convergence time (T), final number of controlled agents (C), and comprehensive efficiency (E). We find that different network topologies and the distribution of the PA in the network can influence the final controlling effect. While the effect of PA strategy differs in different network topology structures, all structures achieve comprehensive efficiency with any kind of public authority distribution in any network. Our findings are consistent with several current sociological phenomena and show that in the process of public opinion/rumor control, considerable attention should be paid to high authority individuals.
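The bounded-confidence HK (Hegselmann-Krause) update underlying the strategy can be sketched as below. This is an illustrative sketch rather than the paper's simulation: the parameter values are arbitrary, and controlled high-authority agents are modelled simply as holding a fixed target opinion, without the authority-weighted averaging the paper describes:

```python
import numpy as np

def hk_step(x, eps, controlled, target):
    # Synchronous Hegselmann-Krause update: each agent moves to the mean opinion
    # of all agents within its confidence bound eps; controlled agents hold `target`.
    new = np.empty_like(x)
    for i in range(len(x)):
        neighbours = np.abs(x - x[i]) <= eps
        new[i] = x[neighbours].mean()
    new[controlled] = target
    return new

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)   # initial opinions
controlled = np.arange(5)            # five agents whose opinion we set
x[controlled] = 0.5                  # consensus target pushed by controlled agents
spread0 = x.std()
for _ in range(100):
    x = hk_step(x, eps=0.25, controlled=controlled, target=0.5)
# the opinion spread shrinks as agents contract toward clusters around the target
```

Convergence time and the final number of agents pulled to the target are exactly the quantities (T and C) the paper compares across control strategies and network topologies.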
Process control and dosimetry in a multipurpose irradiation facility
NASA Astrophysics Data System (ADS)
Cabalfin, E. G.; Lanuza, L. G.; Solomon, H. M.
1999-08-01
Availability of the multipurpose irradiation facility at the Philippine Nuclear Research Institute has encouraged several local industries to use gamma radiation for sterilization or decontamination of various products. Prior to routine processing, dose distribution studies are undertaken for each product and product geometry. During routine irradiation, dosimeters are placed at the minimum and maximum dose positions of a process load.
2015-03-24
Distribution is unlimited. Interactions Between Structure and Processing that Control Moisture Uptake in High-Performance Polycyanurates (presentation; Edwards AFB, CA, and California State University, Long Beach, CA 90840). Outline: basic studies of moisture uptake in cyanate ester networks; background and motivation; state-of-the-art theories of moisture uptake in thermosetting networks; new tools and new discoveries; unresolved issues and ways to address them.
Investigation of multidimensional control systems in the state space and wavelet medium
NASA Astrophysics Data System (ADS)
Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.
2018-05-01
The notions of "one-dimensional-point" and "multidimensional-point" automatic control systems are introduced. To demonstrate the joint use of state-space and wavelet-transform approaches, a method for optimal control in a state-space medium represented by time-frequency representations (maps) is considered. The computer-aided control system is formed on the basis of the similarity transformation method, which makes it possible to dispense with reduced state-variable observers. One-dimensional material-flow signals formed by primary transducers are converted by wavelet transformations into multidimensional concentrated-at-a-point variables in the form of time-frequency distributions of Cohen's class. An algorithm for synthesizing a stationary controller for feeding processes is given. The conclusion is that forming an optimal control law with time-frequency distributions available improves the quality of transient processes in the feeding subsystems and the mixing unit. The efficiency of the method is illustrated by an example of the on-line registration of material flows in the multi-feeding unit.
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
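The 1D "stratigraphic filter" experiment described above can be sketched as follows. This is an illustrative sketch, not the authors' model: the heavy-tailed symmetric increments (Student-t), drift, and sample size are assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Surface elevation: symmetric heavy-tailed fluctuations plus slow net aggradation.
increments = 0.1 * rng.standard_t(df=2, size=20000) + 0.02
elevation = np.cumsum(increments)

# Stratigraphic filter: an elevation horizon is preserved only if the surface
# never later erodes below it, i.e. the backward running minimum of the series.
preserved = np.minimum.accumulate(elevation[::-1])[::-1]

# Bed thicknesses: jumps between successive distinct preserved horizons.
levels = np.unique(preserved)
thickness = np.diff(levels)

cv = thickness.std() / thickness.mean()   # ~1 for an exponential distribution
```

Even though the surface increments are heavy-tailed, the preserved thicknesses are clipped by later erosion, which is the mechanism by which the heavy tails of the surface process fail to survive into the stratigraphic record; a coefficient of variation near 1 would be consistent with the exponential bed-thickness form reported above.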
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Second, the real-time performance of the distributed algorithms was tested and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Third, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments.
Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved capable of dynamic team formation and extremely robust against both hardware and software failure; due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
Hsu, Ya-Chu; Hung, Yu-Chen; Wang, Chiu-Yen
2017-09-15
High-uniformity Au-catalyzed indium selenide (In2Se3) nanowires are grown with a rapid thermal annealing (RTA) treatment via the vapor-liquid-solid (VLS) mechanism. The diameters of the Au-catalyzed In2Se3 nanowires can be controlled through the thickness of the Au film, and the uniformity of the nanowires is improved by a fast pre-annealing rate of 100 °C/s. Compared with the slower heating rate of 0.1 °C/s, the average diameters and distributions (standard deviation, SD) of In2Se3 nanowires with and without the RTA process are 97.14 ± 22.95 nm (23.63%) and 119.06 ± 48.75 nm (40.95%), respectively. In situ annealing TEM is used to study the effect of heating rate on the formation of Au nanoparticles from the as-deposited Au film. The results demonstrate that the average diameters and distributions of Au nanoparticles with and without the RTA process are 19.84 ± 5.96 nm (30.00%) and 22.06 ± 9.00 nm (40.80%), respectively. This proves that the diameter, size distribution, and uniformity of the Au-catalyzed In2Se3 nanowires are improved by the RTA pre-treatment. This systematic study could help to control the size distribution of other nanomaterials through tuning of the annealing rate, precursor temperature, and growth substrate. Graphical Abstract: The rapid thermal annealing (RTA) process is shown to make the size distribution of Au nanoparticles uniform, so that it can be used to grow high-uniformity Au-catalyzed In2Se3 nanowires via the vapor-liquid-solid (VLS) mechanism. Under the general growth condition, by comparison, the heating rate is slow (0.1 °C/s) and the growth temperature relatively high (> 650 °C). An RTA pre-treated growth substrate forms smaller and more uniform Au nanoparticles that react with the In2Se3 vapor to produce high-uniformity In2Se3 nanowires. In situ annealing TEM is used to examine the effect of heating rate on Au nanoparticle formation from the as-deposited Au film. The byproduct of self-catalyzed In2Se3 nanoplates can be suppressed by lowering the precursor and growth temperatures.
Serial Interface through Stream Protocol on EPICS Platform for Distributed Control and Monitoring
NASA Astrophysics Data System (ADS)
Das Gupta, Arnab; Srivastava, Amit K.; Sunil, S.; Khan, Ziauddin
2017-04-01
Remote operation of equipment and devices is implemented in distributed systems in order to control and properly monitor process values. For such remote operations, the Experimental Physics and Industrial Control System (EPICS) is used as an important software tool for control and monitoring of a wide range of scientific parameters. A hardware interface was developed for use with the EPICS software so that different equipment such as data converters, power supplies, and pump controllers can be remotely operated through a stream protocol. EPICS base was set up on both Windows and Linux operating systems for control and monitoring, while the EPICS modules asyn and StreamDevice were used to interface the equipment over standard RS-232/RS-485 protocols. StreamDevice communicates with the serial line through an interface to asyn drivers. Graphical user interfaces and alarm handling were implemented with the Motif Editor and Display Manager (MEDM) and the Alarm Handler (ALH) channel-access utility tools. This paper describes the developed application, which was tested with different equipment and devices serially interfaced to PCs on a distributed network.
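The kind of ASCII query/reply framing that StreamDevice manages over RS-232 can be illustrated with a toy device model. The command strings (`MEAS:VOLT?`, `VOLT`) and the commented protocol entry are hypothetical examples, not from the paper or any specific instrument.

```python
class SerialDevice:
    """Toy stand-in for an RS-232 instrument speaking a line-oriented ASCII
    protocol, the kind one would describe in a StreamDevice protocol file.
    Command names here are invented for illustration."""

    def __init__(self):
        self.setpoint = 0.0

    def transact(self, frame: bytes) -> bytes:
        # StreamDevice-style framing: commands and replies end in CR LF.
        cmd = frame.rstrip(b"\r\n").decode()
        if cmd == "MEAS:VOLT?":
            return b"%.3f\r\n" % self.setpoint
        if cmd.startswith("VOLT "):
            self.setpoint = float(cmd.split()[1])
            return b"OK\r\n"
        return b"ERR\r\n"

dev = SerialDevice()
reply = dev.transact(b"VOLT 3.3\r\n")        # write the setpoint
readback = dev.transact(b"MEAS:VOLT?\r\n")   # query it back

# A corresponding StreamDevice protocol entry might look like (sketch only):
#   Terminator = CR LF;
#   get_volt { out "MEAS:VOLT?"; in "%f"; }
```

In the real system, StreamDevice performs this transaction through an asyn port driver, and the parsed value lands in an EPICS record.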
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The previously available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
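The controller/worker split can be sketched with the classic four-bucket phase-shifting formula, a standard interferometry result; the actual 4Sight algorithm is proprietary and not reproduced here. A thread pool stands in for the networked cluster, and the synthetic frames are invented for the example.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def four_step_phase(frames):
    """Recover per-pixel phase from four pi/2-shifted intensity frames using
    the textbook four-bucket formula: phi = atan2(I4 - I2, I1 - I3)."""
    i1, i2, i3, i4 = frames
    return [math.atan2(d - b, a - c) for a, b, c, d in zip(i1, i2, i3, i4)]

def make_frames(phases, amp=1.0, bias=2.0):
    # Synthetic interferograms: I_k = bias + amp*cos(phi + k*pi/2), k = 0..3.
    return [[bias + amp * math.cos(p + k * math.pi / 2) for p in phases]
            for k in range(4)]

# Eight 4-frame "measurements" of a tiny 3-pixel scene.
measurements = [make_frames([0.5, -1.2, 2.0]) for _ in range(8)]

# The controller farms each measurement out to a worker, then collates results;
# a real deployment would use separate machines rather than threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    phase_maps = list(pool.map(four_step_phase, measurements))
```

Decoupling capture (fast) from analysis (slow, parallelizable) is exactly the design choice the abstract describes.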
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used.
The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft, such as missiles.
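The three math models named above can be written down directly; the sketch below is a minimal illustration of those standard formulas, with example numbers that are not from the article.

```python
import math

def series_reliability(component_rs):
    """Reliability of a series system: the product of component reliabilities."""
    r = 1.0
    for c in component_rs:
        r *= c
    return r

def binomial_ge(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the 'greater-than-or-equal-to' case."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def poisson_le(lam, k):
    """P(X <= k) for X ~ Poisson(lam): the 'less-than-or-equal-to' case."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

# Zero-fail case: P(no failures in n trials) = (1 - p)^n ~ exp(-n*p),
# which is the constant-failure-rate (exponential) approximation in the text.
n, p = 100, 1e-3
zero_fail = (1 - p) ** n
exponential_approx = math.exp(-n * p)
```

The closeness of `zero_fail` and `exponential_approx` for small p shows why either model can be used, as the abstract states.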
NASA Astrophysics Data System (ADS)
Chi, Cheng; Liu, Chi-Chun; Meli, Luciana; Guo, Jing; Parnell, Doni; Mignot, Yann; Schmidt, Kristin; Sanchez, Martha; Farrell, Richard; Singh, Lovejeet; Furukawa, Tsuyoshi; Lai, Kafai; Xu, Yongan; Sanders, Daniel; Hetzer, David; Metz, Andrew; Burns, Sean; Felix, Nelson; Arnold, John; Corliss, Daniel
2017-03-01
In this study, the integrity and the benefits of the DSA shrink process were verified through a via-chain test structure, which was fabricated by either a DSA or a baseline litho/etch process for via-layer formation while the metal-layer processes remained the same. The nearest distance between vias in this test structure is below 60 nm; therefore, the following process components were included: 1) a lamella-forming BCP for forming self-aligned vias (SAV), 2) an EUV-printed guiding pattern, and 3) a PS-philic sidewall. The local CDU (LCDU) of the minor axis was improved by 30% after the DSA shrink process. We compared two DSA via shrink processes and a DSA_Control process, in which guiding patterns (GP) were directly transferred to the bottom OPL without DSA shrink. The DSA_Control resulted in apparently larger CD and thus showed much higher open current and shorted the dense via chains. The non-optimized DSA shrink process showed a much broader current distribution than the improved DSA shrink process, which we attribute to distortion and dislocation of the vias and ineffective SAV. Furthermore, a preliminary defectivity study of our latest DSA process showed that the primary defect mode is likely to be etch-related. The challenges, the strategies applied to improve local CD uniformity and electrical current distribution, and potential adjustments are also discussed.
Laser cutting: industrial relevance, process optimization, and laser safety
NASA Astrophysics Data System (ADS)
Haferkamp, Heinz; Goede, Martin; von Busse, Alexander; Thuerk, Oliver
1998-09-01
Compared to other technologically relevant laser machining processes, laser cutting is up to now the most frequently used application. With respect to the large number of possible fields of application and the variety of different materials that can be machined, this technology has reached a stable position within the world market of material processing. The machining quality attainable with laser beam cutting is influenced by various laser and process parameters. Process-integrated quality techniques have to be applied to ensure high-quality products and cost-effective use of the laser manufacturing plant. Therefore, rugged and versatile online process monitoring techniques at an affordable price are desirable. Methods for the characterization of single plant components (e.g. laser source and optical path) have to be replaced by an omnivalent control system capable of process data acquisition and analysis as well as automatic adaptation of machining and laser parameters to changes in process and ambient conditions. At the Laser Zentrum Hannover e.V., locally highly resolved thermographic measurements of the temperature distribution within the processing zone are performed using cost-effective measuring devices. Characteristic values for cutting quality and plunge control, as well as for the optimization of the surface roughness at the cutting edges, can be deduced from the spatial distribution of the temperature field and the measured temperature gradients. The main parameters influencing the temperature characteristic within the cutting zone are the laser beam intensity and the pulse duration in pulsed operation. In continuous operation, the temperature distribution is mainly determined by the laser output power relative to the cutting velocity. With higher cutting velocities, temperatures at the cutting front increase, reaching their maximum at the optimum cutting velocity.
Here, absorption of the incident laser radiation is drastically increased due to the angle between the normal of the cutting front and the laser beam axis. Besides process optimization and control, further work is focused on the characterization of particulate and gaseous laser-generated air contaminants and on adequate safety precautions such as exhaust and filter systems.
Automated information and control complex of hydro-gas endogenous mine processes
NASA Astrophysics Data System (ADS)
Davkaev, K. S.; Lyakhovets, M. V.; Gulevich, T. M.; Zolin, K. A.
2017-09-01
The automated information and control complex is designed to prevent accidents related to the aerological situation in underground workings; it covers accounting of received and handed-over individual devices, transmission and display of measurement data, and the formation of preemptive solutions. Examples are given for the automated workplace of an air-gas control operator using individual means. The statistical characteristics of field data characterizing the aerological situation in the mine are obtained. The conducted studies of these statistical characteristics confirm the feasibility of creating a subsystem of controlled gas distribution with an adaptive arrangement of gas-control points. An adaptive (multivariant) algorithm for processing measurement information on continuous multidimensional quantities and influencing factors has been developed.
Automation in the Space Station module power management and distribution Breadboard
NASA Technical Reports Server (NTRS)
Walls, Bryan; Lollar, Louis F.
1990-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
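The LLP behavior described above, shedding loads that exceed scheduled power, can be sketched as a priority-ordered budget rule. This is an illustrative toy, not the SSM/PMAD flight code; load names, priorities, and wattages are invented.

```python
def shed_loads(loads, scheduled_power):
    """Toy load-shedding rule in the spirit of the LLPs described above:
    keep the most critical loads that fit within the scheduled power budget.
    loads: iterable of (name, priority, watts); lower priority number = more
    critical. Returns the names of loads kept powered."""
    kept, total = [], 0.0
    for name, prio, watts in sorted(loads, key=lambda l: l[1]):
        if total + watts <= scheduled_power:  # load fits the remaining budget
            kept.append(name)
            total += watts
        # otherwise the load is shed (skipped); less critical loads may
        # still be kept if they fit the remaining budget
    return kept

kept = shed_loads([("life-support", 0, 400.0),
                   ("experiment-A", 2, 300.0),
                   ("lighting", 1, 200.0)],
                  scheduled_power=650.0)
```

In the breadboard, the AI layer above the LLPs would set the priorities and schedules that such a rule consumes.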
Continuous-variable quantum key distribution in uniform fast-fading channels
NASA Astrophysics Data System (ADS)
Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano
2018-03-01
We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process.
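A small Monte Carlo sketch makes the fast-fading assumption concrete: the eavesdropper draws an instantaneous transmissivity uniformly on each channel use, while the remote parties resolve only mean effects. The window endpoints are assumed values for illustration, and since a coherent-state amplitude scales as the square root of the transmissivity, the mean amplitude effect differs from the square root of the mean transmissivity (Jensen's inequality).

```python
import math
import random

random.seed(0)

a, b = 0.4, 0.8  # transmissivity window controlled by Eve (assumed values)

# Fast fading: tau is redrawn uniformly on every use of the channel.
taus = [random.uniform(a, b) for _ in range(200_000)]
mean_tau = sum(taus) / len(taus)

# Amplitudes scale as sqrt(tau), so the parties' mean amplitude statistic is
# E[sqrt(tau)], which is strictly below sqrt(E[tau]) for any spread in tau.
mean_amp = sum(math.sqrt(t) for t in taus) / len(taus)

# Closed form for E[sqrt(tau)] under a uniform distribution on [a, b]:
closed_form = (2.0 / 3.0) * (b**1.5 - a**1.5) / (b - a)
```

The gap between `mean_amp` and `math.sqrt(mean_tau)` is the kind of mean-statistical effect the protocols must tolerate under the worst-case fading attack.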
Microeconomics of advanced process window control for 50-nm gates
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chen, Xuemei; Falessi, Georges; Garvin, Craig; Hankinson, Matt; Lev, Amir; Levy, Ady; Slessor, Michael D.
2002-07-01
Fundamentally, advanced process control enables accelerated design-rule reduction, but simple microeconomic models that directly link the effects of advanced process control to profitability are rare or non-existent. In this work, we derive these links using a simplified model for the rate of profit generated by the semiconductor manufacturing process. We use it to explain why and how microprocessor manufacturers strive to avoid commoditization by producing only the number of dies required to satisfy the time-varying demand in each performance segment. This strategy is realized using the tactic known as speed binning, the deliberate creation of an unnatural distribution of microprocessor performance that varies according to market demand. We show that the ability of APC to achieve these economic objectives may be limited by variability in the larger manufacturing context, including measurement delays and process window variation.
40 CFR 761.378 - Decontamination, reuse, and disposal of solvents, cleaners, and equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Double Wash/Rinse Method for Decontaminating Non...
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car's microprocessor is the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers. The image sampling sensor is the CMOS digital camera OV7725. The acquired track data are processed by a corresponding algorithm to obtain track sideline information. At the same time, pulse-width modulation (PWM) is used to control the motor and servo movements, and motor speed control and servo steering control are realized with a digital incremental PID algorithm. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, completing the design of the intelligent car control system.
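The digital incremental PID mentioned above has a standard form: the controller updates its output by an increment computed from the last three errors. The sketch below shows that form in Python for clarity; the gains, output clamp, and toy motor model are illustrative assumptions, not values from the paper.

```python
class IncrementalPID:
    """Digital incremental PID:
    du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}).
    Gains and the duty-cycle clamp are invented for this example."""

    def __init__(self, kp, ki, kd, u_min=0.0, u_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.e1 = self.e2 = 0.0  # e_{k-1}, e_{k-2}
        self.u = 0.0             # current PWM duty (%)

    def update(self, setpoint, measured):
        e = setpoint - measured
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        # Clamping the accumulated output keeps the duty cycle physical and,
        # in the incremental form, avoids classic integrator windup.
        self.u = min(self.u_max, max(self.u_min, self.u + du))
        self.e2, self.e1 = self.e1, e
        return self.u

# Drive a crude first-order motor model toward a speed setpoint of 50.
pid, speed = IncrementalPID(kp=0.8, ki=0.3, kd=0.05), 0.0
for _ in range(200):
    duty = pid.update(50.0, speed)
    speed += 0.1 * (duty - speed)  # toy motor response, not a real plant
```

The incremental form is popular on small MCUs because only the increment is computed each tick and the previous output is held in the PWM register.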
Variety of Sedimentary Process and Distribution of Tsunami Deposits in Laboratory Experiments
NASA Astrophysics Data System (ADS)
Yamaguchi, N.; Sekiguchi, T.
2017-12-01
As an indicator of the history and magnitude of paleotsunami events, tsunami deposits have received considerable attention. To improve the identification and interpretation of paleotsunami deposits, an understanding of sedimentary process and distribution of tsunami deposits is crucial. Recent detailed surveys of onshore tsunami deposits including the 2004 Indian Ocean tsunami and the 2011 Tohoku-oki tsunami have revealed that terrestrial topography causes a variety of their features and distributions. Therefore, a better understanding of possible sedimentary process and distribution on such influential topographies is required. Flume experiments, in which sedimentary conditions can be easily controlled, can provide insights into the effects of terrestrial topography as well as tsunami magnitude on the feature of tsunami deposits. In this presentation, we report laboratory experiments that focused on terrestrial topography including a water body (e.g. coastal lake) on a coastal lowland and a cliff. In both cases, the results suggested relationship between the distribution of tsunami deposits and the hydraulic condition of the tsunami flow associated with the terrestrial topography. These experiments suggest that influential topography would enhance the variability in thickness of tsunami deposits, and thus, in reconstructions of paleotsunami events using sedimentary records, we should take into account such anomalous distribution of tsunami deposits. Further examination of the temporal sequence of sedimentary process in laboratory tsunamis may improve interpretation and estimation of paleotsunami events.
Kipling, Zak; Stier, Philip; Johnson, Colin E.; ...
2016-02-26
The vertical profile of aerosol is important for its radiative effects, but weakly constrained by observations on the global scale, and highly variable among different models. To identify the controlling factors in one particular model, we investigate the effects of individual processes in HadGEM3–UKCA and compare the resulting diversity of aerosol vertical profiles with the inter-model diversity from the AeroCom Phase II control experiment. In this way we show that (in this model at least) the vertical profile is controlled by a relatively small number of processes, although these vary among aerosol components and particle sizes. We also show that sufficiently coarse variations in these processes can produce a similar diversity to that among different models in terms of the global-mean profile and, to a lesser extent, the zonal-mean vertical position. However, there are features of certain models' profiles that cannot be reproduced, suggesting the influence of further structural differences between models. In HadGEM3–UKCA, convective transport is found to be very important in controlling the vertical profile of all aerosol components by mass. In-cloud scavenging is very important for all except mineral dust. Growth by condensation is important for sulfate and carbonaceous aerosol (along with aqueous oxidation for the former and ageing by soluble material for the latter). The vertical extent of biomass-burning emissions into the free troposphere is also important for the profile of carbonaceous aerosol. Boundary-layer mixing plays a dominant role for sea salt and mineral dust, which are emitted only from the surface. Dry deposition and below-cloud scavenging are important for the profile of mineral dust only. In this model, the microphysical processes of nucleation, condensation and coagulation dominate the vertical profile of the smallest particles by number (e.g. total CN > 3 nm), while the profiles of larger particles (e.g.
CN > 100 nm) are controlled by the same processes as the component mass profiles, plus the size distribution of primary emissions. Here, we also show that the processes that affect the AOD-normalised radiative forcing in the model are predominantly those that affect the vertical mass distribution, in particular convective transport, in-cloud scavenging, aqueous oxidation, ageing and the vertical extent of biomass-burning emissions.
DAVE: A plug and play model for distributed multimedia application development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mines, R.F.; Friesen, J.A.; Yang, C.L.
1994-07-01
This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.
Aspects of droplet and particle size control in miniemulsions
NASA Astrophysics Data System (ADS)
Saygi-Arslan, Oznur
Miniemulsion polymerization has become increasingly popular among researchers since it can provide significant advantages over conventional emulsion polymerization in certain cases, such as production of high-solids, low-viscosity latexes with better stability and polymerization of highly water-insoluble monomers. Miniemulsions are relatively stable oil (e.g., monomer) droplets, which can range in size from 50 to 500 nm, and are normally dispersed in an aqueous phase with the aid of a surfactant and a costabilizer. These droplets are the primary locus of the initiation of the polymerization reaction. Since particle formation takes place in the monomer droplets, theoretically, in miniemulsion systems the final particle size can be controlled by the initial droplet size. The miniemulsion preparation process typically generates broad droplet size distributions, and there is no complete treatment in the literature regarding the control of the mean droplet size or size distribution. This research aims to control the miniemulsion droplet size and its distribution. In situ emulsification, where the surfactant is synthesized spontaneously at the oil/water interface, has been put forth as a simpler method for the preparation of miniemulsion-like systems. Using the in situ method of preparation, emulsion stability and droplet and particle sizes were monitored and compared with conventional emulsions and miniemulsions. Styrene emulsions prepared by the in situ method do not demonstrate the stability of a comparable miniemulsion. Upon polymerization, the final particle size generated from the in situ emulsion did not differ significantly from the comparable conventional emulsion polymerization; the reaction mechanism for in situ emulsions is more like conventional emulsion polymerization than miniemulsion polymerization.
Similar results were found when the in situ method was applied to controlled free radical polymerizations (CFRP), which have been advanced as a potential application of the method. Molecular weight control was found to be achieved via diffusion of the CFRP agents through the aqueous phase owing to limited water solubilities. The effects of adsorption rate and energy on the droplet size and size distribution of miniemulsions using different surfactants (sodium lauryl sulfate (SLS), sodium dodecylbenzene sulfonate (SDBS), Dowfax 2A1, Aerosol OT-75PG, sodium n-octyl sulfate (SOS), and sodium n-hexadecyl sulfate (SHS)) were analyzed. For this purpose, first, the dynamics of surfactant adsorption at an oil/water interface were examined over a range of surfactant concentrations by the drop volume method, and then adsorption rates of the different surfactants were determined for the early stages of adsorption. The results do not show a direct relationship between adsorption rate and miniemulsion droplet size and size distribution. Adsorption energies of these surfactants were also calculated by the Langmuir adsorption isotherm equation, and no correlation between adsorption energy and miniemulsion droplet size was found. In order to understand the mechanism of the miniemulsification process, the effects of breakage and coalescence processes on droplet size distributions were observed at different surfactant concentrations, monomer ratios, and homogenization conditions. A coalescence and breakup mechanism for miniemulsification is proposed to explain the size distribution of droplets. The multimodal droplet size distribution of ODMA miniemulsions was controlled by the breakage mechanism. The results also showed that, at a surfactant concentration where 100% surface coverage was obtained, the droplet size distribution became unimodal.
Ultrasonically controlled particle size distribution of explosives: a safe method.
Patil, Mohan Narayan; Gore, G M; Pandit, Aniruddha B
2008-03-01
Size reduction of high-energy materials (HEMs) by conventional mechanical methods is not safe because these materials are very sensitive to friction and impact. Modified crystallization techniques can be used for the same purpose. The solute is dissolved in the solvent and crystallized via cooling, or is precipitated out using an antisolvent. The various crystallization parameters, such as temperature, antisolvent addition rate, and agitation, are adjusted to obtain the required final crystal size and morphology. The solvent-antisolvent ratio, time of crystallization, and yield of the product are the key factors controlling an antisolvent-based precipitation process. The advantages of cavitationally induced nucleation can be coupled with the conventional crystallization process. This study examines the effect of ultrasonically generated acoustic cavitation on the solvent-antisolvent precipitation process. CL-20, a high-energy explosive compound, is a polyazapolycyclic caged polynitramine. CL-20 has greater energy output than existing (in-use) energetic ingredients while having an acceptable level of insensitivity to shock and other external stimuli. The size control and size-distribution manipulation of the high-energy material CL-20 has been carried out safely and quickly, along with an increase in the final mass yield compared to the conventional antisolvent-based precipitation process.
Application of new type of distributed multimedia databases to networked electronic museum
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1999-01-01
Recently, various kinds of multimedia application systems have been actively developed, building on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform cooperative retrieval among distributed databases. The proposed system introduces a new concept, the 'retrieval manager,' which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval; retrieval can be performed effectively through cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into that machine's internal language. Using the language interpreter, internal modules such as the DBMS and user-interface modules can be freely selected. A concept of 'content set' is also introduced: a content set is defined as a package of mutually related contents, and the system handles a content set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built.
The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control to a number of distributed domains. The result also indicate that the system can effectively work even if the system becomes large.
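The 'retrieval manager', 'domain', and cooperative-retrieval ideas above can be sketched roughly as follows; all class names, record layouts, and the museum data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the "retrieval manager" idea: the user sees one logical
# database, while the manager fans a query out to several domains and
# merges the results. Everything here is an illustrative assumption.

class Domain:
    """A managing unit of retrieval holding part of the distributed content."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # {content_id: description}

    def search(self, keyword):
        return {cid: desc for cid, desc in self.records.items()
                if keyword in desc}

class RetrievalManager:
    """Presents a set of domains as one logical database."""
    def __init__(self, domains):
        self.domains = domains

    def search(self, keyword):
        merged = {}
        for d in self.domains:                # cooperative retrieval
            for cid, desc in d.search(keyword).items():
                merged[(d.name, cid)] = desc  # tag hits with their domain
        return merged

museum = RetrievalManager([
    Domain("paintings", {1: "oil on canvas", 2: "watercolor landscape"}),
    Domain("sculpture", {1: "bronze figure", 2: "marble landscape relief"}),
])
hits = museum.search("landscape")
print(sorted(hits))   # hits come back from both domains
```

A real system would add the directory data and language interpreters the abstract describes; the point here is only the fan-out/merge control flow.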
Mission Assurance Analysis Protocol (MAAP): Assessing Risk in Complex Environments
2005-09-01
CMU/SEI-2005-TN-032. From the contents: 1.7 Focus on Risk; 2 Defining Risk; 4.4 Extrinsic and Intrinsic Risk; 5 Operational Risk in Distributed Processes. In Section 5, "Operational Risk in Distributed Processes," we look at the characteristics of operational risk in processes where management control is
First CLIPS Conference Proceedings, volume 2
NASA Technical Reports Server (NTRS)
1990-01-01
The topics of volume 2 of the First CLIPS Conference are associated with the following applications: quality control; intelligent databases and networks; Space Station Freedom; Space Shuttle and satellite; user interface; artificial neural systems and fuzzy logic; parallel and distributed processing; enhancements to CLIPS; aerospace; simulation and defense; advisory systems and tutors; and intelligent control.
ERIC Educational Resources Information Center
Ball, B. Hunter; Brewer, Gene A.
2018-01-01
The present study implemented an individual differences approach in conjunction with response time (RT) variability and distribution modeling techniques to better characterize the cognitive control dynamics underlying ongoing task cost (i.e., slowing) and cue detection in event-based prospective memory (PM). Three experiments assessed the relation…
Ultrafast dynamics in atomic clusters: Analysis and control
Bonačić-Koutecký, Vlasta; Mitrić, Roland; Werner, Ute; Wöste, Ludger; Berry, R. Stephen
2006-01-01
We present a study of dynamics and ultrafast observables in the frame of pump–probe negative-to-neutral-to-positive ion (NeNePo) spectroscopy illustrated by the examples of bimetallic trimers Ag2Au−/Ag2Au/Ag2Au+ and silver oxides Ag3O2−/Ag3O2/Ag3O2+ in the context of cluster reactivity. First principle multistate adiabatic dynamics allows us to determine time scales of different ultrafast processes and conditions under which these processes can be experimentally observed. Furthermore, we present a strategy for optimal pump–dump control in complex systems based on the ab initio Wigner distribution approach and apply it to tailor laser fields for selective control of the isomerization process in Na3F2. The shapes of pulses can be assigned to underlying processes, and therefore control can be used as a tool for analysis. PMID:16740664
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is moved through Kennedy Space Center’s Space Station Processing Facility (SSPF) toward the workstand where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is lowered into its workstand at Kennedy Space Center’s Space Station Processing Facility (SSPF), where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
1983-10-01
an Abort(Ti), it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, the...list... Abort(Ti): Write Ti into the abort list. Then undo all of Ti's writes by reading their before-images from the audit trail and writing them back into the stable database. [Ack] Then delete Ti from the active list. Restart: Process Abort(Ti) for each Ti on the active list. [Ack] In this algorithm
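The abort/restart steps in the excerpt can be sketched as follows; the data structures (a dict database, a list audit trail) are illustrative assumptions rather than the report's actual design.

```python
# Minimal sketch of the abort/restart algorithm in the excerpt: undo a
# transaction's writes by restoring before-images recorded in an audit
# trail, then remove it from the active list. Illustrative only.

def abort(ti, db, audit_trail, active, abort_list):
    abort_list.append(ti)                     # write Ti into the abort list
    for rec in reversed(audit_trail):         # undo Ti's writes, newest first
        if rec["txn"] == ti:
            db[rec["key"]] = rec["before"]    # restore the before-image
    active.remove(ti)                         # then delete Ti from active list

def restart(db, audit_trail, active, abort_list):
    for ti in list(active):                   # abort every active transaction
        abort(ti, db, audit_trail, active, abort_list)

db = {"x": 1, "y": 2}
audit = []
active = ["T1"]
# T1 writes x=10 then y=20, logging each before-image first
audit.append({"txn": "T1", "key": "x", "before": db["x"]}); db["x"] = 10
audit.append({"txn": "T1", "key": "y", "before": db["y"]}); db["y"] = 20

restart(db, audit, active, [])
print(db)   # stable database restored: {'x': 1, 'y': 2}
```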
2003-06-26
VANDENBERG AIR FORCE BASE, CALIF. - At Vandenberg Air Force Base, Calif., the Pegasus launch vehicle is moved toward its hangar. The Pegasus will carry the SciSat-1 spacecraft in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
NASA Technical Reports Server (NTRS)
Mejzak, R. S.
1980-01-01
The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors in the configuration to be varied without any modification to the control structure. Decompositional elements of the DFT application function, in terms of tasks and subtasks, are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple-processing system by means of a shared memory facility. This facility provides a bus structure that enables up to six microcomputers to be interconnected, along with polling and arbitration logic so that only one processor has access to shared memory at any one time.
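The control scheme described above, anonymous processors pulling subtasks from a common database through a lock-guarded pointer, can be sketched with threads standing in for the IMSAI 8080 chassis; this is a schematic analogy, not the paper's implementation.

```python
# Sketch: anonymous workers claim DFT subtasks via a lock-guarded task
# pointer in a shared "common database", so the worker count can change
# without touching the control structure. The threading.Lock is a
# software stand-in for the paper's hardware lock.
import cmath
import threading

N = 8
x = [complex(i, 0) for i in range(N)]         # input samples
result = [0j] * N                             # shared common database
next_task = 0                                 # subtask pointer
lock = threading.Lock()

def worker():
    global next_task
    while True:
        with lock:                            # exclusive access to shared state
            k = next_task
            if k >= N:
                return
            next_task += 1
        # compute one DFT output bin outside the lock
        result[k] = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                        for n in range(N))

threads = [threading.Thread(target=worker) for _ in range(3)]  # any count works
for t in threads: t.start()
for t in threads: t.join()
print(abs(result[0]))   # bin 0 is the sum of the inputs: 28.0
```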
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware.
Damon, Stephen M; Boyd, Brian D; Plassard, Andrew J; Taylor, Warren; Landman, Bennett A
2017-01-01
Large-scale image processing demands not only standardized storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seek to solve the storage issue. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). To address these concerns, we have developed a new API, which exposes a direct connection to the database rather than using REST API calls to generate assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces the load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65,040 seconds to 229 seconds (a decrease of over 270-fold). DISKQ, using pyXnat, allows launching 400 jobs in under 10 seconds, which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
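The DISKQ idea, tracking processing status on disk instead of querying the archive through API calls, might be sketched as follows; the flag-file layout and function names are assumptions for illustration, not DAX's actual code.

```python
# Sketch of a disk-backed status queue: each job's state is a flag file,
# so checking status is a filesystem stat rather than a server API call.
# Layout and names are invented for illustration.
import os
import tempfile

STATUSES = ("queued", "running", "complete")

def set_status(qdir, job_id, status):
    assert status in STATUSES
    for s in STATUSES:                            # clear any previous flag
        path = os.path.join(qdir, f"{job_id}.{s}")
        if os.path.exists(path):
            os.remove(path)
    open(os.path.join(qdir, f"{job_id}.{status}"), "w").close()

def get_status(qdir, job_id):
    for s in STATUSES:
        if os.path.exists(os.path.join(qdir, f"{job_id}.{s}")):
            return s
    return None                                   # unknown job

qdir = tempfile.mkdtemp()
set_status(qdir, "assessor-0001", "queued")
set_status(qdir, "assessor-0001", "running")      # replaces the queued flag
print(get_status(qdir, "assessor-0001"))
```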
NASA Astrophysics Data System (ADS)
Chupina, K. V.; Kataev, E. V.; Khannanov, A. M.; Korshunov, V. N.; Sennikov, I. A.
2018-05-01
The paper is devoted to a problem of synthesis of the robust control system for a distributed parameters plant. The vessel descent-rise device has a heave compensation function for stabilization of the towed underwater vehicle on a set depth. A sea state code, parameters of the underwater vehicle and cable vary during underwater operations, the vessel heave is a stochastic process. It means that the plant and external disturbances have uncertainty. That is why it is necessary to use the robust theory for synthesis of an automatic control system, but without use of traditional methods of optimization, because this cable has distributed parameters. The offered technique has allowed one to design an effective control system for stabilization of immersion depth of the towed underwater vehicle for various degrees of sea roughness and to provide its robustness to deviations of parameters of the vehicle and cable’s length.
Fermilab Muon Campus g-2 Cryogenic Distribution Remote Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, L.; Theilacker, J.; Klebaner, A.
2015-11-05
The Muon Campus (MC) will measure the muon g-2 with high precision and compare its value to the theoretical prediction. The MC has four 300 kW screw compressors and four liquid-helium refrigerators. The centerpiece of the Muon g-2 experiment at Fermilab is a large, 50-foot-diameter superconducting muon storage ring. This one-of-a-kind ring, made of steel, aluminum, and superconducting wire, was built for the previous g-2 experiment at Brookhaven. Because each subsystem must be placed at a distant location, far from the others, the Siemens process control system PCS7-400, Automation Direct DL205 and DL05 PLCs, Synoptic, and the Fermilab ACNET HMI are the ideal choices for the MC g-2 cryogenic-distribution real-time, on-line remote control system. This paper presents a method that has been used successfully by many Fermilab cryogenic-distribution real-time, on-line remote control systems.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)
NASA Technical Reports Server (NTRS)
Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.
2007-01-01
This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 standard to provide communication and coordination between the distributed parts of the simulation. The purpose of this paper is to describe a generic initialization sequence that can be used to create a federate that can: 1. properly initialize all HLA objects, object instances, interactions, and time management; 2. check for the presence of all federates; 3. coordinate startup with other federates; and 4. robustly initialize and share initial object instance data with other federates.
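The four-step startup sequence above can be sketched with a barrier standing in for an HLA synchronization point; this is a schematic analogy using invented federate names, not the IEEE 1516 API.

```python
# Sketch of coordinated federate startup: each federate initializes,
# announces itself, publishes its initial data, and waits at a barrier
# (standing in for an HLA synchronization point) until all expected
# federates are present. Federate names are invented.
import threading

EXPECTED = ["vehicle", "controller", "environment"]
present = set()
shared_state = {}
lock = threading.Lock()
barrier = threading.Barrier(len(EXPECTED))   # startup coordination point

def federate(name, initial_data):
    with lock:
        present.add(name)                    # step 2: announce presence
        shared_state[name] = initial_data    # step 4: publish initial data
    barrier.wait()                           # step 3: coordinate startup
    with lock:                               # every federate now sees all others
        assert set(EXPECTED) == present

threads = [threading.Thread(target=federate, args=(n, {"id": n}))
           for n in EXPECTED]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(shared_state))   # all federates shared data before starting
```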
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Mafalda T., E-mail: mafaldatcosta@gmail.com; Carolino, Elisabete, E-mail: lizcarolino@gmail.com; Oliveira, Teresa A., E-mail: teresa.oliveira@uab.pt
In water supply systems with distribution networks, the most critical aspects of control and monitoring of water quality, which generate system crises, are the effects of cross-contamination originating in the network topology. Classical quality-control systems based on Shewhart charts are generally difficult to manage in real time because of the large number of charts that must be completed and evaluated. As an alternative to traditional control systems with Shewhart charts, this study applied a simplified methodology for monitoring quality parameters in a drinking-water distribution system, using Hotelling's T² charts supplemented with Shewhart charts with Bonferroni limits whenever process instabilities were detected.
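The monitoring scheme above, one Hotelling T² chart summarizing several correlated parameters, can be sketched for the two-variable case; the water-quality values below are invented for illustration.

```python
# Sketch of multivariate monitoring: a single Hotelling T^2 statistic
# summarizes two correlated water-quality parameters; individual Shewhart
# charts would only be consulted when T^2 signals instability. The
# reference data are invented.
import statistics

def hotelling_t2(x, data):
    """T^2 of observation x against a 2-variable reference sample."""
    m = [statistics.mean(col) for col in zip(*data)]
    s11 = statistics.variance([r[0] for r in data])
    s22 = statistics.variance([r[1] for r in data])
    s12 = sum((r[0] - m[0]) * (r[1] - m[1]) for r in data) / (len(data) - 1)
    det = s11 * s22 - s12 * s12               # 2x2 covariance determinant
    d0, d1 = x[0] - m[0], x[1] - m[1]
    # (x-m)' S^-1 (x-m) written out for the 2x2 case
    return (d0 * (s22 * d0 - s12 * d1) + d1 * (s11 * d1 - s12 * d0)) / det

# reference sample: (free chlorine mg/L, turbidity NTU) -- invented values
ref = [(0.50, 0.30), (0.52, 0.28), (0.48, 0.33), (0.51, 0.31), (0.49, 0.29)]
t2_in = hotelling_t2((0.50, 0.30), ref)       # near the in-control centre
t2_out = hotelling_t2((0.70, 0.60), ref)      # cross-contamination-like shift
print(t2_in < t2_out)   # the shifted point stands out
```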
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers, which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis applications, and between the system-level applications and management overview reporting. An integrated information system is vital as the basis for integrating planning, scheduling, modeling, monitoring, and control, allowing improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked, reconciled, analyzed, and stored in database servers accessible to all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.
Distributed Autonomous Control Action Based on Sensor and Mission Fusion
2005-09-01
programmable control algorithm driven by the readings of two pressure switch sensors located on either side of the valve unit. Thus, a micro-controller...and Characterization: The process of leak detection and characterization must be accomplished with a set of pressure switch sensors. This sensor...economically supplementing existing, widely used pressure-switch-type sensors, which are characterized by prohibitively long inertial lag responses
NASA Astrophysics Data System (ADS)
Shenfeld, Ofer; Belotserkovsky, Edward; Goldwasser, Benad; Zur, Albert; Katzir, Abraham
1993-02-01
The heating of tissue by microwave radiation has attained a place of importance in various medical fields, such as the treatment of malignancies, urinary retention, and hypothermia. Accurate temperature measurement in the treated tissues is important for treatment planning and for control of the heating process. It is also important to be able to measure the spatial temperature distribution in the tissues, because the microwave radiation heats them nonuniformly. Conventional temperature sensors used today are inaccurate in the presence of microwave radiation and require contact with the heated tissue. Fiber-optic radiometry makes it possible to measure temperature accurately in the presence of microwave radiation without contacting the tissue. Accurate temperature measurements of tissues heated by microwave were obtained using a silver halide fiber-optic radiometer, enabling control of the heating process in other regions of the tissue samples. Temperature mappings of the heated tissues were performed, and the nonuniform temperature distributions in these tissues were demonstrated.
Advanced Map For Real-Time Process Control
NASA Astrophysics Data System (ADS)
Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto
1987-10-01
MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard; in fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of the data exchanged with various devices for distributed control are important in real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and Mini-MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point, request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicast communication voluntarily and periodically in the priority order of the data to be exchanged.
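The added transmission control described above, voluntary periodic multicasting in priority order, might be sketched as follows; the queue layout and field names are illustrative assumptions, not the paper's protocol.

```python
# Sketch: instead of per-request point-to-point exchange, process data is
# multicast voluntarily and periodically, highest-priority items first.
# Queue layout and field names are invented for illustration.
import heapq

class PeriodicMulticaster:
    def __init__(self):
        self._heap = []          # (priority, seq, tag, value)
        self._seq = 0            # tie-breaker preserving publish order

    def publish(self, priority, tag, value):
        heapq.heappush(self._heap, (priority, self._seq, tag, value))
        self._seq += 1

    def cycle(self):
        """One periodic cycle: emit every queued item, lowest priority
        number (most urgent) first, one multicast frame per item."""
        frames = []
        while self._heap:
            _, _, tag, value = heapq.heappop(self._heap)
            frames.append((tag, value))       # would be sent to all stations
        return frames

bus = PeriodicMulticaster()
bus.publish(3, "temperature", 71.5)
bus.publish(1, "alarm", "overpressure")       # urgent: transmitted first
bus.publish(2, "valve_position", 0.42)
frames = bus.cycle()
print(frames)
```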
Controllable laser thermal cleavage of sapphire wafers
NASA Astrophysics Data System (ADS)
Xu, Jiayu; Hu, Hong; Zhuang, Changhui; Ma, Guodong; Han, Junlong; Lei, Yulin
2018-03-01
Laser processing of substrates for light-emitting diodes (LEDs) offers advantages over other processing techniques and is therefore an active research area in both industrial and academic sectors. The processing of sapphire wafers is problematic because sapphire is a hard and brittle material. Semiconductor laser scribing processing suffers certain disadvantages that have yet to be overcome, thereby necessitating further investigation. In this work, a platform for controllable laser thermal cleavage was constructed. A sapphire LED wafer was modeled using the finite element method to simulate the thermal and stress distributions under different conditions. A guide groove cut by laser ablation before the cleavage process was observed to guide the crack extension and avoid deviation. The surface and cross section of sapphire wafers processed using controllable laser thermal cleavage were characterized by scanning electron microscopy and optical microscopy, and their morphology was compared to that of wafers processed using stealth dicing. The differences in luminous efficiency between substrates prepared using these two processing methods are explained.
Knowledge-based processing for aircraft flight control
NASA Technical Reports Server (NTRS)
Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul
1994-01-01
This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called declarative and hybrid. Software architectures were sought that employ the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common across a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered particular processing methods from the stochastic and knowledge-based worlds to be duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. The research was applied to the control of a subsonic transport aircraft in the airport terminal area.
OVERVIEW: CCL PATHOGENS RESEARCH AT NRMRL
The Microbial Contaminants Control Branch (MCCB), Water Supply and Water Resources Division, National Risk Management Research Laboratory, conducts research on microbiological problems associated with source water quality, treatment processes, distribution and storage of drin...
Linkages and feedbacks in orogenic systems: An introduction
Thigpen, J. Ryan; Law, Richard D.; Merschat, Arthur J.; Stowell, Harold
2017-01-01
Orogenic processes operate at scales ranging from the lithosphere to grain-scale, and are inexorably linked. For example, in many orogens, fault and shear zone architecture controls distribution of heat advection along faults and also acts as the primary mechanism for redistribution of heat-producing material. This sets up the thermal structure of the orogen, which in turn controls lithospheric rheology, the nature and distribution of deformation and strain localization, and ultimately, through localized mechanical strengthening and weakening, the fundamental shape of the developing orogenic wedge (Fig. 1). Strain localization establishes shear zone and fault geometry, and it is the motion on these structures, in conjunction with climate, that often focuses erosional and exhumational processes. This climatic focusing effect can even drive development of asymmetry at the scale of the entire wedge (Willett et al., 1993).
Microbial facies distribution and its geological and geochemical controls at the Hanford 300 area
NASA Astrophysics Data System (ADS)
Hou, Z.; Nelson, W.; Stegen, J.; Murray, C. J.; Arntzen, E.
2015-12-01
Efforts have been made by various scientific disciplines to study hyporheic zones and characterize their associated processes. One way to approach the study of the hyporheic zone is to define facies: elements of a (hydrobio)geologic classification scheme that groups components of a complex, highly variable system into a manageable set of discrete classes. In this study, we classify the hyporheic zone based on geology, geochemistry, and microbiology, and examine their interactive influences on integrated biogeochemical distributions and processes. A number of measurements were taken on 21 freeze-core samples along the Columbia River bank in the Hanford 300 Area, yielding unique datasets on biomass, pH, number of microbial taxa, percentage of N/C/H/S, microbial activity parameters, and microbial community attributes/modules. To gain a complete understanding of the geological control on these variables and processes, the explanatory variables include quantitative gravel/sand/mud/silt/clay percentages, statistical moments of grain size distributions, and geological (e.g., Folk-Wentworth) and statistical (e.g., hierarchical) clusters. The dominant factors for the major microbial and geochemical variables are identified and summarized using exploratory data analysis approaches (e.g., principal component analysis, hierarchical clustering, factor analysis, multivariate analysis of variance). The feasibility of extending the facies definition and its control of microbial and geochemical properties to larger scales is discussed.
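One family of explanatory variables mentioned above, statistical moments of grain size distributions, can be sketched with method-of-moments formulas on the phi scale; the sieve fractions below are invented for illustration.

```python
# Sketch: method-of-moments grain-size statistics (mean, sorting,
# skewness) in phi units from class-midpoint weight fractions. The
# sample fractions are invented, not from the Hanford dataset.
import math

def grain_size_moments(phi, weight_frac):
    """Method-of-moments mean, sorting (std dev), and skewness in phi units."""
    total = sum(weight_frac)
    f = [w / total for w in weight_frac]
    mean = sum(p * w for p, w in zip(phi, f))
    var = sum(w * (p - mean) ** 2 for p, w in zip(phi, f))
    sd = math.sqrt(var)
    skew = sum(w * (p - mean) ** 3 for p, w in zip(phi, f)) / sd ** 3
    return mean, sd, skew

# midpoints of phi classes (gravel to mud) and weight percent per class
phi_mid = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
weights = [5.0, 15.0, 40.0, 25.0, 10.0, 5.0]
mean, sorting, skewness = grain_size_moments(phi_mid, weights)
print(round(mean, 2), round(sorting, 2))   # 1.35 1.15
```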
Magma Vesiculation and Infrasonic Activity in Open Conduit Volcanoes
NASA Astrophysics Data System (ADS)
Colo', L.; Baker, D. R.; Polacci, M.; Ripepe, M.
2007-12-01
At persistently active basaltic volcanoes such as Stromboli, Italy, degassing of the magma column can occur under 'passive' and 'active' conditions. Passive degassing is generally understood as a continuous, non-explosive release of gas, mainly from the open summit vents and subordinately from the conduit walls or from fumaroles. In passive degassing the gas is generally in equilibrium with atmospheric pressure, while in active degassing the gas approaches the surface in overpressurized conditions. During active degassing (or puffing), the magma column is affected by the bursting of small gas bubbles at the magma free surface, and as a consequence the active degassing process generates infrasonic signals. We postulated in this study that the rate and amplitude of infrasonic activity are somehow linked to the rate and volume of the overpressured gas bubbles generated in the magma column. Our hypothesis is that infrasound is controlled by the quantity of gas exsolved in the magma column, and therefore that a relationship between infrasound and the vesiculation process should exist. To test this, infrasonic records were compared with bubble size distributions of scoria samples from normal explosive activity at Stromboli, processed via X-ray tomography. We observed that the cumulative distributions for both data sets follow similar power laws, indicating that both processes are controlled by a scale-invariant phenomenon. However, the power law is not stable but changes between scoria clasts, reflecting whether gas bubble nucleation is predominant over bubble coalescence or vice versa. The power law for the infrasonic activity also changes from time to time, suggesting that infrasound may also be controlled by differing gas exsolution within the magma column. Changes in power-law distributions are the same for infrasound and scoria, indicating that they are linked to the same process acting in the magmatic system.
We suggest that monitoring infrasound on an active volcano could represent an alternative way to monitor the vesiculation process of an open conduit system.
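The power-law comparison described above can be sketched as a log-log slope fit to a cumulative (rank-size) distribution; the synthetic samples below are invented so the exponent is known in advance.

```python
# Sketch: estimate the power-law exponent of a cumulative (rank-size)
# distribution by least-squares on log-log axes, as one might compare
# bubble-size and infrasound-amplitude data sets. Samples are synthetic.
import math

def powerlaw_slope(values):
    """Slope of log(rank) vs log(value): N(>v) ~ v^slope."""
    vals = sorted(values, reverse=True)
    xs = [math.log(v) for v in vals]
    ys = [math.log(rank) for rank in range(1, len(vals) + 1)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic samples drawn as v = r^(-1/a), so the fitted slope is -a
bubbles = [r ** (-1 / 1.5) for r in range(1, 200)]      # "bubble volumes"
infrasound = [r ** (-1 / 1.6) for r in range(1, 200)]   # "pulse amplitudes"
print(round(powerlaw_slope(bubbles), 2),
      round(powerlaw_slope(infrasound), 2))   # -1.5 -1.6
```

Comparing such fitted slopes between data sets is the essence of the scale-invariance argument in the abstract.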
Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S
2018-09-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigated how species' biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important in determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m year⁻¹ under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation times, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates.
Distributed digital signal processors for multi-body structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K.
1990-01-01
Several digital filter designs were investigated which may be used to process sensor data from large space structures, along with digital hardware designs to implement the distributed signal processing architecture. Several experimental test articles are available at NASA Langley Research Center to evaluate these designs. A summary of some of the digital filter designs is presented, an evaluation of their characteristics relative to control design is discussed, and candidate microcontroller/microcomputer hardware components are given. Future activities include software evaluation of the digital filter designs and actual hardware implementation of some of the signal processor algorithms on an experimental testbed at NASA Langley.
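As a minimal illustration of the kind of digital filter such a distributed signal processor might run on structure sensor data, the following is a direct-form FIR moving average; the coefficients and data are invented, not taken from the report.

```python
# Sketch: a direct-form FIR low-pass (4-tap moving average) that
# attenuates high-frequency vibration noise in a sensor stream.
# Coefficients and data are illustrative assumptions.
def fir_filter(samples, coeffs):
    """Direct-form FIR: y[n] = sum_k coeffs[k] * x[n-k]."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

# a 4-tap moving average nulls an alternating (Nyquist-rate) component
noisy = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
smoothed = fir_filter(noisy, [0.25, 0.25, 0.25, 0.25])
print(smoothed[3:])   # steady-state output is 0.0: the noise is rejected
```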
Research into software executives for space operations support
NASA Technical Reports Server (NTRS)
Collier, Mark D.
1990-01-01
Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.
Upconversion-based receivers for quantum hacking-resistant quantum key distribution
NASA Astrophysics Data System (ADS)
Jain, Nitin; Kanter, Gregory S.
2016-07-01
We propose a novel upconversion (sum frequency generation)-based quantum-optical system design that can be employed as a receiver (Bob) in practical quantum key distribution systems. The pump governing the upconversion process is produced and utilized inside the physical receiver, making its access or control unrealistic for an external adversary (Eve). This pump facilitates several properties which permit Bob to define and control the modes that can participate in the quantum measurement. Furthermore, by manipulating and monitoring the characteristics of the pump pulses, Bob can detect a wide range of quantum hacking attacks launched by Eve.
NASA Astrophysics Data System (ADS)
Vinci, Francesco; Iannace, Alessandro; Parente, Mariano; Pirmez, Carlos; Torrieri, Stefano; Giorgioni, Maurizio
2017-12-01
A multidisciplinary study of the dolomitized bodies present in the Lower Cretaceous platform carbonates of Mt. Faito (Southern Apennines, Italy) was carried out in order to explore the connection between early dolomite formation and fluctuating climate conditions. The Berriasian-Aptian succession investigated is 466 m thick and mainly consists of shallow-water lagoonal limestones with frequent dolomite caps. The dolomitization intensity varies along the succession and reaches its peak in the upper Hauterivian-lower Barremian interval, where a completely dolomitized interval about 100 m thick is present. Field relations, petrography, mineralogy, and geochemistry of the analyzed dolomite bodies allowed the identification of two populations of early dolomites, a fine-medium crystalline dolomite (FMdol) and a coarse crystalline dolomite (Cdol), both interpreted as the product of mesohaline water reflux. According to our interpretation, FMdol precipitated from concentrated brines in the very early stage of the reflux process, producing typical sedimentary features such as dolomite caps. In the subsequent step of the process, the basin-ward 'latent' reflux precipitated Cdol from less concentrated brines. A peculiar feature of the studied succession is the great consistency between the stratigraphic distribution of the dolomite bodies and their geochemical signature. The completely dolomitized Hauterivian-Barremian interval, in fact, is characterized by geochemical values suggesting an origin from distinctly saltier brines. Considering that the observed near-surface dolomitization process is controlled by physical and chemical parameters reflecting the paleoenvironmental and paleoclimatic conditions during dolomite formation, we propose that the stratigraphically controlled dolomitization intensity reflects periodic fluctuations in the salinity of the dolomitizing fluid, in turn controlled by long-term climate oscillations.
The present work highlights that the stratigraphic distribution of early diagenetic dolomite may be used as a proxy to define the climatic fluctuations that influenced sedimentary dynamics in the Early Cretaceous. Moreover, considering that a comparable early dolomite distribution is also present in the Dinaric Platform, we suggest that a regional-scale climate control acted on early dolomite formation and distribution. Refining the knowledge of such a key control may have a significant impact on hydrocarbon reservoir characterization and exploration in the Periadriatic area.
Paret, Christian; Zähringer, Jenny; Ruf, Matthias; Gerchen, Martin Fungisai; Mall, Stephanie; Hendler, Talma; Schmahl, Christian; Ende, Gabriele
2018-03-30
Brain-computer interfaces provide conscious access to neural activity by means of brain-derived feedback ("neurofeedback"). An individual's abilities to monitor and control feedback are two necessary processes for effective neurofeedback therapy, yet their underlying functional neuroanatomy is still being debated. In this study, healthy subjects received visual feedback from their amygdala response to negative pictures. Activation and functional connectivity were analyzed to disentangle the role of brain regions in different processes. Feedback monitoring was mapped to the thalamus, ventromedial prefrontal cortex (vmPFC), ventral striatum (VS), and rostral prefrontal cortex (rPFC). The VS responded to feedback corresponding to instructions, while rPFC activity differentiated between conditions and predicted amygdala regulation. Control involved the lateral PFC, anterior cingulate, and insula. Monitoring and control activity overlapped in the VS and thalamus. Extending current neural models of neurofeedback, this study introduces monitoring and control of feedback as anatomically dissociated processes, and suggests their important role in voluntary neuromodulation. © 2018 Wiley Periodicals, Inc.
Model-Unified Planning and Execution for Distributed Autonomous System Control
NASA Technical Reports Server (NTRS)
Aschwanden, Pascal; Baskaran, Vijay; Bernardini, Sara; Fry, Chuck; Moreno, Maria; Muscettola, Nicola; Plaunt, Chris; Rijsman, David; Tompkins, Paul
2006-01-01
The Intelligent Distributed Execution Architecture (IDEA) is a real-time architecture that exploits artificial intelligence planning as the core reasoning engine for interacting autonomous agents. Rather than enforcing separate deliberation and execution layers, IDEA unifies them under a single planning technology. Deliberative and reactive planners reason about and act according to a single representation of the past, present and future domain state. The domain state behaves according to the rules dictated by a declarative model of the subsystem to be controlled, internal processes of the IDEA controller, and interactions with other agents. We present IDEA concepts - modeling, the IDEA core architecture, the unification of deliberation and reaction under planning - and illustrate its use in a simple example. Finally, we present several real-world applications of IDEA, and compare IDEA to other high-level control approaches.
Distributed intelligent control and status networking
NASA Technical Reports Server (NTRS)
Fortin, Andre; Patel, Manoj
1993-01-01
Over the past two years, the Network Control Systems Branch (Code 532) has been investigating control and status networking technologies. These emerging technologies use distributed processing over a network to accomplish a particular custom task. These networks consist of small intelligent 'nodes' that perform simple tasks. Containing simple, inexpensive hardware and software, these nodes can be easily developed and maintained. Once networked, the nodes can perform a complex operation without a central host. This type of system provides an alternative to more complex control and status systems which require a central computer. This paper will provide some background and discuss some applications of this technology. It will also demonstrate the suitability of one particular technology for the Space Network (SN) and discuss the prototyping activities of Code 532 utilizing this technology.
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao
2018-06-01
This paper presents a distributed control law with a disturbance observer for the autonomous assembly of a fleet of flexible spacecraft into a large flexible space structure. The fleet of flexible spacecraft is first driven to the pre-assembly configuration, and then to the desired assembly configuration. A distributed assembly control law with a disturbance observer is proposed by treating the flexible dynamics as disturbances acting on the rigid motion of the flexible spacecraft. Theoretical analysis shows that the control law can drive the fleet to the desired configuration. Moreover, collision avoidance between the members is also considered in the process from the initial configuration to the pre-assembly configuration. Finally, a numerical example is presented to verify the feasibility of the proposed mission planning and the effectiveness of the control law.
NASA Technical Reports Server (NTRS)
Patton, Jeff A.
1986-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C)/Electrical Power Generation (EPG) hardware. The EPD and C/EPG hardware is required for performing critical functions of cryogenic reactant storage, electrical power generation and product water distribution in the Orbiter. Specifically, the EPD and C/EPG hardware consists of the following components: Power Section Assembly (PSA); Reactant Control Subsystem (RCS); Thermal Control Subsystem (TCS); Water Removal Subsystem (WRS); and Power Reactant Storage and Distribution System (PRSDS). The IOA analysis process utilized available EPD and C/EPG hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
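As a minimal sketch of such a task-allocation model (ignoring the precedence relationships the paper also treats; all names and cost values below are hypothetical), the cost of a module-to-processor assignment can be taken as total execution cost plus communication cost between modules placed on different processors, and small instances can be solved by exhaustive search:

```python
from itertools import product

def assignment_cost(assign, exec_cost, comm_cost):
    """Total cost of mapping modules to processors.

    assign[m]        -> processor assigned to module m.
    exec_cost[m][p]  -> execution cost of module m on processor p.
    comm_cost[(i,j)] -> communication cost, incurred only when modules
                        i and j sit on different processors.
    """
    total = sum(exec_cost[m][p] for m, p in enumerate(assign))
    total += sum(c for (i, j), c in comm_cost.items() if assign[i] != assign[j])
    return total

def best_assignment(n_procs, exec_cost, comm_cost):
    """Exhaustively search all mappings (feasible only for small instances)."""
    n_mods = len(exec_cost)
    return min(product(range(n_procs), repeat=n_mods),
               key=lambda a: assignment_cost(a, exec_cost, comm_cost))

# Two modules, two processors: each module is cheap on a different processor,
# but a heavy communication cost forces co-location on one processor.
exec_cost = [[1, 5], [5, 1]]
comm_cost = {(0, 1): 100}
print(best_assignment(2, exec_cost, comm_cost))  # co-located: (0, 0) or (1, 1)
```

Real formulations replace the brute-force search with heuristics or mathematical programming, and extend the cost function with the precedence constraints discussed in the paper.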
Application Of Holography In The Distribution Measurement Of Fuel Spraying Field In Diesel Engines
NASA Astrophysics Data System (ADS)
Xiang, He Wan; Xiong, Li Zhi
1988-01-01
The distribution of the fuel spraying field in the combustion chamber is an important factor influencing the performance of diesel engines. Precise data for the major parameters of the spraying-field distribution are difficult to obtain with conventional measurement methods, so their effects on the combustion process cannot be controlled. Laser holographic measurement was therefore used, and extensive studies were carried out on the injection nozzles of diesel engines Series 95, 100 and 130. These studies show that clear spraying-field holograms can be taken with an "IC Engine Laser Holography System". Through reconstruction and data processing, droplet size, number, and spatial distribution in the spray, as well as the spray range, cone angle, and other reliable data, can be obtained. The spraying quality of an injection nozzle can therefore be precisely determined, which provides a reliable basis for improving diesel engine performance.
Controlled decoherence in a quantum Lévy kicked rotator
NASA Astrophysics Data System (ADS)
Schomerus, Henning; Lutz, Eric
2008-06-01
We develop a theory describing the dynamics of quantum kicked rotators (modeling cold atoms in a pulsed optical field) which are subjected to combined amplitude and timing noise generated by a renewal process (acting as an engineered reservoir). For waiting-time distributions of variable exponent (Lévy noise), we demonstrate the existence of a regime of nonexponential loss of phase coherence. In this regime, the momentum dynamics is subdiffusive, which also manifests itself in a non-Gaussian limiting distribution and a fractional power-law decay of the inverse participation ratio. The purity initially decays with a stretched exponential which is followed by two regimes of power-law decay with different exponents. The averaged logarithm of the fidelity probes the sprinkling distribution of the renewal process. These analytical results are confirmed by numerical computations on quantum kicked rotators subjected to noise events generated by a Yule-Simon distribution.
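For readers wishing to reproduce noise events of the kind used in the numerical computations, the Yule-Simon distribution mentioned above, with pmf P(K = k) = ρ B(k, ρ+1) for k = 1, 2, ..., can be sampled by inverse-CDF accumulation. The sketch below uses log-gamma to avoid overflow in the Beta function; function names are illustrative:

```python
import math
import random

def yule_simon_pmf(k, rho):
    """P(K = k) = rho * B(k, rho + 1), computed via lgamma for stability."""
    return rho * math.exp(math.lgamma(k) + math.lgamma(rho + 1)
                          - math.lgamma(k + rho + 1))

def sample_yule_simon(rho, rng=random):
    """Inverse-CDF sampling: accumulate the pmf until it exceeds a uniform draw."""
    u = rng.random()
    k, cdf = 1, 0.0
    while True:
        cdf += yule_simon_pmf(k, rho)
        if u < cdf or k > 10_000:   # cap guards against the heavy tail
            return k
        k += 1

# The pmf sums to ~1, with the power-law tail P(k) ~ k^-(rho+1).
print(round(sum(yule_simon_pmf(k, 1.5) for k in range(1, 2000)), 3))  # ~1.0
```

The heavy tail (exponent ρ + 1) is what produces the long waiting times behind the nonexponential decoherence regime described in the abstract.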
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manière, Charles; Lee, Geuntak; Olevsky, Eugene A.
2017-04-21
The stability of the proportional–integral–derivative (PID) control of temperature in the spark plasma sintering (SPS) process is investigated. The PID regulation of this process is tested for different SPS tooling dimensions, physical parameter conditions, and areas of temperature control. It is shown that the PID regulation quality strongly depends on the heating time lag between the area of heat generation and the area of temperature control. Tooling temperature-rate maps are studied to reveal potential areas for highly efficient PID control. The convergence of the model and experiment indicates that, even with non-optimal initial PID coefficients, it is possible to reduce the temperature regulation inaccuracy to less than 4 K by positioning the temperature control location in highly responsive areas revealed by finite-element calculations of the temperature spatial distribution.
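For readers unfamiliar with PID regulation, a minimal discrete PID controller driving a toy first-order thermal plant can be sketched as follows; the gains and plant parameters are purely illustrative and are not those of the SPS study:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant: dT/dt = (u - (T - T_ambient)) / tau, tau = 5 s.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
temperature, setpoint = 300.0, 900.0
for _ in range(2000):                       # simulate 200 s
    power = pid.update(setpoint, temperature)
    temperature += 0.1 * (power - (temperature - 300.0)) / 5.0
print(round(temperature))  # settles at the 900 K setpoint
```

The heating time lag the abstract identifies would appear here as a delay between `power` and its effect on `temperature`; large lags are exactly what degrades PID stability and motivates placing the control point in a responsive area.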
Support for User Interfaces for Distributed Systems
NASA Technical Reports Server (NTRS)
Eychaner, Glenn; Niessner, Albert
2005-01-01
An extensible Java™ software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems, typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoring users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed on any operating system with a Java run-time environment, without recompilation or code changes.
Whiley, Harriet; Keegan, Alexandra; Fallowfield, Howard; Bentham, Richard
2014-01-01
Inhalation of potable water presents a potential route of exposure to opportunistic pathogens and hence warrants significant public health concern. This study used qPCR to detect the opportunistic pathogens Legionella spp., L. pneumophila and MAC at multiple points along two potable water distribution pipelines. One used chlorine disinfection and the other chloramine disinfection. Samples were collected four times over the year to capture seasonal variation, and the chlorine or chloramine residual was measured during collection. Legionella spp., L. pneumophila and MAC were detected in both distribution systems throughout the year, at maximum concentrations of 10³ copies/mL in the chlorine-disinfected system and 10⁶, 10³ and 10⁴ copies/mL respectively in the chloramine-disinfected system. The concentrations of these opportunistic pathogens were primarily controlled throughout the distribution network through the maintenance of disinfection residuals. At a dead-end, and when the disinfection residual was not maintained, significant (p < 0.05) increases in concentration were observed compared to the concentration measured closest to the processing plant in the same pipeline and sampling period. Total coliforms were not present in any water sample collected. This study demonstrates the ability of Legionella spp., L. pneumophila and MAC to survive the potable water disinfection process and highlights the need for greater measures to control these organisms along the distribution pipeline and at the point of use. PMID:25046636
Tschentscher, Nadja; Mitchell, Daniel; Duncan, John
2017-05-03
Fluid intelligence has been associated with a distributed cognitive control or multiple-demand (MD) network, comprising regions of lateral frontal, insular, dorsomedial frontal, and parietal cortex. Human fluid intelligence is also intimately linked to task complexity, and the process of solving complex problems in a sequence of simpler, more focused parts. Here, a complex target detection task included multiple independent rules, applied one at a time in successive task epochs. Although only one rule was applied at a time, increasing task complexity (i.e., the number of rules) impaired performance in participants of lower fluid intelligence. Accompanying this loss of performance was reduced response to rule-critical events across the distributed MD network. The results link fluid intelligence and MD function to a process of attentional focus on the successive parts of complex behavior. SIGNIFICANCE STATEMENT Fluid intelligence is intimately linked to the ability to structure complex problems in a sequence of simpler, more focused parts. We examine the basis for this link in the functions of a distributed frontoparietal or multiple-demand (MD) network. With increased task complexity, participants of lower fluid intelligence showed reduced responses to task-critical events. Reduced responses in the MD system were accompanied by impaired behavioral performance. Low fluid intelligence is linked to poor foregrounding of task-critical information across a distributed MD system. Copyright © 2017 Tschentscher et al.
Sequence and batch language programs and alarm related C Programs for the 242-A MCS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, J.F.
1996-04-15
A Distributive Process Control system was purchased by Project B-534, 242-A Evaporator/Crystallizer Upgrades. This control system, called the Monitor and Control system (MCS), was installed in the 242-A evaporator located in the 200 East Area. The purpose of the MCS is to monitor and control the Evaporator and monitor a number of alarms and other signals from various Tank Farm facilities. Applications software for the MCS was developed by the Waste Treatment Systems Engineering (WTSE) group of Westinghouse. The standard displays and alarm scheme provide for control and monitoring, but do not directly indicate the signal location or depict the overall process. To do this, WTSE developed a second alarm scheme.
NASA Astrophysics Data System (ADS)
Babey, T.; De Dreuzy, J. R.; Pinheiro, M.; Garnier, P.; Vieublé-Gonod, L.; Rapaport, A.
2015-12-01
Micro-organisms and substrates may be heterogeneously distributed in soils. This spatial repartition, as well as the transport mechanisms bringing them into contact, is expected to impact biodegradation rates. Pinheiro et al. [2015] measured, in centimeter-scale reconstructed soil cores, the fate of an injection of 2,4-D pesticide for different injection conditions and initial distributions of soil pesticide degraders. Through the calibration of a reactive transport model of these experiments, we show that: i) biodegradation of diffusion-controlled pesticide fluxes is favored by a high Damköhler number (high reaction rate compared to flux rate); ii) abiotic sorption processes are negligible and do not interact strongly with biodegradation; iii) biodegradation is primarily governed by the initial repartition of pesticide and degraders for diffusion-controlled transport, as diffusion greatly limits the flux of pesticide reaching the microbial hotspot due to dilution. These results suggest that for biodegradation to be substantial, spatial heterogeneity in the repartition of microbes and substrate has to be associated with intermittent and fast transport processes to mix them.
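The Damköhler number invoked in point i) is the ratio of the reaction rate to the transport rate; for diffusive transport it is commonly written Da = k L² / D. A quick sketch, with purely hypothetical parameter values:

```python
def damkohler(reaction_rate, diffusion_coeff, length):
    """Da = k * L^2 / D : first-order reaction rate over diffusive supply rate.

    Da >> 1: transport-limited (reaction outpaces diffusive supply);
    Da << 1: reaction-limited (diffusion delivers substrate faster than
             the microbes consume it).
    """
    return reaction_rate * length**2 / diffusion_coeff

# Hypothetical values: degradation rate k = 1e-5 1/s, aqueous diffusion
# D = 1e-9 m^2/s, hotspot separation L = 1 cm.
da = damkohler(1e-5, 1e-9, 0.01)
print(da)  # ~1: degradation and diffusive supply are comparable
```

With these illustrative numbers the two processes are balanced; a faster reaction or a larger separation pushes Da above 1, the regime the abstract identifies as favorable for biodegradation of diffusion-controlled fluxes.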
Lu, Jennifer Q; Yi, Sung Soo
2006-04-25
A monolayer of gold-containing surface micelles has been produced by spin-coating solution micelles formed by the self-assembly of the gold-modified polystyrene-b-poly(2-vinylpyridine) block copolymer in toluene. After the block copolymer template was removed by oxygen plasma, highly ordered and uniformly sized nanoparticles were generated. Unlike other published methods, which require reduction treatments to form gold nanoparticles in the zero-valent state, these as-synthesized nanoparticles are in the form of metallic gold. These gold nanoparticles have been demonstrated to be an excellent catalyst system for growing small-diameter silicon nanowires. The uniformly sized gold nanoparticles have promoted the controllable synthesis of silicon nanowires with a narrow diameter distribution. Because of the ability to form a monolayer of surface micelles with a high degree of order, evenly distributed gold nanoparticles have been produced on a surface. As a result, uniformly distributed, high-density silicon nanowires have been generated. The process described herein is fully compatible with existing semiconductor processing techniques and can be readily integrated into device fabrication.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical way to increase the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subregions thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subregions so that each may be colored independently of the others. The algorithm uses special estimates to check the correctness of a partition, so that the corresponding outlines are divided into horizontal strips. A method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
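The strip decomposition can be mimicked in miniature. The sketch below uses equal-height strips on a toy triangular "glyph" in place of the paper's equal-area decomposition, and threads in place of networked processors; all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def inside_triangle(x, y):
    """Toy 'glyph': right triangle with vertices (0,0), (100,0), (0,100)."""
    return x >= 0 and y >= 0 and x + y <= 100

def fill_strip(y_range, width=100):
    """Sequentially fill one horizontal strip; returns the covered pixel count."""
    y0, y1 = y_range
    return sum(1 for y in range(y0, y1) for x in range(width)
               if inside_triangle(x, y))

def render_parallel(n_workers=4, height=100):
    """Cut the glyph into strips and fill them concurrently (first approach:
    one coordinator decomposes, workers render)."""
    bounds = [(i * height // n_workers, (i + 1) * height // n_workers)
              for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(fill_strip, bounds))

print(render_parallel())  # same total as filling the glyph sequentially
```

Because each strip is filled independently, the partitioned result must equal a single sequential fill; the paper's equal-area (rather than equal-height) cuts serve to balance the load across processors.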
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van de Wiele, Ben; Fin, Samuele; Pancaldi, Matteo
2016-05-28
Various proposals for future magnetic memories, data processing devices, and sensors rely on precise control of the magnetization ground state and magnetization reversal process in periodically patterned media. In finite dot arrays, such control is hampered by the magnetostatic interactions between the nanomagnets, leading to non-uniform magnetization state distributions throughout the sample while reversing. In this paper, we evidence how, during reversal, typical geometric arrangements of dots in an identical magnetization state appear that originate in the dominance of either Global Configurational Anisotropy or Nearest-Neighbor Magnetostatic interactions, which depends on the fields at which the magnetization reversal sets in. Based on our findings, we propose design rules to obtain uniform magnetization state distributions throughout the array, and also suggest future research directions to achieve non-uniform state distributions of interest, e.g., when aiming at guiding spin-wave edge modes through dot arrays. Our insights are based on Magneto-Optical Kerr Effect and Magnetic Force Microscopy measurements as well as extensive micromagnetic simulations.
Source of Global Scale Variations in the Midday Vertical Content of Ionospheric Metal Ions
NASA Technical Reports Server (NTRS)
Joiner, J.; Grebowsky, J. M.; Pesnell, W. D.; Aikin, A. C.; Goldberg, Richard A.
1999-01-01
An analysis of long-baseline NIMBUS 7 SBUV (Solar Backscatter UV Spectrometer) observations of the latitudinal variation of the noontime vertical Mg+ content above approximately 70 km has revealed seasonal, solar activity and magnetic activity dependencies in the Mg+ content. The distributions were categorized in terms of magnetic coordinates partially because the transport processes lifting metallic ions from the main meteor ionization layer below 100 km up into the F-region and down again are controlled by electrodynamical processes. Alternatively, the Nimbus Mg+ distributions may simply be a result of ion/neutral chemistry changes resulting from atmospheric changes and not dynamics. In such a case, magnetic control would not dominate the distributions. Using in situ satellite measurements of metal ions from the Atmosphere Explorer satellites in the region above the main meteor layer, and published sounding-rocket measurements of the main metallic ion layers, the effects of dynamics on the vertical content are delineated. The consequences of atmospheric changes on the vertical content are explored by separating the Nimbus measurements in a geodetic frame of reference.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos... Pennsylvania Ave., NW., Washington, DC 20460, ATTENTION: Asbestos Exemption. For information regarding the...
21 CFR 111.315 - What are the requirements for laboratory control processes?
Code of Federal Regulations, 2010 CFR
2010-04-01
... specifications; (b) Use of sampling plans for obtaining representative samples, in accordance with subpart E of... for distribution rather than for return to the supplier); and (5) Packaged and labeled dietary...
A Critical Review of Options for Tool and Workpiece Sensing
1989-06-02
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George
This paper describes one such reference process that can be deployed to provide continuous, automated, condition-based maintenance management for buildings that have BIM, a building automation system (BAS) and a computerized maintenance management system (CMMS). The process can be deployed using an open-source transactional network platform, VOLTTRON™, designed for distributed sensing and controls, which supports both energy efficiency and grid services.
Optimum target sizes for a sequential sawing process
H. Dean Claxton
1972-01-01
A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...
Jill A. Hoff; Ned B. Klopfenstein; Jonalea R. Tonn; Geral I. McDonald; Paul J. Zambino; Jack D. Rogers; Tobin L. Peever; Lori M. Carris
2004-01-01
Interactions between fungi and woody roots may be critical factors that influence diverse forest ecosystems processes, such as wood decay (nutrient recycling); root diseases and their biological control; and endophytic, epiphytic, and mycorrhizal symbioses. However, few studies have characterized the diversity and the spatial and temporal distribution of woody root-...
Modeling, simulation, and control of an extraterrestrial oxygen production plant
NASA Technical Reports Server (NTRS)
Schooley, L.; Cellier, F.; Zeigler, B.; Doser, A.; Farrenkopf, G.
1991-01-01
The immediate objective is the development of a new methodology for simulation of process plants used to produce oxygen and/or other useful materials from local planetary resources. Computer communication, artificial intelligence, smart sensors, and distributed control algorithms are being developed and implemented so that the simulation or an actual plant can be controlled from a remote location. The ultimate result of this research will provide the capability for teleoperation of such process plants which may be located on Mars, Luna, an asteroid, or other objects in space. A very useful near-term result will be the creation of an interactive design tool, which can be used to create and optimize the process/plant design and the control strategy. This will also provide a vivid, graphic demonstration mechanism to convey the results of other researchers to the sponsor.
Harness That S.O.B.: Distributing Remote Sensing Analysis in a Small Office/Business
NASA Astrophysics Data System (ADS)
Kramer, J.; Combe, J.; McCord, T. B.
2009-12-01
Researchers in a small office/business (SOB) operate with limited funding, equipment, and software availability. To mitigate these issues, we developed a distributed computing framework that: 1) leverages open source software to implement functionality otherwise reliant on proprietary software and 2) harnesses the unused power of (semi-)idle office computers with mixed operating systems (OSes). This abstract outlines some reasons for the effort, its conceptual basis and implementation, and provides brief speedup results. The Multiple-Endmember Linear Spectral Unmixing Model (MELSUM)1 processes remote-sensing (hyper-)spectral images. The algorithm is computationally expensive, sometimes taking a full week or more for a 1 million pixel/100 wavelength image. Each pixel is analyzed independently, so a large benefit can be gained from parallel processing techniques. Job concurrency is limited by the number of active processing units. MELSUM was originally written in the Interactive Data Language (IDL). Despite its multi-threading capabilities, an IDL instance executes on a single machine, and so concurrency is limited by the machine's number of central processing units (CPUs). Network distribution can access more CPUs to provide a greater speedup, while also taking advantage of (often) underutilized extant equipment. Appropriately integrating open source software magnifies the impact by avoiding the purchase of additional licenses. Our method of distribution breaks into four conceptual parts: 1) the top- or task-level user interface; 2) a mid-level program that manages hosts and jobs, called the distribution server; 3) a low-level executable for individual pixel calculations; and 4) a control program to synchronize sequential sub-tasks. Each part is a separate OS process, passing information via shell commands and/or temporary files.
While the control and low-level executables are short-lived, the top-level program and distribution server run (at least) for the entirety of a task. While any language that supports "spawning" of OS processes can serve as the top-level interface, our solution, d-MELSUM, has been integrated with the IDL code. Doing so extracts the core calculation from IDL, but otherwise preserves IDL features and functionality. The distribution server is an extension of ADE2 mobile robot software, written in Java. Network connections rely on a secure shell (SSH) implementation, whether natively available (e.g., Linux or OS X) or user installed (e.g., OpenSSH available via Cygwin on Windows). Both the low-level and control programs are relatively small C++ programs (~54K, or 1500 lines, total) that were developed in-house and use GNU's g++ compiler. The low-level code also relies on Linear Algebra PACKage (LAPACK) libraries for pixel calculations. Although performance depends to some degree on data size, CPU speed, and network communication rate and latency, results have generally demonstrated a speedup roughly proportional to the number of open connections (one per CPU). For example, the task mentioned above requiring a week to process took 18 hours with d-MELSUM, using 10 CPUs on 2 computers. 1 J.-Ph Combe, et al., PSS 56, 2008. 2 J. Kramer and M. Scheutz, IROS2006, 2006.
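The pixel-independent, pool-of-workers pattern described above can be sketched in Python. This is a hypothetical stand-in for the IDL/C++ implementation: the function names and the plain least-squares step are illustrative, not the actual d-MELSUM interface.

```python
# Hypothetical sketch of pixel-parallel unmixing: each pixel's calculation
# is independent, so chunks of pixels can be dispatched to worker processes.
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def unmix_pixel(spectrum, endmembers):
    """Stand-in for the per-pixel unmixing step (plain least squares here)."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, spectrum, rcond=None)
    return coeffs

def unmix_chunk(args):
    """Process one chunk of pixels in a worker process."""
    chunk, endmembers = args
    return np.array([unmix_pixel(px, endmembers) for px in chunk])

def unmix_image(pixels, endmembers, workers=4):
    """Split the image into chunks and unmix them concurrently."""
    chunks = np.array_split(pixels, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(unmix_chunk, [(c, endmembers) for c in chunks])
    return np.vstack(list(results))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    endmembers = rng.random((3, 100))    # 3 endmembers, 100 wavelengths
    abundances = rng.random((1000, 3))
    pixels = abundances @ endmembers     # synthetic 1000-pixel "image"
    est = unmix_image(pixels, endmembers)
    print(np.allclose(est, abundances, atol=1e-8))
```

The real system crosses machine boundaries over SSH rather than spawning local processes, but the division of labor (a dispatcher handing independent pixel jobs to workers) is the same.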
A Wireless Sensor Network approach for distributed in-line chemical analysis of water.
Capella, J V; Bonastre, A; Ors, R; Peris, M
2010-03-15
In this work we propose the implementation of a distributed system based on a Wireless Sensor Network for the control of a chemical analysis system for fresh water. This implementation is presented by describing the nodes that form the distributed system, the wireless communication system, control strategies, and so on. Nitrate, ammonium, and chloride are measured in-line using appropriate ion selective electrodes (ISEs), and the results obtained are compared with those provided by the corresponding reference methods. Recovery analyses with ISEs and standard methods, study of interferences, and evaluation of major sensor features have also been carried out. The communication among the nodes that form the distributed system is implemented by means of proprietary wireless networks and secondary data transmission services (GSM or GPRS) provided by a mobile telephone operator. The information is processed, integrated, and stored in a control center. These data can be retrieved through the Internet to monitor the real-time system status and its evolution. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Process for anodizing a robotic device
Townsend, William T. [Weston, MA]
2011-11-08
A robotic device has a base and at least one finger having at least two links that are connected in series on rotary joints with at least two degrees of freedom. A brushless motor and an associated controller are located at each joint to produce a rotational movement of a link. Wires for electrical power and communication serially connect the controllers in a distributed control network. A network operating controller coordinates the operation of the network, including power distribution. At least one, but more typically two to five, wires interconnect all the controllers through one or more joints. Motor sensors and external world sensors monitor operating parameters of the robotic hand. The electrical signal output of the sensors can be input anywhere on the distributed control network. V-grooves on the robotic hand locate objects precisely and assist in gripping. The hand is sealed, immersible and has electrical connections through the rotary joints for anodizing in a single dunk without masking. In various forms, this intelligent, self-contained, dexterous hand, or combinations of such hands, can perform a wide variety of object gripping and manipulating tasks, as well as locomotion and combinations of locomotion and gripping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Ankur; Kushner, Mark J.; Iowa State University, Department of Electrical and Computer Engineering, 104 Marston Hall, Ames, Iowa 50011-2151
2005-09-15
The distributions of ion energies incident on the wafer significantly influence feature profiles and selectivity during plasma etching. Control of ion energies is typically obtained by varying the amplitude or frequency of a radio frequency sinusoidal bias voltage applied to the substrate. The resulting ion energy distribution (IED), though, is generally broad. Controlling the width and shape of the IED can potentially improve etch selectivity by distinguishing between threshold energies of surface processes. In this article, control of the IED was computationally investigated by applying a tailored, nonsinusoidal bias waveform to the substrate of an inductively coupled plasma. The waveform we investigated, a quasi-dc negative bias having a short positive pulse each cycle, produced a narrow IED whose width was controllable based on the length of the positive spike and frequency. We found that the selectivity between etching Si and SiO2 in fluorocarbon plasmas could be controlled by adjusting the width and energy of the IED. Control of the energy of a narrow IED enables etching recipes that transition between speed and selectivity without change of gas mixture.
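As a toy illustration of such a tailored waveform, a quasi-dc negative bias with one short positive pulse per cycle can be generated as below. The voltage levels, frequency, and duty fraction are invented values, not the ones used in the cited simulations.

```python
# Sketch of a quasi-dc negative bias with a short positive spike each cycle.
import numpy as np

def tailored_bias(t, freq=10e6, v_dc=-250.0, v_spike=50.0, duty=0.1):
    """Return the bias voltage at times t: positive spike for the first
    `duty` fraction of each RF cycle, quasi-dc negative level otherwise."""
    phase = (t * freq) % 1.0              # position within the cycle, in [0, 1)
    return np.where(phase < duty, v_spike, v_dc)

# Sample two cycles of a 10 MHz waveform.
t = np.linspace(0.0, 2e-7, 2000, endpoint=False)
v = tailored_bias(t)
frac_positive = float(np.mean(v > 0))     # fraction of the cycle spent positive
```

Narrowing `duty` shortens the positive pulse; in the article's picture, pulse length and frequency are the knobs controlling the width of the resulting IED.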
Power System Information Delivering System Based on Distributed Object
NASA Astrophysics Data System (ADS)
Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji
In recent years, progress in computer performance and in computer networking and distributed information processing technology has been remarkable. Moreover, deregulation is starting and will spread in the electric power industry in Japan. Consequently, power suppliers are required to deliver low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system which monitors failures of the power transmission and distribution systems in a geographic-information-integrated way.
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs are ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large-scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations on their scaling. A base model simulation characterized these processes at 10-m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - was independently degraded to 1-km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
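The degradation step described above (a fine-resolution forcing field replaced by coarser block means) can be sketched as follows. Grid sizes here are toy values; the study's 10 m to 1 km aggregation would correspond to `factor=100`.

```python
# Sketch of degrading a fine forcing grid to a coarser resolution:
# block-average, then broadcast each block mean back onto the fine grid
# so every fine cell inside a block sees the same coarse forcing value.
import numpy as np

def degrade(field, factor):
    """Replace each factor x factor block of `field` with its mean."""
    ny, nx = field.shape
    coarse = field.reshape(ny // factor, factor,
                           nx // factor, factor).mean(axis=(1, 3))
    return np.kron(coarse, np.ones((factor, factor)))

fine = np.arange(16.0).reshape(4, 4)   # toy 4x4 fine-resolution field
coarse_forcing = degrade(fine, 2)      # one value per 2x2 block
```

Because the block mean conserves the domain average, any melt sensitivity to the degraded forcing reflects lost spatial variability rather than a bias in the mean.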
Wu, Zhijin; Liu, Dongmei; Sui, Yunxia
2008-02-01
The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations so true positives can be found through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
Workplace exposure to nanoparticles from gas metal arc welding process
NASA Astrophysics Data System (ADS)
Zhang, Meibian; Jian, Le; Bin, Pingfan; Xing, Mingluan; Lou, Jianlin; Cong, Liming; Zou, Hua
2013-11-01
Workplace exposure to nanoparticles from the gas metal arc welding (GMAW) process in an automobile manufacturing factory was investigated using a combination of multiple metrics and a comparison with background particles. The number concentration (NC), lung-deposited surface area concentration (SAC), estimated SAC, and mass concentration (MC) of nanoparticles produced from the GMAW process were significantly higher than those of background particles before welding (P < 0.01). A bimodal size distribution by mass for welding particles, with two peak values (i.e., 10,000-18,000 and 320-560 nm), and a unimodal size distribution by number, with a 190.7-nm mode size or 154.9-nm geometric size, were observed. Nanoparticles by number comprised 60.7 % of particles, whereas nanoparticles by mass accounted for only 18.2 % of the total particles. The morphology of welding particles was dominated by the formation of chain-like agglomerates of primary particles. The metal composition of these welding particles consisted primarily of Fe, Mn, and Zn. The size distribution, morphology, and elemental compositions of welding particles were significantly different from background particles. Working activities, sampling distances from the source, air velocity, engineering control measures, and background particles in working places had significant influences on airborne nanoparticle concentrations. In addition, SAC showed a high correlation with NC and a relatively low correlation with MC. These findings indicate that the GMAW process is able to generate significant levels of nanoparticles. It is recommended that a combination of multiple metrics be measured as part of a well-designed sampling strategy for airborne nanoparticles.
Key exposure factors, such as particle agglomeration/aggregation, background particles, working activities, temporal and spatial distributions of the particles, air velocity, engineering control measures, should be investigated when measuring workplace exposure to nanoparticles.
Asymptotically suboptimal control of weakly interconnected dynamical systems
NASA Astrophysics Data System (ADS)
Dmitruk, N. M.; Kalinin, A. I.
2016-10-01
Optimal control problems are considered for a group of systems with weak dynamical interconnections between its constituent subsystems. A method for decentralized control is proposed which distributes the control actions among several controllers, each calculating control inputs in real time only for its own subsystem based on the solution of a local optimal control problem. The local problem is solved by asymptotic methods that represent the weak interconnection by a small parameter. Combining decentralized control with asymptotic methods significantly reduces the dimension of the problems that have to be solved in the course of the control process.
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
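As a toy numerical illustration (not the paper's derivation; the increment model, the serial-correlation scheme, and all sizes are invented), outcomes built as products of autocorrelated positive increments can be checked against the Weibull form with the usual log(-log) probability plot, whose slope estimates the Weibull exponential parameter:

```python
# Sketch: multiplicative outcomes from serially correlated positive
# increments, then a Weibull shape estimate from the empirical CDF.
import numpy as np

rng = np.random.default_rng(7)

def multiplicative_outcomes(n, k=20, rho=0.6):
    """Products of k positive increments with simple serial correlation."""
    u = rng.exponential(size=(n, k))
    for j in range(1, k):                    # mix each increment with its predecessor
        u[:, j] = rho * u[:, j - 1] + (1 - rho) * u[:, j]
    return np.prod(u, axis=1) ** (1.0 / k)   # geometric-mean scale

x = np.sort(multiplicative_outcomes(5000))
F = (np.arange(1, x.size + 1) - 0.5) / x.size   # empirical CDF (midpoint rule)
# Weibull: log(-log(1 - F)) is linear in log(x), with slope = shape parameter.
shape_hat, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
```

A straight probability plot (and a stable fitted slope) is the informal check; the paper's argument for why such processes converge to Weibull rests on the autocorrelation and entropy considerations summarized above.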
Distributed Visualization Project
NASA Technical Reports Server (NTRS)
Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca
2016-01-01
Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.
P300 event-related potentials in children with dyslexia.
Papagiannopoulou, Eleni A; Lagopoulos, Jim
2017-04-01
To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in (i) P300 latency, globally, but greatest in frontal brain regions and (ii) decreased P300 amplitude confined to the central brain regions (Fig. 1). These findings reflect abnormalities associated with a diminished capacity to process mental workload as well as delayed processing of this information in children with dyslexia. Furthermore, the topographical distribution of these findings suggests a distinct spatial distribution for the observed P300 abnormalities. This information may be useful in future therapeutic or brain stimulation intervention trials.
Tyagi, Himanshu; Kushwaha, Ajay; Kumar, Anshuman; Aslam, Mohammed
2016-12-01
The synthesis of gold nanoparticles using the citrate reduction process has been revisited. A simplified room temperature approach to the standard Turkevich synthesis is employed to obtain fairly monodisperse gold nanoparticles. The role of initial pH alongside the concentration ratio of reactants is explored for the size control of Au nanoparticles. The particle size distribution has been investigated using UV-vis spectroscopy and transmission electron microscopy (TEM). At the optimal pH of 5, the gold nanoparticles obtained are highly monodisperse, spherical in shape, and have a narrower size distribution (sharp surface plasmon at 520 nm). For other pH conditions, particles are non-uniform and polydisperse, showing a red-shift in the plasmon peak due to aggregation and a large particle size distribution. The room temperature approach results in a highly stable "colloidal" suspension of gold nanoparticles. The stability test through absorption spectroscopy indicates no sign of aggregation for a month. The rate of reduction of auric ionic species by citrate ions is determined via UV absorbance studies. The size of nanoparticles under various conditions is thus predicted using a theoretical model that incorporates nucleation, growth, and aggregation processes. The faster rate of reduction yields a better size distribution for optimized pH and reactant concentrations. The model involves solving a population balance equation for the continuously evolving particle size distribution by discretization techniques. The particle sizes estimated from the simulations (13 to 25 nm) are close to the experimental ones (10 to 32 nm) and corroborate the similarity of reaction processes at 300 and 373 K (classical Turkevich reaction). Thus, substitution of the experimentally measured rate of disappearance of auric ionic species into the theoretical model enables us to capture the unusual experimental observations.
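A heavily simplified discretized population balance can illustrate the numerical approach. This sketch covers only nucleation and upwind growth with invented rates; the cited model also includes aggregation and the measured reduction kinetics.

```python
# Sketch: march a 1-D particle size distribution forward in time.
# Nucleation feeds the smallest size bin at rate J, while growth G
# advects particles up the size axis (first-order upwind scheme).
import numpy as np

def evolve_psd(n_bins=30, steps=500, dt=0.01, dx=1.0, J=5.0, G=2.0):
    """Return the number density per size bin after `steps` time steps."""
    n = np.zeros(n_bins)
    for _ in range(steps):
        dn = np.empty_like(n)
        dn[0] = (J - G * n[0]) / dx        # nucleation source minus growth outflow
        dn[1:] = G * (n[:-1] - n[1:]) / dx  # upwind growth flux between bins
        n += dt * dn
    return n

psd = evolve_psd()
```

With these rates the populated bins relax toward J/G while the growth front is still moving up the size axis, mimicking a distribution that is evolving toward its steady shape.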
Formation Flying With Decentralized Control in Libration Point Orbits
NASA Technical Reports Server (NTRS)
Folta, David; Carpenter, J. Russell; Wagner, Christoph
2000-01-01
A decentralized control framework is investigated for applicability of formation flying control in libration orbits. The decentralized approach, being non-hierarchical, processes only direct measurement data, in parallel with the other spacecraft. Control is accomplished via linearization about a reference libration orbit with standard control using a Linear Quadratic Regulator (LQR) or the GSFC control algorithm. Both are linearized about the current state estimate as with the extended Kalman filter. Based on this preliminary work, the decentralized approach appears to be feasible for upcoming libration missions using distributed spacecraft.
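A minimal sketch of the LQR piece is given below. The plant is an assumed discrete-time double integrator, not the cited libration-orbit dynamics; in the decentralized scheme each spacecraft would apply such a gain to its own state estimate.

```python
# Sketch: LQR gain for a toy double integrator via Riccati iteration.
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K from fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])       # position/velocity propagation
B = np.array([[0.5 * dt**2], [dt]])          # acceleration input
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))

# The closed loop A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The control applied is u = -K (x - x_ref), with x the current state estimate relative to the reference orbit, matching the linearize-about-the-estimate structure described in the abstract.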
Lévy-Student distributions for halos in accelerator beams.
Cufaro Petroni, Nicola; De Martino, Salvatore; De Siena, Silvio; Illuminati, Fabrizio
2005-12-01
We describe the transverse beam distribution in particle accelerators within the controlled, stochastic dynamical scheme of stochastic mechanics (SM) which produces time reversal invariant diffusion processes. This leads to a linearized theory summarized in a Schrödinger-like (SL) equation. The space charge effects have been introduced in recent papers by coupling this SL equation with the Maxwell equations. We analyze the space-charge effects to understand how the dynamics produces the actual beam distributions, and in particular we show how the stationary, self-consistent solutions are related to the (external and space-charge) potentials both when we suppose that the external field is harmonic (constant focusing), and when we a priori prescribe the shape of the stationary solution. We then proceed to discuss a few other ideas by introducing generalized Student distributions, namely, non-Gaussian, Lévy infinitely divisible (but not stable) distributions. We will discuss this idea from two different standpoints: (a) first by supposing that the stationary distribution of our (Wiener powered) SM model is a Student distribution; (b) by supposing that our model is based on a (non-Gaussian) Lévy process whose increments are Student distributed. We show that in the case (a) the longer tails of the power decay of the Student laws and in the case (b) the discontinuities of the Lévy-Student process can well account for the rare escape of particles from the beam core, and hence for the formation of a halo in intense beams.
NASA Astrophysics Data System (ADS)
Gibson, Wayne H.; Levesque, Daniel
2000-03-01
This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantages. Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphics interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs by fast and reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimizing production flow, and faster release of sterilized products to market. There is a trend in the manufacturing sector towards total automation using "predictive process control". Real-time verification of process parameters "on-the-run" allows control parameters to be adjusted appropriately, before the process strays out of limits. Applying this technology to the gamma radiation process, control will be based on monitoring the key parameters such as time, and making adjustments during the process to optimize quality and throughput. Dosimetry results will be used as a quality control measurement rather than as a final monitor for the release of the product. Results are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process control system utilizing responsive control, feedback and verification will not only increase productivity and process efficiency, but can also result in operating within tighter dose control set points.
NASA Astrophysics Data System (ADS)
Braenzel, J.; Barriga-Carrasco, M. D.; Morales, R.; Schnürer, M.
2018-05-01
We investigate, both experimentally and theoretically, how the spectral distribution of laser accelerated carbon ions can be filtered by charge exchange processes in a double foil target setup. Carbon ions at multiple charge states with an initially wide kinetic energy spectrum, from 0.1 to 18 MeV, were detected with a remarkably narrow spectral bandwidth after they had passed through an ultrathin and partially ionized foil. With our theoretical calculations, we demonstrate that this process is a consequence of the evolution of the carbon ion charge states in the second foil. We calculated the resulting spectral distribution separately for each ion species by solving the rate equations for electron loss and capture processes within a collisional radiative model. We determine how the efficiency of charge transfer processes can be manipulated by controlling the ionization degree of the transfer matter.
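The rate-equation picture can be sketched for a short charge-state chain. The rates, the three-state truncation, and the forward-Euler step are invented stand-ins for the cited collisional radiative calculation.

```python
# Sketch: charge-state populations N_q exchanging via electron-loss
# (q -> q+1) and capture (q -> q-1) rates, dN/dt = M N.
import numpy as np

def evolve(N0, loss, capture, dt=1e-4, steps=60000):
    """Integrate the linear rate equations with forward Euler."""
    n = len(N0)
    M = np.zeros((n, n))
    for q in range(n):
        if q + 1 < n:                      # electron loss: q -> q+1
            M[q + 1, q] += loss[q]
            M[q, q] -= loss[q]
        if q - 1 >= 0:                     # electron capture: q -> q-1
            M[q - 1, q] += capture[q]
            M[q, q] -= capture[q]
    N = np.array(N0, dtype=float)
    for _ in range(steps):                 # small step keeps Euler stable here
        N = N + dt * (M @ N)
    return N

loss = [3.0, 1.0, 0.0]                     # invented rates for three charge states
capture = [0.0, 0.5, 2.0]
N_final = evolve([1.0, 0.0, 0.0], loss, capture)
```

The columns of M sum to zero, so the total population is conserved; for these invented rates the chain relaxes to the detailed-balance fractions (0.1, 0.6, 0.3), which is the kind of charge-state redistribution that reshapes the measured spectrum in the experiment.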
Optical fiber sensors and signal processing for intelligent structure monitoring
NASA Technical Reports Server (NTRS)
Rogowski, Robert; Claus, R. O.; Lindner, D. K.; Thomas, Daniel; Cox, Dave
1988-01-01
The analytic and experimental performance of optical fiber sensors for the control of vibration of large aerospace and other structures is investigated. In particular, modal domain optical fiber sensor systems are being studied due to their apparent potential as distributed, low-mass sensors of vibration over appropriate ranges of both low frequency and low amplitude displacements. Progress during the past three months is outlined. Progress since September is divided into work in the areas of experimental hardware development, analytical analysis, control design, and sensor development. During the next six months, tests of a prototype closed-loop control system for a beam are planned which will demonstrate the solution of several optical fiber instrumentation device problems, the performance of the control system theory which incorporates the model of the modal domain sensor, and the potential for distributed control which this sensor approach offers.
Research and Development in Very Long Baseline Interferometry (VLBI)
NASA Technical Reports Server (NTRS)
Himwich, William E.
2004-01-01
Contents include the following: 1. Observation coordination. 2. Data acquisition system control software. 3. Station support. 4. Correlation, data processing, and analysis. 5. Data distribution and archiving. 6. Technique improvement and research. 7. Computer support.
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Porous Surfaces for... equivalent measurement of micrograms per 100 cm2. ...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Porous Surfaces for... equivalent measurement of micrograms per 100 cm2. ...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Porous Surfaces for... equivalent measurement of micrograms per 100 cm2. ...
40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling Non-Porous Surfaces for... equivalent measurement of micrograms per 100 cm2. ...
Infrared fiber optic temperature monitoring of biological tissues heated in a microwave oven
NASA Astrophysics Data System (ADS)
Belotserkovsky, Edward; Ashkenasy, Y.; Shenfeld, Ofer; Drizlikh, S.; Zur, Albert; Katzir, Abraham
1993-05-01
The heating of tissue by microwave radiation has attained a place of importance in various medical fields such as the treatment of malignancies, urinary retention, and hypothermia. Accurate temperature measurement in these treated tissues is important for treatment planning and for the control of the heating process. It is also important to be able to measure the spatial temperature distribution in the tissues because they are heated non-uniformly by the microwave radiation. Fiber optic radiometry makes possible accurate temperature measurement in the presence of microwave radiation and does not require contact with the tissue. Using an IR silver halide fiber optic radiometric temperature sensor, we obtained accurate temperature measurements of tissues heated by microwaves, enabling us to control the heating process in all regions of the tissue. We also performed temperature mapping of the heated tissues and demonstrated the non-uniform temperature distributions in them.
Protecting nonlocality of multipartite states by feed-forward control
NASA Astrophysics Data System (ADS)
Li, Xiao-Gang; Zou, Jian; Shao, Bin
2018-06-01
Nonlocality is a useful resource in quantum communication and quantum information processing. In practical quantum communication, multipartite entangled states must be distributed between different users in different places through a channel. However, the channel is inevitably disturbed by the environment during quantum state distribution, and the nonlocality of the states will then be weakened and even lost. In this paper, we use a feed-forward control scheme to protect the nonlocality of the Bell and GHZ states against dissipation. We find that this protection scheme is very effective: for the Bell state, we can increase the noise threshold from 0.5 to 0.98, and for the GHZ state from 0.29 to 0.96. We also find that entanglement is easier to protect than nonlocality. In our scheme, protecting entanglement is equivalent to protecting the state in the case of the Bell state, while protecting nonlocality is not.
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is lifted from its container in Kennedy Space Center’s Space Station Processing Facility (SSPF) before it is moved into its workstand, where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
1998-01-14
Workers in Kennedy Space Center’s Space Station Processing Facility (SSPF) observe the Photovoltaic Module 1 Integrated Equipment Assembly (IEA) as it moves past them on its way to its workstand, where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is moved past a Pressurized Mating Adapter in Kennedy Space Center’s Space Station Processing Facility (SSPF) toward the workstand where it will be processed for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the International Space Station. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
Distributed optical signal processing for microwave photonics subsystems.
Chew, Suen Xin; Nguyen, Linh; Yi, Xiaoke; Song, Shijie; Li, Liwei; Bian, Pengju; Minasian, Robert
2016-03-07
We propose and experimentally demonstrate a novel and practical microwave photonic system that is capable of executing cascaded signal processing functions comprising a microwave photonic bandpass filter and a phase shifter, while providing separate and independent control for each function. The experimental results demonstrate a single bandpass microwave photonic filter with a 3-dB bandwidth of 15 MHz and an out-of-band ratio of over 40 dB, together with a simultaneous RF phase tuning control of 0-215° with less than ± 3 dB filter shape variance.
Automatic pattern localization across layout database and photolithography mask
NASA Astrophysics Data System (ADS)
Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter
2016-03-01
Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls must be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but it is also mandatory to create the complete metrology job quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.
A new approach to process control using Instability Index
NASA Astrophysics Data System (ADS)
Weintraub, Jeffrey; Warrick, Scott
2016-03-01
The merits of a robust Statistical Process Control (SPC) methodology have long been established. In response to the numerous SPC rule combinations, processes, and the high cost of containment, the Instability Index (ISTAB) is presented as a tool for managing these complexities. ISTAB focuses limited resources on key issues and provides a window into the stability of manufacturing operations. ISTAB takes advantage of the statistical nature of processes by comparing the observed average run length (OARL) to the expected average run length (ARL), resulting in a gap value called the ISTAB index. The ISTAB index has three characteristic behaviors that are indicative of defects in an SPC instance. Case 1: the observed average run length is excessively long relative to expectation; ISTAB > 0 indicates the possibility that the limits are too wide. Case 2: the observed average run length is consistent with expectation; ISTAB near zero indicates that the process is stable. Case 3: the observed average run length is inordinately short relative to expectation; ISTAB < 0 indicates that the limits are too tight, the process is unstable, or both. The probability distribution of run length is the basis for establishing an ARL. We demonstrate that the geometric distribution is a good approximation to run length across a wide variety of rule sets. Excessively long run lengths are associated with one kind of defect in an SPC instance; inordinately short run lengths are associated with another. A sampling distribution is introduced as a way to quantify excessively long and inordinately short observed run lengths. This paper provides detailed guidance for action limits on these run lengths. ISTAB as a statistical method of review facilitates automated instability detection. This paper proposes a management system based on ISTAB as an enhancement to more traditional SPC approaches.
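The OARL-versus-ARL comparison can be sketched concretely. The abstract does not give the exact formula for the index, so the definition below (the log of the OARL/ARL ratio) is a hypothetical stand-in that reproduces the sign behavior of the three cases; the geometric run-length approximation gives ARL = 1 / (per-point signal probability).

```python
# Hedged sketch of an ISTAB-style index: log(OARL / ARL).
# The exact published definition may differ; the sign behavior matches the
# three cases above (> 0 limits too wide, ~ 0 stable, < 0 too tight/unstable).
import math

def expected_arl(signal_prob):
    """ARL under the geometric run-length approximation: 1 / P(signal per point)."""
    return 1.0 / signal_prob

def istab_index(run_lengths, signal_prob):
    oarl = sum(run_lengths) / len(run_lengths)   # observed average run length
    return math.log(oarl / expected_arl(signal_prob))

# A 3-sigma Shewhart chart on a stable process signals with p ~ 0.0027 per
# point, so ARL ~ 370. Runs averaging ~100 points give a negative index;
# runs averaging ~1000 points give a positive one.
print(istab_index([90, 110, 100], 0.0027))       # negative: too tight / unstable
print(istab_index([900, 1100, 1000], 0.0027))    # positive: limits possibly too wide
```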
Monitoring system and methods for a distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A monitoring system and methods are provided for a distributed and recoverable digital control system. The monitoring system generally comprises two independent monitoring planes within the control system. The first monitoring plane is internal to the computing units in the control system, and the second monitoring plane is external to the computing units. The internal first monitoring plane includes two in-line monitors. The first internal monitor is a self-checking, lock-step-processing monitor with integrated rapid recovery capability. The second internal monitor includes one or more reasonableness monitors, which compare actual effector position with commanded effector position. The external second monitor plane includes two monitors. The first external monitor includes a pre-recovery computing monitor, and the second external monitor includes a post recovery computing monitor. Various methods for implementing the monitoring functions are also disclosed.
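The reasonableness-monitor idea (comparing actual effector position with commanded effector position) can be sketched in a few lines. The tolerance and persistence count below are illustrative assumptions, not values from the patent, which does not specify them in this abstract.

```python
# Hedged sketch of a reasonableness monitor: flag a fault when the tracking
# error between commanded and actual effector position stays outside a
# tolerance for several consecutive samples. Thresholds are hypothetical.
def reasonableness_monitor(commanded, actual, tol=2.0, persistence=3):
    """Return True (fault) if |commanded - actual| > tol for
    `persistence` consecutive samples."""
    consecutive = 0
    for cmd, act in zip(commanded, actual):
        if abs(cmd - act) > tol:
            consecutive += 1
            if consecutive >= persistence:
                return True
        else:
            consecutive = 0                      # error back in band: reset
    return False

# A brief transient is tolerated; a sustained mismatch trips the monitor.
print(reasonableness_monitor([10] * 6, [10, 13, 10, 10, 10, 10]))  # False
print(reasonableness_monitor([10] * 6, [10, 13, 13, 13, 10, 10]))  # True
```

The persistence counter is the design choice of interest here: it filters out single-sample sensor noise while still catching a stuck or runaway effector quickly.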
Pareto-Zipf law in growing systems with multiplicative interactions
NASA Astrophysics Data System (ADS)
Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi
2018-06-01
Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.
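A loose interpretation of such a process can be simulated in a few lines. The abstract does not specify the interaction or the feedback rule, so the sketch below is hypothetical: agents are selected with probability proportional to x**w (with w simply held near 0, mimicking the "mild control"), and a selected pair exchanges a random multiplicative factor, conserving the product of their sizes.

```python
# Hypothetical sketch of a multiplicatively interacting process with weighted
# selection. Model details (interaction form, feedback on w) are assumptions,
# not the paper's specification.
import math
import random

def simulate(n_agents=200, steps=20000, w=0.05, seed=1):
    rng = random.Random(seed)
    x = [1.0] * n_agents
    for _ in range(steps):
        weights = [xi ** w for xi in x]          # selection weighted by size^w
        i, j = rng.choices(range(n_agents), weights=weights, k=2)
        if i == j:
            continue
        f = math.exp(rng.gauss(0.0, 0.3))        # random multiplicative transfer
        x[i] *= f
        x[j] /= f                                # product of the pair is conserved
    return sorted(x, reverse=True)

sizes = simulate()                               # rank-ordered sizes for inspection
```

Under moderate w the size distribution develops a heavy tail; a rank-size (Zipf) plot of `sizes` on log-log axes is the usual way to inspect it.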
Confessions of a robot lobotomist
NASA Technical Reports Server (NTRS)
Gottshall, R. Marc
1994-01-01
Since their inception, numerically controlled (NC) machining methods have been used throughout the aerospace industry to mill, drill, and turn complex shapes by sequentially stepping through motion programs. However, the recent demand for more precision, faster feeds, exotic sensors, and branching execution has existing computer numerical control (CNC) and distributed numerical control (DNC) systems running at maximum controller capacity. Typical disadvantages of current CNCs include fixed memory capacities, limited communication ports, and the use of multiple control languages. The need to tailor CNCs to meet specific applications, whether it be expanded memory, additional communications, or integrated vision, often requires replacing the original controller supplied with the commercial machine tool with a more powerful and capable system. This paper briefly describes the process and equipment requirements for new controllers and their evolutionary implementation in an aerospace environment. The process of controller retrofit with currently available machines is examined, along with several case studies and their computational and architectural implications.
An access control model with high security for distributed workflow and real-time application
NASA Astrophysics Data System (ADS)
Han, Ruo-Fei; Wang, Hou-Xiang
2007-11-01
The traditional mandatory access control (MAC) policy is regarded as a policy with strict regulation and poor flexibility. The security policy of MAC is so restrictive that few information systems would adopt it at the cost of usability, except in particular cases with high security requirements such as military or government applications. However, with the increasing requirement for flexibility, even some access control systems in military applications have switched to role-based access control (RBAC), which is well known for its flexibility. Though RBAC can meet the demands for flexibility, it is weak in dynamic authorization and consequently cannot fit well in workflow management systems. Task-role-based access control (T-RBAC) was introduced to solve this problem; it combines the advantages of RBAC with those of task-based access control (TBAC), which uses tasks to manage permissions dynamically. To satisfy the requirements of a system that is distributed, well defined by workflow processes, and time-critical, this paper analyzes the spirit of MAC and introduces it into an improved T&RBAC model based on T-RBAC. Finally, a conceptual task-role-based access control model with high security for distributed workflow and real-time applications (A_T&RBAC) is built, and its performance is briefly analyzed.
A structurally based analytic model for estimation of biomass and fuel loads of woodland trees
Robin J. Tausch
2009-01-01
Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...
CPU and GPU-based Numerical Simulations of Combustion Processes
2012-04-27
Distribution unlimited. UCLA MAE Research and Technology Review, April 27, 2012. Topics include magnetohydrodynamic (MHD) augmentation of pulse detonation rocket engines and the Pulse Detonation Rocket-Induced MHD Ejector (PDRIME), in which energy is extracted from the exhaust flow by an MHD generator and a seeded air stream is accelerated by an MHD accelerator for thrust enhancement and control. An alternative concept is a magnetic piston, in which MHD extracts energy during the PDE blowdown process.
Water mist injection in oil shale retorting
Galloway, T.R.; Lyczkowski, R.W.; Burnham, A.K.
1980-07-30
Water mist is utilized to control the maximum temperature in an oil shale retort during processing. A mist of water droplets is generated and entrained in the combustion-supporting gas flowing into the retort in order to distribute the liquid water droplets throughout the retort. The water droplets are vaporized in the retort, providing an efficient coolant for temperature control.
UMCS feasibility study for Fort George G. Meade. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
Fort George G. Meade selected eighty-three (83) buildings, from the approximately 1,500 buildings on the base to be included in the UMCS Feasibility Study. The purpose of the study is to evaluate the feasibility of replacing the existing analog based Energy Monitoring and Control System (EMCS) with a new distributed process Monitoring and Control System (UMCS).
UMCS feasibility study for Fort George G. Meade volume 1. Feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
Fort George G. Meade selected 83 buildings, from the approximately 1,500 buildings on the base to be included in the UMCS Feasibility Study. The purpose of the study is to evaluate the feasibility of replacing the existing analog based Energy Monitoring and Control System (EMCS) with a new distributed process Monitoring and Control System (UMCS).
ERIC Educational Resources Information Center
Harjunen, Elina
2012-01-01
In this theoretical paper the role of power in classroom interactions is examined in terms of a dominance continuum to advance a theoretical framework justifying the emergence of three ways of distributing power when it comes to dealing with the control over the teaching-studying-learning (TSL) "pattern of teacher domination," "pattern of…
Monitoring copper release in drinking water distribution systems.
d'Antonio, L; Fabbricino, M; Panico, A
2008-01-01
A new procedure, recently proposed for on-line monitoring of copper released from metal pipes in household plumbing systems for drinking water distribution during the development of corrosion processes, is tested experimentally. Experiments were carried out under controlled laboratory conditions, using synthetic water and varying the water alkalinity. The possibility of using the corrosion potential as a surrogate measure of copper concentration in stagnating water is shown, while verifying the effect of alkalinity on the development of passivation phenomena, which tend to protect the pipe from corrosion processes. Experimental data are discussed, highlighting the potential of the procedure and recognizing its limitations. Copyright IWA Publishing 2008.
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom
2014-01-01
This analysis is a survey of control center architectures of the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities for the phases of operations: (a) launch vehicles in the international community vary greatly in configuration and process; (b) each launch site has a unique processing flow based on its specific configurations; (c) launch and flight operations are managed through a set of control centers associated with each launch site, though flight operations may be managed from a different control center than the launch center; and (d) the engineering support centers are primarily located at the design center, with a small engineering support team at the launch site.
2003-08-09
VANDENBERG AIR FORCE BASE, CALIF. - Workers mate the Pegasus, with its cargo of the SciSat-1 payload, to the L-1011 carrier aircraft. The SciSat-1 weighs approximately 330 pounds and after launch will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-07-29
VANDENBERG AIR FORCE BASE, CALIF. - At Vandenberg AFB, Calif., a solar array is tested before being installed on the SciSat-1 spacecraft. The SciSat-1 weighs approximately 330 pounds and after launch will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-07-29
VANDENBERG AIR FORCE BASE, CALIF. - At Vandenberg AFB, Calif., a solar array is installed on the SciSat-1 spacecraft. The SciSat-1 weighs approximately 330 pounds and after launch will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
2003-08-09
VANDENBERG AIR FORCE BASE, CALIF. - The SciSat-1 payload and Pegasus launch vehicle are lifted and mated to the L-1011 carrier aircraft. The SciSat-1 weighs approximately 330 pounds and after launch will be placed in a 400-mile-high polar orbit to investigate processes that control the distribution of ozone in the upper atmosphere. The data from the satellite will provide Canadian and international scientists with improved measurements relating to global ozone processes and help policymakers assess existing environmental policy and develop protective measures for improving the health of our atmosphere, preventing further ozone depletion. The mission is designed to last two years.
Shahar, Nitzan; Meiran, Nachshon
2015-01-01
Few studies have addressed action control training. In the current study, participants were trained over 19 days in an adaptive training task that demanded constant switching, maintenance and updating of novel action rules. Participants completed an executive functions battery before and after training that estimated processing speed, working memory updating, set-shifting, response inhibition and fluid intelligence. Participants in the training group showed greater improvement than a no-contact control group in processing speed, indicated by reduced reaction times in speeded classification tasks. No other systematic group differences were found across the different pre-post measurements. Ex-Gaussian fitting of the reaction-time distribution revealed that the reaction time reduction observed among trained participants was restricted to the right tail of the distribution, previously shown to be related to working memory. Furthermore, training effects were only found in classification tasks that required participants to maintain novel stimulus-response rules in mind, supporting the notion that the training improved working memory abilities. Training benefits were maintained in a 10-month follow-up, indicating relatively long-lasting effects. The authors conclude that training improved action-related working memory abilities. PMID:25799443
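The ex-Gaussian analysis mentioned in this abstract decomposes a reaction-time distribution into a Gaussian component (mu, sigma) plus an exponential right tail (tau); a training effect confined to the right tail shows up mainly in tau. A minimal sketch on synthetic data, using scipy's `exponnorm` (parameterized by K = tau / sigma); the RT values are hypothetical:

```python
# Fit an ex-Gaussian to simulated reaction times (ms) and recover the
# exponential-tail parameter tau. Parameter values are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 450.0, 50.0, 150.0              # hypothetical ground truth
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rts)          # loc ~ mu, scale ~ sigma
tau_hat = K * scale                               # K = tau / sigma, so tau = K * sigma
```

Comparing `tau_hat` before and after training (with `loc` and `scale` roughly unchanged) is the pattern the study reports as a tail-specific speedup.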
Rouse, Adam G.
2016-01-01
Reaching and grasping typically are considered to be spatially separate processes that proceed concurrently in the arm and the hand, respectively. The proximal representation in the primary motor cortex (M1) controls the arm for reaching, while the distal representation controls the hand for grasping. Many studies of M1 activity therefore have focused either on reaching to various locations without grasping different objects, or else on grasping different objects all at the same location. Here, we recorded M1 neurons in the anterior bank and lip of the central sulcus as monkeys performed more naturalistic movements, reaching toward, grasping, and manipulating four different objects in up to eight different locations. We quantified the extent to which variation in firing rates depended on location, on object, and on their interaction—all as a function of time. Activity proceeded largely in two sequential phases: the first related predominantly to the location to which the upper extremity reached, and the second related to the object about to be grasped. Both phases involved activity distributed widely throughout the sampled territory, spanning both the proximal and the distal upper extremity representation in caudal M1. Our findings indicate that naturalistic reaching and grasping, rather than being spatially segregated processes that proceed concurrently, each are spatially distributed processes controlled by caudal M1 in large part sequentially. Rather than neuromuscular processes separated in space but not time, reaching and grasping are separated more in time than in space. SIGNIFICANCE STATEMENT Reaching and grasping typically are viewed as processes that proceed concurrently in the arm and hand, respectively. The arm region in the primary motor cortex (M1) is assumed to control reaching, while the hand region controls grasping. 
During naturalistic reach–grasp–manipulate movements, we found, however, that neuron activity proceeds largely in two sequential phases, each spanning both arm and hand representations in M1. The first phase is related predominantly to the reach location, and the second is related to the object about to be grasped. Our findings indicate that reaching and grasping are successive aspects of a single movement. Initially the arm and the hand both are projected toward the object's location, and later both are shaped to grasp and manipulate. PMID:27733614
Stein, Timo; Hebart, Martin N.; Sterzer, Philipp
2011-01-01
Until recently, it has been thought that under interocular suppression high-level visual processing is strongly inhibited if not abolished. With the development of continuous flash suppression (CFS), a variant of binocular rivalry, this notion has now been challenged by a number of reports showing that even high-level aspects of visual stimuli, such as familiarity, affect the time stimuli need to overcome CFS and emerge into awareness. In this “breaking continuous flash suppression” (b-CFS) paradigm, differential unconscious processing during suppression is inferred when (a) speeded detection responses to initially invisible stimuli differ, and (b) no comparable differences are found in non-rivalrous control conditions supposed to measure non-specific threshold differences between stimuli. The aim of the present study was to critically evaluate these assumptions. In six experiments we compared the detection of upright and inverted faces. We found that not only under CFS, but also in control conditions upright faces were detected faster and more accurately than inverted faces, although the effect was larger during CFS. However, reaction time (RT) distributions indicated critical differences between the CFS and the control condition. When RT distributions were matched, similar effect sizes were obtained in both conditions. Moreover, subjective ratings revealed that CFS and control conditions are not perceptually comparable. These findings cast doubt on the usefulness of non-rivalrous control conditions to rule out non-specific threshold differences as a cause of shorter detection latencies during CFS. Thus, at least in its present form, the b-CFS paradigm cannot provide unequivocal evidence for unconscious processing under interocular suppression. Nevertheless, our findings also demonstrate that the b-CFS paradigm can be fruitfully applied as a highly sensitive device to probe differences between stimuli in their potency to gain access to awareness. PMID:22194718
Security Implications of OPC, OLE, DCOM, and RPC in Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2006-01-01
OPC is a collection of software programming standards and interfaces used in the process control industry. It is intended to provide open connectivity and vendor equipment interoperability. The use of OPC technology simplifies the development of control systems that integrate components from multiple vendors and support multiple control protocols. OPC-compliant products are available from most control system vendors, and are widely used in the process control industry. OPC was originally known as OLE for Process Control; the first standards for OPC were based on underlying services in the Microsoft Windows computing environment. These underlying services (OLE [Object Linking and Embedding], DCOM [Distributed Component Object Model], and RPC [Remote Procedure Call]) have been the source of many severe security vulnerabilities. It is not feasible to automatically apply vendor patches and service packs to mitigate these vulnerabilities in a control systems environment. Control systems using the original OPC data access technology can thus inherit the vulnerabilities associated with these services. Current OPC standardization efforts are moving away from the original focus on Microsoft protocols, with a distinct trend toward web-based protocols that are independent of any particular operating system. However, the installed base of OPC equipment consists mainly of legacy implementations of the OLE for Process Control protocols.
Instrumentation complex for Langley Research Center's National Transonic Facility
NASA Technical Reports Server (NTRS)
Russell, C. H.; Bryant, C. S.
1977-01-01
The instrumentation discussed in the present paper was developed to ensure reliable operation for a 2.5-meter cryogenic high-Reynolds-number fan-driven transonic wind tunnel. It will incorporate four CPUs and associated analog and digital input/output equipment necessary for acquiring research data, controlling the tunnel parameters, and monitoring the process conditions. Connected in a multipoint distributed network, the CPUs will support database management and processing; research measurement data acquisition and display; process monitoring; and communication control. The design will allow essential processes to continue in the case of major hardware failures by switching input/output equipment to alternate CPUs and by eliminating nonessential functions. It will also permit software modularization by CPU activity and thereby reduce complexity and development time.
Method for controlling boiling point distribution of coal liquefaction oil product
Anderson, R.P.; Schmalzer, D.K.; Wright, C.H.
1982-12-21
The relative ratio of heavy distillate to light distillate produced in a coal liquefaction process is continuously controlled by automatically and continuously controlling the ratio of heavy distillate to light distillate in a liquid solvent used to form the feed slurry to the coal liquefaction zone, and varying the weight ratio of heavy distillate to light distillate in the liquid solvent inversely with respect to the desired weight ratio of heavy distillate to light distillate in the distillate fuel oil product. The concentration of light distillate and heavy distillate in the liquid solvent is controlled by recycling predetermined amounts of light distillate and heavy distillate for admixture with feed coal to the process in accordance with the foregoing relationships. 3 figs.
40 CFR 763.179 - Confidential business information claims.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements § 763.179 Confidential... asbestos on human health and the environment? If your answer is yes, explain. ...
40 CFR 761.363 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Applicability. 761.363 Section 761.363 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Double...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Background. 761.360 Section 761.360 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Double...
40 CFR 761.320 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Applicability. 761.320 Section 761.320 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Self...
Replication in Mobile Environments
2007-12-01
Data Replication Over Disadvantaged ... Communication, Information Processing, and Ergonomics (KIE). What is the problem? Data replication among distributed databases occurring over disadvantaged
43 CFR 2310.3-2 - Development and processing of the case file for submission to the Secretary.
Code of Federal Regulations, 2011 CFR
2011-10-01
... control, appropriation, use and distribution of water, or whether the withdrawal is intended to reserve... the identification of cultural resources prepared in accordance with the requirements of 36 CFR part...
Pawlowski, Andrzej; Guzman, Jose Luis; Rodríguez, Francisco; Berenguel, Manuel; Sánchez, José; Dormido, Sebastián
2009-01-01
Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, individual sensors are usually located at a fixed point selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from both safety and economic points of view. To address these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where low-frequency dynamic variables have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system reduces costs by minimizing wear and prolonging actuator life, while maintaining promising performance. Analysis and conclusions are given by means of simulation results. PMID:22389597
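The event-based actuation idea summarized in the abstract above can be sketched as a deadband (hysteresis) controller that only switches a relay-driven actuator when the measured variable leaves a band around the setpoint, thereby limiting commutations. This is an illustrative sketch under assumed names and thresholds, not the authors' implementation:

```python
# Hypothetical sketch of event-based relay actuation: the actuator is
# switched only when the measurement crosses a deadband around the
# setpoint, which keeps the number of commutations low.
def event_based_relay(setpoint, deadband):
    state = {"on": False, "switches": 0}

    def step(measurement):
        if measurement < setpoint - deadband and not state["on"]:
            state["on"] = True          # too cold: switch heater on
            state["switches"] += 1
        elif measurement > setpoint + deadband and state["on"]:
            state["on"] = False         # too warm: switch heater off
            state["switches"] += 1
        return state["on"]

    return step, state

heater, st = event_based_relay(setpoint=22.0, deadband=1.0)
temps = [20.0, 20.5, 21.5, 22.5, 23.5, 23.0, 22.8, 20.5]
outputs = [heater(t) for t in temps]
```

Eight samples produce only three switching events here, illustrating how the deadband suppresses chatter compared with a controller that acts on every sample.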
Real-time feedback control of twin-screw wet granulation based on image analysis.
Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György
2018-06-04
The present paper reports the first dynamic image analysis-based feedback control of a continuous twin-screw wet granulation process. Granulation of a blend of lactose and starch was selected as a model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with image analysis software developed by the authors. Validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by controlling the liquid feeding rate of the pump through a PC, based on the particle size results determined in real time. After the establishment of the feedback control, the system could correct different real-life disturbances, creating a Process Analytically Controlled Technology (PACT) that guarantees real-time monitoring and control of granule quality. In the event of changes or bad tendencies in the particle size, the system can automatically compensate for the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important in the case of continuous pharmaceutical technologies.
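The control loop described above (liquid feed rate adjusted from a measured particle size) can be sketched, at its simplest, as a proportional update with clamped output. The gain, limits, and target value below are illustrative assumptions, not the paper's tuned controller:

```python
# Illustrative proportional controller: nudge the granulation liquid
# feed rate toward a target median particle size measured by image
# analysis. Gain and rate limits are assumed, not from the paper.
def update_feed_rate(rate, measured_size_um, target_size_um,
                     gain=0.01, lo=0.0, hi=10.0):
    # Granules larger than target -> reduce liquid feed; smaller -> increase.
    error = target_size_um - measured_size_um
    new_rate = rate + gain * error
    return min(hi, max(lo, new_rate))    # clamp to pump limits

rate = 5.0
for size in [480, 490, 505, 500]:        # simulated median sizes, micrometres
    rate = update_feed_rate(rate, size, target_size_um=500)
```

In practice such a loop would run at the camera's measurement rate, with the PC writing the clamped rate to the pump each cycle.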
NASA Technical Reports Server (NTRS)
Bennington, Donald R. (Inventor); Crawford, Daniel J. (Inventor)
1990-01-01
The invention is a clock for synchronizing operations within a high-speed, distributed data processing network. The clock is actually a distributed system comprising a central clock and multiple site clock interface units (SCIUs) which are connected by means of a fiber optic star network and which operate under control of separate clock software. The presently preferred embodiment is a part of the flight simulation system now in current use at the NASA Langley Research Center.
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
ERDC/TEC TR-08-2. Abstract: Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require ... (distance) for all five lines of control points. Figure captions: distributions of errors for DSM (original data, blue circles) versus DTM (bare earth, processed by Intermap; red squares), including line No. 729.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlov, D. A.; Bidus, N. V.; Bobrov, A. I., E-mail: bobrov@phys.unn.ru
2015-01-15
The distribution of elastic strains in a system consisting of a quantum-dot layer and a buried GaAs(x)P(1-x) layer is studied using geometric phase analysis. A hypothesis is offered concerning the possibility of controlling the process of the formation of InAs quantum dots in a GaAs matrix using a local isovalent phosphorus impurity.
Modelling and analysis of solar cell efficiency distributions
NASA Astrophysics Data System (ADS)
Wasmer, Sven; Greulich, Johannes
2017-08-01
We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful when ramping up production, but can also be applied to enhance established manufacturing.
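The Monte Carlo step described above can be sketched as follows: input parameters are drawn from assumed distributions and pushed through a surrogate model to produce a distribution of cell efficiencies. The linear surrogate, parameter names, and distributions below are fabricated for illustration; the paper fits its metamodel to device simulations:

```python
# Toy Monte Carlo over a surrogate model: sample process parameters,
# evaluate a (here fabricated, linear) metamodel, and summarize the
# resulting efficiency distribution.
import random

random.seed(0)

def metamodel(rsheet, tau_bulk):
    # Hypothetical linear surrogate, NOT the paper's fitted metamodel.
    return 17.5 + 0.002 * (tau_bulk - 200) - 0.004 * (rsheet - 90)

samples = []
for _ in range(10_000):
    rsheet = random.gauss(90, 5)       # emitter sheet resistance, ohm/sq (assumed)
    tau_bulk = random.gauss(200, 30)   # bulk lifetime, microseconds (assumed)
    samples.append(metamodel(rsheet, tau_bulk))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

A variance-based sensitivity analysis would then attribute `var` to the individual input distributions, identifying which process step to tighten first.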
Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar
2017-02-01
A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.
NASA Astrophysics Data System (ADS)
Hu, Dawei; Li, Leyuan; Liu, Hui; Zhang, Houkai; Fu, Yuming; Sun, Yi; Li, Liang
It is necessary to process inedible plant biomass into soil-like substrate (SLS) by bio-composting to realize sustainable utilization of biological resources. Although similar to natural soil in structure and function, SLS often has uneven water distribution that adversely affects plant growth, due to unsatisfactory porosity, permeability and gravity distribution. In this article, an SLS plant-growing facility (SLS-PGF) was therefore rotated for cultivating lettuce; the Brinkman equations coupled with laminar flow equations were taken as governing equations, and boundary conditions were specified from the actual operating characteristics of the rotating SLS-PGF. The optimal open-control law for the angular and inflow velocity was determined from the lettuce water requirement and CFD simulations. The experimental results clearly showed that water content was more uniformly distributed in the SLS under the action of centrifugal and Coriolis forces, and that rotating the SLS-PGF with the optimal open-control law could meet the lettuce water requirement at every growth stage and achieve precise irrigation.
Thomas, Brian F.; Pollard, Gerald T.
2016-01-01
Cannabis is classified as a schedule I controlled substance by the US Drug Enforcement Administration, meaning that it is deemed to have no accepted medicinal value. Production is legally restricted to a single supplier at the University of Mississippi, and distribution to researchers is tightly controlled. However, a majority of the population is estimated to believe that cannabis has legitimate medical or recreational value, numerous states have legalized or decriminalized possession to some degree, and the federal government does not strictly enforce its law and is considering rescheduling. The explosive increase in open sale and use of herbal cannabis and its products has occurred with widely variable and in many cases grossly inadequate quality control at all levels—growing, processing, storage, distribution, and use. This paper discusses elements of the analytical and regulatory system that need to be put in place to ensure standardization for the researcher and to reduce the hazards of contamination, overdosing, and underdosing for the end-user. PMID:27630566
NASA Astrophysics Data System (ADS)
Tice, Michael M.
2009-12-01
All mats are preserved in the shallowest-water interval of those rocks deposited below normal wave base and above storm wave base. This interval is bounded below by a transgressive lag formed during regional flooding and above by a small condensed section that marks a local relative sea-level maximum. Restriction of all mat morphotypes to the shallowest interval of the storm-active layer in the BRC ocean reinforces previous interpretations that these mats were constructed primarily by photosynthetic organisms. Morphotypes α and β dominate the lower half of this interval and grew during deposition of relatively coarse detrital carbonaceous grains, while morphotype γ dominates the upper half and grew during deposition of fine detrital carbonaceous grains. The observed mat distribution suggests that either light intensity or, more likely, small variations in ambient current energy acted as a first-order control on mat morphotype distribution. These results demonstrate significant environmental control on biological morphogenetic processes independent of influences from siliciclastic sedimentation.
NASA Astrophysics Data System (ADS)
Boudreault, E.; Hazel, B.; Côté, J.; Godin, S.
2014-03-01
A new robotic heat treatment process is developed. Using this solution it is now possible to perform local heat treatment on large steel components. Crack, cavitation and erosion repairs on turbine blades and Pelton buckets are among the applications of this technique. The proof of concept is made on a 13Cr-4Ni stainless steel designated "CA6NM". This alloy is widely used in the power industry for modern system components. Given the very tight temperature tolerance (600 to 630 °C) for post-weld heat treatment of this alloy, 13Cr-4Ni stainless steel is very well suited for demonstrating the possibilities of this process. To achieve heat treatment requirements, an induction heating system is mounted on a compact manipulator named "Scompi". This robot moves a pancake coil in order to control the temperature distribution. A simulator using thermal finite element analysis is first used for path planning. A feedback loop adjusts parameters as a function of environmental conditions.
A Distributed Control System Prototyping Environment to Support Control Room Modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lew, Roger Thomas; Boring, Ronald Laurids; Ulrich, Thomas Anthony
Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs; however, the set of tools for developing and designing HMIs is still in its infancy. Here we propose a rapid prototyping approach for integrating proposed HMIs into their native environments before a design is finalized. This approach allows researchers and developers to test design ideas and eliminate design flaws prior to fully developing the new system. We illustrate this approach with four prototype designs developed using Microsoft’s Windows Presentation Foundation (WPF). One example is integrated into a microworld environment to test the functionality of the design and identify the optimal level of automation for a new system in a nuclear power plant. The other three examples are integrated into a full-scale, glasstop digital simulator of a nuclear power plant. One example demonstrates the capabilities of next-generation control concepts; another aims to expand the current state of the art; lastly, an HMI prototype was developed as a test platform for a new control system currently in development at U.S. nuclear power plants. WPF possesses several characteristics that make it well suited to HMI design. It provides a tremendous amount of flexibility, agility, robustness, and extensibility. Distributed control system (DCS) specific environments tend to focus on the safety and reliability requirements for real-world interfaces and consequently place less emphasis on providing functionality to support novel interaction paradigms. Because of WPF’s large user base, Microsoft can provide an extremely mature tool. Within process control applications, WPF is platform independent and can communicate with popular full-scope process control simulator vendor plant models and DCS platforms.
An architecture for object-oriented intelligent control of power systems in space
NASA Technical Reports Server (NTRS)
Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.
1993-01-01
A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. If changes in the health status seem imminent, the component warns the control system about its impending failure.
The fault isolation process uses the FSM level for its reasoning base.
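The reactive-mode comparison described above (measured values versus simulated expectations, per component object) can be sketched as follows. The class, component names, Ohm's-law model, and tolerance are illustrative assumptions standing in for the paper's EEM-level models:

```python
# Illustrative EEM-level component object: each component predicts its
# expected reading from a simple electrical-equivalent model, and a
# measurement deviating beyond a tolerance flags a fault.
class ResistiveLoad:
    def __init__(self, name, resistance_ohm):
        self.name = name
        self.r = resistance_ohm

    def expected_current(self, bus_voltage):
        return bus_voltage / self.r            # EEM view: Ohm's law

    def check(self, bus_voltage, measured_current, tol=0.05):
        # Reactive mode: compare measurement against simulated expectation.
        expected = self.expected_current(bus_voltage)
        return abs(measured_current - expected) <= tol * expected

heater = ResistiveLoad("cabin_heater", resistance_ohm=10.0)
ok = heater.check(bus_voltage=28.0, measured_current=2.8)        # nominal
fault = not heater.check(bus_voltage=28.0, measured_current=1.9) # deviation
```

A PCM-level object would replace `expected_current` with a more detailed physical model and trend historical data for the proactive mode.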
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
Gomez-Pilar, Javier; Poza, Jesús; Bachiller, Alejandro; Gómez, Carlos; Núñez, Pablo; Lubeiro, Alba; Molina, Vicente; Hornero, Roberto
2018-02-01
The aim of this study was to introduce a novel global measure of graph complexity: Shannon graph complexity (SGC). This measure was specifically developed for weighted graphs, but it can also be applied to binary graphs. The proposed complexity measure was designed to capture the interplay between two properties of a system: the 'information' (calculated by means of Shannon entropy) and the 'order' of the system (estimated by means of a disequilibrium measure). SGC is based on the concept that complex graphs should maintain an equilibrium between the aforementioned two properties, which can be measured by means of the edge weight distribution. In this study, SGC was assessed using four synthetic graph datasets and a real dataset, formed by electroencephalographic (EEG) recordings from controls and schizophrenia patients. SGC was compared with graph density (GD), a classical measure used to evaluate graph complexity. Our results showed that SGC is invariant with respect to GD and independent of node degree distribution. Furthermore, its variation with graph size [Formula: see text] is close to zero for [Formula: see text]. Results from the real dataset showed an increment in the weight distribution balance during the cognitive processing for both controls and schizophrenia patients, although these changes are more relevant for controls. Our findings revealed that SGC does not need a comparison with null-hypothesis networks constructed by a surrogate process. In addition, SGC results on the real dataset suggest that schizophrenia is associated with a deficit in the brain dynamic reorganization related to secondary pathways of the brain network.
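A schematic reading of the measure described above: edge weights are normalized to a probability distribution, "information" is the (normalized) Shannon entropy of that distribution, and "order" is a disequilibrium term measuring distance from the uniform distribution; SGC multiplies the two. The exact normalizations in the paper may differ from this sketch:

```python
# Schematic SGC-style complexity of a weighted graph's edge weight
# distribution: normalized Shannon entropy ("information") times
# disequilibrium from the uniform distribution ("order").
import math

def shannon_graph_complexity(weights):
    total = sum(weights)
    p = [w / total for w in weights]          # edge weight distribution
    n = len(p)
    h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
    d = sum((pi - 1 / n) ** 2 for pi in p)    # distance from uniform
    return h * d

uniform = shannon_graph_complexity([1.0] * 8)          # all-equal weights
skewed = shannon_graph_complexity([8, 1, 1, 1, 1, 1, 1, 1])
```

As with other statistical complexity measures, both a perfectly uniform distribution (zero disequilibrium) and a fully concentrated one (zero entropy) score low; intermediate distributions score highest.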
A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project
NASA Astrophysics Data System (ADS)
Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello
This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is to detect the presence of radioactive contaminants in drinking water, in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The SN's cyber-security is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed at different locations along the water treatment processes in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ in the LPC is split into several processes communicating over sockets in a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to have a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.
Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA
2008-10-14
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, directly producing images of the interior of the boiler or feeding signals to a data processing system that provides information enabling the distributed control system by which the boilers are operated to run them more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and includes a low pass filter for noise filtering of said video input. It also includes an image compensation system for array compensation to correct for pixel variation, dead cells, etc., and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also accomplishes thresholding/clustering on gray scale/texture, makes morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches derived regions to a 3-D model of said boiler. It derives the 3-D structure of the deposition on pendant tubes in the boiler and provides the information about deposits to the plant distributed control system for more efficient operation of the plant pendant tube cleaning and operating systems.
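The front end of the pipeline described above (low-pass filtering followed by gray-scale thresholding into classes) can be sketched on a single image row. The moving-average filter, threshold values, and class names are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the pre-processing and segmentation front end:
# a moving-average low-pass filter for noise, then gray-scale
# thresholding of each pixel into background / tube / deposit classes.
def smooth(row, k=3):
    half = k // 2
    out = []
    for i in range(len(row)):
        window = row[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))   # simple moving average
    return out

def segment(pixel, t_tube=80, t_deposit=160):
    if pixel < t_tube:
        return "background"
    if pixel < t_deposit:
        return "tube"
    return "deposit"

row = [10, 200, 12, 90, 95, 240, 235, 8]        # one row of gray levels
labels = [segment(p) for p in smooth(row)]
```

A full implementation would operate on 2-D arrays, add the morphological smoothing and connected-component labeling, and pass labeled regions to the 3-D matching stage.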
Integration of domain and resource-based reasoning for real-time control in dynamic environments
NASA Technical Reports Server (NTRS)
Morgan, Keith; Whitebread, Kenneth R.; Kendus, Michael; Cromarty, Andrew S.
1993-01-01
A real-time software controller that successfully integrates domain-based and resource-based control reasoning to perform task execution in a dynamically changing environment is described. The design of the controller is based on the concept of partitioning the process to be controlled into a set of tasks, each of which achieves some process goal. It is assumed that, in general, there are multiple ways (tasks) to achieve a goal. The controller dynamically determines current goals and their current criticality, choosing and scheduling tasks to achieve those goals in the time available. It incorporates rule-based goal reasoning, a TMS-based criticality propagation mechanism, and a real-time scheduler. The controller has been used to build a knowledge-based situation assessment system that formed a major component of a real-time, distributed, cooperative problem solving system built under DARPA contract. It is also being employed in other applications now in progress.
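The scheduling idea described above (choose among alternative tasks by current criticality within the available time) can be sketched as a greedy selection. The task structure, field names, and greedy policy are illustrative assumptions; the actual controller combines rule-based goal reasoning with a TMS-based criticality mechanism:

```python
# Minimal sketch of criticality-driven task selection under a time
# budget: take the most critical tasks that still fit in the time
# remaining before the deadline.
def schedule(tasks, time_budget):
    chosen = []
    for task in sorted(tasks, key=lambda t: t["criticality"], reverse=True):
        if task["duration"] <= time_budget:
            chosen.append(task["name"])
            time_budget -= task["duration"]
    return chosen

tasks = [
    {"name": "assess_threat",  "criticality": 9, "duration": 3},
    {"name": "update_display", "criticality": 2, "duration": 1},
    {"name": "replan_route",   "criticality": 7, "duration": 4},
]
plan = schedule(tasks, time_budget=5)
```

Note the greedy policy skips `replan_route` because it no longer fits, then falls back to the lower-criticality task that does; a real-time scheduler would re-evaluate this choice as goals and criticalities change.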
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective in attenuating SαS impulsive noise, and the algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
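The step-size normalization idea described above can be sketched in isolation: an LMS-style weight update whose step size is scaled by a Gaussian function of the reference sample, so impulsive (large-magnitude) samples barely move the adaptive weights. This simplified sketch omits the secondary-path filtering of a full FxLMS controller, and the parameters are assumed:

```python
# Simplified Gaussian step-size normalization for an LMS-style update:
# the Gaussian window is ~1 for small |x| and ~0 for impulsive samples,
# so outliers cannot destabilize the weight adaptation.
import math

def gaussian_normalized_lms_step(w, x, e, mu=0.1, sigma=1.0):
    g = math.exp(-(x * x) / (2 * sigma * sigma))  # Gaussian window on |x|
    return w + mu * g * x * e                     # damped LMS update

w = 0.0
w_small = gaussian_normalized_lms_step(w, x=0.5, e=1.0)    # ordinary sample
w_impulse = gaussian_normalized_lms_step(w, x=50.0, e=1.0) # impulsive sample
```

The ordinary sample produces a near-standard LMS update, while the 100x-larger impulsive sample yields an essentially zero update, which is the behavior sought for SαS noise.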
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic and geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the role of small-scale sedimentary structure in CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves, with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
2017-10-21
Cai, Weidong; Leung, Hoi-Chung
2011-01-01
Background The human inferior frontal cortex (IFC) is a large heterogeneous structure with distinct cytoarchitectonic subdivisions and fiber connections. It has been found to be involved in a wide range of executive control processes, from target detection and rule retrieval to response control. Since these processes are often studied separately, the functional organization of executive control processes within the IFC remains unclear. Methodology/Principal Findings We conducted an fMRI study to examine the activities of the subdivisions of the IFC during the presentation of a task cue (rule retrieval) and during the performance of a stop-signal task (requiring response generation and inhibition) in comparison to a not-stop task (requiring response generation but not inhibition). We utilized a mixed event-related and block design to separate brain activity corresponding to transient control processes from rule-related and sustained control processes. We found differentiation of control processes within the IFC. Our findings reveal that the bilateral ventral-posterior IFC/anterior insula are more active on both successful and unsuccessful stop trials relative to not-stop trials, suggesting a potential role in the early stage of stopping, such as triggering the stop process. Direct countermanding seems to be outside of the IFC. In contrast, the dorsal-posterior IFC/inferior frontal junction (IFJ) showed transient activity in response to the infrequent presentation of the stop signal in both tasks, and the left anterior IFC showed differential activity in response to the task cues. The IFC subdivisions also exhibited similar but distinct patterns of functional connectivity during response control. Conclusions/Significance Our findings suggest that executive control processes are distributed across the IFC and that its different subdivisions may support different control operations through parallel cortico-cortical and cortico-striatal circuits. PMID:21673969
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jianhui; Lu, Xiaonan; Martino, Sal
Many distribution management systems (DMS) projects have achieved limited success because the electric utility did not sufficiently plan for actual use of the DMS functions in the control room environment. As a result, end users were not clear on how to use the new application software in actual production environments with existing, well-established business processes. An important first step in the DMS implementation process is development and refinement of the “to be” business processes. Development of use cases for the required DMS application functions is a key activity that leads to the formulation of the “to be” requirements. It is also an important activity that is needed to develop specifications that are used to procure a new DMS.
"Sweetening" Technical Physics with Hershey's Kisses
NASA Astrophysics Data System (ADS)
Stone, Chuck
2003-04-01
This paper describes an activity in which students measure the mass of each candy in one full bag of Hershey's Kisses and then use a simple spreadsheet program to construct a histogram showing the number of candies as a function of mass. Student measurements indicate that a single bag of 80 Kisses yields enough data to produce a noticeable variation in the candies' mass distribution. The bimodal character of this distribution provides a useful discussion topic. This activity can be performed as a classroom project, a laboratory exercise, or an interactive lecture demonstration. In all these formats, students have the opportunity to collect, organize, process, and analyze real data. In addition to strengthening graphical analysis skills, this activity introduces students to fundamentals of statistics, manufacturing processes in the industrial workplace, and process control techniques.
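The spreadsheet portion of the exercise amounts to binning 80 mass measurements. A minimal sketch, with simulated masses standing in for the classroom data and the assumption that two mould populations produce the bimodal shape (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical masses for one 80-piece bag: two mould populations give
# a bimodal shape like that reported in the classroom data
masses = np.concatenate([
    rng.normal(4.40, 0.08, 40),   # lighter population, grams
    rng.normal(4.70, 0.08, 40),   # heavier population, grams
])

# spreadsheet-style frequency table: candies per 0.05 g mass bin
edges = np.arange(4.0, 5.2, 0.05)
counts, _ = np.histogram(masses, bins=edges)
for c, lo in zip(counts, edges):
    if c:
        print(f"{lo:.2f}-{lo + 0.05:.2f} g: {'#' * int(c)}")
```

Printed as text bars, the two humps of the distribution are visible directly in the frequency table, which is what students see in their spreadsheet histogram.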
Evaluation of liquid aerosol transport through porous media
NASA Astrophysics Data System (ADS)
Hall, R.; Murdoch, L.; Falta, R.; Looney, B.; Riha, B.
2016-07-01
Application of remediation methods in contaminated vadose zones has been hindered by an inability to effectively distribute liquid- or solid-phase amendments. Injection as aerosols in a carrier gas could be a viable method for achieving useful distributions of amendments in unsaturated materials. The objectives of this work were to characterize radial transport of aerosols in unsaturated porous media, and to develop capabilities for predicting results of aerosol injection scenarios at the field-scale. Transport processes were investigated by conducting lab-scale injection experiments with radial flow geometry, and predictive capabilities were obtained by developing and validating a numerical model for simulating coupled aerosol transport, deposition, and multi-phase flow in porous media. Soybean oil was transported more than 2 m through sand by injecting it as micron-scale aerosol droplets. Oil saturation in the sand increased with time to a maximum of 0.25, and decreased with radial distance in the experiments. The numerical analysis predicted the distribution of oil saturation with only minor calibration. The results indicated that evolution of oil saturation was controlled by aerosol deposition and subsequent flow of the liquid oil, and simulation requires including these two coupled processes. The calibrated model was used to evaluate field applications. The results suggest that amendments can be delivered to the vadose zone as aerosols, and that gas injection rate and aerosol particle size will be important controls on the process.
Structural covariance and cortical reorganisation in schizophrenia: a MRI-based morphometric study.
Palaniyappan, Lena; Hodgson, Olha; Balain, Vijender; Iwabuchi, Sarina; Gowland, Penny; Liddle, Peter
2018-05-06
In patients with schizophrenia, distributed abnormalities are observed in grey matter volume. A recent hypothesis posits that these distributed changes are indicative of a plastic reorganisation process occurring in response to a functional defect in neuronal information transmission. We investigated the structural covariance across various brain regions in early-stage schizophrenia to determine if indeed the observed patterns of volumetric loss conform to a coordinated pattern of structural reorganisation. Structural magnetic resonance imaging scans were obtained from 40 healthy adults and 41 age, gender and parental socioeconomic status matched patients with schizophrenia. Volumes of grey matter tissue were estimated at the regional level across 90 atlas-based parcellations. Group-level structural covariance was studied using a graph theoretical framework. Patients had distributed reduction in grey matter volume, with high degree of localised covariance (clustering) compared with controls. Patients with schizophrenia had reduced centrality of anterior cingulate and insula but increased centrality of the fusiform cortex, compared with controls. Simulating targeted removal of highly central nodes resulted in significant loss of the overall covariance patterns in patients compared with controls. Regional volumetric deficits in schizophrenia are not a result of random, mutually independent processes. Our observations support the occurrence of a spatially interconnected reorganisation with the systematic de-escalation of conventional 'hub' regions. This raises the question of whether the morphological architecture in schizophrenia is primed for compensatory functions, albeit with a high risk of inefficiency.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval, so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
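A toy model of the buffered-coscheduling bookkeeping described above (a sketch of the interval/strobe mechanism only; the data shapes and names are invented for illustration):

```python
from collections import defaultdict

def buffered_coschedule(sends):
    """Toy model of buffered coscheduling.

    sends[t] lists the (src, dst) control messages generated during time
    interval t.  Messages are buffered locally for the whole interval and
    exchanged all-to-all at the strobe separating interval t from t + 1,
    so each processor learns how many incoming jobs to expect in the next
    interval.  (Illustration only, not the patented implementation.)
    """
    known_incoming = []
    for interval in sends:
        buf = defaultdict(int)            # buffered during the interval
        for _src, dst in interval:
            buf[dst] += 1
        known_incoming.append(dict(buf))  # strobe: global exchange
    return known_incoming

# interval 0 generates messages 0->1 and 2->1; interval 1 generates 1->0
incoming = buffered_coschedule([[(0, 1), (2, 1)], [(1, 0)]])
```

After the first strobe, processor 1 knows that two jobs arrive in the next interval; after the second, processor 0 knows to expect one.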
NASA Astrophysics Data System (ADS)
Mohamed, Ahmed
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, through the operating AC power system. To achieve the findings of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generations and demands. The various components of the DC microgrid were designed and built including DC-DC converters, voltage source inverters (VSI) and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process. This was aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. 
Fault analysis and protection scheme and coordination were presented, in addition to ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network. A study was also conducted on the effect of changing the distribution architecture, and of distributing the storage assets across the various zones of the network, on the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed, Energy Systems Research Laboratory.
40 CFR 763.165 - Manufacture and importation prohibitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment, Vol. 30 (2010-07-01): Manufacture and importation...) TOXIC SUBSTANCES CONTROL ACT, ASBESTOS: Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements. § 763.165 Manufacture...
40 CFR 763.176 - Inspections.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment, Vol. 31 (2011-07-01): Inspections. Section 763.176, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, ASBESTOS: Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos...
40 CFR 763.178 - Recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment, Vol. 31 (2011-07-01): Recordkeeping. Section 763.178, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, ASBESTOS: Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos...
40 CFR 761.93 - Import for disposal.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment, Vol. 30 (2010-07-01): Import for disposal. Section 761.93, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, POLYCHLORINATED BIPHENYLS (PCBs): MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE...
40 CFR 761.366 - Cleanup equipment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment, Vol. 30 (2010-07-01): Cleanup equipment. Section 761.366, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC SUBSTANCES CONTROL ACT, POLYCHLORINATED BIPHENYLS (PCBs): MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS, Double...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Ryan; Marger, Bernard L.; Chiu, Ailsa
During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on COTS surveys & exploratory prototyping related to the Object Storage & Distribution (OSD) mechanism, and the processing control software infrastructure. This report summarizes the E2 prototyping work.
EPICS as a MARTe Configuration Environment
NASA Astrophysics Data System (ADS)
Valcarcel, Daniel F.; Barbalace, Antonio; Neto, André; Duarte, André S.; Alves, Diogo; Carvalho, Bernardo B.; Carvalho, Pedro J.; Sousa, Jorge; Fernandes, Horácio; Goncalves, Bruno; Sartori, Filippo; Manduchi, Gabriele
2011-08-01
The Multithreaded Application Real-Time executor (MARTe) software provides an environment for the hard real-time execution of codes while leveraging a standardized algorithm development process. The Experimental Physics and Industrial Control System (EPICS) software allows the deployment and remote monitoring of networked control systems. Channel Access (CA) is the protocol that enables communication between distributed EPICS components; it allows process variables belonging to different systems to be set and monitored across the network. The COntrol and Data Acquisition and Communication (CODAC) system for the ITER Tokamak will be EPICS-based and will be used to monitor and live-configure the plant controllers. The reconfiguration capability in a hard real-time system requires strict latencies from request to actuation and is a key element in the design of the distributed control algorithm. Presently, MARTe and its objects are configured using a well-defined structured language. After each configuration, all objects are destroyed and the system is rebuilt, following the strong hard real-time rule that a real-time system in online mode must behave in a strictly deterministic fashion. This paper presents the design and considerations required to use MARTe as a plant controller and enable it to be monitored and configured through EPICS without disturbing execution at any time, in particular during a plasma discharge. The solutions designed for this are presented and discussed.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data
NASA Astrophysics Data System (ADS)
Stockhause, M.; Höck, H.; Toussaint, F.; Lautenschlager, M.
2012-08-01
The preservation of data in a high state of quality that is suitable for interdisciplinary use is one of the most pressing and challenging current issues in long-term archiving. For high-volume data such as climate model data, the data and data replicas are no longer stored centrally but distributed over several local data repositories, e.g. the data of the Climate Model Intercomparison Project Phase 5 (CMIP5). The most important part of the data is to be archived, assigned a DOI, and published according to the World Data Center for Climate's (WDCC) application of the DataCite regulations. The data quality assessment, an integral part of WDCC's data publication process, was adapted to the requirements of a federated data infrastructure. A concept for a distributed and federated quality assessment procedure was developed, in which the workload and responsibility for quality control are shared between the three primary CMIP5 data centers: the Program for Climate Model Diagnosis and Intercomparison (PCMDI), the British Atmospheric Data Centre (BADC), and WDCC. This distributed quality control concept, its pilot implementation for CMIP5, and first experiences are presented. The distributed quality control approach is capable of identifying data inconsistencies and of making quality results immediately available to data creators, data users and data infrastructure managers. Continuous publication of new data versions and slow data replication prevent the quality control checks from reaching completion. This, together with ongoing developments of the data and metadata infrastructure, requires adaptations in the code and concept of the distributed quality control approach.
NASA Technical Reports Server (NTRS)
Wieland, P. O.
2005-01-01
Human exploration and utilization of space requires habitats that provide appropriate conditions for working and living. These conditions are provided by environmental control and life support systems (ECLSS) that ensure appropriate atmosphere composition, pressure, and temperature; manage and distribute water; process waste matter; provide fire detection and suppression; and perform other functions as necessary. The tables in appendix I of NASA RP 1324, "Designing for Human Presence in Space," summarize the life support functions and processes used onboard U.S. and U.S.S.R./Russian space habitats. These tables have been updated to include information on thermal control methods and to provide additional information on the ECLS systems.
NASA Technical Reports Server (NTRS)
Beckham, W. S., Jr.; Keune, F. A.
1974-01-01
The MIUS (Modular Integrated Utility System) concept is to be an energy-conserving, economically feasible, integrated community utility system to provide five necessary services: electricity generation, space heating and air conditioning, solid waste processing, liquid waste processing, and residential water purification. The MIST (MIUS Integration and Subsystem Test) integrated system testbed constructed at the Johnson Space Center in Houston includes subsystems for power generation, heating, ventilation, and air conditioning (HVAC), wastewater management, solid waste management, and control and monitoring. The key design issues under study include thermal integration and distribution techniques, thermal storage, integration of subsystems controls and displays, incinerator performance, effluent characteristics, and odor control.
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously; (2) integrate software contributions from geographically dispersed laboratories; (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects; (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance; and (5) be dynamically reconfigurable.
Computer hardware and software for robotic control
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1987-01-01
The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.
Attitude dynamics and control of a spacecraft using shifting mass distribution
NASA Astrophysics Data System (ADS)
Ahn, Young Tae
Spacecraft need specific attitude control methods that depend on the mission type or special tasks. The dynamics and the attitude control of a spacecraft with a shifting mass distribution within the system are examined. Conventional attitude control actuators are well developed and widely used at present. However, a shifting mass distribution concept can complement spacecraft attitude control, save mass, and extend a satellite's life. This can be adopted in practice by moving mass from one tank to another, similar to what an airplane does to balance weight. Using this shifting mass distribution concept, in conjunction with other attitude control devices, can augment the three-axis attitude control process. Shifting mass involves changing the center-of-mass of the system and/or changing the moments of inertia of the system, which ultimately can change the attitude behavior of the system. This dissertation consists of two parts. First, the equations of motion for the shifting mass concept (also known as morphing) are developed. They are tested for their effects on attitude control by showing how shifting the mass changes the spacecraft's attitude behavior. Second, a method for optimal mass redistribution is shown using combinatorial optimization theory under constraints. It closes with a simple example demonstrating an optimal reconfiguration. The procedure of optimal reconfiguration from one mass distribution to another to accomplish attitude control has been demonstrated for several simple examples. Mass shifting could work as an attitude controller for fine-tuning attitude behavior in small satellites. Various constraints can be applied for different situations, such as no mass shift between two tanks connected by a failed pipe, or a total amount of shifted mass per pipe set for the time-optimal solution.
Euler angle changes influenced by the mass reconfiguration are accomplished while stability conditions are satisfied. Generally, more than two control systems are installed in a satellite to increase accuracy. Combining mass shifting with another actuator will be examined to achieve full attitude control maneuvers. Future work can also include more realistic spacecraft design and operational considerations on the behavior of this type of control system.
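The core mechanism of the abstract can be illustrated with a point-mass model: moving mass between tanks shifts the centre of mass and changes the inertia tensor, which in turn changes the rigid-body attitude dynamics. All numbers below are hypothetical; the dissertation's full equations of motion are not reproduced.

```python
import numpy as np

def com_and_inertia(masses, positions):
    """Centre of mass and inertia tensor of point masses, about the COM.

    Point-mass stand-in for the morphing-spacecraft idea: redistributing
    mass shifts the COM and changes the moments of inertia.
    """
    m = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    com = m @ r / m.sum()
    inertia = np.zeros((3, 3))
    for mi, ri in zip(m, r - com):
        # standard point-mass contribution: m (|r|^2 I - r r^T)
        inertia += mi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri))
    return com, inertia

# hypothetical 100 kg bus with two tanks offset 1 m along x, 10 kg of fluid
tanks = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
com_a, I_a = com_and_inertia([100.0, 10.0, 0.0], tanks)  # all fluid in one tank
com_b, I_b = com_and_inertia([100.0, 5.0, 5.0], tanks)   # fluid split evenly
```

Shifting 5 kg between the tanks moves the COM back onto the bus centreline and raises the z-axis moment of inertia from about 9.09 to 10 kg·m² in this toy case, the kind of change an attitude controller can exploit.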
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu
In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients are drawn from a discrete probability distribution. For a continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.
A convergent model for distributed processing of Big Sensor Data in urban engineering networks
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.
2017-01-01
The problems of developing and studying a convergent model of grid, cloud, fog and mobile computing for analytical Big Sensor Data processing are reviewed. The model is intended for building monitoring systems for spatially distributed objects of urban engineering networks and processes. The proposed approach converges these models into a single organization of distributed data processing. The fog computing model is used for processing and aggregating sensor data at the network nodes and/or industrial controllers. Program agents are loaded onto these nodes to perform the primary processing and data aggregation tasks. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and multiple GPU cards supporting the Compute Unified Device Architecture (CUDA) at the third level. The mobile computing model is applied to visualize the results of the intelligent analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional storage for the purpose of data mining and knowledge gaining.
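The fog-to-cloud division of labour described above can be sketched as two reduction stages: agents at the nodes compress raw samples into compact aggregates, and the upper tier combines aggregates into integral indicators. The data, names, and choice of aggregates are hypothetical.

```python
import numpy as np

def fog_aggregate(node_samples):
    """Fog tier: an agent at each node reduces raw sensor samples to a
    compact aggregate (count, mean, max) close to the data source."""
    return {node: (len(s), float(np.mean(s)), float(np.max(s)))
            for node, s in node_samples.items()}

def cloud_accumulate(aggregates):
    """Grid/cloud tier: combine per-node aggregates into integral
    indicators without ever moving the raw samples off the nodes."""
    n = sum(c for c, _, _ in aggregates.values())
    mean = sum(c * m for c, m, _ in aggregates.values()) / n
    peak = max(p for _, _, p in aggregates.values())
    return {"samples": n, "mean": mean, "peak": peak}

# hypothetical per-node sensor readings (e.g. pressures in a water network)
readings = {"node_a": [1.0, 2.0, 3.0], "node_b": [4.0, 5.0]}
total = cloud_accumulate(fog_aggregate(readings))
```

Only the three-number aggregates cross the network, which is the point of pushing the primary processing into the fog tier.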
NASA Astrophysics Data System (ADS)
Sonam; Jain, Vikrant
2018-03-01
Long profiles of rivers provide a platform to analyse interaction between geological and geomorphic processes operating at different time scales. Identification of an appropriate model for river long profile becomes important in order to establish a quantitative relationship between the profile shape, its geomorphic effectiveness, and inherent geological characteristics. This work highlights the variability in the long profile shape of the Ganga River and its major tributaries, its impact on stream power distribution pattern, and role of the geological controls on it. Long profile shapes are represented by the sum of two exponential functions through the curve fitting method. We have shown that coefficients of river long profile equations are governed by the geological characteristics of subbasins. These equations further define the spatial distribution pattern of stream power and help to understand stream power variability in different geological terrains. Spatial distribution of stream power in different geological terrains successfully explains spatial variability in geomorphic processes within the Himalayan hinterland area. In general, the stream power peaks of larger rivers lie in the Higher Himalaya, and rivers in the eastern hinterland area are characterised by the highest magnitude of stream power.
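The modelling chain in the abstract (two-exponential long profile, analytic slope, stream power) can be sketched numerically. The coefficients and the linear downstream discharge law below are hypothetical, chosen only to show how an interior stream-power peak arises from a monotonically decaying profile.

```python
import numpy as np

def profile(x, a, b, c, d):
    """Long profile as a sum of two exponentials: z(x) = a e^{-bx} + c e^{-dx}."""
    return a * np.exp(-b * x) + c * np.exp(-d * x)

def slope(x, a, b, c, d):
    """Channel slope S = -dz/dx, analytic from the profile coefficients."""
    return a * b * np.exp(-b * x) + c * d * np.exp(-d * x)

# hypothetical coefficients and discharge law (illustrative units)
a, b, c, d = 2000.0, 0.02, 500.0, 0.002
x = np.linspace(0.0, 1000.0, 201)            # distance downstream
Q = 10.0 + 2.0 * x                           # discharge grows downstream
omega = 1000.0 * 9.81 * Q * slope(x, a, b, c, d)   # Omega = rho g Q S

peak_at = x[np.argmax(omega)]                # interior stream-power peak
```

Even though elevation decays monotonically, the product of growing discharge and decaying slope peaks in the upper-middle reach, which mirrors the paper's finding that stream-power maxima of the larger rivers sit in the Higher Himalaya rather than at the headwaters or the outlet.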
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was demonstrated in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated: in the case study, energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy use.
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
Future space explorations will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
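The detect-isolate-justify loop APEX implements can be caricatured with a tiny forward-chaining rule table. The rules, telemetry flags and diagnoses below are invented for illustration and are not APEX's actual knowledge base.

```python
# Toy rule-based fault isolation in the spirit of an FDIR expert system.
# Each rule pairs telemetry conditions with a diagnosis; the matched
# conditions double as the "justification" for the isolation.
RULES = [
    ({"bus_voltage_low": True, "breaker_open": True},
     "tripped breaker upstream of load"),
    ({"bus_voltage_low": True, "breaker_open": False},
     "degraded source or short on bus"),
    ({"current_spike": True}, "overload on load circuit"),
]

def isolate_fault(telemetry):
    for conditions, diagnosis in RULES:
        if all(telemetry.get(k) == v for k, v in conditions.items()):
            return diagnosis  # first matching rule wins
    return "no fault isolated"
```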
Fault architecture and deformation processes within poorly lithified rift sediments, Central Greece
NASA Astrophysics Data System (ADS)
Loveless, Sian; Bense, Victor; Turner, Jenni
2011-11-01
Deformation mechanisms and resultant fault architecture are primary controls on the permeability of faults in poorly lithified sediments. We characterise fault architecture using outcrop studies, hand samples, thin sections and grain-size data from a minor (1-10 m displacement) normal-fault array exposed within Gulf of Corinth rift sediments, Central Greece. These faults are dominated by mixed zones with poorly developed fault cores and damage zones. In poorly lithified sediment, deformation is distributed across the mixed zone as beds are entrained and smeared. We find particulate flow aided by limited distributed cataclasis to be the primary deformation mechanism. Deformation may be localised in more competent sediments. Stratigraphic variations in sediment competency, and the subsequent alternation of distributed and localised strain, cause complexities within the mixed zone such as undeformed blocks or lenses of cohesive sediment, or asperities at the mixed zone/protolith boundary. Fault tip bifurcation and asperity removal are important processes in the evolution of these fault zones. Our results indicate that fault zone architecture and thus permeability is controlled by a range of factors including lithology, stratigraphy, cementation history and fault evolution, and that minor faults in poorly lithified sediment may significantly impact subsurface fluid flow.
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
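The activity-transition-graph behaviour model can be sketched as a state machine whose activities mutate the agent's data state and whose outcomes select the next activity. The class and activity names here are invented for illustration; the real platform compiles such graphs to migratable, self-modifying program code.

```python
# Minimal activity-transition-graph (ATG) agent sketch: activities are
# functions mutating the agent's data state; transitions map
# (activity, outcome) to the next activity.
class ATGAgent:
    def __init__(self, activities, transitions, start, data):
        self.activities = activities    # name -> callable(data) -> outcome
        self.transitions = transitions  # (name, outcome) -> next name
        self.state = start
        self.data = data

    def step(self):
        outcome = self.activities[self.state](self.data)
        self.state = self.transitions.get((self.state, outcome), "halt")
        return self.state

def sense(d):
    d["samples"] += 1
    return "more" if d["samples"] < 3 else "done"

def report(d):
    d["reported"] = True
    return "ok"

agent = ATGAgent({"sense": sense, "report": report},
                 {("sense", "more"): "sense", ("sense", "done"): "report",
                  ("report", "ok"): "halt"},
                 "sense", {"samples": 0})
while agent.state != "halt":
    agent.step()
```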
Arora, Harpreet Singh; Mridha, Sanghita; Grewal, Harpreet Singh; Singh, Harpreet; Hofmann, Douglas C; Mukherjee, Sundeep
2014-01-01
We demonstrate the refinement and uniform distribution of the crystalline dendritic phase by friction stir processing (FSP) of titanium based in situ ductile-phase reinforced metallic glass composite. The average size of the dendrites was reduced by almost a factor of five (from 24 μm to 5 μm) for the highest tool rotational speed of 900 rpm. The large inter-connected dendrites become more fragmented with increased circularity after processing. The changes in thermal characteristics were measured by differential scanning calorimetry. The reduction in crystallization enthalpy after processing suggests partial devitrification due to the high strain plastic deformation. FSP resulted in increased hardness and modulus for both the amorphous matrix and the crystalline phase. This is explained by interaction of shear bands in amorphous matrix with the strain-hardened dendritic phase. Our approach offers a new strategy for microstructural design in metallic glass composites.
Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping
2018-03-01
This paper investigates the H∞ control problems for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities that take the random data dropout and actuator saturation into consideration simultaneously, and the non-convex feasibility problem is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed design techniques.
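The Bernoulli dropout model in the abstract above is easy to simulate in the scalar case. This sketch only illustrates the dropout channel with an assumed stabilizing gain; the paper's actual contribution is a multivariable H∞ synthesis via matrix inequalities and the CCL procedure, not this toy loop.

```python
import random

# Scalar networked control loop with Bernoulli packet dropout:
#   x_{k+1} = a*x_k + alpha_k * b * u_k,   alpha_k ~ Bernoulli(p_receive)
# The open-loop plant (a = 1.2) is unstable; feedback stabilizes it in the
# mean provided packets arrive often enough. All numbers are invented.
def simulate(p_receive=0.8, a=1.2, b=1.0, k_gain=-1.0, x0=5.0,
             steps=200, seed=1):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        u = k_gain * x                                 # state feedback
        alpha = 1 if rng.random() < p_receive else 0   # data dropout
        x = a * x + alpha * b * u
    return x
```

With 80% packet delivery the closed loop contracts on average; with no delivery the unstable plant runs open loop and diverges.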
Feedback control in deep drawing based on experimental datasets
NASA Astrophysics Data System (ADS)
Fischer, P.; Heingärtner, J.; Aichholzer, W.; Hortig, D.; Hora, P.
2017-09-01
In large-scale production of deep drawing parts, as in the automotive industry, the effects of scattering material properties as well as warming of the tools have a significant impact on the drawing result. Within the scope of this work, an approach is presented to minimize the influence of these effects on part quality by optically measuring the draw-in of each part and adjusting the settings of the press to keep the strain distribution, which is represented by the draw-in, inside a certain limit. For the design of the control algorithm, a design of experiments for in-line tests is used to quantify the influence of the blank holder force as well as the force distribution on the draw-in. The results of this experimental dataset are used to model the process behavior. Based on this model, a feedback control loop is designed. Finally, the performance of the control algorithm is validated in the production line.
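The measure-adjust loop described above can be sketched as proportional feedback on the blank holder force, under an assumed linear process model relating force to draw-in. All numbers and the model form are invented; the paper identifies its process model from designed in-line experiments.

```python
# Assumed linear process model: draw_in = d0 - sensitivity * force
def plant(force, d0=60.0, sensitivity=0.02):
    return d0 - sensitivity * force

# Proportional correction: draw-in too large -> raise blank holder force
# to restrain material flow (gain < 1 for a damped approach to the target).
def control_step(force, measured_draw_in, target_draw_in,
                 gain=0.5, sensitivity=0.02):
    error = measured_draw_in - target_draw_in
    return force + gain * error / sensitivity

force, target = 1000.0, 42.0
for _ in range(20):
    force = control_step(force, plant(force), target)
```

Each iteration halves the draw-in error, so the loop settles near the force that yields the target draw-in.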
NASA Astrophysics Data System (ADS)
García, T.; Velo, A.; Fernandez-Bastero, S.; Gago-Duport, L.; Santos, A.; Alejo, I.; Vilas, F.
2005-02-01
This paper examines the linkages between the space-distribution of grain sizes and the relative percentage of the amount of mineral species that result from the mixing process of siliciclastic and carbonate sediments at the Ria de Vigo (NW of Spain). The space-distribution of minerals was initially determined, starting from a detailed mineralogical study based on XRD-Rietveld analysis of the superficial sediments. Correlations between the maps obtained for grain sizes, average fractions of either siliciclastic or carbonates, as well as for individual minerals, were then established. From this analysis, spatially organized patterns were found between carbonates and several minerals involved in the siliciclastic fraction. In particular, a coupled behaviour is observed between plagioclases and carbonates, in terms of their relative percentage amounts and the grain size distribution. In order to explain these results a conceptual model is proposed, based on the interplay between chemical processes at the seawater-sediment interface and hydrodynamical factors. This model suggests the existence of chemical control mechanisms that, by selective processes of dissolution-crystallization, constrain the mixed environment's long-term evolution, inducing the formation of self-organized sedimentary patterns.
The Stress-Dependent Activation Parameters for Dislocation Nucleation in Molybdenum Nanoparticles.
Chachamovitz, Doron; Mordehai, Dan
2018-03-02
Many specimens at the nanoscale are free of dislocations, the line defects which are the main carriers of plasticity. As a result, they exhibit extremely high strengths which are dislocation-nucleation controlled. Since nucleation is a thermally activated process, it is essential to quantify the stress-dependent activation parameters for dislocation nucleation in order to study the strength of specimens at the nanoscale and its distribution. In this work, we calculate the strength of Mo nanoparticles in molecular dynamics simulations and we propose a method to extract the activation free-energy barrier for dislocation nucleation from the distribution of the results. We show that by deforming the nanoparticles at a constant strain rate, their strength distribution can be approximated by a normal distribution, from which the activation volumes at different stresses and temperatures are calculated directly. We found that the dependence of the activation energy on stress near spontaneous nucleation conditions obeys a power law with a critical exponent of approximately 3/2, which is in accordance with critical exponents found in other thermally activated processes but never before for dislocation nucleation. Additionally, significant activation entropies were calculated. Finally, we generalize the approach to calculate the activation parameters for other driving-force dependent thermally activated processes.
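The power-law barrier near spontaneous nucleation can be written down and differentiated directly. A small sketch with an invented prefactor and athermal stress, using the ~3/2 exponent from the abstract; the activation volume is the negative stress derivative of the barrier.

```python
# Power-law activation barrier near the athermal stress sigma_ath:
#   dG(sigma) = dG0 * (1 - sigma/sigma_ath)**1.5
# dG0 and sigma_ath are illustrative, not the paper's fitted values.
def activation_energy(sigma, dg0=4.0, sigma_ath=10.0, exponent=1.5):
    return dg0 * (1.0 - sigma / sigma_ath) ** exponent  # eV

def activation_volume(sigma, h=1e-6):
    # Activation volume Omega = -d(dG)/d(sigma), by central difference
    return -(activation_energy(sigma + h) - activation_energy(sigma - h)) / (2 * h)
```

The barrier vanishes at the athermal stress, and the activation volume shrinks as the stress approaches it, consistent with nucleation becoming spontaneous there.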
Organization of descending neurons in Drosophila melanogaster
Hsu, Cynthia T.; Bhandawat, Vikas
2016-01-01
Neural processing in the brain controls behavior through descending neurons (DNs) - neurons which carry signals from the brain to the spinal cord (or thoracic ganglia in insects). Because DNs arise from multiple circuits in the brain, the numerical simplicity and availability of genetic tools make Drosophila a tractable model for understanding descending motor control. As a first step towards a comprehensive study of descending motor control, here we estimate the number and distribution of DNs in the Drosophila brain. We labeled DNs by backfilling them with dextran dye applied to the neck connective and estimated that there are ~1100 DNs distributed in 6 clusters in Drosophila. To assess the distribution of DNs by neurotransmitters, we labeled DNs in flies in which neurons expressing the major neurotransmitters were also labeled. We found DNs belonging to every neurotransmitter class we tested: acetylcholine, GABA, glutamate, serotonin, dopamine and octopamine. Both the major excitatory neurotransmitter (acetylcholine) and the major inhibitory neurotransmitter (GABA) are employed equally; this stands in contrast to vertebrate DNs which are predominantly excitatory. By comparing the distribution of DNs in Drosophila to those reported previously in other insects, we conclude that the organization of DNs in insects is highly conserved.
Villani, N; Gérard, K; Marchesi, V; Huger, S; François, P; Noël, A
2010-06-01
The first purpose of this study was to illustrate the contribution of statistical process control to improving the security of intensity-modulated radiotherapy (IMRT) treatments. This improvement is possible by controlling the dose delivery process, characterized by pretreatment quality control results, which requires bringing portal dosimetry measurements under statistical control (the ionisation chamber measurements were already monitored with statistical process control tools). The second objective was to state whether it is possible to substitute the ionisation chamber with portal dosimetry in order to optimize the time devoted to pretreatment quality control. At the Alexis-Vautrin center, pretreatment quality controls in IMRT for prostate and head-and-neck treatments were performed for each beam of each patient. These controls were made with an ionisation chamber, which is the reference detector for absolute dose measurement, and with portal dosimetry for the verification of the dose distribution. Statistical process control is a statistical analysis method, originating in industry, used to control and improve the quality of the studied process. It uses graphic tools, such as control charts, to follow up the process and warn the operator in case of failure, and quantitative tools to evaluate the ability of the process to respect guidelines: the capability study. The study was performed on 450 head-and-neck beams and on 100 prostate beams. Control charts of the mean and standard deviation, showing both slow, weak drifts and strong, fast drifts, were established and revealed a special cause that had been introduced (a manual shift of the leaf gap of the multileaf collimator). The correlation between the dose measured at one point with the EPID and with the ionisation chamber was evaluated at more than 97%, and cases of disagreement between the two measurements were identified.
The study demonstrated the feasibility of reducing the time devoted to pretreatment controls by substituting the ionisation chamber measurements with those performed with the EPID, and showed that statistical process control monitoring of the data provides a security guarantee.
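The control-chart mechanics the study relies on reduce to computing a center line and limits, then flagging points outside them. A minimal individuals-chart sketch with invented dose-deviation data:

```python
import statistics

# Shewhart individuals chart: center line at the mean, control limits at
# mean +/- 3 sigma of an in-control baseline. Data values are invented
# dose deviations (%), not the study's measurements.
def control_limits(samples):
    center = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples, lcl, ucl):
    return [x for x in samples if not lcl <= x <= ucl]

baseline = [0.1, -0.2, 0.0, 0.15, -0.05, 0.1, -0.1, 0.05]
lcl, center, ucl = control_limits(baseline)
```

A point like a 5% deviation (e.g. after a manual shift of the leaf gap) would fall outside the limits and signal a special cause.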
Integrated Tools for Future Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, Randy; Saus, Joseph
2013-01-01
Turbine engines are highly complex mechanical systems that are becoming increasingly dependent on control technologies to achieve system performance and safety metrics. However, the contribution of controls to these measurable system objectives is difficult to quantify due to a lack of tools capable of informing the decision makers. This shortcoming hinders technology insertion in the engine design process. NASA Glenn Research Center is developing a Hardware-in-the-Loop (HIL) platform and analysis tool set that will serve as a focal point for new control technologies, especially those related to the hardware development and integration of distributed engine control. The HIL platform is intended to enable rapid and detailed evaluation of new engine control applications, from conceptual design through hardware development, in order to quantify their impact on engine systems. This paper discusses the complex interactions of the control system, within the context of the larger engine system, and how new control technologies are changing that paradigm. The conceptual design of the new HIL platform is then described as a primary tool to address those interactions and how it will help feed the insertion of new technologies into future engine systems.
Exodus - Distributed artificial intelligence for Shuttle firing rooms
NASA Technical Reports Server (NTRS)
Heard, Astrid E.
1990-01-01
This paper describes the Expert System for Operations Distributed Users (EXODUS), a knowledge-based artificial intelligence system developed for the four Firing Rooms at the Kennedy Space Center. EXODUS is used by the Shuttle engineers and test conductors to monitor and control the sequence of tasks required for processing and launching Shuttle vehicles. In this paper, attention is given to the goals and the design of EXODUS, the operational requirements, and the extensibility of the technology.
A Taxonomy of Attacks on the DNP3 Protocol
NASA Astrophysics Data System (ADS)
East, Samuel; Butts, Jonathan; Papa, Mauricio; Shenoi, Sujeet
Distributed Network Protocol (DNP3) is the predominant SCADA protocol in the energy sector - more than 75% of North American electric utilities currently use DNP3 for industrial control applications. This paper presents a taxonomy of attacks on the protocol. The attacks are classified based on targets (control center, outstation devices and network/communication paths) and threat categories (interception, interruption, modification and fabrication). To facilitate risk analysis and mitigation strategies, the attacks are associated with the specific DNP3 protocol layers they exploit. Also, the operational impact of the attacks is categorized in terms of three key SCADA objectives: process confidentiality, process awareness and process control. The attack taxonomy clarifies the nature and scope of the threats to DNP3 systems, and can provide insights into the relative costs and benefits of implementing mitigation strategies.
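The taxonomy's classification axes can be captured in a few lines of code. The sets mirror the targets, threat categories and SCADA objectives named in the abstract, while the `classify` helper and any concrete entries are illustrative, not the paper's catalogue.

```python
# Classification axes from the attack taxonomy (values as named above)
TARGETS = {"control center", "outstation", "network path"}
THREATS = {"interception", "interruption", "modification", "fabrication"}
OBJECTIVES = {"process confidentiality", "process awareness", "process control"}

def classify(target, threat, objective):
    # Reject entries outside the taxonomy's axes
    assert target in TARGETS and threat in THREATS and objective in OBJECTIVES
    return {"target": target, "threat": threat, "impact": objective}

# Illustrative entry: forged outstation responses undermining operator control
example = classify("outstation", "fabrication", "process control")
```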
INcreasing Security and Protection through Infrastructure REsilience: The INSPIRE Project
NASA Astrophysics Data System (ADS)
D'Antonio, Salvatore; Romano, Luigi; Khelil, Abdelmajid; Suri, Neeraj
The INSPIRE project aims at enhancing the European potential in the field of security by ensuring the protection of critical information infrastructures through (a) the identification of their vulnerabilities and (b) the development of innovative techniques for securing networked process control systems. To increase the resilience of such systems INSPIRE will develop traffic engineering algorithms, diagnostic processes and self-reconfigurable architectures along with recovery techniques. Hence, the core idea of the INSPIRE project is to protect critical information infrastructures by appropriately configuring, managing, and securing the communication network which interconnects the distributed control systems. A working prototype will be implemented as a final demonstrator of selected scenarios. Controls/Communication Experts will support project partners in the validation and demonstration activities. INSPIRE will also contribute to standardization process in order to foster multi-operator interoperability and coordinated strategies for securing lifeline systems.
NASA Astrophysics Data System (ADS)
Vallage, Amaury; Klinger, Yann; Grandin, Raphael; Delorme, Arthur; Pierrot-Deseilligny, Marc
2016-04-01
The understanding of earthquake processes and of the interaction of earthquake rupture with Earth's free surface relies on the resolution of the observations. Recent and detailed post-earthquake measurements bring new insights into the shallow mechanical behavior of rupture processes, as it becomes possible to measure and locate the distribution of surficial deformation. The 2013 Mw 7.7 Balochistan earthquake, Pakistan, offers a good opportunity to understand where and why surficial deformation might differ from localized slip at depth. This earthquake ruptured the Hoshab fault over 200 km; the motion was mainly left-lateral, with a small and discontinuous vertical component in the southern part of the rupture. Using images with the finest resolution currently available, we measured the surface displacement amplitude and its orientation at the ground surface (including the numerous tensile cracks). We combined these measurements with the 1:500 scale ground rupture map to focus on the behavior of the frontal rupture in the area where deformation is distributed. Comparison with the orientations of inherited tectonic structures, visible in the older rock formations surrounding the 2013 rupture, shows the control exercised by such structures on the co-seismic rupture distribution. This observation raises the question of how pre-existing tectonic structures in a medium, mapped in several seismically active places around the globe, can control the co-seismic distribution of deformation during earthquakes.
1998-01-14
The Photovoltaic Module 1 Integrated Equipment Assembly (IEA) is moved past Node 1, seen at left, of the International Space Station (ISS) in Kennedy Space Center’s Space Station Processing Facility (SSPF). The IEA will be processed at the SSPF for flight on STS-97, scheduled for launch in April 1999. The IEA is one of four integral units designed to generate, distribute, and store power for the ISS. It will carry solar arrays, power storage batteries, power control units, and a thermal control system. The 16-foot-long, 16,850-pound unit is now undergoing preflight preparations in the SSPF
Statistical process control: a practical application for hospitals.
VanderVeen, L M
1992-01-01
A six-step plan based on using statistics was designed to improve quality in the central processing and distribution department of a 223-bed hospital in Oakland, CA. This article describes how the plan was implemented sequentially, starting with the crucial first step of obtaining administrative support. The QI project succeeded in overcoming beginners' fear of statistics and in training both managers and staff to use inspection checklists, Pareto charts, cause-and-effect diagrams, and control charts. The best outcome of the program was the increased commitment to quality improvement by the members of the department.
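The Pareto chart, one of the tools the department trained on, is just a ranked cumulative-percentage table over defect categories. A minimal sketch with invented category names:

```python
from collections import Counter

# Pareto ordering of defect categories: most frequent first, with the
# cumulative share of all defects (category names are invented).
def pareto(defects):
    counts = Counter(defects)
    total = sum(counts.values())
    ranked, cumulative = [], 0
    for category, n in counts.most_common():
        cumulative += n
        ranked.append((category, n, round(100 * cumulative / total, 1)))
    return ranked

ranked = pareto(["mislabeled"] * 6 + ["late"] * 3 + ["damaged"])
```

Reading the table top-down shows which few categories account for most defects, which is where a QI project focuses first.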
Regulation of distribution network business
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roman, J.; Gomez, T.; Munoz, A.
1999-04-01
The traditional distribution function actually comprises two separate activities: distribution network and retailing. Retailing, which is also termed supply, consists of trading electricity at the wholesale level and selling it to the end users. The distribution network business, or merely distribution, is a natural monopoly and it must be regulated. Increasing attention is presently being paid to the regulation of distribution pricing. Distribution pricing comprises two major tasks: global remuneration of the distribution utility and tariff setting by allocation of the total costs among all the users of the network services. In this paper, the basic concepts for establishing the global remuneration of a distribution utility are presented. A remuneration scheme is proposed which recognizes adequate investment and operation costs, promotes loss reduction, and gives incentives to control the quality-of-service level. Efficient investment and operation costs are calculated by using different types of strategic planning and regression analysis models. Application examples that have been used during the distribution regulation process in Spain are also presented.
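A regression-based cost benchmark of the kind mentioned above can be sketched with one-variable least squares: fit allowed cost against a cost driver such as customers served, then remunerate against the fitted line. The data and the single-driver form are invented simplifications.

```python
# Ordinary least squares for a single cost driver (invented data):
# fitted line gives an "efficient cost" benchmark per utility size.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

customers = [10, 20, 30, 40]        # thousands of customers
costs = [120, 210, 310, 400]        # reported annual cost (invented units)
slope, intercept = fit_line(customers, costs)
```

A utility whose reported cost sits above `intercept + slope * customers` would be flagged as inefficient under such a benchmark.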
NASA Astrophysics Data System (ADS)
Lininger, K. B.; Wohl, E.; Rose, J. R.
2018-03-01
Floodplains accumulate and store organic carbon (OC) and release OC to rivers, but studies of floodplain soil OC come from small rivers or small spatial extents on larger rivers in temperate latitudes. Warming climate is causing substantial changes in geomorphic processes and OC fluxes in high latitude rivers. We investigate geomorphic controls on floodplain soil OC concentrations in active-layer mineral sediment in the Yukon Flats, interior Alaska. We characterize OC along the Yukon River and four tributaries in relation to geomorphic controls at the river basin, segment, and reach scales. Average OC concentration within floodplain soil is 2.8% (median = 2.2%). Statistical analyses indicate that OC varies among river basins, among planform types along a river depending on the geomorphic unit, and among geomorphic units. OC decreases with sample depth, suggesting that most OC accumulates via autochthonous inputs from floodplain vegetation. Floodplain and river characteristics, such as grain size, soil moisture, planform, migration rate, and riverine DOC concentrations, likely influence differences among rivers. Grain size, soil moisture, and age of surface likely influence differences among geomorphic units. Mean OC concentrations vary more among geomorphic units (wetlands = 5.1% versus bars = 2.0%) than among study rivers (Dall River = 3.8% versus Teedrinjik River = 2.3%), suggesting that reach-scale geomorphic processes more strongly control the spatial distribution of OC than basin-scale processes. Investigating differences at the basin and reach scale is necessary to accurately assess the amount and distribution of floodplain soil OC, as well as the geomorphic controls on OC.
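The unit-versus-river comparison above amounts to computing group means under two different groupings and comparing their spreads. The sample values below are invented for illustration (the study's actual means were, e.g., wetlands 5.1% versus bars 2.0%).

```python
from collections import defaultdict

# Mean OC concentration per grouping key (geomorphic unit or river)
def group_means(samples, key):
    groups = defaultdict(list)
    for s in samples:
        groups[s[key]].append(s["oc"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

samples = [  # invented OC concentrations (%)
    {"river": "Dall", "unit": "wetland", "oc": 5.0},
    {"river": "Dall", "unit": "bar", "oc": 2.1},
    {"river": "Teedrinjik", "unit": "wetland", "oc": 5.2},
    {"river": "Teedrinjik", "unit": "bar", "oc": 1.9},
]
by_unit = group_means(samples, "unit")
by_river = group_means(samples, "river")
```

In this toy dataset the spread among geomorphic units is large while the river means barely differ, mirroring the study's conclusion that reach-scale controls dominate.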
Cheng, Xinfeng; Zhang, Min; Xu, Baoguo; Adhikari, Benu; Sun, Jincai
2015-11-01
Ultrasonic processing is a novel and promising technology in food industry. The propagation of ultrasound in a medium generates various physical and chemical effects and these effects have been harnessed to improve the efficiency of various food processing operations. Ultrasound has also been used in food quality control as diagnostic technology. This article provides an overview of recent developments related to the application of ultrasound in low temperature and closely related processes such as freezing, thawing, freeze concentration and freeze drying. The applications of high intensity ultrasound to improve the efficiency of freezing process, to control the size and size distribution of ice crystals and to improve the quality of frozen foods have been discussed in considerable detail. The use of low intensity ultrasound in monitoring the ice content and to monitor the progress of freezing process has also been highlighted.
KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM
NASA Technical Reports Server (NTRS)
Hui, J.
1994-01-01
KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. 
DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.
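The parent/child layering and the "data pipe switch" described in the KNET abstract can be sketched as follows. This is a hedged Python illustration, not KNET's actual ANSI C implementation; the function name `run_session` and the message text are invented for the example.

```python
import os
import sys

# Minimal sketch of KNET's two-layer design: a parent process acts as the
# upper layer facing the local user, a forked child stands in for the lower
# layer that talks to the remote host, and a simple "data pipe switch"
# decides whether the child's output goes to the screen, a local file, or both.

def run_session(to_screen=True, logfile=None):
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                        # child: lower layer (remote-host stand-in)
        os.close(r)
        os.write(w, b"remote: login ok\n")
        os.close(w)
        os._exit(0)
    os.close(w)                         # parent: upper layer (local user side)
    captured = []
    with os.fdopen(r, "r") as remote_out:
        for line in remote_out:         # the pipe switch: screen and/or file
            captured.append(line)
            if to_screen:
                sys.stdout.write(line)
            if logfile is not None:
                logfile.write(line)
    os.waitpid(pid, 0)
    return captured
```

In the real tool the switch also routes keyboard, file, and process input toward the remote side; only the output direction is modeled here.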
40 CFR 763.165 - Manufacture and importation prohibitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) TOXIC SUBSTANCES CONTROL ACT ASBESTOS Prohibition of the Manufacture, Importation, Processing, and Distribution in Commerce of Certain Asbestos-Containing Products; Labeling Requirements § 763.165 Manufacture... following asbestos-containing products, either for use in the United States or for export: flooring felt and...
40 CFR 761.369 - Pre-cleaning the surface.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Pre-cleaning the surface. 761.369 Section 761.369 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT POLYCHLORINATED BIPHENYLS (PCBs) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE...
Code of Federal Regulations, 2010 CFR
2010-04-01
... TREASURY LIQUORS DISTRIBUTION AND USE OF DENATURED ALCOHOL AND RUM Formulas and Statements of Process... article made in accordance with any approved general-use formula prescribed by §§ 20.112 through 20.119... the Office of Management and Budget under control number 1512-0336) ...
NASA Astrophysics Data System (ADS)
Niu, Xiaoliang; Yuan, Fen; Huang, Shanguo; Guo, Bingli; Gu, Wanyi
2011-12-01
A dynamic clustering scheme based on the coordination of management and control is proposed to reduce the network congestion rate and improve the blocking performance of hierarchical routing in multi-layer, multi-region intelligent optical networks. Its implementation relies on mobile agent (MA) technology, which offers efficiency, flexibility, functionality, and scalability. The paper's major contribution is to adjust domains dynamically when the performance of the working network is not in an ideal state. The incorporation of centralized NMS and distributed MA control technology migrates the computing process to control-plane nodes, which relieves the burden on the NMS and improves processing efficiency. Experiments are conducted on the Multi-layer and Multi-region Simulation Platform for Optical Network (MSPON) to assess the performance of the scheme.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. 
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
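The fully factorized Boyen-Koller step mentioned above for local DBN agent belief updating can be illustrated numerically. This is a hedged sketch under simplifying assumptions (two binary state variables, tables invented for the example), not the dissertation's MSDBN implementation: the joint belief is propagated exactly through one time slice, conditioned on evidence, and then projected back onto a product of marginals.

```python
import numpy as np

# One fully factorized Boyen-Koller (BK) update for a two-variable DBN slice.
def bk_step(m1, m2, T, obs_lik):
    """m1, m2: marginals over two binary state variables (assumed independent);
    T[x1, x2, y1, y2] = P(y1, y2 | x1, x2);
    obs_lik[y1, y2] = P(evidence | y1, y2)."""
    joint = np.einsum("i,j,ijkl->kl", m1, m2, T)   # exact one-step propagation
    joint = joint * obs_lik                        # condition on the evidence
    joint = joint / joint.sum()
    # BK projection: keep only the marginals (the fully factorized family)
    return joint.sum(axis=1), joint.sum(axis=0)
```

The projection is what keeps the representation compact: the cost of each update stays linear in the number of variables, at the price of discarding the correlations that exact inference would track.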
A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.
DeSouza, Guilherme N; Kak, Avinash C
2004-10-01
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control Arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding the overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
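The confidence-ranked arbitration described in this abstract reduces to a simple selection rule. A toy sketch follows; the loop names, commands, and confidence values are invented for illustration and are not from the actual robot system.

```python
# Arbitrator sketch: each control loop reports a command together with a
# confidence index derived from its own sensing, and the highest-confidence
# result wins. Slow loops simply report less often; they never block the rest.

def arbitrate(loop_results):
    """loop_results: (loop_name, command, confidence) tuples, one per loop."""
    return max(loop_results, key=lambda r: r[2], default=None)

winner = arbitrate([
    ("coarse_color_tracker",   "approach_object", 0.4),  # coarse, fast loop
    ("stereo_visual_servoing", "align_gripper",   0.9),  # fine, slower loop
])
```

Because each loop runs independently, a loop that has crashed simply contributes no tuple, which is the fault-tolerance property the abstract claims.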
Ejecta Production and Properties
NASA Astrophysics Data System (ADS)
Williams, Robin
2017-06-01
The interaction of an internal shock with the free surface of a dense material leads to the production of jets of particulate material from the surface into its environment. Understanding the processes which control the production of these jets -- both their occurrence, and properties such as the mass, velocity, and particle size distribution of material injected -- has been a topic of active research at AWE for over 50 years. I will discuss the effect of material physics, such as strength and spall, on the production of ejecta, drawing on experimental history and recent calculations, and consider the processes which determine the distribution of particle sizes which result as ejecta jets break up. British Crown Owned Copyright 2017/AWE.
Etching nano-holes in silicon carbide using catalytic platinum nano-particles
NASA Astrophysics Data System (ADS)
Moyen, E.; Wulfhekel, W.; Lee, W.; Leycuras, A.; Nielsch, K.; Gösele, U.; Hanbücken, M.
2006-09-01
The catalytic reaction of platinum during a hydrogen etching process has been used to perform controlled vertical nanopatterning of silicon carbide substrates. A first set of experiments was performed with platinum powder randomly distributed on the SiC surface. Subsequent hydrogen etching in a hot wall reactor caused local atomic hydrogen production at the catalyst resulting in local SiC etching and hole formation. Secondly, a highly regular and monosized distribution of Pt was obtained by sputter deposition of Pt through an Au membrane serving as a contact mask. After the lift-off of the mask, the hydrogen etching revealed the onset of well-controlled vertical patterned holes on the SiC surface.
Two-dimensional thermography image retrieval from zig-zag scanned data with TZ-SCAN
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Yamasaki, Ryohei; Arai, Kohei
2008-10-01
TZ-SCAN is a simple and low-cost thermal imaging device which consists of a single-point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores the data as a CSV file. A 2-D thermal distribution image can be retrieved by using the second quefrency peak calculated from the TZ-SCAN data. An experiment is conducted to confirm the validity of the thermal retrieval algorithm. The experimental result shows sufficient accuracy for 2-D thermal distribution image retrieval.
INO340 telescope control system: middleware requirements, design, and evaluation
NASA Astrophysics Data System (ADS)
Shalchian, Hengameh; Ravanmehr, Reza
2016-07-01
The INO340 Control System (INOCS) is being designed in terms of a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS causes the communication paradigm between its different components to be time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results, performed to evaluate our middleware in order to ensure that it meets our requirements.
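The publish-subscribe paradigm underlying the DDS middleware chosen for INOCS can be sketched minimally. This is not the DDS API: real DDS adds typed topics, QoS policies, and peer discovery, none of which are modeled here; the `Bus` class and topic name are invented for the example.

```python
from collections import defaultdict

# Minimal publish-subscribe bus: publishers and subscribers never reference
# each other directly; they are decoupled through named topics.
class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self._subs[topic]:   # deliver to every subscriber
            callback(sample)

bus = Bus()
received = []
bus.subscribe("telescope/azimuth", received.append)
bus.publish("telescope/azimuth", 123.4)      # publisher knows only the topic
```

The decoupling is what makes the paradigm attractive for time-critical distributed control: components can be added or replaced without touching their peers.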
Decoupling Coupled Constraints Through Utility Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, N; Marden, JR
2014-08-01
Several multiagent systems exemplify the need for establishing distributed control laws that ensure the resulting agents' collective behavior satisfies a given coupled constraint. This technical note focuses on the design of such control laws through a game-theoretic framework. In particular, this technical note provides two systematic methodologies for the design of local agent objective functions that guarantee all resulting Nash equilibria optimize the system-level objective while also satisfying a given coupled constraint. Furthermore, the designed local agent objective functions fit into the framework of state-based potential games. Consequently, one can appeal to existing results in game-theoretic learning to derive a distributed process that guarantees the agents will reach such an equilibrium.
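The design goal above can be illustrated with a toy potential game. In this hedged sketch both agents are given the same penalized system objective (the simplest identical-interest construction, not the state-based design of the technical note itself), so best-response dynamics settle on an action profile that optimizes the system objective subject to the coupled constraint; the objective, penalty weight, and action sets are invented for the example.

```python
ACTIONS = [0, 1, 2]

def utility(a1, a2):
    value = 3 * a1 + 2 * a2                  # system-level objective to maximize
    penalty = 100 * max(0, a1 + a2 - 3)      # coupled constraint: a1 + a2 <= 3
    return value - penalty

def best_response_dynamics(a1=0, a2=0):
    # Agents alternate best responses; in an identical-interest (potential)
    # game this terminates at a Nash equilibrium of the shared utility.
    changed = True
    while changed:
        changed = False
        b1 = max(ACTIONS, key=lambda a: utility(a, a2))
        b2 = max(ACTIONS, key=lambda a: utility(b1, a))
        if (b1, b2) != (a1, a2):
            a1, a2, changed = b1, b2, True
    return a1, a2
```

Here every equilibrium of the shared utility satisfies the constraint because violations are penalized more than any feasible gain, which is the property the note's methodologies establish in general.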
Primary and secondary fragmentation of crystal-bearing intermediate magma
NASA Astrophysics Data System (ADS)
Jones, Thomas J.; McNamara, Keri; Eychenne, Julia; Rust, Alison C.; Cashman, Katharine V.; Scheu, Bettina; Edwards, Robyn
2016-11-01
Crystal-rich intermediate magmas are subjected to both primary and secondary fragmentation processes, each of which may produce texturally distinct tephra. Of particular interest for volcanic hazards is the extent to which each process contributes ash to volcanic plumes. One way to address this question is by fragmenting pyroclasts under controlled conditions. We fragmented pumice samples from Soufriere Hills Volcano (SHV), Montserrat, by three methods: rapid decompression in a shock tube-like apparatus, impact by a falling piston, and milling in a ball mill. Grain size distributions of the products reveal that all three mechanisms produce fractal breakage patterns, and that the fractal dimension increases from a minimum of 2.1 for decompression fragmentation (primary fragmentation) to a maximum of 2.7 by repeated impact (secondary fragmentation). To assess the details of the fragmentation process, we quantified the shape, texture and components of constituent ash particles. Ash shape analysis shows that the axial ratio increases during milling and that particle convexity increases with repeated impacts. We also quantify the extent to which the matrix is separated from the crystals, which shows that secondary processes efficiently remove adhering matrix from crystals, particularly during milling (abrasion). Furthermore, measurements of crystal size distributions before (using x-ray computed tomography) and after (by componentry of individual grain size classes) decompression-driven fragmentation show not only that crystals influence particular size fractions across the total grain size distribution, but also that free crystals are smaller in the fragmented material than in the original pumice clast. Taken together, our results confirm previous work showing both the control of initial texture on the primary fragmentation process and the contributions of secondary processes to ash formation. 
Critically, however, our extension of previous analyses to characterisation of shape, texture and componentry provides new analytical tools that can be used to assess contributions of secondary processes to ash deposits of uncertain or mixed origin. We illustrate this application with examples from SHV deposits.
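The fractal dimensions quoted above (2.1 for decompression, 2.7 for repeated impact) come from power-law grain size distributions, where fractal breakage gives N(>d) ~ d**(-D). A sketch of the estimation on synthetic data (not the SHV measurements) follows; the least-squares log-log fit is one common choice, used here purely for illustration.

```python
import numpy as np

def fractal_dimension(diameters):
    """Estimate D from N(>d) ~ d**(-D): the negative slope of
    log N(>d) against log d over the sample."""
    d = np.sort(np.asarray(diameters, dtype=float))
    n_gt = np.arange(len(d), 0, -1)          # grains with size >= each d
    slope, _ = np.polyfit(np.log(d), np.log(n_gt), 1)
    return -slope

# Synthetic power-law sample with a known dimension, via inverse-CDF sampling.
rng = np.random.default_rng(0)
d_true = 2.5
sample = (1.0 - rng.random(5000)) ** (-1.0 / d_true)
```

A larger D means the distribution is shifted toward fines, consistent with the abstract's finding that repeated impact (secondary fragmentation) raises D relative to decompression.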