The Design of Feedback Control Systems Containing a Saturation Type Nonlinearity
NASA Technical Reports Server (NTRS)
Schmidt, Stanley F.; Harper, Eleanor V.
1960-01-01
This report presents a derivation of the optimum response to a step input for plant transfer functions that have an unstable pole, together with further data on plants with a single zero in the left half of the s-plane. The calculated data are tabulated in normalized form. Optimum control systems are considered; the optimum system is defined as one which keeps the error as small as possible regardless of the input, under the constraint that the input to the plant (or controlled system) is limited. Intuitive arguments show that in the case where only the error can be sensed directly, the optimum system is obtained from the optimum relay, or on-off, solution. References to known solutions are presented. For the case when the system is of the sampled-data type, arguments are presented which indicate that the optimum sampled-data system may be extremely difficult, if not impossible, to realize practically except for very simple plant transfer functions. Two examples of aircraft attitude autopilots are presented, one for a statically stable and the other for a statically unstable airframe. The rate of change of elevator motion is assumed limited for these examples. It is shown that by use of the nonlinear design techniques described in NASA TN D-20 one can obtain near-optimum response for step inputs and reasonable response to sine-wave inputs in either case. The nonlinear design also prevents inputs from driving the system unstable in either case.
Influence of operating conditions on the optimum design of electric vehicle battery cooling plates
NASA Astrophysics Data System (ADS)
Jarrett, Anthony; Kim, Il Yong
2014-01-01
The efficiency of cooling plates for electric vehicle batteries can be improved by optimizing the geometry of internal fluid channels. In practical operation, a cooling plate is exposed to a range of operating conditions dictated by the battery, environment, and driving behaviour. To formulate an efficient cooling plate design process, the sensitivity of the optimum design to each boundary condition is desired: this determines which operating conditions must be represented in the design process, and therefore the complexity of designing for multiple operating conditions. The objective of this study is to determine the influence of different operating conditions on the optimum cooling plate design. Three important performance measures were considered: temperature uniformity, mean temperature, and pressure drop. It was found that of these three, temperature uniformity was most sensitive to the operating conditions, especially the distribution of the input heat flux and the coolant flow rate. An additional focus of the study was the distribution of heat generated by the battery cell: while it is simpler to assume that heat is generated uniformly, this study found that by using an accurate distribution in the design optimization, cooling plate performance could be significantly improved.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloys have a unique capability to return to their original shape after physical deformation upon application of heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse-on time (TON), pulse-off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kgf was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
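The desirability-based ranking described above can be sketched as follows. This is a minimal illustration of Derringer-style desirability functions as commonly used in DFA; the response values, bounds, and weights below are invented for the example and are not the paper's experimental data.

```python
# Illustrative sketch of desirability function analysis (DFA) for
# multi-response optimization. All numbers are hypothetical.

def d_larger_is_better(y, lo, hi, w=1.0):
    """Desirability for a response to maximize (e.g. cutting speed)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** w

def d_smaller_is_better(y, lo, hi, w=1.0):
    """Desirability for a response to minimize (e.g. kerf width, roughness)."""
    if y <= lo:
        return 1.0
    if y >= hi:
        return 0.0
    return ((hi - y) / (hi - lo)) ** w

def composite_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses measured at one machining parameter setting:
ds = [
    d_larger_is_better(2.8, lo=1.0, hi=3.5),      # cutting speed (mm/min)
    d_smaller_is_better(0.30, lo=0.25, hi=0.40),  # kerf width (mm)
    d_smaller_is_better(2.1, lo=1.5, hi=3.0),     # surface roughness Ra (um)
]
D = composite_desirability(ds)  # settings with the largest D are preferred
```

In a DFA study, this composite score is computed for every experimental run, and the run (or fitted setting) with the highest D is taken as the optimum parameter combination.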
Optimized distributed computing environment for mask data preparation
NASA Astrophysics Data System (ADS)
Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung
2005-11-01
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100 nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is applied to non-critical layers as well. The transformation of designed pattern data by OPC operations adds complexity, which causes runtime overhead in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two factors limit the benefit of distributed computing in MDP. First, running each MDP job sequentially with the maximum number of available CPUs is inefficient compared with parallel MDP job execution, owing to the characteristics of the input data. Second, the runtime improvement relative to the invested resources is insufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimum load-balancing environment that increases the utilization of a distributed computing system by assigning an appropriate number of CPUs to each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.
1961-01-01
The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
Williams, C B; Bennett, G L
1995-10-01
A bioeconomic model was developed to predict slaughter end points of different genotypes of feeder cattle, where profit/rotation and profit/day were maximized. Growth, feed intake, and carcass weight and composition were simulated for 17 biological types of steers. Distribution of carcass weight and proportion in four USDA quality and five USDA yield grades were obtained from predicted carcass weights and composition. Average carcass value for each genotype was calculated from these distributions under four carcass pricing systems that varied from value determined on quality grade alone to value determined on yield grade alone. Under profitable market conditions, rotation length was shorter and carcass weights lighter when the producer's goal was maximum profit/day, compared with maximum profit/rotation. A carcass value system based on yield grade alone resulted in greater profit/rotation and in lighter and leaner carcasses than a system based on quality grade alone. High correlations ( > .97) were obtained between breed profits obtained with different sets of input/output prices and carcass price discount weight ranges. This suggests that breed rankings on the basis of breed profits may not be sensitive to changes in input/output market prices. Steers that were on a grower-stocker system had leaner carcasses, heavier optimum carcass weight, greater profits, and less variation in optimum carcass weights between genotypes than steers that were started on a high-energy finishing diet at weaning. Overall results suggest that breed choices may change with different carcass grading and value systems and postweaning production systems. This model has potential to provide decision support in marketing fed cattle.
Numerical analysis of the heat source characteristics of a two-electrode TIG arc
NASA Astrophysics Data System (ADS)
Ogino, Y.; Hirata, Y.; Nomura, K.
2011-06-01
Various kinds of multi-electrode welding processes are used to ensure high productivity in industrial fields such as shipbuilding, automotive manufacturing and pipe fabrication. However, it is difficult to obtain the optimum welding conditions for a specific product, because there are many operating parameters, and because welding phenomena are very complicated. In the present research, the heat source characteristics of a two-electrode TIG arc were numerically investigated using a 3D arc plasma model with a focus on the distance between the two electrodes. The arc plasma shape changed significantly, depending on the electrode spacing. The heat source characteristics, such as the heat input density and the arc pressure distribution, changed significantly when the electrode separation was varied. The maximum arc pressure of the two-electrode TIG arc was much lower than that of a single-electrode TIG. However, the total heat input of the two-electrode TIG arc was nearly constant and was independent of the electrode spacing. These heat source characteristics of the two-electrode TIG arc are useful for controlling the heat input distribution at a low arc pressure. Therefore, these results indicate the possibility of a heat source based on a two-electrode TIG arc that is capable of high heat input at low pressures.
Ultraviolet resources over Northern Eurasia.
Chubarova, Natalia; Zhdanova, Yekaterina
2013-10-05
We propose a new climatology of UV resources over Northern Eurasia which includes assessments of both the detrimental (erythema) and positive (vitamin D synthesis) effects of ultraviolet radiation on human health. The UV resources are defined using several classes and subclasses (UV deficiency, UV optimum, and UV excess) for six different skin types. To better quantify the vitamin D irradiance threshold, we accounted for an open body fraction S as a function of effective air temperature. The spatial and temporal distribution of UV resources was estimated by radiative transfer (RT) modeling (8-stream DISORT RT code) on a 1×1° grid with monthly resolution. For this purpose, special datasets of the main input geophysical parameters (total ozone content, aerosol characteristics, surface UV albedo, UV cloud modification factor) were created for the territory of Northern Eurasia. New approaches were used to retrieve aerosol parameters and the cloud modification factor in the UV spectral region. As a result, the UV resources were obtained for clear-sky and mean cloudy conditions for different skin types. We show that the distribution of UV deficiency, UV optimum and UV excess is regulated by various geophysical parameters (mainly total ozone, cloudiness and open body fraction) and can deviate significantly from a purely latitudinal dependence. We also show that UV optimum conditions can be observed simultaneously for people with different skin types (for example, for skin types 4-5 at the same time in spring over Western Europe). These UV optimum conditions for different skin types occupy a much larger territory over Europe than over Asia. Copyright © 2013 Elsevier B.V. All rights reserved.
User's Manual for Thermal Analysis Program of Axially Grooved Heat Pipe (HTGAP)
NASA Technical Reports Server (NTRS)
Kamotani, Y.
1978-01-01
A computer program that numerically predicts the steady state temperature distribution inside an axially grooved heat pipe wall for a given groove geometry and working fluid under various heat input and output modes is described. The program computes both evaporator and condenser film coefficients. The program is able to handle both axisymmetric and nonaxisymmetric heat transfer cases. Non-axisymmetric heat transfer results either from non-uniform input at the evaporator or non-uniform heat removal from the condenser, or from both. The presence of a liquid pool in the condenser region under one-g condition also causes non-axisymmetric heat transfer, and its effect on the pipe wall temperature distribution is included in the present program. The hydrodynamic aspect of an axially grooved heat pipe is studied in the Groove Analysis Program (GAP). The present thermal analysis program assumes that the GAP program (or other similar programs) is run first so that the heat transport limit and optimum fluid charge of the heat pipe are known a priori.
Optimum systems design with random input and output applied to solar water heating
NASA Astrophysics Data System (ADS)
Abdel-Malek, L. L.
1980-03-01
Solar water heating systems are evaluated. Models were developed to estimate the percentage of a household's energy supplied by the Sun. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help determine the optimum values of the solar water heating system design parameters, i.e., the water tank volume and the collector area.
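As a rough illustration of the birth-and-death approach, the sketch below treats the storage tank as a finite M/M/1/K-type chain: solar collection adds units of hot water at one rate, household demand removes them at another, and the stationary probability that the tank is empty approximates the share of demand falling back on auxiliary heating. All rates and the tank size are hypothetical; the paper's actual model is more detailed.

```python
# Birth-death sketch of a solar storage tank (illustrative assumptions only).
# The tank holds 0..K "units" of solar-heated water; units arrive at rate lam
# (solar collection) and leave at rate mu (household demand). The stationary
# distribution of this M/M/1/K chain gives the probability that demand finds
# the tank empty, i.e. must be served by auxiliary heating.

def stationary_distribution(lam, mu, K):
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]  # unnormalized p_n ~ rho^n
    total = sum(weights)
    return [w / total for w in weights]

def solar_fraction(lam, mu, K):
    """Approximate fraction of demand met by solar = P(tank not empty)."""
    p = stationary_distribution(lam, mu, K)
    return 1.0 - p[0]

# A larger collector (higher lam) or a larger tank (higher K) raises the
# solar fraction; the economic optimum trades this gain against equipment cost.
f_small = solar_fraction(lam=1.0, mu=1.2, K=5)
f_big = solar_fraction(lam=1.5, mu=1.2, K=10)
```

The design question in the paper is then to pick tank volume and collector area (which set K and lam here) where the marginal gain in solar fraction no longer justifies the marginal equipment cost.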
Lavado Contador, J F; Maneta, M; Schnabel, S
2006-10-01
The capability of artificial neural network models to forecast near-surface soil moisture at fine spatial resolution was tested for a 99.5 ha watershed in SW Spain, using several readily obtained digital models of topographic and land-cover variables as inputs and a series of soil moisture measurements as the training data set. The study was designed to determine the potential of the neural network model as a tool for gaining insight into the factors governing soil moisture distribution, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest that the methods are efficient in forecasting soil moisture, serve as a tool for assessing the optimum number of field samples, and highlight the importance of the selected variables in explaining the final map obtained.
A techno-economic model for optimum regeneration of surface mined land
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Manas K.; Sinha, Indra N.
2006-07-01
The recent global scenario in the mineral sector may be characterized by rising competitiveness, increasing production costs and a slump in market prices. This has pushed the mineral sector in general, and that in developing countries in particular, into a situation where the industry has limited capacity to sustain unproductive costs and, more often than not, fails to ensure environmental safeguards during and after mineral extraction. The situation is conspicuous in the Indian coal mining industry, where more than 73% of production comes from surface operations. India has an ambitious power augmentation projection for the coming 10 years, and a phenomenal increase in coal production is proposed from the power-grade coalfields. One of the most likely fall-outs of land degradation due to mining in these areas would be a significant reduction of agricultural and other important land uses. Currently, backfilling costs are perceived as prohibitive, and abandonment of land is the easy way out. This study attempts to provide mine planners with a mathematical model that distributes the generated overburden among defined disposal options while ensuring maximization of backfilled land area at minimum direct and economic cost. Optimization is accomplished by linear programming (LP) for the optimum distribution of each year's generated overburden. The previous year's disposal quantities are processed as one set of inputs to the LP model for generation of the current year's disposal output. Site constants of the LP constraints are calculated from various geo-mining inputs, and the resulting values of the economic vectors, which guide the programming statement, determine the optimal overburden distribution among the defined options. A case example (with a model test run) indicates that the overburden distribution is significantly sensitive to coal seam gradient. The model is universally applicable to the cyclic (shovel-dumper) system of opencast mining of stratified deposits.
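The yearly allocation step can be sketched as follows. The paper solves a linear program; in the special case of a single material-balance constraint with per-option capacity limits, filling the cheapest options first reproduces the LP optimum, which keeps this illustration solver-free. The option names, unit costs, and capacities are invented for the example.

```python
# Hypothetical sketch of one year's overburden allocation. With one
# material-balance constraint (all overburden must go somewhere) and a
# capacity bound per disposal option, the LP optimum is obtained by filling
# the cheapest options first.

def allocate_overburden(total_volume, options):
    """options: list of (name, unit_cost, capacity). Returns name -> volume."""
    plan = {name: 0.0 for name, _, _ in options}
    remaining = total_volume
    for name, cost, cap in sorted(options, key=lambda o: o[1]):
        take = min(cap, remaining)  # fill the cheapest remaining option
        plan[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 1e-9:
        raise ValueError("disposal capacity exhausted")
    return plan

# Invented costs: internal backfill is cheapest, consistent with the goal of
# maximizing backfilled land area at minimum cost.
options = [
    ("internal_backfill", 1.0, 60.0),
    ("external_dump", 2.5, 80.0),
    ("abandoned_void", 1.8, 30.0),
]
plan = allocate_overburden(100.0, options)
```

In the paper's multi-year setting, the resulting plan would feed into the next year's LP as part of its input data, since dump capacities evolve as pits advance.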
Optimal control of diarrhea transmission in a flood evacuation zone
NASA Astrophysics Data System (ADS)
Erwina, N.; Aldila, D.; Soewono, E.
2014-03-01
Evacuation of residents and diarrhea outbreaks in evacuation zones have become serious problems that frequently occur during flood periods. Limited clean water supply and infrastructure in an evacuation zone contribute to a critical spread of diarrhea. Transmission of diarrhea can be reduced by controlling the clean water supply and treating diarrhea patients properly. These treatments require a significant budget, which may not be available in the field. In this paper, transmission of diarrhea in an evacuation zone is modeled with an SIRS model and presented as an optimal control problem with clean water supply and the rate of treated patients as input controls. Existence and stability of equilibrium points and a sensitivity analysis are investigated analytically for constant input controls. The optimum clean water supply and rate of treatment are found using optimal control techniques. Optimal results for the transmission of diarrhea and the corresponding controls during the observation period are simulated numerically. The results show that transmission of diarrhea can be controlled with a proper combination of water supply and rate of treatment within the allowable budget.
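A minimal forward simulation of such an SIRS system with constant controls might look like the sketch below, where u1 stands for the clean-water control (reducing transmission) and u2 for the treatment control (speeding recovery). The parameter values and the exact points where the controls enter are assumptions for illustration; the paper additionally derives time-varying optimal controls.

```python
# Illustrative SIRS model with two constant controls (all parameters invented):
#   u1 in [0, 1): clean water supply, scales down the transmission rate
#   u2 >= 0:      treatment rate, adds to the recovery rate

def simulate_sirs(beta, gamma, xi, u1, u2, days=1000, dt=0.05):
    S, I, R = 0.99, 0.01, 0.0  # population fractions
    for _ in range(int(days / dt)):
        infection = beta * (1.0 - u1) * S * I  # u1 lowers transmission
        recovery = (gamma + u2) * I            # u2 raises recovery
        waning = xi * R                        # immunity loss closes the SIRS loop
        S += dt * (waning - infection)
        I += dt * (infection - recovery)
        R += dt * (recovery - waning)
    return S, I, R

_, I_uncontrolled, _ = simulate_sirs(beta=0.6, gamma=0.2, xi=0.05, u1=0.0, u2=0.0)
_, I_controlled, _ = simulate_sirs(beta=0.6, gamma=0.2, xi=0.05, u1=0.4, u2=0.1)
# the controls should suppress the endemic infection level
```

Running both cases to their endemic equilibria shows the qualitative effect the paper quantifies: combined water-supply and treatment controls lower the long-run infected fraction.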
Calibration of a universal indicated turbulence system
NASA Technical Reports Server (NTRS)
Chapin, W. G.
1977-01-01
Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.
Han, Fang; Wang, Zhijie; Fan, Hong
2017-01-01
This paper proposed a new method to determine neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. First, we proposed a general definition of information efficiency in terms of mutual information and neuronal energy consumption. The energy consumption is composed of two parts: basic neuronal energy consumption and the energy consumption of spike emission. A parameter modeling the relative importance of energy consumption is introduced in the definition of information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution, based on an analysis of how the mutual information and the energy consumption depend on the shape of the firing rate distribution. Furthermore, we developed a rapid algorithm to search for the parameter values of the optimum firing rate distribution function. Finally, using the rapid algorithm, we found that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared with the mutual information, or the basic neuronal energy consumption is relatively large (small), the curve of the optimum firing rate distribution is relatively flat (steep), and the corresponding optimum tuning curve takes a sigmoid form if the stimulus distribution is normal. PMID:28270760
Application of Theodorsen's Theory to Propeller Design
NASA Technical Reports Server (NTRS)
Crigler, John L
1948-01-01
A theoretical analysis is presented for obtaining, by use of Theodorsen's propeller theory, the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.
Application of Theodorsen's theory to propeller design
NASA Technical Reports Server (NTRS)
Crigler, John L
1949-01-01
A theoretical analysis is presented for obtaining, by use of Theodorsen's propeller theory, the load distribution along a propeller radius to give the optimum propeller efficiency for any design condition. The efficiencies realized by designing for the optimum load distribution are given in graphs, and the optimum efficiency for any design condition may be read directly from the graph without any laborious calculations. Examples are included to illustrate the method of obtaining the optimum load distributions for both single-rotating and dual-rotating propellers.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design so as to allow conclusive comparisons of model predictions with experimental output in model assessment. Classical experimental design methods are more suitable for phenomenon discovery and may result in a subjective, expensive, time-consuming and ineffective design that adversely impacts these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function measuring the similarity of the two distributions. A simulated annealing algorithm is used to find optimal values of the input variables by minimizing or maximizing the expected cross entropy. The data measured after testing at the optimum input values are used to update the distribution of the experimental output using Bayes' theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides an effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
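The simulated annealing step can be sketched on a toy problem: choose a scalar experiment input x so that the model-predicted output distribution best matches a reference distribution. With both distributions Gaussian of equal variance, the cross-entropy-style distance reduces to a squared difference of means. The model, target, and annealing schedule below are invented for illustration and are not the paper's structural examples.

```python
# Toy simulated-annealing search for an experiment input (all details invented).
import math
import random

def kl_gauss(mu1, mu2, sigma=1.0):
    """KL divergence between N(mu1, sigma^2) and N(mu2, sigma^2)."""
    return (mu1 - mu2) ** 2 / (2.0 * sigma ** 2)

def model_mean(x):
    return 2.0 * x + 1.0  # hypothetical model-prediction mean at input x

TARGET_MEAN = 5.0  # mean of the (hypothetical) observed output distribution

def anneal(lo=-10.0, hi=10.0, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = x
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6          # linear cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        dE = (kl_gauss(model_mean(cand), TARGET_MEAN)
              - kl_gauss(model_mean(x), TARGET_MEAN))
        if dE < 0 or rng.random() < math.exp(-dE / t):  # Metropolis acceptance
            x = cand
        if kl_gauss(model_mean(x), TARGET_MEAN) < kl_gauss(model_mean(best), TARGET_MEAN):
            best = x
    return best

x_opt = anneal()  # should approach x = 2, where model_mean(x) = TARGET_MEAN
```

In the paper's adaptive loop, the test conducted at the chosen input would then update the experimental output distribution via Bayes' theorem before the next design iteration.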
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
The assessment of UV resources over Northern Eurasia
NASA Astrophysics Data System (ADS)
Chubarova, Natalia; Zhdanova, Yekaterina
2013-05-01
The spatial and temporal distribution of UV resources over Northern Eurasia was assessed using RT modeling (8-stream DISORT RT code) on a 1×1 degree grid with monthly resolution. For this purpose a special dataset of the main input geophysical parameters (total ozone content, aerosol characteristics, surface UV albedo, and UV cloud modification factor) was developed. To define the UV resources, both erythemally weighted and vitamin-D-weighted irradiances were used. In order to better quantify the vitamin D irradiance threshold, we accounted for a body exposure fraction S as a function of surface effective temperature. The UV resources are defined using several classes and subclasses: UV deficiency, UV optimum, and UV excess. They were evaluated for clear and typical cloudy conditions for different skin types. We show that for typical cloudy conditions in winter (January) there are only a few regions in Europe, in the south of Spain (south of 43°N), with UV optimum conditions for people with skin type 2, and no such conditions for people with skin type 4. In summer (July) UV optimum for skin type 2 is observed north of 63°N, with a boundary biased towards higher latitudes in the east, while for skin type 4 these conditions are observed over most of the territory of Northern Eurasia.
Optimal actuator location within a morphing wing scissor mechanism configuration
NASA Astrophysics Data System (ADS)
Joo, James J.; Sanders, Brian; Johnson, Terrence; Frecker, Mary I.
2006-03-01
In this paper, the optimal location of a distributed network of actuators within a scissor wing mechanism is investigated. The analysis begins by developing a mechanical understanding of a single-cell representation of the mechanism. This cell contains four linkages connected by pin joints, a single actuator, two springs representing the bidirectional behavior of a flexible skin, and an external load. Equilibrium equations are developed using static analysis and the principle of virtual work. An objective function is developed to maximize the efficiency of the unit cell model, defined as useful work over input work. Two constraints are imposed on this problem. The first is placed on the force transferred from the external source to the actuator, which should be less than the blocked actuator force. The other requires the ratio of output displacement to input displacement, i.e., the geometrical advantage (GA) of the cell, to be larger than a prescribed value. Sequential quadratic programming is used to solve the optimization problem. This process provides a systematic approach to identifying the optimum location of an actuator, avoiding selection by trial and error. Preliminary results show that optimum actuator locations can be selected from feasible regions according to the requirements of the problem, such as a higher GA, a higher efficiency, or a smaller force transferred from the external load. Results include analysis of single- and multiple-cell wing structures and some experimental comparisons.
Optimum Design of LLC Resonant Converter using Inductance Ratio (Lm/Lr)
NASA Astrophysics Data System (ADS)
Palle, Kowstubha; Krishnaveni, K.; Ramesh Reddy, Kolli
2017-06-01
The main benefits of the LLC resonant dc/dc converter over conventional series and parallel resonant converters are its light-load regulation, lower circulating currents, larger bandwidth for zero-voltage switching, and reduced tuning of the switching frequency for a controlled output. A unique analytical tool, called fundamental harmonic approximation with peak-gain adjustment, is used for designing the converter. In this paper, an optimum design of the converter is proposed by considering three different design criteria with different values of the inductance ratio (Lm/Lr) to achieve good efficiency at high input voltage. The optimum design includes analysis of the operating range, switching-frequency range, primary-side switch losses, and stability. The analysis is carried out by simulation using software tools such as MATLAB and PSIM. The performance of the optimized design is demonstrated for a design specification of 12 V, 5 A output operating over an input voltage range of 300-400 V using the FSFR2100 IC of Texas Instruments.
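The role of the inductance ratio can be seen in the standard fundamental-harmonic-approximation gain expression, sketched below as a function of normalized frequency fn = fs/fr, Ln = Lm/Lr, and quality factor Q. This is the generic FHA formula, not the paper's peak-gain-adjustment procedure.

```python
# Standard FHA voltage gain of an LLC resonant tank (generic textbook form):
#   fn = fs/fr (normalized switching frequency), Ln = Lm/Lr, Q = Zr/Rac.
import math

def llc_gain(fn, Ln, Q):
    real = 1.0 + (1.0 / Ln) * (1.0 - 1.0 / fn**2)
    imag = Q * (fn - 1.0 / fn)
    return 1.0 / math.hypot(real, imag)

# At resonance (fn = 1) the gain is unity regardless of Ln and Q, which is why
# LLC designs regulate around fr; below resonance the gain rises above unity,
# and a larger Ln flattens the curve, trading peak gain (hold-up capability)
# against lower magnetizing circulating current.
g_res = llc_gain(1.0, Ln=5.0, Q=0.4)
g_boost = llc_gain(0.7, Ln=5.0, Q=0.4)
```

Sweeping Ln in this expression reproduces the design trade-off the paper optimizes: small Ln gives high boost gain but large circulating current, large Ln the reverse.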
Degree Correlations Optimize Neuronal Network Sensitivity to Sub-Threshold Stimuli
Schmeltzer, Christian; Kihara, Alexandre Hiroaki; Sokolov, Igor Michailovitsch; Rüdiger, Sten
2015-01-01
Information processing in the brain crucially depends on the topology of the neuronal connections. We investigate how the topology influences the response of a population of leaky integrate-and-fire neurons to a stimulus. We devise a method to calculate firing rates from a self-consistent system of equations taking into account the degree distribution and degree correlations in the network. We show that assortative degree correlations strongly improve the sensitivity for weak stimuli and propose that such networks possess an advantage in signal processing. We moreover find that there exists an optimum in assortativity at an intermediate level leading to a maximum in input/output mutual information. PMID:26115374
Jahantigh, Nabi; Keshavarz, Ali; Mirzaei, Masoud
2015-01-01
The aim of this study is to determine the optimum parameters of a hybrid heating system, such as the temperature and surface area of a radiant heater and the vent area, to achieve thermal comfort conditions. A design-of-experiments (DOE) factorial method is used to determine the optimum values of the input parameters. A 3D model of a virtual standing thermal manikin with realistic dimensions is considered. The continuity, momentum, energy and species equations for turbulent flow, together with a physiological equation for thermal comfort, are solved numerically to study the heat, moisture and flow fields. The RNG k-ε model is used for turbulence modeling, and the discrete ordinates (DO) method is used for radiation effects. The numerical results are in good agreement with experimental data reported in the literature. The effect of various combinations of inlet parameters on thermal comfort is considered; according to the Pareto graph, combinations that have a significant effect on thermal comfort while requiring no additional energy can serve as useful design tools. The hybrid system also produces a more symmetrical velocity distribution around the manikin.
Optimum performance and potential flow field of hovering rotors
NASA Technical Reports Server (NTRS)
Wu, J. C.; Sigman, R. K.
1975-01-01
Rotor and propeller performance and induced potential flowfields were studied on the basis of a rotating actuator disk concept, with special emphasis on rotors hovering out of ground effect. A new theory for the optimum performance of rotors hovering OGE is developed and presented. An extended theory for the optimum performance of rotors and propellers in axial motion is also presented. Numerical results are presented for the optimum distributions of blade-bound circulation together with axial inflow and ultimate wake velocities for the hovering rotor over the range of thrust coefficient of interest in rotorcraft applications. Shapes of the stream tubes and of the velocities in the slipstream are obtained, using available methods, for optimum and off-optimum circulation distributions for rotors hovering in and out of ground effect. A number of explicit formulae useful in computing rotor and propeller induced flows are presented for stream functions and velocities due to distributions of circular vortices over axi-symmetric surfaces.
Optimum Suction Distribution for Transition Control
NASA Technical Reports Server (NTRS)
Balakumar, P.; Hall, P.
1996-01-01
The optimum suction distribution which gives the longest laminar region for a given total suction is computed. The goal is to provide the designer with a method for finding the best suction distribution subject to an overall constraint on the suction. We formulate the problem using the Lagrange multiplier method with constraints. The resulting nonlinear system of equations is solved using the Newton-Raphson technique. The computations are performed for a Blasius boundary layer on a flat plate and for crossflow cases. For the Blasius boundary layer, the optimum suction distribution peaks upstream of the maximum growth rate region and remains flat in the middle before decreasing to zero at the transition point. For the stationary and travelling crossflow instabilities, the optimum suction peaks upstream of the maximum growth rate region and decreases gradually to zero.
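The Lagrange-multiplier-plus-Newton-Raphson structure of such a constrained optimum can be illustrated on a toy surrogate: a quadratic benefit model with an equality constraint on total suction. This is not the boundary-layer formulation of the paper; the coefficients a and b are hypothetical.

```python
def optimal_allocation(a, b, total):
    """Minimize sum(a_i * s_i^2 - b_i * s_i) subject to sum(s_i) = total.

    Stationarity of the Lagrangian gives 2*a_i*s_i - b_i + lam = 0, so
    s_i = (b_i - lam) / (2*a_i); the scalar multiplier lam is found by
    Newton-Raphson on g(lam) = sum(s_i(lam)) - total.
    """
    lam = 0.0
    for _ in range(50):
        g = sum((bi - lam) / (2.0 * ai) for ai, bi in zip(a, b)) - total
        dg = sum(-1.0 / (2.0 * ai) for ai in a)
        step = g / dg
        lam -= step
        if abs(step) < 1e-12:
            break
    return [(bi - lam) / (2.0 * ai) for ai, bi in zip(a, b)]
```

Because g is linear in lam for this quadratic model, Newton-Raphson converges in one step; the nonlinear system in the paper requires the full iteration.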
NASA Astrophysics Data System (ADS)
Min'kov, L. L.; Shrager, É. R.
2015-03-01
A study has been made of the optimum distribution of dispersed-metal particles in a solid-propellant charge with a cylindrical central channel, firmly fastened to the case, and the efficiency of combustion of this metal has been analyzed. Consideration has been given to the influence of the dynamic nonequilibrium of the two-phase flow on the optimum distribution of metal particles in the indicated charge, in the approximation of a one-dimensional flow field.
NASA Astrophysics Data System (ADS)
Behrens, Melanie K.; Pahnke, Katharina; Paffrath, Ronja; Schnetger, Bernhard; Brumsack, Hans-Jürgen
2018-03-01
Recent studies suggest that transport and water mass mixing may play a dominant role in controlling the distribution of dissolved rare earth element concentrations ([REE]) at least in parts of the North and South Atlantic and the Pacific Southern Ocean. Here we report vertically and spatially high-resolution profiles of dissolved REE concentrations ([REE]) along a NW-SE transect in the West Pacific and examine the processes affecting the [REE] distributions in this area. Surface water REE patterns reveal sources of trace element (TE) input near South Korea and in the tropical equatorial West Pacific. Positive europium anomalies and middle REE enrichments in surface and subsurface waters are indicative of TE input from volcanic islands and fingerprint in detail small-scale equatorial zonal eastward transport of TEs to the iron-limited tropical East Pacific. The low [REE] of North and South Pacific Tropical Waters and Antarctic Intermediate Water are a long-range (i.e., preformed) laterally advected signal, whereas increasing [REE] with depth within North Pacific Intermediate Water result from release from particles. Optimum multiparameter analysis of deep to bottom waters indicates a dominant control of lateral transport and mixing on [REE] at the depth of Lower Circumpolar Deep Water (≥3000 m water depth; ∼75-100% explained by water mass mixing), allowing the northward tracing of LCDW to ∼28°N in the Northwest Pacific. In contrast, scavenging in the hydrothermal plumes of the Lau Basin and Tonga-Fiji area at 1500-2000 m water depth leads to [REE] deficits (∼40-60% removal) and marked REE fractionation in the tropical West Pacific. Overall, our data provide evidence for active trace element input both near South Korea and Papua New Guinea, and for a strong lateral transport component in the distribution of dissolved REEs in large parts of the West Pacific.
Optimum sensitivity derivatives of objective functions in nonlinear programming
NASA Technical Reports Server (NTRS)
Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.
1983-01-01
The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived, and its application to linear programming is presented.
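The first-order result in play here (the envelope theorem: the sensitivity of the optimum objective to a problem parameter equals the partial derivative of the Lagrangian, obtainable from the multipliers without second derivatives) can be checked on a one-variable toy problem. The problem below is hypothetical, not taken from the report.

```python
def optimum_and_multiplier(p):
    """Minimize f(x) = (x - 1)^2 subject to x = p.
    The optimum is x = p; stationarity 2*(x - 1) - lam = 0 gives the multiplier."""
    x = p
    lam = 2.0 * (x - 1.0)
    return (x - 1.0) ** 2, lam

def sensitivity_fd(p, h=1e-6):
    """Finite-difference check of dF*/dp, the optimum objective sensitivity."""
    fp = optimum_and_multiplier(p + h)[0]
    fm = optimum_and_multiplier(p - h)[0]
    return (fp - fm) / (2.0 * h)
```

Here dF*/dp equals the multiplier lam exactly, so the sensitivity comes from first-order information alone.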
Control Design Strategies to Enhance Long-Term Aircraft Structural Integrity
NASA Technical Reports Server (NTRS)
Newman, Brett A.
1999-01-01
Over the operational lifetime of both military and civil aircraft, structural components are exposed to hundreds of thousands of low-stress repetitive load cycles and less frequent but higher-stress transient loads originating from maneuvering flight and atmospheric gusts. Micro-material imperfections in the structure, such as cracks and debonded laminates, expand and grow in this environment, reducing the structural integrity and shortening the life of the airframe. The extreme costs associated with refurbishment of critical load-bearing structural components in a large fleet, or with replacing the fleet outright with newer models, indicate that alternative solutions for life extension of the airframe structure are highly desirable. Increased levels of operational safety and reliability are also important factors influencing the desirability of such solutions. One area having significant potential for impacting crack growth/fatigue damage reduction and structural life extension is flight control. To modify the airframe response dynamics arising from command inputs and gust disturbances, feedback loops are routinely applied to vehicles. A dexterous flight control system architecture senses key vehicle motions and generates critical forces/moments at multiple points distributed throughout the airframe to elicit the desired motion characteristics. In principle, these same control loops can be utilized to influence the level of exposure to harmful loads during flight on structural components. Project objectives are to investigate and/or assess the leverage control has on reducing fatigue damage and enhancing long-term structural integrity, without degrading attitude control and trajectory guidance performance levels. In particular, efforts have focused on the effects inner loop control parameters and architectures have on fatigue damage rate.
To complete this research, an actively controlled flexible aircraft model and a new state space modeling procedure for crack growth have been utilized. Analysis of the analytical state space model for crack growth revealed the critical mathematical factors, and hence the physical mechanism they represent, that influenced high rates of airframe crack growth. The crack model was then exercised with simple load inputs to uncover and expose key crack growth behavior. To characterize crack growth behavior, both "short-term" laboratory specimen test type inputs and "long-term" operational flight type inputs were considered. Harmonic loading with a single overload revealed typical exponential crack growth behavior until the overload application, after which time the crack growth was retarded for a period of time depending on the overload strength. An optimum overload strength was identified which leads to maximum retardation of crack growth. Harmonic loading with a repeated overload of varying strength and frequency again revealed an optimum overload trait for maximizing growth retardation. The optimum overload strength ratio lies near the range of 2 to 3 with dependency on frequency. Experimental data was found to correlate well with the analytical predictions.
A Simulation Model Of A Picture Archival And Communication System
NASA Astrophysics Data System (ADS)
D'Silva, Vijay; Perros, Harry; Stockbridge, Chris
1988-06-01
A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated: a high-speed magnetic disk system for short-term storage, and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these the following inputs were calculated: the size of short-term storage necessary, the amount of long-term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, DBMS time, and diagnosis think times. Plots give the values of input speed and device performance that are sufficient to achieve subsecond image retrieval times.
Applications of Optimal Building Energy System Selection and Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Stadler, Michael; Siddiqui, Afzal
2011-04-01
Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public-access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
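DER-CAM itself is a mixed integer linear program; a brute-force caricature of the selection-plus-dispatch idea (enumerate equipment subsets, then dispatch the cheapest available source each hour) can be sketched as follows. The technology data, prices, and function names are invented for illustration, not drawn from DER-CAM.

```python
from itertools import product

def best_fleet(loads, grid_price, techs):
    """Enumerate technology subsets and pick the cheapest total cost.

    loads: hourly demand (kWh); grid_price: $/kWh from the utility;
    techs: list of (name, capital_cost, capacity_kWh, variable_cost),
    assumed pre-sorted by increasing variable cost.
    Returns (total_cost, adoption_mask).
    """
    best = None
    for mask in product([0, 1], repeat=len(techs)):
        capital = sum(t[1] for t, on in zip(techs, mask) if on)
        operating = 0.0
        for load in loads:
            remaining = load
            # serve demand from owned technologies that beat the grid price
            for t, on in zip(techs, mask):
                if on and t[3] < grid_price:
                    use = min(remaining, t[2])
                    operating += use * t[3]
                    remaining -= use
            operating += remaining * grid_price  # residual bought from grid
        total = capital + operating
        if best is None or total < best[0]:
            best = (total, mask)
    return best
```

A real MILP handles the scheduling and part-load constraints simultaneously; this enumeration only conveys the structure of the decision.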
Optimal shield mass distribution for space radiation protection
NASA Technical Reports Server (NTRS)
Billings, M. P.
1972-01-01
Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in computer program SWORD for dose evaluation in complex geometry, and iteratively calculating the optimum distribution for (minimum) shield mass satisfying multiple acute and protected dose constraints associated with each of several body organs.
Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwirth, P.N.; Lee, T.J.; Burne, R.V.
1993-03-01
A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3, combined in color, represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas.
Conceptual Design and Optimal Power Control Strategy for AN Eco-Friendly Hybrid Vehicle
NASA Astrophysics Data System (ADS)
Nasiri, N. Mir; Chieng, Frederick T. A.
2011-06-01
This paper presents a new concept for a hybrid vehicle using a torque- and speed-splitting technique. It is implemented by the newly developed controller in combination with a two-degree-of-freedom epicyclic gear transmission. This approach enables optimization of the power split between the less powerful electric motor and the more powerful engine while driving a car load. The power split is fundamentally a dual-energy integration mechanism, as it is implemented by an epicyclic gear transmission that has two inputs and one output for proper power distribution. The developed power split control system manages the operation of both inputs to produce a known output while maintaining optimum operating efficiency of the internal combustion engine and the electric motor. This system has huge potential, as it is possible to integrate all the features of hybrid vehicles known to date, such as regenerative braking, series hybrid, parallel hybrid, series/parallel hybrid, and even complex (bidirectional) hybrid configurations. By using the new power split system it is possible to further reduce fuel consumption and increase overall efficiency.
Design optimum frac jobs using virtual intelligence techniques
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam
2000-10-01
Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data include wellbore configuration and reservoir characteristics such as porosity, permeability, stress, and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process are fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration, and frac job schedule. Some of the parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volume, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer (for example, a certain propped frac length) and provides the details of the optimum frac design that will result in the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters.
These networks will be used as the fitness function for a genetic algorithm routine that will search for the best combination of the design parameters for the frac job. The genetic algorithm will search through the entire solution space and identify the optimal combination of parameters to be used in the design process. Considering the complexity of this task this methodology converges relatively fast, providing the engineer with several near-optimum scenarios for the frac job design. These scenarios, which can be achieved in just a minute or two, can be valuable initial points for the engineer to start his/her design job and save him/her hours of runs on the simulator.
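A minimal real-coded genetic algorithm of the kind described (a population search over continuous design parameters against a surrogate fitness function) might look like the sketch below. Here a simple quadratic stands in for the trained neural-network ensemble; the operators and parameter values are illustrative choices, not those of the paper.

```python
import random

def run_ga(fitness, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: elitism, blend (midpoint) crossover of two
    parents drawn from the fittest individuals, and Gaussian mutation of
    one gene, clipped to the variable bounds. Returns the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        new = scored[:2]  # carry the two best forward unchanged
        while len(new) < pop_size:
            a, b = rng.sample(scored[:10], 2)          # parents from the elite
            child = [(x + y) / 2.0 for x, y in zip(a, b)]
            i = rng.randrange(len(child))              # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            new.append(child)
        pop = new
    return max(pop, key=fitness)
```

With a smooth surrogate, a few dozen generations suffice to land near the optimum, which mirrors the "several near-optimum scenarios in a minute or two" behavior the paper describes.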
Optimizations of Human Restraint Systems for Short-Period Acceleration
NASA Technical Reports Server (NTRS)
Payne, P. R.
1963-01-01
A restraint system's main function is to restrain its occupant when his vehicle is subjected to acceleration. If the restraint system is rigid and well-fitting (to eliminate slack) then it will transmit the vehicle acceleration to its occupant without modifying it in any way. Few present-day restraint systems are stiff enough to give this one-to-one transmission characteristic, and depending upon their dynamic characteristics and the nature of the vehicle's acceleration-time history, they will either magnify or attenuate the acceleration. Obviously an optimum restraint system will give maximum attenuation of an input acceleration. In the general case of an arbitrary acceleration input, a computer must be used to determine the optimum dynamic characteristics for the restraint system. Analytical solutions can be obtained for certain simple cases, however, and these cases are considered in this paper, after the concept of dynamic models of the human body is introduced. The paper concludes with a description of an analog computer specially developed for the Air Force to handle completely general mechanical restraint optimization programs of this type, where the acceleration input may be any arbitrary function of time.
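One of the simple analytically tractable cases, a single-degree-of-freedom spring-damper restraint under a half-sine vehicle acceleration pulse, can be integrated numerically to show the stiff-transmits / soft-attenuates behavior described above. The model and all parameter values are illustrative, not taken from the paper.

```python
import math

def occupant_peak_accel(omega_n, zeta, pulse_peak, pulse_dur, dt=1e-4, t_end=1.0):
    """Peak absolute occupant acceleration for a 1-DOF restraint model.

    Relative motion x = x_occupant - x_vehicle obeys
        x'' + 2*zeta*omega_n*x' + omega_n^2 * x = -a_base(t),
    and the occupant's absolute acceleration is x'' + a_base
    = -(2*zeta*omega_n*x' + omega_n^2 * x).  Semi-implicit Euler integration.
    """
    x = v = 0.0
    t = peak = 0.0
    while t < t_end:
        a_base = pulse_peak * math.sin(math.pi * t / pulse_dur) if t < pulse_dur else 0.0
        a_rel = -a_base - 2.0 * zeta * omega_n * v - omega_n ** 2 * x
        v += a_rel * dt
        x += v * dt
        occ = a_rel + a_base  # occupant absolute acceleration
        peak = max(peak, abs(occ))
        t += dt
    return peak
```

A very stiff restraint (natural frequency far above the pulse content) transmits the pulse nearly one-to-one, while a soft restraint sees the short pulse as an impulse and attenuates it strongly.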
Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan
2017-01-01
Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.
Innovations in Basic Flight Training for the Indonesian Air Force
1990-12-01
...microeconomic theory that could approximate the optimum mix of training hours between an aircraft and a simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor...
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamrick, Todd
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
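The conventional MSE expression referred to here is Teale's equation; with assumed (hypothetical) response functions for torque and rate of penetration as functions of Weight on Bit, the single-parameter minimization can be sketched with a simple grid search. The response functions below are stand-ins for the interdependencies derived in the work.

```python
import math

def mse(wob, torque, rpm, rop, bit_area):
    """Teale's mechanical specific energy (consistent units):
    MSE = WOB/A + 2*pi*RPM*T / (A*ROP)."""
    return wob / bit_area + 2.0 * math.pi * rpm * torque / (bit_area * rop)

def best_wob(torque_of_wob, rop_of_wob, rpm, bit_area, wob_grid):
    """Grid search for the Weight on Bit minimizing MSE, given assumed
    torque and ROP responses to WOB."""
    return min(wob_grid,
               key=lambda w: mse(w, torque_of_wob(w), rpm, rop_of_wob(w), bit_area))
```

With an ROP response that founders at high WOB, the MSE curve has an interior minimum, which is the behavior the optimization exploits.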
NASA Astrophysics Data System (ADS)
Biswas, G.; Kumari, M.; Adhikari, K.; Dutta, S.
2017-12-01
Fluoride pollution in groundwater is a major concern in rural areas. The flower petal of Shorea robusta, commonly known as the sal tree, is used in the present study, both in its native form and in a Ca-impregnated activated form, to remove excess fluoride from simulated wastewater. Response surface methodology (RSM) was used for experimental design and for analyzing the optimum conditions for carbonization vis-à-vis calcium impregnation in preparing the adsorbent. During carbonization, temperature, time, and the weight ratio of calcium chloride to sal flower petal (SFP) were considered as input factors, and percentage removal of fluoride as the response. The optimum carbonization conditions were obtained as temperature, 500 °C; time, 1 h; and weight ratio, 2.5; the sample prepared under these conditions has been termed calcium-impregnated carbonized sal flower petal (CCSFP). The optimum condition as analyzed by the one-factor-at-a-time (OFAT) method is initial fluoride concentration, 2.91 mg/L; pH 3; and adsorbent dose, 4 g/L. CCSFP shows a maximum removal of 98.5% at this condition. RSM was also used to find the optimum condition for defluoridation, considering initial concentration, pH, and adsorbent dose as input parameters. The optimum condition as analyzed by RSM is initial concentration, 5 mg/L; pH 3.5; and adsorbent dose, 2 g/L. Kinetic and equilibrium data follow the Ho pseudo-second-order kinetic model and the Freundlich isotherm model, respectively. The adsorption capacity of CCSFP has been found to be 5.465 mg/g. At the optimized condition, CCSFP efficiently removed fluoride (80.4%) from groundwater collected from Bankura district in West Bengal, a fluoride-contaminated province in India.
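The pseudo-second-order fit mentioned above uses the standard Ho linearization, t/q_t = 1/(k2*qe^2) + t/qe, so qe and k2 fall out of a straight-line fit of t/q_t against t. The routine below is a generic least-squares sketch (tested on synthetic data, not the study's measurements).

```python
def fit_pseudo_second_order(times, q):
    """Fit Ho's pseudo-second-order model by linear least squares on
    t/q = 1/(k2*qe^2) + t/qe.  Returns (qe, k2)."""
    ys = [t / qi for t, qi in zip(times, q)]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    intercept = my - slope * mx
    qe = 1.0 / slope                 # slope = 1/qe
    k2 = 1.0 / (intercept * qe ** 2)  # intercept = 1/(k2*qe^2)
    return qe, k2
```

Running the fit on data generated from the model itself recovers the input parameters exactly, which is a useful sanity check before applying it to measurements.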
FISHER'S GEOMETRIC MODEL WITH A MOVING OPTIMUM
Matuszewski, Sebastian; Hermisson, Joachim; Kopp, Michael
2014-01-01
Fisher's geometric model has been widely used to study the effects of pleiotropy and organismic complexity on phenotypic adaptation. Here, we study a version of Fisher's model in which a population adapts to a gradually moving optimum. Key parameters are the rate of environmental change, the dimensionality of phenotype space, and the patterns of mutational and selectional correlations. We focus on the distribution of adaptive substitutions, that is, the multivariate distribution of the phenotypic effects of fixed beneficial mutations. Our main results are based on an “adaptive-walk approximation,” which is checked against individual-based simulations. We find that (1) the distribution of adaptive substitutions is strongly affected by the ecological dynamics and largely depends on a single composite parameter γ, which scales the rate of environmental change by the “adaptive potential” of the population; (2) the distribution of adaptive substitution reflects the shape of the fitness landscape if the environment changes slowly, whereas it mirrors the distribution of new mutations if the environment changes fast; (3) in contrast to classical models of adaptation assuming a constant optimum, with a moving optimum, more complex organisms evolve via larger adaptive steps. PMID:24898080
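The adaptive-walk caricature of adaptation to a moving optimum — accept a random mutation only if it brings the phenotype closer to the optimum while the optimum drifts at a constant rate — is easy to simulate. The sketch below is a greedy simplification of the paper's approximation, and all parameter values are arbitrary.

```python
import math
import random

def adaptive_walk(n_dim=2, rate=0.01, sigma=0.05, steps=2000, seed=0):
    """Greedy adaptive walk toward a moving optimum.

    Each step the optimum moves by `rate` along axis 0; a candidate mutation
    drawn from an isotropic Gaussian (sd = sigma) is fixed only if it reduces
    the distance to the optimum.  Returns the final lag distance."""
    rng = random.Random(seed)
    z = [0.0] * n_dim
    opt = [0.0] * n_dim
    for _ in range(steps):
        opt[0] += rate
        cand = [zi + rng.gauss(0.0, sigma) for zi in z]
        if math.dist(cand, opt) < math.dist(z, opt):
            z = cand
    return math.dist(z, opt)
```

When the supply of beneficial mutations outpaces the environmental change, the population settles at a small equilibrium lag behind the optimum rather than falling steadily further behind.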
Optimum fiber distribution in singlewall corrugated fiberboard
Millard W. Johnson; Thomas J. Urbanik; William E. Denniston
1979-01-01
Determining optimum distribution of fiber through rational design of corrugated fiberboard could result in significant reductions in fiber required to meet end-use conditions, with subsequent reductions in price pressure and extension of the softwood timber supply. A theory of thin plates under large deformations is developed that is both kinematically and physically...
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Ramelan, A. H.; Wardoyo, D. T.; Ichsan, S.; Kristiawan, Y. R.
2018-03-01
The utilization and modification of silica from rice straw as the main ingredient of an adsorbent has been studied. The aim of this study was to determine the optimum composition of PVA (polyvinyl alcohol):silica to produce adsorbents with excellent pore characteristics and optimum adsorption efficiency, and to determine the optimum pH for methyl yellow adsorption. X-ray fluorescence (XRF) analysis showed that straw ash contains 82.12% silica (SiO2). Surface area analyzer (SAA) analysis showed an optimum composition ratio of 5:5 PVA:silica, with a surface area of 1.503 m2/g. In addition, the pore size distribution of PVA:silica (5:5) was narrow, with the largest cumulative pore volume of 2.8 × 10⁻³ cc/g. The optimum pH for methyl yellow adsorption is pH 2, with a removal of 72.1346%.
Towards Rational Decision-Making in Secondary Education.
ERIC Educational Resources Information Center
Cohn, Elchanan
Without a conscious effort to achieve optimum resource allocation, there is a real danger that educational resources may be wasted. This document uses input-output analysis to develop a model for rational decision-making in secondary education. (LLR)
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
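The (M)SSIM measure itself is standard: the SSIM index compares local luminance, contrast, and structure, and MSSIM averages it over windows. A bare-bones version, with the usual constants C1 = (0.01·255)² and C2 = (0.03·255)² for 8-bit data and non-overlapping windows, can be written as follows. This is a sketch of the index only, not the authors' pipeline.

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """SSIM index for two equal-size grayscale windows given as flat lists
    of 0-255 values (single-scale, uniform window)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (((2.0 * mx * my + c1) * (2.0 * cov + c2))
            / ((mx * mx + my * my + c1) * (vx + vy + c2)))

def mssim(img_a, img_b, win=4):
    """Mean SSIM over non-overlapping win x win blocks of two 2-D lists."""
    h, w = len(img_a), len(img_a[0])
    vals = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            xa = [img_a[i + di][j + dj] for di in range(win) for dj in range(win)]
            xb = [img_b[i + di][j + dj] for di in range(win) for dj in range(win)]
            vals.append(ssim(xa, xb))
    return sum(vals) / len(vals)
```

Identical images score exactly 1; any luminance, contrast, or structural difference pulls the score below 1, which is what makes the index usable as a goodness-of-fit measure between a skeleton-derived image and the original.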
Design of helicopter rotor blades for optimum dynamic characteristics
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ko, T.; Korn, A. E.; Rossow, M. P.
1982-01-01
The possibilities and the limitations of tailoring blade mass and stiffness distributions to give an optimum blade design in terms of weight, inertia, and dynamic characteristics are investigated. Changes in mass or stiffness distribution used to place rotor frequencies at desired locations are determined. Theoretical limits to the amount of frequency shift are established. Realistic constraints on blade properties based on weight, mass moment of inertia, size, strength, and stability are formulated. The extent to which hub loads can be minimized by proper choice of the EI (bending stiffness) distribution is determined. Configurations that are simple enough to yield clear, fundamental insights into the structural mechanisms, but sufficiently complex to give a realistic result for an optimum rotor blade, are emphasized.
Direct connections assist neurons to detect correlation in small amplitude noises
Bolhasani, E.; Azizi, Y.; Valizadeh, A.
2013-01-01
We address the question of the effect of common stochastic inputs on the correlation of the spike trains of two neurons when they are coupled through direct connections. We show that a change in the correlation of small-amplitude stochastic inputs can be better detected when the neurons are connected by direct excitatory couplings. Depending on whether the intrinsic firing rates of the neurons are identical or slightly different, symmetric or asymmetric connections can increase the sensitivity of the system to the input correlation by changing the mean slope of the correlation transfer function over a given range of input correlation. In either case, there is also an optimum value of synaptic strength which maximizes the sensitivity of the system to changes in input correlation. PMID:23966940
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
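The surrogate idea in this abstract can be sketched with a tiny network. The NETS code itself is not reproduced here; the following one-hidden-layer regressor, whose architecture and learning rate are purely illustrative assumptions, learns input/output pairs of the kind generated by an optimization code and then predicts designs for new conditions at trivial cost:

```python
import numpy as np

def train_surrogate(X, Y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Train a one-hidden-layer tanh network by batch gradient descent to
    map design conditions X to optimum design variables Y. Returns a
    predictor and the training loss history."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))
    b2 = np.zeros(Y.shape[1])
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        P = H @ W2 + b2                     # predicted optimum designs
        E = P - Y
        losses.append(float((E ** 2).mean()))
        dP = 2.0 * E / len(X)               # gradient of mean-squared error
        dW2, db2 = H.T @ dP, dP.sum(0)
        dH = (dP @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
        dW1, db1 = X.T @ dH, dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    predict = lambda x: np.tanh(x @ W1 + b1) @ W2 + b2
    return predict, losses
```

Once trained on optimizer-generated pairs, predicting an optimum design for a new condition is a single forward pass rather than a full nonlinear-programming run.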
Balwinder-Singh; Humphreys, E; Gaydon, D S; Eberbach, P L
2016-10-01
Machinery for sowing wheat directly into rice residues has become more common in the rice-wheat systems of the north-west Indo-Gangetic Plains of South Asia, with increasing numbers of farmers now potentially able to access the benefits of residue retention. However, surface residue retention affects soil water and temperature dynamics, thus the optimum sowing date and irrigation management for a mulched crop may vary from those of a traditional non-mulched crop. Furthermore, the effects of sowing date and irrigation management are likely to vary with soil type and seasonal conditions. Therefore, a simulation study was conducted using the APSIM model and 40 years of weather data to evaluate the effects of mulch, sowing date and irrigation management and their interactions on wheat grain yield, irrigation requirement (I) and water productivity with respect to irrigation (WPI) and evapotranspiration (WPET). The results suggest that the optimum wheat sowing date in central Punjab depends on both soil type and the presence or absence of mulch. On the sandy loam, with irrigation scheduled at 50% soil water deficit (SWD), the optimum sowing date was late October to early November for maximising yield, WPI and WPET. On the clay loam, the optimum date was about one week later. The effect of mulch on yield varied with seasonal conditions and sowing date. With irrigation at 50% SWD, mulching of wheat sown at the optimum time increased average yield by up to 0.5 t ha-1. The beneficial effect of mulch on yield increased to averages of 1.2-1.3 t ha-1 as sowing was advanced to 15 October. With irrigation at 50% SWD and 7 November sowing, mulch reduced the number of irrigations by one in almost 50% of years, a reduction of about 50 mm on the sandy loam and 60 mm on the clay loam. The reduction in irrigation amount was mainly due to reduced soil evaporation.
Mulch reduced the irrigation requirement by more as sowing was delayed, more so on the sandy loam than the clay loam soil. There was little effect of mulch on irrigation requirement for late October sowings. There were large trade-offs between irrigation input, yield, WPET and WPI on the sandy loam with regard to the optimum irrigation schedule. Maximum yield occurred with very frequent irrigation (10-20% SWD), which also had the greatest irrigation input, while WPI was highest with least frequent irrigation (70% SWD), and WPET was highest with irrigation at 40-50% SWD. This was the case with and without mulch. On the clay loam, the trade-offs were not so pronounced, as maximum yield was reached with irrigation at 50% SWD, with and without mulch. However, both WPET and WPI were maximum and irrigation input least at the lowest irrigation frequency (70% SWD). On both soils, maximum yield, WPET and WPI were higher with mulch, while irrigation input was slightly lower, but mulch had very little effect on the irrigation thresholds at which each parameter was maximised.
The temporal distribution of directional gradients under selection for an optimum.
Chevin, Luis-Miguel; Haller, Benjamin C
2014-12-01
Temporal variation in phenotypic selection is often attributed to environmental change causing movements of the adaptive surface relating traits to fitness, but this connection is rarely established empirically. Fluctuating phenotypic selection can be measured by the variance and autocorrelation of directional selection gradients through time. However, the dynamics of these gradients depend not only on environmental changes altering the fitness surface, but also on evolution of the phenotypic distribution. Therefore, it is unclear to what extent variability in selection gradients can inform us about the underlying drivers of their fluctuations. To investigate this question, we derive the temporal distribution of directional gradients under selection for a phenotypic optimum that is either constant or fluctuates randomly in various ways in a finite population. Our analytical results, combined with population- and individual-based simulations, show that although some characteristic patterns can be distinguished, very different types of change in the optimum (including a constant optimum) can generate similar temporal distributions of selection gradients, making it difficult to infer the processes underlying apparent fluctuating selection. Analyzing changes in phenotype distributions together with changes in selection gradients should prove more useful for inferring the mechanisms underlying estimated fluctuating selection. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad
2016-11-01
In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed for this study. Due to the lack of a general mathematical model for exact analysis of PHPs, a method based on natural algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclination angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained by using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25%), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end blow chromium content of stainless steel using a predator-prey genetic algorithm trained on a dataset of 100 points, considering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
Design Sensitivity Method for Sampling-Based RBDO with Fixed COV
2015-04-29
Early photosensitizer uptake kinetics predict optimum drug-light interval for photodynamic therapy
NASA Astrophysics Data System (ADS)
Sinha, Lagnojita; Elliott, Jonathan T.; Hasan, Tayyaba; Pogue, Brian W.; Samkoe, Kimberley S.; Tichauer, Kenneth M.
2015-03-01
Photodynamic therapy (PDT) has shown promising results in targeted treatment of cancerous cells by developing localized toxicity with the help of light-induced generation of reactive molecular species. The efficiency of this therapy depends on the product of the intensity of the light dose and the concentration of photosensitizer (PS) in the region of interest (ROI). However, PS delivery and retention are dynamic and variable, depending on many physiological factors that are known to be heterogeneous within and amongst tumors (e.g., blood flow, blood volume, vascular permeability, and lymph drainage rate). This presents a major challenge with respect to how the optimal time and interval of light delivery is chosen, which ideally would be when the concentration of PS molecule is at its maximum in the ROI. In this paper, a predictive algorithm is developed that takes into consideration the variability and dynamic nature of PS distribution in the body on a region-by-region basis and provides an estimate of the optimum time when the PS concentration will be maximum in the ROI. The advantage of the algorithm lies in the fact that it predicts the time in advance, as it takes only a sample of initial data points (~12 min) as input. The optimum time calculated using the algorithm estimated a maximum dose that was only 0.58 +/- 1.92% under the true maximum dose, compared to a mean dose error of 39.85 +/- 6.45% if a 1 h optimal light delivery time was assumed for patients with different efflux rate constants of the PS, assuming they have the same plasma function. Therefore, if the uptake values of PS for the blood and the ROI are known for only the first 12 minutes, the entire curve, along with the optimum time of light radiation, can be predicted with the help of this algorithm.
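The prediction step described here can be illustrated under a simple assumed kinetic model (this is not the authors' algorithm): if early ROI uptake is fit by a bi-exponential C(t) = A(e^(-k2 t) - e^(-k1 t)), the time of peak concentration follows analytically from the fitted rates:

```python
import math
import numpy as np

def predict_tmax(t, c, k_grid=np.linspace(0.01, 1.0, 100)):
    """Fit C(t) = A*(exp(-k2*t) - exp(-k1*t)) to early uptake samples by
    a coarse grid search over the two rate constants (k1 > k2), then
    return the predicted time of peak concentration,
    t_max = ln(k1/k2) / (k1 - k2). Illustrative sketch only."""
    best = (math.inf, None, None)
    for k1 in k_grid:
        for k2 in k_grid:
            if k2 >= k1:
                continue
            basis = np.exp(-k2 * t) - np.exp(-k1 * t)
            denom = float(basis @ basis)
            if denom == 0.0:
                continue
            A = float(basis @ c) / denom          # least-squares amplitude
            err = float(((A * basis - c) ** 2).sum())
            if err < best[0]:
                best = (err, k1, k2)
    _, k1, k2 = best
    return math.log(k1 / k2) / (k1 - k2)
```

Given only the first ~12 minutes of sampled uptake, the fitted rates fix the whole curve, so the light-delivery time can be scheduled before the true peak occurs.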
MASTOS: Mammography Simulation Tool for design Optimization Studies.
Spyrou, G; Panayiotakis, G; Tzanakos, G
2000-01-01
Mammography is a high quality imaging technique for the detection of breast lesions, which requires dedicated equipment and optimum operation. The design parameters of a mammography unit have to be decided and evaluated before the construction of such a high-cost apparatus. The optimum operational parameters must also be defined well before the real breast examination. MASTOS is a software package, based on Monte Carlo methods, that is designed to be used as a simulation tool in mammography. The input consists of the parameters that have to be specified when using a mammography unit, and also the parameters specifying the shape and composition of the breast phantom. In addition, the input may specify parameters needed in the design of a new mammographic apparatus. The main output of the simulation is a mammographic image and calculations of various factors that describe the image quality. The Monte Carlo simulation code is PC-based and is driven by an outer shell of a graphical user interface. The entire software package is a simulation tool for mammography and can be applied in basic research and/or in training in the fields of medical physics and biomedical engineering as well as in the performance evaluation of new designs of mammography units and in the determination of optimum standards for the operational parameters of a mammography unit.
Harvesting energy from the vibration of a passing train using a single-degree-of-freedom oscillator
NASA Astrophysics Data System (ADS)
Gatti, G.; Brennan, M. J.; Tehrani, M. G.; Thompson, D. J.
2016-01-01
With the advent of wireless sensors, there has been an increasing amount of research in the area of energy harvesting, particularly from vibration, to power these devices. An interesting application is the possibility of harvesting energy from the track-side vibration due to a passing train, as this energy could be used to power remote sensors mounted on the track for structural health monitoring, for example. This paper describes a fundamental study to determine how much energy could be harvested from a passing train. Using a time history of vertical vibration measured on a sleeper, the optimum mechanical parameters of a linear energy harvesting device are determined. Numerical and analytical investigations are both carried out. It is found that the optimum amount of energy harvested per unit mass is proportional to the product of the square of the input acceleration amplitude and the square of the input duration. For the specific case studied, it was found that the maximum energy that could be harvested per unit mass of the oscillator is about 0.25 J/kg at a frequency of about 17 Hz. The damping ratio for the optimum harvester was found to be about 0.0045, and the corresponding amplitude of the relative displacement of the mass is approximately 5 mm.
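The quadratic scaling of harvested energy with input amplitude can be checked numerically with a base-excited single-degree-of-freedom oscillator. The damping ratio and the 17 Hz frequency below echo the paper's optimum values; the rest of the setup (sinusoidal base acceleration, unit mass, step size) is an illustrative assumption:

```python
import math

def harvested_energy(accel_amp, duration, f_in=17.0, f_n=17.0,
                     zeta=0.0045, m=1.0, dt=1e-4):
    """Energy dissipated in the damper of a base-excited single-degree-of-
    freedom oscillator, integrated with semi-implicit Euler. The damper
    stands in for the harvester's electrical load."""
    wn = 2.0 * math.pi * f_n
    c = 2.0 * zeta * wn * m          # damping coefficient
    k = m * wn ** 2                  # stiffness
    z = zd = e = t = 0.0             # relative displacement, velocity, energy
    for _ in range(int(duration / dt)):
        a_base = accel_amp * math.sin(2.0 * math.pi * f_in * t)
        zdd = -a_base - (c * zd + k * z) / m   # m*z'' + c*z' + k*z = -m*a_base
        zd += zdd * dt
        z += zd * dt
        e += c * zd * zd * dt        # power absorbed by the damper
        t += dt
    return e
```

Because the system is linear, doubling the base-acceleration amplitude exactly quadruples the absorbed energy, which is the amplitude-squared part of the scaling law quoted in the abstract.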
Development of a compact cryocooler system for high temperature superconductor filter application
NASA Astrophysics Data System (ADS)
Pang, Xiaomin; Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Hu, Jianying; Dai, Wei; Li, Haibing; Luo, Ercang
2016-12-01
Seeking a higher specific power of the pulse tube cryocooler is an important trend in recent studies. High frequency operation (100 Hz and higher), combined with a co-axial configuration, serves as a good option to meet this requirement. This paper introduces a high efficiency co-axial pulse tube cryocooler operating at around 100 Hz. The whole system weighs 4.3 kg (not including the radiator) with a nominal input power of 320 W; namely, the power density of the system is around 74 W/kg. The envelope dimensions of the cold finger itself are about 84 mm in length and 23 mm in outer diameter. First, the numerical model used for designing the system and some simulation results are briefly introduced. Distributions of the pressure wave, the phase difference between the pressure wave and the volume flow rate, and the different energy flows are presented for a better understanding of the system. After this, some of the characterizing experimental results are presented. At an optimum working point, the cooling power at 80 K reaches 16 W with an input electric power of 300 W, which leads to an efficiency of 15.5% of Carnot.
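The quoted efficiency is a percentage of the Carnot limit, which for a cooler is COP_Carnot = Tc/(Th - Tc). A quick check of the arithmetic, assuming a heat-rejection temperature of about 300 K (the abstract does not state it):

```python
def percent_of_carnot(q_cold, p_input, t_cold, t_hot):
    """Cooling efficiency expressed as a fraction of the Carnot limit:
    COP = Qc / Pin, COP_Carnot = Tc / (Th - Tc). t_hot is the assumed
    heat-rejection temperature."""
    cop = q_cold / p_input
    cop_carnot = t_cold / (t_hot - t_cold)
    return cop / cop_carnot
```

With Th = 300 K, 16 W at 80 K for 300 W input evaluates to roughly 14.7% of Carnot, in the neighbourhood of the quoted 15.5%; the exact figure depends on the actual reject temperature used by the authors.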
Single optical fiber probe for optogenetics
NASA Astrophysics Data System (ADS)
Falk, Ryan; Habibi, Mohammad; Pashaie, Ramin
2012-03-01
With the advent of optogenetics, all-optical control and visualization of the activity of specific cell types is possible. We have developed a fiber-optic-based probe to control/visualize neuronal activity deep in the brain of awake behaving animals. In this design, a thin multimode optical fiber serves as the head of the probe to be inserted into the brain. This fiber is used to deliver excitation/stimulation optical pulses and guide a sample of the emission signal back to a detector. The major trade-off in the design of such a system is to decrease the size of the fiber and the intensity of input light, to minimize physical damage and avoid photobleaching/phototoxicity, while keeping the S/N reasonably high. Here the excitation light, and the associated emission signal, are frequency modulated. Then the output of the detector is passed through a time-lens which compresses the distributed energy of the emission signal and maximizes the instantaneous S/N. By measuring the statistics of the noise, the structure of the time lens can be designed to achieve the global optimum of S/N. Theoretically, the temporal resolution of the system is only limited by the time lens diffraction limit. By adding a second detector, we eliminated the effect of input light fluctuations, imperfections of the optical filters, and back-reflection of the excitation light. We have also designed fibers and micromechanical assemblies for distributed delivery and detection of light.
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
NASA Technical Reports Server (NTRS)
Reagan, J. A.; Byrne, D. M.; Herman, B. M.; King, M. D.; Spinhirne, J. D.
1980-01-01
A method is presented for inferring both the size distribution and the complex refractive index of atmospheric particulates from combined bistatic-monostatic lidar and solar radiometer observations. The basic input measurements are spectral optical depths at several visible and near-infrared wavelengths as obtained with a solar radiometer and backscatter and angular scatter coefficients as obtained from a bistatic-monostatic lidar. The spectral optical depth measurements obtained from the radiometer are mathematically inverted to infer a columnar particulate size distribution. Advantage is taken of the fact that the shape of the size distribution obtained by inverting the particulate optical depth is relatively insensitive to the particle refractive index assumed in the inversion. Bistatic-monostatic angular scatter and backscatter lidar data are then processed to extract an optimum value for the particle refractive index subject to the constraint that the shape of the particulate size distribution be the same as that inferred from the solar radiometer data. Specifically, the scattering parameters obtained from the bistatic-monostatic lidar data are compared with corresponding theoretical computations made for various assumed refractive index values. That value which yields best agreement, in a weighted least squares sense, is selected as the optimal refractive index estimate. The results of this procedure applied to a set of simulated measurements as well as to measurements collected on two separate days are presented and discussed.
Analysis of the Optimum Receiver Design Problem Using Interactive Computer Graphics.
1981-12-01
How much control is enough? Influence of unreliable input on user experience.
van de Laar, Bram; Plass-Oude Bos, Danny; Reuderink, Boris; Poel, Mannes; Nijholt, Anton
2013-12-01
Brain–computer interfaces (BCI) provide a valuable new input modality within human–computer interaction systems. However, like other body-based inputs such as gesture- or gaze-based systems, the system recognition of input commands is still far from perfect. This raises important questions, such as: what level of control should such an interface be able to provide? What is the relationship between actual and perceived control? And in the case of applications for entertainment in which fun is an important part of user experience, should we even aim for the highest level of control, or is the optimum elsewhere? In this paper, we evaluate whether we can modulate the amount of control and whether a game can be fun with less than perfect control. In the experiment, users (n = 158) played a simple game in which a hamster has to be guided to the exit of a maze. The amount of control the user has over the hamster is varied. The variation of control through confusion matrices makes it possible to simulate the experience of using a BCI, while using the traditional keyboard for input. After each session the user completed a short questionnaire on user experience and perceived control. Analysis of the data showed that the perceived control of the user could largely be explained by the amount of control in the respective session. As expected, user frustration decreases with increasing control. Moreover, the results indicate that the relation between fun and control is not linear. Although at lower levels of control fun does increase with improved control, the level of fun drops just before perfect control is reached (with an optimum around 96%). This poses new insights for developers of games who want to incorporate some form of BCI or other modality with unreliable input in their game: for creating a fun game, unreliable input can be used to create a challenge for the user.
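The control-degradation mechanism described (varying control through confusion matrices while the user types on an ordinary keyboard) can be sketched as follows. A uniform confusion matrix is assumed here for simplicity; the study's matrices may have been structured differently:

```python
import random

def apply_control_noise(intended, level,
                        actions=("up", "down", "left", "right"), rng=random):
    """Simulate unreliable input: with probability `level` the intended
    command is executed; otherwise one of the other commands is issued
    uniformly at random (a uniform confusion matrix)."""
    if rng.random() < level:
        return intended
    return rng.choice([a for a in actions if a != intended])
```

Interposing this function between the keyboard and the game reproduces any desired level of control (e.g. the study's 96% optimum) without needing an actual BCI.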
Matching technique yields optimum LNA performance. [Low Noise Amplifiers
NASA Technical Reports Server (NTRS)
Sifri, J. D.
1986-01-01
The present article is concerned with a case in which an optimum noise figure and unconditional stability have been designed into a 2.385-GHz low-noise preamplifier via an unusual method for matching the input with a suspended line. The results obtained with several conventional line-matching techniques were not satisfactory. Attention is given to the minimization of thermal noise, the design procedure, requirements for a high-impedance line, a sampling of four matching networks, the noise figure of the single-line matching network as a function of frequency, and the approaches used to achieve unconditional stability.
van der Lee, J H; Svrcek, W Y; Young, B R
2008-01-01
Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, these tuning methods often become inapplicable, and a trial-and-error tuning approach must be used instead. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented in order to illustrate the use of the tuning algorithm. This includes how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary in order to meet each definition of optimum control, and thus that the generalized automated tuning approach for MPCs is feasible.
Inputs and spatial distribution patterns of Cr in Jiaozhou Bay
NASA Astrophysics Data System (ADS)
Yang, Dongfang; Miao, Zhenqing; Huang, Xinmin; Wei, Linzhen; Feng, Ming
2018-03-01
Cr pollution in marine bays has been one of the critical environmental issues, and understanding the input and spatial distribution patterns is essential to pollution control. According to the source strengths of the major pollution sources, the input patterns of pollutants to a marine bay can be classified as slight, moderate and heavy, and the corresponding spatial distributions are described by three block models, respectively. This paper analyzed the input patterns and distributions of Cr in Jiaozhou Bay, eastern China, based on investigations of Cr in surface waters during 1979-1983. Results showed that the input strengths of Cr in Jiaozhou Bay could be classified as moderate input and slight input, with input strengths of 32.32-112.30 μg L-1 and 4.17-19.76 μg L-1, respectively. The input patterns of Cr thus comprised the two patterns of moderate input and slight input, and the horizontal distributions could be described by means of Block Model 2 and Block Model 3, respectively. In the case of moderate input via overland runoff, Cr contents decreased from the estuaries to the bay mouth, and the distribution pattern was parallel. In the case of moderate input via marine currents, Cr contents decreased from the bay mouth into the bay, and the distribution pattern was parallel to circular. The block models were able to reveal the transfer processes of various pollutants, and are helpful for understanding the distributions of pollutants in marine bays.
Aerodynamic Shape Optimization Using Hybridized Differential Evolution
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2003-01-01
An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application). The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated.
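For readers unfamiliar with the algorithm, a minimal DE/rand/1/bin loop is sketched below. This is the textbook form, not the paper's hybridized variant, and a cheap test function stands in for the expensive Navier-Stokes evaluation:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = lo + rng.random((pop_size, d)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True      # at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, ft
        # (hybridization hook: periodically refine the best member with a
        #  local search or a neural-network surrogate, as the paper explores)
    return pop[fit.argmin()], float(fit.min())
```

On a smooth test objective the loop converges to the global optimum in a few hundred generations; in the paper each call to f would instead be a flow solve, which is why the evaluations are distributed across parallel computers.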
NASA Astrophysics Data System (ADS)
Luk, K. C.; Ball, J. E.; Sharma, A.
2000-01-01
Artificial neural networks (ANNs), which emulate the parallel distributed processing of the human nervous system, have proven to be very successful in dealing with complicated problems, such as function approximation and pattern recognition. Due to their powerful capability and functionality, ANNs provide an alternative approach for many engineering problems that are difficult to solve by conventional approaches. Rainfall forecasting has been a difficult subject in hydrology due to the complexity of the physical processes involved and the variability of rainfall in space and time. In this study, ANNs were adopted to forecast short-term rainfall for an urban catchment. The ANNs were trained to recognise historical rainfall patterns as recorded from a number of gauges in the study catchment for reproduction of relevant patterns for new rainstorm events. The primary objective of this paper is to investigate the effect of temporal and spatial information on short-term rainfall forecasting. To achieve this aim, a comparison test on the forecast accuracy was made among the ANNs configured with different orders of lag and different numbers of spatial inputs. In developing the ANNs with alternative configurations, the ANNs were trained to an optimal level to achieve good generalisation of data. It was found in this study that the ANNs provided the most accurate predictions when an optimum number of spatial inputs was included into the network, and that the network with lower lag consistently produced better performance.
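The comparison between lag orders and numbers of spatial inputs amounts to varying the shape of the network's design matrix. A sketch of that input layout (the networks themselves are not reproduced, and the choice of gauge 0 as the forecast target is an illustrative assumption):

```python
import numpy as np

def make_lagged_inputs(rain, lag):
    """Build ANN training pairs from multi-gauge rainfall records.
    `rain` has shape (T, n_gauges); each row of X stacks the previous
    `lag` time steps across all gauges (the temporal and spatial
    inputs), and y is the next-step rainfall at the target gauge."""
    T, n_gauges = rain.shape
    X = np.stack([rain[i:i + lag].ravel() for i in range(T - lag)])
    y = rain[lag:, 0]
    return X, y
```

Sweeping `lag` and the subset of gauge columns included in `rain` reproduces the kind of configuration comparison the study performs.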
Optimum performance of hovering rotors
NASA Technical Reports Server (NTRS)
Wu, J. C.; Goorjian, P. M.
1972-01-01
A theory for the optimum performance of a rotor hovering out of ground effect is developed. The performance problem is formulated using general momentum theory for an infinitely bladed rotor, and the effect of a finite number of blades is estimated. The analysis takes advantage of the fact that a simple relation exists between the radial distributions of static pressure and angular velocity in the ultimate wake, far downstream of the rotor, since the radial velocity vanishes there. This relation permits the establishment of an optimum performance criterion in terms of the ultimate wake velocities by introducing a small local perturbation of the rotational velocity and requiring the resulting ratio of thrust and power changes to be independent of the radial location of the perturbation. This analysis fully accounts for the changes in static pressure distribution and axial velocity distribution throughout the wake as the result of the local perturbation of the rotational velocity component.
A 2385 MHz, 2-stage low noise amplifier design
NASA Technical Reports Server (NTRS)
Sifri, Jack D.
1986-01-01
This article shows the design aspects of a 2.385 GHz low noise preamplifier with a 0.7 dB noise figure and 16.5 dB gain using the NE 67383 FET. The design uses a unique method of matching the input which achieves optimum noise figure and unconditional stability.
ERIC Educational Resources Information Center
Cody, Martin L.
1974-01-01
Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time - or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)
A portfolio-based approach to optimize proof-of-concept clinical trials.
Mallinckrodt, Craig; Molenberghs, Geert; Persinger, Charles; Ruberg, Stephen; Sashegyi, Andreas; Lindborg, Stacy
2012-01-01
Improving proof-of-concept (PoC) studies is a primary lever for improving drug development. Since drug development is often done by institutions that work on multiple drugs simultaneously, the present work focused on optimum choices for rates of false positive (α) and false negative (β) results across a portfolio of PoC studies. Simple examples and a newly derived equation provided conceptual understanding of basic principles regarding optimum choices of α and β in PoC trials. In examples that incorporated realistic development costs and constraints, the levels of α and β that maximized the number of approved drugs and portfolio value varied by scenario. Optimum choices were sensitive to the probability the drug was effective and to the proportion of total investment cost prior to establishing PoC. Results of the present investigation agree with previous research in that it is important to assess optimum levels of α and β. However, the present work also highlighted the need to consider cost structure using realistic input parameters relevant to the question of interest.
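The trade-off the authors describe can be made concrete with a toy portfolio model; every number below is an illustrative assumption, not a value from the paper. Tightening alpha and beta inflates the per-study sample size, so a fixed budget funds fewer PoC studies:

```python
from statistics import NormalDist

def expected_approvals(alpha, beta, p_active=0.2, effect=0.3,
                       budget=1000.0, cost_per_subject=0.05,
                       fixed_cost=5.0, p_late_success=0.5):
    """Choose (alpha, beta), derive the PoC sample size from a one-sided
    z-test for a standardized effect, see how many PoC studies the
    budget funds, and count the expected number of truly active drugs
    that pass PoC and later succeed."""
    z = NormalDist().inv_cdf
    n = ((z(1.0 - alpha) + z(1.0 - beta)) / effect) ** 2   # subjects per study
    cost = fixed_cost + n * cost_per_subject               # cost per PoC study
    n_studies = budget / cost
    power = 1.0 - beta
    return n_studies * p_active * power * p_late_success
```

Sweeping alpha and beta over a grid locates the portfolio optimum for a given cost structure; as in the paper, the optimum shifts with the prior probability the drug is active and with the share of cost incurred before PoC.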
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
NASA Astrophysics Data System (ADS)
Soyama, H.; Hoshino, J.
2016-04-01
In this paper, we used a Venturi tube for generating hydrodynamic cavitation, and in order to obtain the optimum conditions for this to be used in chemical processes, the relationship between the aggressive intensity of the cavitation and the downstream pressure where the cavitation bubbles collapse was investigated. The acoustic power and the luminescence induced by the bubbles collapsing were investigated under various cavitating conditions, and the relationships between these and the cavitation number, which depends on the upstream pressure, the downstream pressure at the throat of the tube and the vapor pressure of the test water, were found. It was shown that the optimum downstream pressure, i.e., the pressure in the region where the bubbles collapse, increased the aggressive intensity by a factor of about 100 compared to atmospheric pressure without the need to increase the input power. Although the optimum downstream pressure varied with the upstream pressure, the cavitation number giving the optimum conditions was constant for all upstream pressures.
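For reference, the controlling parameter can be written down directly. One common form of the cavitation number for a Venturi tube uses the upstream, downstream, and vapor pressures; the exact definition and the numbers below are assumptions for illustration, not taken from the paper:

```python
def cavitation_number(p_up, p_down, p_vapor):
    """One commonly used cavitation number for a Venturi tube (assumed form):
    sigma = (p_down - p_vapor) / (p_up - p_down).
    All pressures are absolute, in Pa."""
    return (p_down - p_vapor) / (p_up - p_down)

# Hypothetical condition: upstream 0.6 MPa, downstream at atmospheric
# pressure (0.1 MPa), water at 20 C (vapor pressure ~2.34 kPa)
sigma = cavitation_number(0.6e6, 0.1e6, 2.34e3)
```

Because the optimum cavitation number was found to be constant, this single dimensionless group lets the optimum downstream pressure be predicted for any upstream pressure.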
NASA Technical Reports Server (NTRS)
Williams, F. W.; Anderson, M. S.; Kennedy, D.; Butler, R.; Aston, G.
1990-01-01
A computer program which is designed for efficient, accurate buckling and vibration analysis and optimum design of composite panels is described. The capabilities of the program are given along with detailed user instructions. It is written in FORTRAN 77 and is operational on VAX, IBM, and CDC computers and should be readily adapted to others. Several illustrations of the various aspects of the input are given along with example problems illustrating the use and application of the program.
Capacity mapping for optimum utilization of pulverizers for coal fired boilers - article no. 032201
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, C.
2008-09-15
Capacity mapping is a process of comparing standard inputs with actual fired inputs to assess the available standard output capacity of a pulverizer. The base capacity is a function of grindability; the fineness requirement may vary depending on the volatile matter (VM) content of the coal and the input coal size. The quantity and the inlet temperature of primary air (PA) will change depending on the quality of raw coal and the output requirement; it should be sufficient to dry the pulverized coal (PC). Drying capacity is also limited by the maximum PA fan power available to supply air. The PA temperature is limited by the air preheater (APH) inlet flue gas temperature; an increase in this will result in an efficiency loss of the boiler. A higher PA inlet temperature can be attained through economizer gas bypass, a steam-coiled APH, or partial flue gas recirculation. The PA/coal ratio increases with a decrease in grindability or pulverizer output and decreases with a decrease in VM. The flammability of the mixture has to be monitored against the explosion limit. Through calibration, the PA flow and the efficiency of conveyance can be verified. The velocities of the coal/air mixture needed to prevent fallout or to avoid erosion in the coal carrier pipe depend on the PC particle size distribution. Metal loss of grinding elements depends inversely on the YGP index of the coal. Variations in dynamic loading and wearing of grinding elements affect the available milling capacity and the percentage of rejects. Therefore, capacity mapping is necessary to ensure the available pulverizer capacity, avoiding overcapacity or undercapacity running of the pulverizing system and optimizing auxiliary power consumption. This provides a guideline for distributing raw coal feeding among the different pulverizers of a boiler to maximize system efficiency and control, resulting in a more cost-effective heat rate.
Comparative evaluation of distributed-collector solar thermal electric power plants
NASA Technical Reports Server (NTRS)
Fujita, T.; El Gabalawi, N.; Herrera, G. G.; Caputo, R. S.
1978-01-01
Distributed-collector solar thermal-electric power plants are compared by projecting power plant economics of selected systems to the 1990-2000 timeframe. The approach taken is to evaluate the performance of the selected systems under the same weather conditions. Capital and operational costs are estimated for each system. Energy costs are calculated for different plant sizes based on the plant performance and the corresponding capital and maintenance costs. Optimum systems are then determined as the systems with the minimum energy costs for a given load factor. The optimum system is comprised of the best combination of subsystems which give the minimum energy cost for every plant size. Sensitivity analysis is done around the optimum point for various plant parameters.
Yang, Sheng-long; Jin, Shao-fei; Hua, Cheng-jun; Dai, Yang
2015-02-01
In order to analyze the correlation between the spatial-temporal distribution of the bigeye tuna (Thunnus obesus) and subsurface factors, the study explored the isothermal distribution of subsurface temperatures in the bigeye tuna fishing grounds in the tropical Atlantic Ocean, and built up a spatial overlay chart of the isothermal lines of 9, 12, 13 and 15 °C and monthly CPUE (catch per unit effort) from bigeye tuna long-lines. The results showed that the bigeye tuna was mainly distributed in the water layer (150-450 m) below the lower boundary depth of the thermocline. At the isothermal line of 12 °C, the bigeye tuna mainly lived in the water layer of 190-260 m, while few individuals were found at water depths greater than 400 m. As to the 13 °C isothermal line, high CPUE often appeared at water depths less than 250 m, mainly between 150-230 m, while no CPUE appeared at water depths greater than 300 m. The optimum ranges of subsurface factors calculated by frequency analysis and the empirical cumulative distribution function (ECDF) showed that the optimum depth range of the 12 °C isotherm was 190-260 m and that of the 13 °C isotherm was 160-240 m, while the optimum depth difference range of the 12 °C isotherm was -10 to 100 m and that of the 13 °C isotherm was -40 to 60 m. The study explored the optimum ranges of the subsurface factors (water temperature and depth) that drive the horizontal and vertical distribution of bigeye tuna. These preliminary results should help to locate the central fishing ground, guide fishing depth, and provide theoretical and practical references for longline production and resource management of bigeye tuna in the Atlantic Ocean.
NASA Astrophysics Data System (ADS)
Gusman, Aditya Riadi; Mulia, Iyan E.; Satake, Kenji
2018-01-01
The 2017 Tehuantepec earthquake (
Choosing the Optimum Mix of Duration and Effort in Education.
ERIC Educational Resources Information Center
Oosterbeek, Hessel
1995-01-01
Employs a simple economic model to analyze determinants of Dutch college students' expected study duration and weekly effort. Findings show that the duration/effort ratio is determined by the relative prices of these inputs into the learning process. A higher socioeconomic status increases the duration/effort ratio. Higher ability levels decrease…
Optimum free energy in the reference functional approach for the integral equations theory
NASA Astrophysics Data System (ADS)
Ayadim, A.; Oettel, M.; Amokrane, S.
2009-03-01
We investigate the question of determining the bulk properties of liquids, required as input for practical applications of the density functional theory of inhomogeneous systems, using density functional theory itself. By considering the reference functional approach in the test particle limit, we derive an expression of the bulk free energy that is consistent with the closure of the Ornstein-Zernike equations in which the bridge functions are obtained from the reference system bridge functional. By examining the connection between the free energy functional and the formally exact bulk free energy, we obtain an improved expression of the corresponding non-local term in the standard reference hypernetted chain theory derived by Lado. In this way, we also clarify the meaning of the recently proposed criterion for determining the optimum hard-sphere diameter in the reference system. This leads to a theory in which the sole input is the reference system bridge functional both for the homogeneous system and the inhomogeneous one. The accuracy of this method is illustrated with the standard case of the Lennard-Jones fluid and with a Yukawa fluid with very short range attraction.
NASA Astrophysics Data System (ADS)
Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.
2011-03-01
Atmospheric plasma spraying is used extensively to make Thermal Barrier Coatings of 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required qualities of coating. This problem can be solved by the development of empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and the execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor five-level central composite rotatable design matrix with full replication for planning, conduction, execution, and development of empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve desired quality of yttria-stabilized zirconia coating deposits.
Ngadiman, Nor Hasrul Akhmal; Idris, Ani; Irfan, Muhammad; Kurniawan, Denni; Yusof, Noordin Mohd; Nasiri, Rozita
2015-09-01
Maghemite (γ-Fe2O3) nanoparticles, with their unique magnetic properties, have recently been shown to enhance cell growth rate. In this study, γ-Fe2O3 was mixed into a polyvinyl alcohol (PVA) matrix and then electrospun to form nanofibers. Design of experiments was used to determine the optimum parameter settings for the electrospinning process so as to produce electrospun mats with the preferred characteristics, such as good morphology, Young's modulus and porosity. The input factors of the electrospinning process were nanoparticle content (1-5%), voltage (25-35 kV), and flow rate (1-3 ml/h), while the responses considered were Young's modulus and porosity. Empirical models for both responses as functions of the input factors were developed, and the optimum input factor settings were determined to be 5% nanoparticle content, 35 kV voltage, and 1 ml/h volume flow rate. The characteristics and performance of the optimum PVA/γ-Fe2O3 nanofiber mats were compared with those of neat PVA nanofiber mats in terms of morphology, thermal properties, and hydrophilicity. The PVA/γ-Fe2O3 nanofiber mats exhibited larger fiber diameter and higher surface roughness, yet similar thermal properties and hydrophilicity, compared to neat PVA nanofiber mats. A biocompatibility test exposing the nanofiber mats to human blood cells was performed. In terms of clotting time, the PVA/γ-Fe2O3 nanofibers exhibited similar behavior to neat PVA. The PVA/γ-Fe2O3 nanofibers also showed a higher cell proliferation rate when an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay was done using human skin fibroblast cells. Thus, the PVA/γ-Fe2O3 electrospun nanofibers can be a promising biomaterial for tissue engineering scaffolds.
Matching optics for Gaussian beams
NASA Technical Reports Server (NTRS)
Gunter, William D. (Inventor)
1991-01-01
A system of matching optics for Gaussian beams is described. The matching optics system is positioned between a light beam emitter (such as a laser) and the input optics of a second optics system whereby the output from the light beam emitter is converted into an optimum input for the succeeding parts of the second optical system. The matching optics arrangement includes the combination of a light beam emitter, such as a laser with a movable afocal lens pair (telescope) and a single movable lens placed in the laser's output beam. The single movable lens serves as an input to the telescope. If desired, a second lens, which may be fixed, is positioned in the beam before the adjustable lens to serve as an input processor to the movable lens. The system provides the ability to choose waist diameter and position independently and achieve the desired values with two simple adjustments not requiring iteration.
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C_L cases; forming a high quality, detailed archival data base for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing an LDV study of an optimum C_L case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C_L cases; obtaining skin friction distributions for both optimum and non-optimum C_L cases for scaling purposes; data analysis and inner and outer variable scaling; setting up and performing relaminarization experiments; and a final report establishing the role of leading edge confluent boundary layer flow physics on high-lift performance.
NASA Astrophysics Data System (ADS)
Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.
2018-03-01
This research was conducted at Perum BULOG Sub-Divre Medan, the institution implementing the Raskin program (distribution of rice to the poor) for several regencies and cities in North Sumatera. To minimize rice distribution costs, the rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to obtain an initial feasible solution, and the Modified Distribution (MODI) method to test the solution for optimality. This study aims to determine whether IVAM can provide savings or cost efficiency in rice distribution. The calculation with IVAM yields an optimum cost of Rp945.241.715,5, lower than the company's own calculation of Rp958.073.750,40. Thus, the use of IVAM can save Rp12.832.034,9 in rice distribution costs.
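IVAM and MODI are hand-computable heuristics for the classical transportation problem; the same optimum can be cross-checked with a linear-programming solver, since the problem is an LP with supply and demand equality constraints. The cost matrix, supplies, and demands below are hypothetical, not BULOG's data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: unit shipping costs from 2 depots to 3 destinations
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 4.0, 3.0]])
supply = [70, 50]
demand = [40, 30, 50]    # balanced: total supply = total demand

m, n = cost.shape
# Equality constraints: each depot ships its supply, each destination
# receives its demand
A_eq, b_eq = [], []
for i in range(m):
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):
    col = np.zeros(m * n)
    col[j::n] = 1
    A_eq.append(col)
    b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
# res.x is the optimum shipping plan; res.fun its total cost
```

For this instance the optimum plan ships the cheap depot-2 route to destination 3 in full, with a total cost of 490, which is what IVAM plus a MODI optimality check would also reach.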
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
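The square-root rule can be checked numerically: when the searcher must land within a small radius of the target, the expected number of trials is proportional to the integral of p/q, and minimizing over search densities of the form q proportional to p^gamma recovers gamma = 1/2. A minimal sketch, assuming a standard-normal target on a truncated grid:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2)
p /= p.sum() * dx                 # standard-normal target density (truncated)

def expected_cost(gamma):
    # Search density q proportional to p**gamma; expected trials scale
    # with the integral of p/q over the line
    q = p**gamma
    q = q / (q.sum() * dx)
    return np.sum(p / q) * dx

gammas = [0.1 + 0.01 * i for i in range(81)]
costs = [expected_cost(g) for g in gammas]
best = gammas[costs.index(min(costs))]   # ~0.5: q proportional to sqrt(p)
```

For the Gaussian case this reproduces the abstract's statement: the optimum search distribution is itself Gaussian, with standard deviation sqrt(2) times that of the target.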
Optimum design of structures subject to general periodic loads
NASA Technical Reports Server (NTRS)
Reiss, Robert; Qian, B.
1989-01-01
A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's function and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.
A significant upward shift in plant species optimum elevation during the 20th century.
Lenoir, J; Gégout, J C; Marquet, P A; de Ruffray, P; Brisse, H
2008-06-27
Spatial fingerprints of climate change on biotic communities are usually associated with changes in the distribution of species at their latitudinal or altitudinal extremes. By comparing the altitudinal distribution of 171 forest plant species between 1905 and 1985 and 1986 and 2005 along the entire elevation range (0 to 2600 meters above sea level) in west Europe, we show that climate warming has resulted in a significant upward shift in species optimum elevation averaging 29 meters per decade. The shift is larger for species restricted to mountain habitats and for grassy species, which are characterized by faster population turnover. Our study shows that climate change affects the spatial core of the distributional range of plant species, in addition to their distributional margins, as previously reported.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
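The inverse-filter step described above can be sketched in the noiseless case: convolve a peak-type input with a Gaussian response, then recover it by dividing in the Fourier domain. The signal and response widths below are arbitrary choices, not those of the study:

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.exp(-((n - 100) / 6.0) ** 2)     # typical peak-type input
h = np.exp(-((n - 128) / 2.0) ** 2)     # Gaussian response function
h /= h.sum()                            # unit-area response

H = np.fft.fft(h)
y = np.fft.ifft(np.fft.fft(x) * H).real      # noiseless data (circular convolution)
x_rec = np.fft.ifft(np.fft.fft(y) / H).real  # inverse-filter deconvolution
```

With noise-free data the recovery is essentially exact; with noise, the division amplifies high-frequency noise where H is small, which is why Morrison's smoothing is applied before deconvolution in the study.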
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using multi-objective thermodynamic optimisation. Four different sets, each comprising two objective parameters, were considered individually. Each multi-objective optimisation was performed using the Non-dominated Sorting Genetic Algorithm. The effects of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiencies were analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum between 700°C and 750°C, and the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range 24.2-25.9 MPa, depending on the configuration and reheat condition.
Thermodynamic metrics for measuring the ``sustainability'' of design for recycling
NASA Astrophysics Data System (ADS)
Reuter, Markus; van Schaik, Antoinette
2008-08-01
In this article, exergy is applied as a parameter to measure the “sustainability” of a recycling system in addition to the fundamental prediction of material recycling and energy recovery, summarizing a development of over 20 years by the principal author supported by various co-workers, Ph.D., and M.Sc. students. In order to achieve this, recyclate qualities and particle size distributions throughout the system must be predicted as a function of product design, liberation during shredding, process dynamics, physical separation physics, and metallurgical thermodynamics. This crucial development enables the estimation of the true exergy of a recycling system from its inputs and outputs including all its realistic industrial traits. These models have among others been linked to computer aided design tools of the automotive industry and have been used to evaluate the performance of waste electric and electronic equipment recycling systems in The Netherlands. This paper also suggests that the complete system must be optimized to find a “truer” optimum of the material production system linked to the consumer market.
NASA Astrophysics Data System (ADS)
Cai, Zhonglun; Chen, Peng; Angland, David; Zhang, Xin
2014-03-01
A novel iterative learning control (ILC) algorithm was developed and applied to an active flow control problem. The technique uses pulsed air jets to delay flow separation on a two-element high-lift wing. The ILC algorithm uses position-based pressure measurements to update the actuation. The method was experimentally tested on a wing model in a 0.9 m × 0.6 m low-speed wind tunnel at the University of Southampton. Compressed air and fast switching solenoid valves were used as actuators to excite the flow, and the pressure distribution around the chord of the wing was measured as a feedback control signal for the ILC controller. Experimental results showed that the actuation was able to delay the separation and increase the lift by approximately 10%-15%. By using the ILC algorithm, the controller was able to find the optimum control input and maintain the improvement despite sudden changes of the separation position.
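The trial-to-trial structure of ILC can be illustrated on a toy plant: after each trial the control input is corrected by the measured error, so the error contracts geometrically when the learning gain is chosen so that |1 - gamma*g| < 1. The scalar plant gain and learning rate below are stand-ins, not the wind-tunnel system:

```python
import numpy as np

g = 0.8                                       # toy static plant gain (assumed)
r = np.sin(np.linspace(0.0, 2.0 * np.pi, 50)) # desired pressure/output profile
u = np.zeros_like(r)                          # start from zero actuation
gamma = 0.6                                   # learning gain: |1 - gamma*g| = 0.52 < 1

errors = []
for k in range(30):
    y = g * u                                 # run trial k
    e = r - y                                 # measured tracking error
    errors.append(np.linalg.norm(e))
    u = u + gamma * e                         # ILC update applied before trial k+1
```

The same contraction argument underlies the experiment: the controller converges to the actuation that tracks the target pressure distribution and then holds that improvement.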
Changing space and sound: Parametric design and variable acoustics
NASA Astrophysics Data System (ADS)
Norton, Christopher William
This thesis examines the potential for parametric design software to create performance-based design using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music is used as a case study of a multiuse space for orchestral, percussion, master class and recital use. The criteria used for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and the subsequently derived equations are applied in Grasshopper, a parametric modeling plug-in for Rhino 3D (a NURBS modeling program). Using the target reverberation time and bass ratio for each programmatic use as input to the parametric model, the evolutionary optimization component of Grasshopper, Galapagos, is run to identify the optimum ceiling geometry and material distribution.
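For the reverberation-time criterion, a minimal sketch using the Sabine equation (metric form) shows how swinging a panelized ceiling between reflective and absorptive states changes T60. The hall volume, surface areas, and absorption coefficients below are hypothetical, not the case-study values:

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine reverberation time, T60 = 0.161 * V / A (metric units),
    where A is the total absorption: sum of area * absorption coefficient."""
    A = sum(s * a for s, a in surface_absorptions)
    return 0.161 * volume_m3 / A

# Hypothetical hall: 5000 m^3; a 600 m^2 panelized ceiling swung from
# reflective (alpha = 0.1) to absorptive (alpha = 0.7); other surfaces fixed
fixed = [(1400, 0.15)]
t_reflective = sabine_rt60(5000, fixed + [(600, 0.1)])
t_absorptive = sabine_rt60(5000, fixed + [(600, 0.7)])
```

An optimizer such as Galapagos is, in effect, searching the panel geometry and material assignment so that this kind of predicted T60 (and the other metrics) hits the target for each programmatic use.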
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.; Shu, J. Y.
1981-01-01
A subsonic, linearized aerodynamic theory, wing design program for one or two planforms was developed which uses a vortex lattice near field model and a higher order panel method in the far field. The theoretical development of the wake model and its implementation in the vortex lattice design code are summarized and sample results are given. Detailed program usage instructions, sample input and output data, and a program listing are presented in the Appendixes. The far field wake model assumes a wake vortex sheet whose strength varies piecewise linearly in the spanwise direction. From this model analytical expressions for lift coefficient, induced drag coefficient, pitching moment coefficient, and bending moment coefficient were developed. From these relationships a direct optimization scheme is used to determine the optimum wake vorticity distribution for minimum induced drag, subject to constraints on lift, and pitching or bending moment. Integration spanwise yields the bound circulation, which is interpolated in the near field vortex lattice to obtain the design camber surface(s).
NASA Technical Reports Server (NTRS)
Turner, B. J.; Austin, G. L.
1993-01-01
Three-dimensional radar data for three summer Florida storms are used as input to a microwave radiative transfer model. The model simulates microwave brightness observations by a 19-GHz, nadir-pointing, satellite-borne microwave radiometer. The statistical distribution of rainfall rates for the storms studied, and therefore the optimal conversion between microwave brightness temperatures and rainfall rates, was found to be highly sensitive to the spatial resolution at which observations were made. The optimum relation between the two quantities was less sensitive to the details of the vertical profile of precipitation. Rainfall retrievals were made for a range of microwave sensor footprint sizes. From these simulations, spatial sampling-error estimates were made for microwave radiometers over a range of field-of-view sizes. The necessity of matching the spatial resolution of ground truth to radiometer footprint size is emphasized. A strategy for the combined use of raingages, ground-based radar, microwave, and visible-infrared (VIS-IR) satellite sensors is discussed.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain both the parameters involved in the isotherms and the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and found to be very useful in identifying the best error function while selecting the optimum isotherm.
NASA Astrophysics Data System (ADS)
Rahman, Md. Lutfor; Chowdhury, Mehrin; Islam, Nawshad Arslan; Mufti, Sayed Muhammad; Ali, Mohammad
2016-07-01
Pulsating heat pipe (PHP) is a new, promising yet ambiguous technology for effective heat transfer in microelectronic devices, where heat is carried by the vapor plugs and liquid slugs of the working fluid. The aim of this research paper is to better understand the operation of the PHP through experimental investigation and to obtain comparative results for different parameters. A series of experiments is conducted on a closed loop PHP (CLPHP) with 8 loops made of copper capillary tube of 2 mm inner diameter. Ethanol is taken as the working fluid. The operating characteristics are studied for variations of heat input, filling ratio (FR) and orientation. The filling ratios are 40%, 50%, 60% and 70% of total volume. The orientations are 0° (vertical), 30°, 45° and 60°. The results clearly demonstrate the effect of filling ratio and inclination angle on the performance, operational stability and heat transfer capability of ethanol as the working fluid of the CLPHP. Important insight into the operational characteristics of the CLPHP is obtained and the optimum performance of the CLPHP using ethanol is thus identified. Ethanol works best at 50-60% FR over a wide range of heat inputs. At very low heat inputs, a 40% FR can be used to attain good performance. Filling ratios below 40% are not suitable for use in a CLPHP as they give poor performance. The optimum performance of the device is obtained in the vertical position.
Vehicle systems design optimization study
NASA Technical Reports Server (NTRS)
Gilmour, J. L.
1980-01-01
The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 (front/rear) in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. The necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.
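The front/rear split quoted above follows from simple statics: each component's mass and longitudinal position contribute a moment about the front axle. The component masses and positions below are hypothetical, chosen only to illustrate landing inside the 53/47 to 62/38 window:

```python
def axle_loads(components, wheelbase):
    """Static front/rear axle load fractions from component masses (kg)
    and longitudinal positions x (m), measured rearward from the front axle."""
    total = sum(m for m, x in components)
    rear = sum(m * x for m, x in components) / wheelbase  # moment balance
    front = total - rear
    return front / total, rear / total

# Hypothetical subcompact EV layout
components = [(120, 0.0),   # motor/drive over the front axle
              (180, 0.6),   # front battery pack under the hood
              (400, 1.0),   # body/chassis near mid-wheelbase
              (150, 2.0),   # passengers/cargo
              (140, 2.2)]   # rear battery pack under the cargo area
front, rear = axle_loads(components, wheelbase=2.4)
```

Shifting battery mass between the two packs moves this split directly, which is why the study splits the pack front and rear rather than placing it all in one location.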
Adapting radiotherapy to hypoxic tumours
NASA Astrophysics Data System (ADS)
Malinen, Eirik; Søvik, Åste; Hristov, Dimitre; Bruland, Øyvind S.; Rune Olsen, Dag
2006-10-01
In the current work, the concepts of biologically adapted radiotherapy of hypoxic tumours were presented in a framework encompassing functional tumour imaging, tumour control predictions, inverse treatment planning and intensity modulated radiotherapy (IMRT). Dynamic contrast enhanced magnetic resonance imaging (DCEMRI) of a spontaneous sarcoma in the nasal region of a dog was employed. The tracer concentration in the tumour was assumed to be related to the oxygen tension and was compared to Eppendorf histograph measurements. Based on the pO2-related images derived from the MR analysis, the tumour was divided into four compartments by a segmentation procedure, from which DICOM structure sets for IMRT planning could be derived. In order to display the possible advantages of non-uniform tumour doses, dose redistribution among the four tumour compartments was introduced. The dose redistribution was constrained by keeping the average dose to the tumour equal to a conventional target dose. The compartmental doses yielding the optimum tumour control probability (TCP) were used as input to an inverse planning system, where the planning basis was the pO2-related tumour images from the MR analysis. Uniform (conventional) and non-uniform IMRT plans were scored both physically and biologically. The consequences of random and systematic errors in the compartmental images were evaluated. The normalized frequency distributions of the tracer concentration and the pO2 Eppendorf measurements were not significantly different. According to the MR analysis, 28% of the tumour had pO2 values of less than 5 mm Hg. The optimum TCP following a non-uniform dose prescription was about four times higher than that following a uniform dose prescription. The non-uniform IMRT dose distribution resulting from the inverse planning gave a three times higher TCP than that of the uniform distribution.
The TCP and the dose-based plan quality depended on IMRT parameters defined in the inverse planning procedure (fields and step-and-shoot intensity levels). Simulated random and systematic errors in the pO2-related images reduced the TCP for the non-uniform dose prescription. In conclusion, improved tumour control of hypoxic tumours by dose redistribution may be expected following hypoxia imaging, tumour control predictions, inverse treatment planning and IMRT.
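The tumour control predictions above can be illustrated with a minimal Poisson TCP model, in which hypoxia reduces radiosensitivity through an oxygen enhancement ratio (OER). The single-hit cell-kill form, clonogen number and parameter values below are assumptions for illustration, not the authors' fitted model.

```python
import math

# Hedged sketch: Poisson TCP with single-hit cell kill; hypoxia divides the
# radiosensitivity alpha by the OER. All parameter values are assumptions.

def tcp(dose_gy, n_clonogens=1.0e7, alpha=0.35, oer=1.0):
    """Tumour control probability for a uniform dose to one compartment."""
    surviving = n_clonogens * math.exp(-alpha * dose_gy / oer)
    return math.exp(-surviving)

# At the same dose, a hypoxic compartment (high OER) controls far less often,
# which is why redistributing dose toward hypoxic subvolumes can raise TCP.
print(tcp(60.0, oer=2.5) < tcp(60.0, oer=1.0))  # True
```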
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Satake, K.; Mulia, I. E.
2017-12-01
An intraplate normal fault earthquake (Mw 8.2) occurred on 8 September 2017 in the Tehuantepec seismic gap of the Middle America Trench. The submarine earthquake generated a tsunami which was recorded by coastal tide gauges and offshore DART buoys. We used the tsunami waveforms recorded at 16 stations to estimate the fault slip distribution and an optimum sea surface displacement of the earthquake. A steep fault dipping to the northeast with strike of 315°, dip of 73°, and rake of -96°, based on the USGS W-phase moment tensor solution, was assumed for the slip inversion. To independently estimate the sea surface displacement without assuming earthquake fault parameters, we used B-spline functions as the unit sources. The distribution of the unit sources was optimized by a Genetic Algorithm - Pattern Search (GA-PS) method. Tsunami waveform inversion resolves a spatially compact region of large slip (4-10 m) with a dimension of 100 km along strike and 80 km along dip in the depth range between 40 km and 110 km. The seismic moment calculated from the fault slip distribution with an assumed rigidity of 6 × 10¹⁰ N m⁻² is 2.46 × 10²¹ N m (Mw 8.2). The optimum displacement model suggests that the sea surface was uplifted by up to 0.5 m and subsided by up to -0.8 m. The deep location of large fault slip may be the cause of such small sea surface displacements. The simulated tsunami waveforms from the optimum sea surface displacement reproduce the observations better than those from the fault slip distribution; the normalized root mean square misfit for the sea surface displacement is 0.89, while that for the fault slip distribution is 1.04. We simulated the tsunami propagation using the optimum sea surface displacement model. Large tsunami amplitudes up to 2.5 m were predicted to occur inside and around a lagoon located between Salina Cruz and Puerto Chiapas. Figure 1: (a) Sea surface displacement for the 2017 Tehuantepec earthquake estimated from tsunami waveforms; (b) map of simulated maximum tsunami amplitude and comparison between observed (blue circles) and simulated (red circles) maximum tsunami amplitude along the coast.
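The waveform comparison quoted above rests on a normalized root-mean-square misfit between observed and simulated tsunami records. One common definition is sketched below; the paper's exact normalization may differ, and the sample waveforms are invented.

```python
import math

# Hedged sketch: NRMS misfit between observed and simulated waveforms,
# normalized by the energy of the observations. Values below 1 mean the
# simulation beats a zero prediction. Sample data are illustrative only.

def nrms_misfit(observed, simulated):
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum(o ** 2 for o in observed)
    return math.sqrt(num / den)

obs = [0.0, 0.2, 0.5, 0.3, -0.1]   # invented tide-gauge samples (m)
sim = [0.0, 0.1, 0.4, 0.3, -0.2]   # invented simulated samples (m)
print(nrms_misfit(obs, sim) < 1.0)  # True
```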
A Decision Support System for Optimum Use of Fertilizers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoskinson, Reed Louis; Hess, John Richard; Fink, Raymond Keith
1999-07-01
The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems' infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.
OPDOT: A computer program for the optimum preliminary design of a transport airplane
NASA Technical Reports Server (NTRS)
Sliwa, S. M.; Arbuckle, P. D.
1980-01-01
A description of a computer program, OPDOT, for the optimal preliminary design of transport aircraft is given. OPDOT utilizes constrained parameter optimization to minimize a performance index (e.g., direct operating cost per block hour) while satisfying operating constraints. OPDOT uses geometric descriptors as independent design variables, which are systematically iterated to find the optimum design. The technical development of the program is provided, and a program listing with sample input and output is used to illustrate its use in preliminary design. The document is not meant to be a user's guide, but rather a description of a useful design tool developed for studying the application of new technologies to transport airplanes.
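OPDOT's strategy of iterating geometric design variables against operating constraints can be caricatured with a tiny exhaustive search over two variables. The cost surrogate and the constraint below are invented stand-ins, not OPDOT's actual models.

```python
# Hedged sketch: minimize a toy direct-operating-cost surrogate over two
# geometric descriptors (wing area, aspect ratio) subject to one operating
# constraint. All functions and bounds are illustrative assumptions.

best = None
for wing_area in range(50, 301, 5):              # ft^2, illustrative bounds
    for aspect_ratio in (4, 6, 8, 10, 12):
        if wing_area < 100:                      # toy field-length constraint
            continue
        # toy cost: structure penalty grows with area, drag falls with AR
        cost = 0.01 * wing_area + 5.0 / aspect_ratio
        if best is None or cost < best[0]:
            best = (cost, wing_area, aspect_ratio)

print(best[1], best[2])  # 100 12
```

The constraint is active at the optimum (wing area pinned at its minimum feasible value), which is typical of the constrained designs such a program converges to.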
Studies on Phase Shifting Mechanism in Pulse Tube Cryocooler
NASA Astrophysics Data System (ADS)
Padmanabhan; Gurudath, C. S.; Srikanth, Thota; Ambirajan, A.; Basavaraj, SA; Dinesh, Kumar; Venkatarathnam, G.
2017-02-01
Pulse tube cryocoolers (PTC) are used extensively in spacecraft for applications such as sensor cooling due to their simple construction and long life owing to a fully passive cold head. Efforts at ISRO to develop a PTC for space use have resulted in a unit with a cooling capacity of 1 W at 80 K for an input of 45 W. This paper presents the results of a study with this PTC on the phase shifting characteristics of an inertance tube in conjunction with a reservoir. The aim was to obtain an optimum phase angle between the mass flow (ṁ) and dynamic pressure (p̃) at the pulse tube cold end that results in the largest possible heat lift from this unit. A theoretical model was developed using phasor analysis and a transmission line model (TLM) for different mass flows, and values of the optimum frequency and phase angles were predicted. These were compared with experimental data from the PTC for different configurations of the inertance tube/reservoir at various frequencies and charge pressures. These studies were carried out to characterise an existing cryocooler and design an optimised phase shifter with the aim of improving the performance with respect to specific power input.
Bar piezoelectric ceramic transformers.
Erhart, Jiří; Pulpan, Půlpán; Rusin, Luboš
2013-07-01
Bar-shaped piezoelectric ceramic transformers (PTs) working in the longitudinal vibration mode (k31 mode) were studied. Two types of transformer were designed: one with the electrode divided into two segments of different length, and one with the electrodes divided into three symmetrical segments. Parameters of the studied transformers, such as efficiency, transformation ratio, and input and output impedances, were measured. An analytical model was developed for PT parameter calculation for both two- and three-segment PTs. Neither type of bar PT exhibited very high efficiency (maximum 72% for the three-segment PT design) at a relatively high transformation ratio (4 for the two-segment PT and 2 for the three-segment PT at the fundamental resonance mode). The optimum resistive loads were 20 and 10 kΩ for the two- and three-segment PT designs at the fundamental resonance, respectively, and about one order of magnitude smaller for the higher overtone (i.e., 2 kΩ and 500 Ω, respectively). The no-load transformation ratio was less than 27 (maximum for the two-segment electrode PT design). The optimum input electrode aspect ratios (0.48 for the three-segment PT and 0.63 for the two-segment PT) were calculated numerically under no-load conditions.
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution.
Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
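The contrast between the conventional exponential profile and a flatter, near-linear decline at the canopy top can be sketched numerically. The flexible form below (an exponential with a shape parameter q, recovering the pure exponential at q = 1) and all parameter values are illustrative assumptions, not the paper's equation.

```python
import math

# Hedged sketch: protein concentration vs. cumulative leaf area index 'l'
# measured from the canopy top. Parameters n0 (top concentration) and k
# (light extinction) are illustrative.

def exponential_profile(n0, k, l):
    """Conventional exponential nitrogen/protein profile."""
    return n0 * math.exp(-k * l)

def flexible_profile(n0, k, l, q=2.0):
    """A flexible profile, flatter near the top; q = 1 is the exponential."""
    return n0 * math.exp(-k * l / q)

# Near the canopy top, the flexible profile declines more slowly, i.e. it
# stays closer to linear, as the optimum described in the abstract does.
print(flexible_profile(2.0, 0.5, 0.5) > exponential_profile(2.0, 0.5, 0.5))  # True
```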
Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon
2009-11-09
In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the number of users doubles, the input signal power decreases by almost 2 dB under the log-normal and exponential turbulence channels at a given average BER.
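The effect of scintillation on average BER can be reproduced in miniature by averaging an instantaneous Q-function BER over log-normal irradiance fading. The OOK-style BER expression and all parameter values below are assumptions, far simpler than the paper's SCM/IM3 analysis.

```python
import math
import random

# Hedged sketch: average BER over a log-normal turbulence channel by Monte
# Carlo. The instantaneous BER Q(sqrt(SNR)*I) is a simple OOK-style stand-in.

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def average_ber(snr, scint_index, n=50000, seed=1):
    rng = random.Random(seed)                     # fixed seed: reproducible
    sigma2 = math.log(1.0 + scint_index)          # log-normal with E[I] = 1
    mu = -0.5 * sigma2
    acc = 0.0
    for _ in range(n):
        irradiance = math.exp(rng.gauss(mu, math.sqrt(sigma2)))
        acc += q_func(math.sqrt(snr) * irradiance)
    return acc / n

# Stronger turbulence (larger scintillation index) degrades the average BER.
print(average_ber(9.0, 0.5) > average_ber(9.0, 0.1))  # True
```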
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko
2014-01-01
The objective was to derive optimum injury probability curves describing human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subjects (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process; they were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Scale (AIS) 2+ level. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function of the three distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at 5, 25, and 50% risk levels for lower leg fracture.
For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
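A Weibull injury risk curve of the kind derived here has the form p(F) = 1 − exp(−(F/λ)^k), with age entering through a covariate shift of the scale parameter. The sketch below uses invented shape, scale and age-coefficient values, not the paper's fitted parameters.

```python
import math

# Hedged sketch: Weibull injury risk vs. peak force with an age covariate.
# shape, scale0 and age_coef are illustrative assumptions only.

def injury_risk(force_kn, age, shape=4.0, scale0=14.0, age_coef=-0.08):
    """Probability of AIS 2+ lower-leg fracture at a given peak force and age."""
    scale = scale0 + age_coef * (age - 45.0)   # older specimens: lower tolerance
    return 1.0 - math.exp(-((force_kn / scale) ** shape))

# Risk rises with force, and at a fixed force it rises with age, matching the
# ordering of the tabulated peak forces (5.1-10.4 kN) in the abstract.
print(injury_risk(8.0, 65) > injury_risk(8.0, 25))  # True
```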
A Study of Optimum Population Levels—A Progress Report*
Singer, S. Fred
1972-01-01
The purpose of this study is to explore different approaches and to develop a methodology that will allow a calculation of “optimum levels of population.” The discussion is specialized to the United States, but the methodology should be broad enough to handle other countries, including less-developed countries. The study is based on economics, but with major inputs from the areas of technology, natural resources management, environmental effects, and demography. The general approach will be to develop an index for quality of life (IQL or Q-index) and to maximize this index as a function of level and distribution of population. The technique consists of a reshuffling of national income accounts so as to be able to go from the Gross National Product (GNP) to the index for quality of life, plus a careful discussion of what is and what is not to be included. The initial part of the study consists of a projection of the index for quality of life as population level increases and as population distribution changes, under the assumption of various technologies, particularly as these relate to the consumption of minerals, energy, and other natural resources. One would expect that as economic growth continues, an increasing fraction of expenditures would be for the diseconomies produced by population growth and economic growth. This study should be useful by providing a rational base for governmental policies regarding population, both in the United States and abroad. Another application of the study is to technology assessment, by measurement of the impact on economic well-being through the introduction of new technologies. Therefore, one can gauge the necessary and desirable investments in certain new technologies. In general, mathematical models resulting from this study can become useful diagnostic tools to analyze the consequences of various public and private policy decisions. PMID:4509346
Yang, Sheng Long; Wu, Yu Mei; Zhang, Bian Bian; Zhang, Yu; Fan, Wei; Jin, Shao Fei; Dai, Yang
2017-01-01
A thermocline characteristics contour on a spatial overlay map was plotted using monthly data collected from Argo buoys and monthly CPUE (catch per unit effort) data for the bigeye tuna (Thunnus obesus) longline fishery from the Western and Central Pacific Fisheries Commission (WCPFC) to evaluate the relationship between the temporal-spatial distribution of bigeye tuna fishing grounds and thermocline characteristics in the Western and Central Pacific Ocean (WCPO). In addition, numerical methods were used to calculate the optimum ranges of thermocline characteristics of the central fishing grounds. The results showed that the central fishing grounds were mainly distributed between 10° N and 10° S. Seasonal fishing grounds south of the equator were related to seasonal variations in the upper boundary temperature, depth and thickness of the thermocline. The fishing grounds were observed in areas where the upper boundary depth of the thermocline was deep (70-100 m) and the thermocline thickness was more than 60 m. The CPUE tended to be low in areas where the thermocline thickness was less than 40 m. The optimum upper boundary temperature range for distribution was 26-29 ℃, and the CPUE was mostly lower than the threshold value (Q3) of central fishing grounds when the temperature was higher than 29 ℃ or lower than 26 ℃. The temporal and spatial distribution of the fishing grounds was influenced by seasonal variations in the upper boundary depth and thermocline thickness. The central fishing grounds south of the equator disappeared when the upper boundary depth of the thermocline decreased and the thermocline became thinner. The lower boundary temperature and depth of the thermocline and the thermocline strength had little variation, but were strongly linked to the location of the fishing grounds.
The fishing grounds were mainly located between the two high-value zones of the lower boundary depth of the thermocline, where the temperature was lower than 13 ℃ and the strength was high. When the lower boundary depth was more than 300 m or less than 150 m, the lower boundary temperature was more than 17 ℃, or the strength was low, the CPUE tended to be low. The optimum range of thermocline characteristics was calculated using frequency analysis and the empirical cumulative distribution function. The results showed that the optimum ranges for upper boundary thermocline temperature and depth were 26-29 ℃ and 70-110 m, the optimum lower boundary thermocline temperature and depth ranges were 11-13 ℃ and 200-280 m, and the optimum ranges for thermocline thickness and strength were 50-90 m and 0.1-0.16 ℃·m⁻¹, respectively. The paper documents the distribution interval of thermocline characteristics for the central fishing grounds of bigeye tuna in the WCPO. The results provide a reference for improving the efficiency of pelagic bigeye tuna fishing operations and tuna resource management in the WCPO.
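An empirical-cumulative-distribution approach to extracting an "optimum range" can be sketched as picking a central interval of the observed values of an environmental variable at the central fishing grounds. The percentile cut-offs and the sample data below are illustrative assumptions; the paper's frequency-analysis thresholds may differ.

```python
# Hedged sketch: central interval of an empirical distribution as a proxy
# for the optimum environmental range. Data values are invented.

def ecdf_interval(values, lo=0.1, hi=0.9):
    """Return the (lo, hi) empirical-quantile interval of the sample."""
    xs = sorted(values)
    n = len(xs)
    return xs[int(lo * (n - 1))], xs[int(hi * (n - 1))]

# invented upper-boundary thermocline temperatures (℃) at high-CPUE cells
upper_boundary_temp_c = [25.5, 26.2, 26.8, 27.1, 27.5,
                         27.9, 28.3, 28.7, 29.1, 29.6]
print(ecdf_interval(upper_boundary_temp_c))  # (25.5, 29.1)
```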
NASA Technical Reports Server (NTRS)
On, F. J.
1975-01-01
Test methods were evaluated to ascertain whether a spacecraft, properly tested within its shroud, could be vibroacoustically tested without the shroud, with adjustments made in the acoustic input spectra to simulate the acoustic response of the missing shroud. The evaluation was based on vibroacoustic test results obtained from a baseline model composed (1) of a spacecraft with adapter, lower support structure, and shroud; (2) of the spacecraft, adapter, and lower structure, but without the shroud; and (3) of the spacecraft and adapter only. Emphasis was placed on the magnitude of the acoustic input changes required to substitute for the shroud, the difficulty of making such input changes, and the degree of missimulation that can result from the performance of a particular, less-than-optimum test. Conclusions are drawn on the advantages and disadvantages derived from the use of input spectra adjustment methods and lower support structure simulations. Test guidelines were also developed for planning and performing a launch acoustic-environmental test.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied to an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
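The multiplier-based inference can be illustrated with a plain Metropolis sampler (a deliberately simplified stand-in for DREAM) estimating a single recharge multiplier against a toy linear "model". Nothing below uses MODFLOW; the observations and likelihood are invented.

```python
import math
import random

# Hedged sketch: Metropolis sampling of one input multiplier. The toy model
# response (mult * x) and the Gaussian error sigma are assumptions.

def log_likelihood(mult, obs, sigma=0.5):
    sim = [mult * x for x in (1.0, 2.0, 3.0)]   # toy model response
    return -0.5 * sum(((o - s) / sigma) ** 2 for o, s in zip(obs, sim))

def metropolis(obs, n=5000, seed=3):
    rng = random.Random(seed)
    x, ll = 1.0, log_likelihood(1.0, obs)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, 0.2)          # random-walk proposal
        ll_cand = log_likelihood(cand, obs)
        if math.log(rng.random() + 1e-300) < ll_cand - ll:
            x, ll = cand, ll_cand               # accept
        chain.append(x)
    return chain

obs = [1.3, 2.6, 3.9]            # data generated with a "true" multiplier of 1.3
chain = metropolis(obs)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
print(abs(post_mean - 1.3) < 0.2)  # posterior centres near the true multiplier
```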
Performance Analysis and Optimization of Concentrating Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Lamba, Ravita; Manikandan, S.; Kaushik, S. C.
2018-06-01
A thermodynamic model for a concentrating solar thermoelectric generator (STEG) considering the Thomson effect combined with Fourier heat conduction, Peltier, and Joule heating has been developed and optimized in the MATLAB environment. The temperatures at the hot and cold junctions of the thermoelectric generator were evaluated by solving the energy balance equations at both junctions. The effects of the solar concentration ratio, input electrical current, number of thermocouples, and electrical load resistance ratio on the power output and energy and exergy efficiencies of the system were studied. Optimization studies were carried out for the STEG system, and the optimum number of thermocouples, concentration ratio, and resistance ratio were determined. The results showed that the optimum values of these parameters are different for conditions of maximum power output and maximum energy and exergy efficiency. The optimum values of the concentration ratio and load resistance ratio for maximum energy efficiency of 5.85% and maximum exergy efficiency of 6.29% were found to be 180 and 1.3, respectively, with corresponding power output of 4.213 W. Furthermore, at higher concentration ratio (C = 600), the optimum number of thermocouples was found to be 101 for maximum power output of 13.75 W, maximum energy efficiency of 5.73%, and maximum exergy efficiency of 6.16%. Moreover, the optimum number of thermocouples was the same for conditions of maximum power output and energy and exergy efficiency. The results of this study may provide insight for design of actual concentrated solar thermoelectric generator systems.
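The load-matching behaviour can be checked against the standard lumped thermoelectric relations, where power P = I(αΔT − I·R_int) peaks at a load-to-internal resistance ratio of 1, while efficiency peaks at a somewhat higher ratio (consistent with the 1.3 reported above for efficiency). The parameter values below are illustrative, not the paper's.

```python
# Hedged sketch: TEG power vs. resistance ratio from the lumped model.
# alpha (V/K), junction temperatures (K) and R_internal (ohm) are assumptions.

def teg_power(alpha, t_hot, t_cold, r_internal, resistance_ratio):
    """Electrical power delivered to the load; ratio m = R_load / R_internal."""
    dt = t_hot - t_cold
    current = alpha * dt / (r_internal * (1.0 + resistance_ratio))
    return current ** 2 * resistance_ratio * r_internal

powers = {m: teg_power(0.05, 450.0, 300.0, 2.0, m) for m in (0.5, 1.0, 1.3, 2.0)}
best = max(powers, key=powers.get)
print(best)  # 1.0
```

Maximum power at m = 1 and maximum efficiency at m > 1 is exactly why the abstract reports different optima for power and for energy/exergy efficiency.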
Using Extreme Groups Strategy When Measures Are Not Normally Distributed.
ERIC Educational Resources Information Center
Fowler, Robert L.
1992-01-01
A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
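The core shift idea can be demonstrated on a toy one-mode field: if a parameter change mainly rescales the range dependence, the field at the perturbed parameter equals the nominal field evaluated at a shifted range. The single-sinusoid "field" and the shift rule below are invented stand-ins for a real propagation model.

```python
import math

# Hedged sketch: a toy field whose wavenumber scales with an uncertain depth
# parameter. One field calculation at the nominal depth, evaluated at a
# shifted range, reproduces the field at the perturbed depth.

def field_amp(r, depth):
    k = 2.0 * math.pi * depth / 50.0      # toy: wavenumber proportional to depth
    return abs(math.sin(k * r))

r0, d_nominal, d_perturbed = 3.0, 50.0, 50.25   # 0.5% depth uncertainty

# toy shift rule: k(d_nominal) * (r0 + shift) == k(d_perturbed) * r0
shift = r0 * (d_perturbed / d_nominal - 1.0)

approx = field_amp(r0 + shift, d_nominal)   # shifted nominal field
exact = field_amp(r0, d_perturbed)          # field recomputed at perturbed depth
print(abs(approx - exact) < 1e-9)           # True: the shifted field matches
```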
Defense Logistics Standard Systems Functional Requirements.
1987-03-01
Artificial Intelligence - the development of a machine capability to perform functions normally associated with human intelligence, such as learning and adapting. [Front-matter residue omitted: "Basic Data Base Machine Configurations"; "Part I: Models - Defense Logistics Standard Systems Functional Requirements".] On-line, Interactive Access - integrating user input and machine output in a dynamic, real-time, give-and-take process is considered the optimum mode.
Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe
1989-01-01
PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...
Kuwawenaruwa, August; Borghi, Josephine; Remme, Michelle; Mtei, Gemini
2017-07-11
There is limited evidence on how health care inputs are distributed from the sub-national level down to health facilities and their potential influence on promoting health equity. To address this gap, this paper assesses equity in the distribution of health care inputs across public primary health facilities at the district level in Tanzania. This is a quantitative assessment of equity in the distribution of health care inputs (staff, drugs, medical supplies and equipment) from district to facility level. The study was carried out in three districts (Kinondoni, Singida Rural and Manyoni district) in Tanzania. These districts were selected because they were implementing primary care reforms. We administered 729 exit surveys with patients seeking out-patient care, and health facility surveys at 69 facilities in early 2014. A total of seventeen indices of input availability were constructed with the collected data. The distribution of inputs was considered in relation to (i) the wealth of patients accessing the facilities, which was taken as a proxy for the wealth of the population in the catchment area; and (ii) facility distance from the district headquarters. We assessed equity in the distribution of inputs through the use of equity ratios, concentration indices and curves. We found a significant pro-rich distribution of clinical staff and nurses per 1000 population. Facilities with the poorest patients (most remote facilities) have fewer staff per 1000 population than those with the least poor patients (least remote facilities): 0.6 staff per 1000 among the poorest, compared to 0.9 among the least poor; 0.7 staff per 1000 among the most remote facilities compared to 0.9 among the least remote. The negative concentration index for support staff suggests a pro-poor distribution of this cadre, but the 45-degree line dominated the concentration curve.
The distribution of vaccines, antibiotics, anti-diarrhoeals, anti-malarials and medical supplies was approximately proportional (non-dominance), whereas the distribution of oxytocics, anti-retroviral therapy (ART) and anti-hypertensive drugs was pro-rich, with the 45-degree line dominating the concentration curve for ART. This study has shown that there are inequities in the distribution of health care inputs across public primary care facilities. This highlights the need to ensure a better coordinated and equitable distribution of inputs through regular monitoring of the availability of health care inputs and strengthening of reporting systems.
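The concentration indices used above can be computed with the standard covariance formula CI = 2·cov(y, r)/μ, where r is the fractional wealth rank of each facility's catchment and y is the input level; a positive CI indicates a pro-rich distribution. The facility data below are invented for illustration.

```python
# Hedged sketch: concentration index over facilities ranked from poorest to
# least-poor catchment population. Data values are illustrative only.

def concentration_index(values_ranked_poor_to_rich):
    """CI = 2 * cov(y, fractional rank) / mean(y); positive = pro-rich."""
    n = len(values_ranked_poor_to_rich)
    mean = sum(values_ranked_poor_to_rich) / n
    ranks = [(i + 0.5) / n for i in range(n)]        # fractional ranks, mean 0.5
    cov = (sum(v * r for v, r in zip(values_ranked_poor_to_rich, ranks)) / n
           - mean * 0.5)
    return 2.0 * cov / mean

# staff per 1000 population, rising with catchment wealth (invented numbers)
staff_per_1000 = [0.6, 0.7, 0.8, 0.9]
print(concentration_index(staff_per_1000) > 0)  # True: pro-rich
```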
NASA Technical Reports Server (NTRS)
Evans, Austin Lewis
1987-01-01
A computer code to model the steady-state performance of a monogroove heat pipe for the NASA Space Station is presented, including the effects on heat pipe performance of a screen in the evaporator section which deals with transient surges in the heat input. Errors in a previous code have been corrected, and the new code adds additional loss terms in order to model several different working fluids. Good agreement with existing performance curves is obtained. From a preliminary evaluation of several of the radiator design parameters it is found that an optimum fin width could be achieved but that structural considerations limit the thickness of the fin to a value above optimum.
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in the cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of a comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
NASA Astrophysics Data System (ADS)
Chandrasekaran, Muthumari; Tamang, Santosh
2017-08-01
Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergence capability with a minimum number of iterations.
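As a sketch of the PSO stage only (not the authors' implementation), the snippet below minimizes machining time over the three turning parameters. The analytic `machining_time` function is a hypothetical stand-in for the trained ANN surrogate, and the parameter bounds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def machining_time(x):
    """Stand-in surrogate (hypothetical): in the paper an ANN plays this role.
    x = [spindle speed N, feed rate f, depth of cut d]; time falls as N*f grows."""
    N, f, d = x
    return 1000.0 / (N * f) + 5.0 * d

def pso(objective, bounds, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside parameter bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, objective(g)

best, t_best = pso(machining_time, bounds=[(500, 2000), (0.05, 0.4), (0.5, 2.0)])
```

In the integrated method, the surrogate evaluated inside the loop would be the trained ANN, with a roughness constraint added to the objective.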
NASA Astrophysics Data System (ADS)
Raudah; Zulkifli
2018-03-01
The present research focuses on establishing the optimum conditions for converting coffee husk into a densified biomass fuel using starch as a binding agent. A Response Surface Methodology (RSM) approach using a Box-Behnken experimental design with three levels (-1, 0, and +1) was employed to obtain the optimum level for each parameter. The briquettes were produced by compressing the coffee husk-starch mixture in a piston and die assembly at a pressure of 2000 psi. Starch percentage, pyrolysis time, and particle size were the input parameters for the algorithm. A bomb calorimeter was used to determine the heating value (HHV) of the solid fuel. The results of the study indicated that a combination of 34.71 mesh particle size, 110.93 min pyrolysis time, and 8% starch concentration were the optimum variables. The HHV and density of the fuel were up to 5644.66 cal g-1 and 0.7069 g cm-3, respectively. The study showed that further research should be conducted to improve the briquette density so that the coffee husk can be converted into a commercial solid fuel to reduce dependence on fossil fuels.
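A minimal sketch of the Box-Behnken/RSM machinery, assuming a standard three-factor design and a full second-order response model; the coefficients below are invented for illustration, not the paper's fitted HHV model:

```python
import numpy as np
from itertools import product

# Box-Behnken design for 3 coded factors: pairs of factors at +/-1, third at 0
def box_behnken3(n_center=3):
    runs = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        for a, b in product((-1, 1), repeat=2):
            x = [0, 0, 0]
            x[i], x[j] = a, b
            runs.append(x)
    runs += [[0, 0, 0]] * n_center            # replicated center points
    return np.array(runs, dtype=float)

def design_matrix(X):
    """Full second-order model: intercept, linear, interaction, quadratic terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = box_behnken3()
# Hypothetical response model for HHV in cal/g (illustrative coefficients only)
true_beta = np.array([5600, 30, -20, 15, 5, 0, 0, -40, -25, -10], dtype=float)
y = design_matrix(X) @ true_beta
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # recovers true_beta
```

With real measurements, `y` would be the observed HHV at each run, and the fitted quadratic surface would then be optimized over the coded factor space.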
Development of a computer code to couple PWR-GALE output and PC-CREAM input
NASA Astrophysics Data System (ADS)
Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.
2018-02-01
Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. The radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. These input data are derived from the output of another program, PWR-GALE, and from population distribution data written in a specific format. Compiling inputs for the PC-CREAM program manually requires high accuracy, as it involves large amounts of data in fixed formats, and manual compilation is prone to error. To minimize errors in input generation, a coupling program for the PWR-GALE and PC-CREAM programs was made, together with a program for writing the population distribution according to the PC-CREAM input format. Programming was done using the Python programming language, which has the advantages of being multiplatform, object-oriented and interactive. The result of this work is software for coupling the source-term data and writing the population distribution data, so that inputs to the PC-CREAM program can be generated easily and formatting errors avoided.
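Since neither file format is given in the abstract, the sketch below invents both a PWR-GALE-style output and a fixed-width PC-CREAM-style input, purely to illustrate the parse-and-rewrite coupling idea:

```python
# Hedged sketch: the real PWR-GALE and PC-CREAM file formats are not shown in
# the abstract, so both formats here are hypothetical, for illustration only.
def parse_source_terms(text):
    """Parse hypothetical PWR-GALE output lines like 'Kr-85  1.23E+02'."""
    terms = {}
    for line in text.strip().splitlines():
        nuclide, activity = line.split()
        terms[nuclide] = float(activity)
    return terms

def write_pc_cream_input(terms):
    """Write a hypothetical fixed-width PC-CREAM source-term block."""
    lines = [f"{nuclide:<8s}{activity:12.4E}"
             for nuclide, activity in sorted(terms.items())]
    return "\n".join(lines)

gale_output = """
Kr-85   1.23E+02
I-131   4.56E-01
"""
print(write_pc_cream_input(parse_source_terms(gale_output)))
```

The value of such a coupling layer is exactly what the abstract argues: the fixed-width formatting is produced programmatically, so transcription and alignment errors cannot creep in.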
Sensory-evoked perturbations of locomotor activity by sparse sensory input: a computational study
Brownstone, Robert M.
2015-01-01
Sensory inputs from muscle, cutaneous, and joint afferents project to the spinal cord, where they are able to affect ongoing locomotor activity. Activation of sensory input can initiate or prolong bouts of locomotor activity depending on the identity of the sensory afferent activated and the timing of the activation within the locomotor cycle. However, the mechanisms by which afferent activity modifies locomotor rhythm and the distribution of sensory afferents to the spinal locomotor networks have not been determined. Considering the many sources of sensory inputs to the spinal cord, determining this distribution would provide insights into how sensory inputs are integrated to adjust ongoing locomotor activity. We asked whether a sparsely distributed set of sensory inputs could modify ongoing locomotor activity. To address this question, several computational models of locomotor central pattern generators (CPGs) that were mechanistically diverse and generated locomotor-like rhythmic activity were developed. We show that sensory inputs restricted to a small subset of the network neurons can perturb locomotor activity in the same manner as seen experimentally. Furthermore, we show that an architecture with sparse sensory input improves the capacity to gate sensory information by selectively modulating sensory channels. These data demonstrate that sensory input to rhythm-generating networks need not be extensively distributed. PMID:25673740
Treatment planning for internal emitter therapy: Methods, applications and clinical implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sgouros, G.
1999-01-01
Treatment planning involves three basic steps: (1) a procedure must be devised that will provide the most relevant information, (2) the procedure must be applied and (3) the resulting information must be translated into a definition of the optimum implementation. There are varying degrees of treatment planning that may be implemented in internal emitter therapy. As in chemotherapy, the information from a Phase 1 study may be used to treat patients based upon body surface area. If treatment planning is included on a patient-specific basis, a pretherapy, trace-labeled administration of the radiopharmaceutical is generally required. The data collected following the tracer dose may range from time-activity curves of blood and whole-body for use in blood, marrow or total body absorbed dose estimation to patient imaging for three-dimensional internal emitter dosimetry. The most ambitious approach requires a three-dimensional set of images representing radionuclide distribution (SPECT or PET) and a corresponding set of images representing anatomy (CT or MRI). The absorbed dose (or dose-rate) distribution may be obtained by convolution of a point kernel with the radioactivity distribution or by direct Monte Carlo calculation. A critical requirement for both techniques is the development of an overall structure that makes it possible, in a routine manner, to input the images, to identify the structures of interest and to display the results of the dose calculations in a clinically relevant manner. 52 refs., 4 figs., 1 tab.
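The point-kernel convolution mentioned above can be sketched in one dimension. The kernel here is a toy exponential, not a published dose point kernel, and all units are arbitrary:

```python
import numpy as np

# Hedged 1-D sketch of point-kernel dose calculation: the absorbed-dose profile
# is the convolution of the activity distribution with a dose point kernel.
dx = 0.5                                          # voxel size, cm
x = np.arange(-10, 10.5, dx)
activity = np.where(np.abs(x) < 2.0, 1.0, 0.0)    # uniform source region
r = np.arange(-5, 5.5, dx)
kernel = np.exp(-np.abs(r)) * dx                  # toy monotonic point kernel
dose = np.convolve(activity, kernel, mode="same") # dose-rate profile (arb. units)
```

The profile peaks at the centre of the source region and falls off outside it, reproducing the cross-fire dose beyond the source that makes voxel-wise convolution (or Monte Carlo) necessary for realistic dosimetry.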
Beam shaping to provide round and square-shaped beams in optical systems of high-power lasers
NASA Astrophysics Data System (ADS)
Laskin, Alexander; Laskin, Vadim
2016-05-01
Optical systems of modern high-power lasers require control of the irradiance distribution: round or square-shaped flat-top or super-Gaussian irradiance profiles are optimum for amplification in MOPA lasers and for thermal load management while pumping the crystals of solid-state ultra-short pulse lasers, where they control heat and minimize its impact on laser power and beam quality while maximizing overall laser efficiency; variable profiles are also important for irradiating the photocathode of free-electron lasers (FEL). It is suggested to solve the task of irradiance redistribution using field-mapping refractive beam shapers such as the piShaper. The operational principle of these devices presumes transformation of the laser beam intensity from Gaussian to flat-top with high flatness of the output wavefront, preservation of beam consistency, a collimated output beam of low divergence, high transmittance, extended depth of field and negligible residual wave aberration, while the achromatic design provides the capability to work with ultra-short pulse lasers having broad spectra. Using the same piShaper device it is possible to realize beams with flat-top, inverse-Gauss or super-Gauss irradiance distributions by simple variation of the input beam diameter, and the beam shape can be round or square with soft edges. This paper describes some design basics of refractive beam shapers of the field-mapping type and optical layouts for applying them in optical systems of high-power lasers. Examples of real implementations and experimental results are presented as well.
NASA Astrophysics Data System (ADS)
Sekine, Hideki; Yoshida, Kimiaki
This paper deals with the optimization problem of material composition for minimizing the stress intensity factor of a radial edge crack in thick-walled functionally graded material (FGM) circular pipes under steady-state thermomechanical loading. Homogenizing the FGM circular pipes by simulating the inhomogeneity of thermal conductivity with a distribution of equivalent eigentemperature gradient, and the inhomogeneity of Young's modulus and Poisson's ratio with a distribution of equivalent eigenstrain, we present an approximation method to obtain the stress intensity factor of a radial edge crack in the FGM circular pipes. The optimum material composition for minimizing the stress intensity factor is determined using a nonlinear mathematical programming method. Numerical results obtained for a thick-walled TiC/Al2O3 FGM circular pipe reveal that the stress intensity factor of the radial edge crack can be decreased remarkably by setting the optimum material composition profile.
Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network
NASA Astrophysics Data System (ADS)
Khan, U. T.
2016-12-01
Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high-resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility-theory-based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. Together, these components replace the traditional ad hoc network configuration methods with ones based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network.
The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method performed better (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.
The practical operational-amplifier gyrator circuit for inductorless filter synthesis
NASA Technical Reports Server (NTRS)
Sutherland, W. C.
1976-01-01
A literature survey of gyrator circuits utilizing operational amplifiers as the active device is reported. A gyrator is a two-port nonreciprocal device with the property that the input impedance is proportional to the reciprocal of the load impedance. Following an experimental study, the gyrator circuit with optimum properties was selected for additional testing. A theoretical analysis was performed and compared with the experimental results, showing excellent agreement.
Thermoelectric thin film thermal coating systems
NASA Technical Reports Server (NTRS)
Harpster, J. W.; Bulman, W. E.; Middleton, A. E.; Swinehart, P. R.; Braun, F. D.
1973-01-01
Derivation of the fluid loop temperature profile for a model with thermoelectric devices (TED) attached is developed as a function of position, incident radiation intensity, input fluid loop temperature and TED current. The associated temperature of the radiator is also developed so that the temperature difference across the TED can be determined for each position. The temperature difference is used in determining optimum operating conditions and available generated electrical power.
Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko
2015-01-01
Objective: Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods: The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal and log-logistic distributions was based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results: The mean age, stature and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other two distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45- and 65-year-old age groups at 5, 25 and 50% risk levels for lower leg fracture.
For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381
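Weibull risk curves of the kind reported above have the general form P(F) = 1 − exp(−(F/scale)^shape). The sketch below uses hypothetical shape and scale values, not the paper's fitted age-dependent parameters:

```python
import math

# Hedged sketch: a Weibull injury risk curve. shape and scale below are
# hypothetical placeholders, not the paper's fitted age-dependent parameters.
def injury_risk(force_kn, shape=4.0, scale=10.0):
    """Probability of AIS 2+ lower-leg injury at a given peak axial force."""
    return 1.0 - math.exp(-((force_kn / scale) ** shape))

for f in (5.0, 8.0, 10.0):
    print(f, round(injury_risk(f), 3))
```

In the paper, age enters the survival model as a covariate, so each age group effectively gets its own such curve; reading the curve at a target risk level (5, 25 or 50%) yields the tolerance forces tabulated above.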
Time-optimum packet scheduling for many-to-one routing in wireless sensor networks
Song, W.-Z.; Yuan, F.; LaHusen, R.
2007-01-01
This paper studies the WSN application scenario with periodical traffic from all sensors to a sink. We present a time-optimum and energy-efficient packet scheduling algorithm and its distributed implementation. We first give a general many-to-one packet scheduling algorithm for wireless networks, and then prove that it is time-optimum and costs max(2N(u1) - 1, N(u0) - 1) time slots, assuming each node reports one unit of data in each round. Here N(u0) is the total number of sensors, while N(u1) denotes the number of sensors in a sink's largest branch subtree. With a few adjustments, we then show that our algorithm also achieves time-optimum scheduling in heterogeneous scenarios, where each sensor reports a heterogeneous amount of data in each round. Then we give a distributed implementation to let each node calculate its duty-cycle locally and maximize efficiency globally. In this packet scheduling algorithm, each node goes to sleep whenever it is not transceiving, so that the energy waste of idle listening is also eliminated. Finally, simulations are conducted to evaluate network performance using the Qualnet simulator. Among other contributions, our study also identifies the maximum reporting frequency that a deployed sensor network can handle. ©2006 IEEE.
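The slot bound can be evaluated directly; a minimal sketch:

```python
# The paper's slot bound: with one packet per sensor per round, an optimum
# schedule takes max(2*N_u1 - 1, N_u0 - 1) time slots, where N_u0 is the
# total sensor count and N_u1 the size of the sink's largest branch subtree.
def optimum_slots(n_total, n_largest_branch):
    return max(2 * n_largest_branch - 1, n_total - 1)

# Balanced branches: 30 sensors, largest branch 10 -> pipeline-limited
print(optimum_slots(30, 10))  # → 29
# One dominant branch: 30 sensors, largest branch 20 -> branch-limited
print(optimum_slots(30, 20))  # → 39
```

The two cases show the two regimes: when no branch dominates, the sink's one-packet-per-slot bottleneck (N_u0 - 1) governs; when one branch holds most sensors, the alternation within that branch (2N_u1 - 1) does.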
NASA Technical Reports Server (NTRS)
Rice, E. J.
1976-01-01
A liner design for noise suppressors with outer wall treatment such as in an engine inlet is presented which potentially circumvents the problems of resolution in modal measurement. The method is based on the fact that the modal optimum impedance and the maximum possible sound power attenuation at this optimum can be expressed as functions of cutoff ratio alone. Modes with similar cutoff ratios propagate similarly in the duct and in addition propagate similarly to the far field. Thus there is no need to determine the acoustic power carried by these modes individually, and they can be grouped together as one entity. With the optimum impedance and maximum attenuation specified as functions of cutoff ratio, the off-optimum liner performance can be estimated using an approximate attenuation equation.
High-speed reference-beam-angle control technique for holographic memory drive
NASA Astrophysics Data System (ADS)
Yamada, Ken-ichiro; Ogata, Takeshi; Hosaka, Makoto; Fujita, Koji; Okuyama, Atsushi
2016-09-01
We developed a holographic memory drive for next-generation optical memory. In this study, we present the key technology for achieving a high-speed transfer rate for reproduction, that is, a high-speed control technique for the reference beam angle. In reproduction in a holographic memory drive, there is the issue that the optimum reference beam angle during reproduction varies owing to distortion of the medium. The distortion is caused by, for example, temperature variation, beam irradiation, and moisture absorption. Therefore, a reference-beam-angle control technique to position the reference beam at the optimum angle is crucial. We developed a new optical system that generates an angle-error-signal to detect the optimum reference beam angle. To achieve the high-speed control technique using the new optical system, we developed a new control technique called adaptive final-state control (AFSC) that adds a second control input to the first one derived from conventional final-state control (FSC) at the time of angle-error-signal detection. We established an actual experimental system employing AFSC to achieve moving control between each page (Page Seek) within 300 µs. In sequential multiple Page Seeks, we were able to realize positioning to the optimum angles of the reference beam that maximize the diffracted beam intensity. We expect that applying the new control technique to the holographic memory drive will enable a giga-bit/s-class transfer rate.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
Combined linear theory/impact theory method for analysis and design of high speed configurations
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1980-01-01
Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.
Optimal Design of a Thermoelectric Cooling/Heating System for Car Seat Climate Control (CSCC)
NASA Astrophysics Data System (ADS)
Elarusi, Abdulmunaem; Attar, Alaa; Lee, Hosung
2017-04-01
In the present work, the optimum design of thermoelectric car seat climate control (CSCC) is studied analytically in an attempt to achieve high system efficiency. Optimal design of a thermoelectric device (element length, cross-section area and number of thermocouples) is carried out using our newly developed optimization method based on the ideal thermoelectric equations and dimensional analysis to improve the performance of the thermoelectric device in terms of the heating/cooling power and the coefficient of performance (COP). Then, a new innovative system design is introduced which also includes the optimum input current for the initial (transient) startup warming and cooling before the car heating ventilation and air conditioner (HVAC) is active in the cabin. The air-to-air heat exchanger's configuration was taken into account to investigate the optimal design of the CSCC.
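As a rough illustration of the ideal thermoelectric equations mentioned above, the cooling COP can be scanned over input current. The module properties below are illustrative placeholders, not the paper's optimized CSCC design:

```python
import numpy as np

# Hedged sketch of the standard ideal thermoelectric-cooler equations, used to
# scan for the input current that maximizes cooling COP.
alpha, R, K = 0.05, 2.0, 0.5      # Seebeck (V/K), resistance (ohm), conductance (W/K)
Tc, Th = 290.0, 310.0             # cold/hot junction temperatures (K)
dT = Th - Tc

I = np.linspace(0.1, 5.0, 500)                 # candidate input currents (A)
Qc = alpha * I * Tc - 0.5 * I**2 * R - K * dT  # cooling power (W)
W = alpha * I * dT + I**2 * R                  # electrical input power (W)
cop = np.where(Qc > 0, Qc / W, -np.inf)        # COP only where net cooling occurs
i_opt = I[cop.argmax()]
```

Scanning like this also exposes the transient-startup trade-off the abstract mentions: the current that maximizes cooling power is much larger than the current that maximizes COP, so the "optimum input current" depends on whether startup speed or steady-state efficiency is the goal.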
A Space-Saving Approximation Algorithm for Grammar-Based Compression
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroshi; Maruyama, Shirou; Kida, Takuya; Shimozono, Shinichi
A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log … log n > 1. This ratio is thus regarded as almost O(log n), which is currently the best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n / log_k n) for strings from a k-letter alphabet [12].
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no memory and no signal randomness is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
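A minimal sketch in the CIS spirit of input scaling (not the paper's exact estimator; the scale choice below is a common heuristic): a Gaussian tail probability, the building block of bit-error-rate estimation, is estimated by sampling from a variance-scaled density and reweighting:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of conventional importance sampling by input scaling:
# estimate P(X > t) for X ~ N(0,1) by drawing from N(0, scale^2) and
# weighting each sample by p(x)/q(x). scale = t is a heuristic choice.
def tail_prob_is(t, n=200_000, scale=4.0):
    x = rng.normal(0.0, scale, n)                      # biased simulation density
    log_w = -0.5 * x**2 + 0.5 * (x / scale)**2 + math.log(scale)
    return np.mean((x > t) * np.exp(log_w))

t = 4.0
true_q = 0.5 * math.erfc(t / math.sqrt(2))             # exact Gaussian tail, ~3.17e-5
est = tail_prob_is(t)
```

Plain MC would need on the order of 1/true_q ≈ 30 000 samples just to see one error event; the scaled density pushes a large fraction of samples past the threshold, which is exactly the variance reduction CIS exploits.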
Computerized LCC/ORLA methodology. [Life cycle cost/optimum repair level analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, J.T.
1979-01-01
The effort by Sandia Laboratories in developing CDC6600 computer programs for Optimum Repair Level Analysis (ORLA) and Life Cycle Cost (LCC) analysis is described. Investigation of the three repair-level strategies referenced in AFLCM/AFSCM 800-4 (base discard of subassemblies, base repair of subassemblies, and depot repair of subassemblies) was expanded to include an additional three repair-level strategies (base discard of complete assemblies and, upon shipment of complete assemblies to the depot, depot repair of assemblies by subassembly repair, and depot repair of assemblies by subassembly discard). The expanded ORLA was used directly in an LCC model that was procedurally altered to accommodate the ORLA input data. Available from the LCC computer run was an LCC value corresponding to the strategy chosen from the ORLA. 2 figures.
Numerical research of a 2D axial symmetry hybrid model for the radio-frequency ion thruster
NASA Astrophysics Data System (ADS)
Chenchen, WU; Xinfeng, SUN; Zuo, GU; Yanhui, JIA
2018-04-01
Since high-efficiency discharge is critical to the radio-frequency ion thruster (RIT), a 2D axial-symmetry hybrid model has been developed to study the plasma evolution of the RIT. The fluid method and the drift energy correction of the electron energy distribution function (EEDF) are applied to the analysis of the RIT discharge. In the meantime, the PIC-MCC method is used to investigate the ion beam current extraction characteristics for the plasma plume region. The beam current simulation results with the hybrid model agree well with the experimental results, with an error lower than 11%, which shows the validity of the model. A further study shows that there is an optimal ratio between the radio-frequency (RF) power and the beam current extraction power under a fixed RIT configuration, and that the beam extraction efficiency will decrease when the discharge power exceeds a certain threshold (about 87 W). As the input parameters of the hybrid model are all design values, it can be used directly for the optimum design of other kinds of RITs and radio-frequency ion sources.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing with global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters and obtaining the optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must be performed to determine a suitable cell number for the final model, ensuring an accurate result with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used, producing results whose accuracy is comparable to that of the finer grid schemes, as the difference in CP values was found to be insignificant.
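One common way to quantify such a grid-sensitivity check (a generic procedure, not necessarily the one used in this study, which compares CP profiles directly) is Roache's Grid Convergence Index; the CP values below are illustrative:

```python
import math

# Hedged sketch: Roache's Grid Convergence Index (GCI) applied to a single
# monitored quantity, e.g. a pressure coefficient CP at one roof location.
# The three CP values and the refinement ratio r are illustrative only.
def gci_fine(f_fine, f_medium, f_coarse, r=2.0):
    """Observed order p and fine-grid GCI from three systematically refined grids."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    e = abs((f_medium - f_fine) / f_fine)       # relative change, medium vs fine
    return 1.25 * e / (r ** p - 1.0), p

cp_coarse, cp_medium, cp_fine = -0.520, -0.505, -0.500
g, p = gci_fine(cp_fine, cp_medium, cp_coarse)
print(round(p, 3), round(g, 5))  # → 1.585 0.00625
```

A small GCI between the medium and fine grids is the quantitative version of the study's conclusion that the medium scheme suffices: further refinement changes CP by less than the stated uncertainty band.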
NASA Technical Reports Server (NTRS)
Ramins, P.
1984-01-01
Computer-designed axisymmetric 2.4-cm-diameter three-, four-, and five-stage depressed collectors were evaluated in conjunction with an octave-bandwidth, high-perveance, high-electronic-efficiency, gridded-gun traveling wave tube (TWT). Spent-beam refocusing was used to condition the beam for optimum entry into the depressed collectors. Both the TWT and multistage depressed collector (MDC) efficiencies were measured, as well as the MDC current, dissipated thermal power, and DC input power distributions, with the TWT operating both at saturation over its bandwidth and over its full dynamic range. Relatively high collector efficiencies were obtained, leading to a very substantial improvement in overall TWT efficiency. In spite of large fixed TWT body losses (due largely to the 6 to 8 percent beam interception), average overall efficiencies of 45 to 47 percent (for three to five collector stages) were obtained at saturation across the 2.5- to 5.5-GHz operating band. For operation below saturation the collector efficiencies improved steadily, leading to reasonable (20 percent) overall efficiencies as far as 6 dB below saturation.
NASA Astrophysics Data System (ADS)
Ozen, Murat; Guler, Murat
2014-02-01
Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples were fabricated from six mix designs by varying the aggregate gradation, aggregate source, and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned with a desktop flatbed scanner to obtain the cross-sectional images. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections, and a procedure is suggested for selecting a suitable particle shape parameter for the analysis of the aggregate size distribution within each cross section. The results indicated that the optimum threshold, and hence the pixel distribution functions, may differ even among cross sections of the same concrete sample. In addition, the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners evaluating concrete in terms of the degree of segregation and the gradation bounds achieved during manufacturing.
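The maximum Feret diameter used above is, in the usual definition, the largest caliper distance across a particle. A minimal sketch of that computation over boundary pixel coordinates (this is the generic definition, not the paper's exact image-analysis pipeline; the function name is illustrative):

```python
import math

def max_feret_diameter(points):
    """Maximum Feret (caliper) diameter of a particle cross section:
    the largest pairwise distance between boundary pixels.
    O(n^2), which is acceptable for the short boundaries of scanned aggregates."""
    best = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            best = max(best, math.hypot(dx, dy))
    return best
```

For the four corner pixels of a 3 x 3 square this returns the diagonal length, as expected.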
Characteristic of a Digital Correlation Radiometer Back End with Finite Wordlength
NASA Technical Reports Server (NTRS)
Biswas, Sayak K.; Hyde, David W.; James, Mark W.; Cecil, Daniel J.
2017-01-01
The performance characteristics of a digital correlation radiometer signal-processing back end (DBE) are analyzed using a simulator. The particular design studied here corresponds to the airborne Hurricane Imaging Radiometer, which was jointly developed by the NASA Marshall Space Flight Center, the University of Michigan, the University of Central Florida, and NOAA. Laboratory and flight test data are found to be in accord with the simulation results. The overall design appears to be optimum for the typical input signal dynamic range. It was found that the performance of the digital kurtosis estimate could be improved by lowering the DBE input power level. An unusual scaling between digital correlation channels observed in the instrument data is confirmed to be a DBE characteristic.
Power supply standardization and optimization study
NASA Technical Reports Server (NTRS)
Ware, C. L.; Ragusa, E. V.
1972-01-01
A comprehensive design study of a power supply for use in the space shuttle and other space flight applications is presented. Design specifications are established for a power supply capable of meeting over 90 percent of the anticipated voltage requirements of future spacecraft avionics systems. Analyses and tradeoff studies were performed on several alternative design approaches to assure that the selected design would provide near-optimum performance for the planned applications. The selected design uses a dc-to-dc converter incorporating regenerative current feedback with a time-ratio-controlled duty cycle to achieve high efficiency over wide variations in input voltage and output loads. The packaging concept uses an expandable mainframe capable of accommodating up to six inverter/regulator modules with one common input filter module.
Advanced infrared laser modulator development
NASA Technical Reports Server (NTRS)
Cheo, P. K.; Wagner, R.; Gilden, M.
1984-01-01
A parametric study was conducted to develop an electrooptic waveguide modulator for generating continuously tunable sideband power from an infrared CO2 laser. The parameters included the waveguide configuration, microstrip dimensions, device impedance, and effective dielectric constants. An optimum infrared laser modulator was established and fabricated. This modulator represents a state-of-the-art integrated optical device, with a three-dimensional topology that accommodates three lambda/4 step transformers for microwave impedance matching at both the input and output terminals. A flat frequency response of the device over 20 GHz (within 3 dB) was achieved. A maximum single-sideband-to-carrier power ratio greater than 1.2% was obtained for 20 W of microwave input power at an optical carrier wavelength of 10.6 microns.
Momentum distributions for H 2 ( e , e ' p )
Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.
2014-12-29
[Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models that depend on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound-state wave functions, form factors, and final-state interactions. We present a method to extract the momentum distributions from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] To test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which carries theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at least this process can provide an upper bound on the deuteron momentum distribution. Because this quantity relies on theoretical calculation, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.
NASA Astrophysics Data System (ADS)
Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.
2017-12-01
Cutter spacing is an essential parameter in TBM design. However, few efforts have been made to study the optimum cutter spacing incorporating penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests was performed in a biaxial compression state. The effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy, and maximum angle of lateral crack were investigated. Results show that the total mass of chips, the groove volume, and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It was also found that the total mass of chips could serve as an alternative means to determine optimum cutter spacing. In addition, analysis of chip size distribution suggests that the mass of large chips is governed by both cutter spacing and pre-set penetration depth. Fractal dimension analysis showed that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips, which are formed by squeezing of the cutters and by surface abrasion caused by shear failure. Analysis of specific energy indicates that the observed optimum spacing/penetration ratio is 10 for this sandstone, at which the specific energy and the maximum angle of lateral cracks are smallest. These findings contribute to a better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide guidelines for cutter arrangement.
Preparation of A356 Foam Aluminum by Means of Titanium Hydride
NASA Astrophysics Data System (ADS)
Sarajan, Zohair
2017-09-01
The effect of heating temperature and stirring time on the relative porosity of foam aluminum alloy A356 during its preparation is studied. The optimum amount of the foam-forming agent, titanium hydride (TiH2), needed to achieve a uniform distribution of pores throughout the whole cross section of a hardened casting is determined, and the optimum conditions for foam formation in a melt stirred with a mixer are established.
Designing Small Propellers for Optimum Efficiency and Low Noise Footprint
2015-06-26
each one. The GUI contains input boxes for all of the necessary data in order to run QMIL, QPROP, and NAFNoise, and to produce Visual Basic (VBA) code... VBA macros that will automatically place reference planes for each airfoil section and insert the splined airfoils at their respective reference planes... [Recoverable figure captions: solid propeller example; hub-and-spoke propeller design; aluminum hub design.]
2007-09-01
performance of the detector, and to compare the performance with sodium iodide and germanium detectors. Monte Carlo (MCNP) simulation was used to... aluminum (~50% more efficient), and to estimate optimum shield dimensions for an HPXe-based nuclear explosion monitor. MCNP modeling was also used to... detector were calculated with MCNP by using input activity levels as measured in routine NEM runs at Pacific Northwest National Laboratory (PNNL)...
A New ’Availability-Payment’ Model for Pricing Performance-Based Logistics Contracts
2014-04-30
maintenance network connected to the inventory and Original Equipment Manufacturer (OEM) used in this paper. The input to the Petri net in Figure 2 is the... contract structures. The model developed in this paper uses an affine controller to drive a discrete event simulator (Petri net) that produces availability and cost measures. The model is used to explore the optimum availability assessment...
Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting
NASA Astrophysics Data System (ADS)
Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy
2018-06-01
Spacecraft overtesting is a long-running problem, and most attempts to reduce it have focused on adjusting the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean the vibration at the interface may not occur along a single axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment system responses and the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element near a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system than traditional testing, and the optimum testing configuration gave an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test as opposed to the traditional three orthogonal-direction tests.
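The angle search described above can be sketched as a least-squares match between predicted equipment RMS responses and coupled-system targets. Everything in this sketch (the toy response model, the grid of candidate angles) is a hypothetical stand-in for the paper's optimisation process:

```python
import math

def best_test_angle(rms_model, targets, angles_deg):
    """Return the candidate excitation angle whose predicted equipment RMS
    responses best match the coupled-system target RMS values
    (sum-of-squares error); rms_model maps an angle to a list of RMS values."""
    def err(angle):
        predicted = rms_model(angle)
        return sum((p - t) ** 2 for p, t in zip(predicted, targets))
    return min(angles_deg, key=err)

# toy two-axis response model: the equipment sees the input resolved
# along its axes as the excitation direction rotates
toy_model = lambda a: [math.cos(math.radians(a)), math.sin(math.radians(a))]
target = [math.cos(math.radians(30.0)), math.sin(math.radians(30.0))]
angle = best_test_angle(toy_model, target, range(0, 91, 5))  # -> 30
```

A real application would replace the toy model with RMS responses computed from the equipment finite element model, and the targets with the maximum coupled-system RMS values.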
Ultrasonic sludge pretreatment under pressure.
Le, Ngoc Tuan; Julcour-Lebigue, Carine; Delmas, Henri
2013-09-01
The objective of this work was to optimize the ultrasound (US) pretreatment of sludge. Three types of sewage sludge were examined: mixed, secondary, and secondary after partial methanisation ("digested" sludge). Several main process parameters were varied separately or simultaneously: stirrer speed, total solid content of sludge (TS), thermal operating conditions (adiabatic vs. isothermal), ultrasonic power input (PUS), specific energy input (ES), and, for the first time, external pressure. This parametric study was mainly performed on the mixed sludge. Five TS concentrations (12-36 g/L) were tested for different values of ES (7000-75,000 kJ/kgTS), and 28 g/L was found to be the optimum value according to the solubilized chemical oxygen demand in the liquid phase (SCOD). PUS of 75-150 W was investigated under controlled temperature, and the "high power input, short duration" procedure was the most effective at a given ES. The temperature increase in adiabatic US application significantly improved SCOD compared to isothermal conditions. With PUS of 150 W, the effect of external pressure was investigated in the range of 1-16 bar under isothermal and adiabatic conditions for two types of sludge: an optimum pressure of about 2 bar was found regardless of temperature conditions and ES values. Under isothermal conditions, the resulting improvement of sludge disintegration efficacy compared to atmospheric pressure was 22-67% and 26-37% for mixed and secondary sludge, respectively. In addition, the mean particle diameter (D[4,3]) of the three sludge types decreased from 408, 117, and 110 μm to about 94-97, 37-42, and 36-40 μm, respectively, regardless of sonication conditions, and the size reduction process was much faster than COD extraction.
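The specific energy input ES used above is conventionally the ultrasonic power times sonication time, normalized by sludge volume and total solid content; the abstract does not spell the formula out, so this definition is an assumption:

```python
def specific_energy(power_w, duration_s, volume_l, ts_g_per_l):
    """Specific energy input E_S in kJ/kg TS, assuming the usual
    normalization E_S = P_US * t / (V * TS).
    Units: W * s / (L * g/L) = J/g, numerically equal to kJ/kg."""
    return power_w * duration_s / (volume_l * ts_g_per_l)
```

For instance, sonicating 0.5 L of sludge at 28 g/L with 150 W for 1000 s gives roughly 10,700 kJ/kgTS, inside the 7000-75,000 kJ/kgTS range studied.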
Pore size engineering applied to starved electrochemical cells and batteries
NASA Technical Reports Server (NTRS)
Abbey, K. M.; Thaller, L. H.
1982-01-01
To maximize performance in starved, multiplate cells, the cell design should rely on techniques which widen the volume tolerance characteristics. These involve engineering capillary pressure differences between the components of an electrochemical cell and using these forces to promote redistribution of electrolyte to the desired optimum values. This can be implemented in practice by prescribing pore size distributions for porous back-up plates, reservoirs, and electrodes. In addition, electrolyte volume management can be controlled by incorporating different pore size distributions into the separator. In a nickel/hydrogen cell, the separator must contain pores similar in size to the small pores of both the nickel and hydrogen electrodes in order to maintain an optimum conductive path for the electrolyte. The pore size distributions of all components should overlap in such a way as to prevent drying of the separator and/or flooding of the hydrogen electrode.
Observations of the directional distribution of the wind energy input function over swell waves
NASA Astrophysics Data System (ADS)
Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.
2016-02-01
Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β∝cos3.6θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of hard locations for reading or writing can be done in parallel, from which SDM implementations can benefit.
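A toy sketch of the variation described above, assuming the standard SDM read/write with counters but with a per-hard-location Hamming threshold in place of Kanerva's single global access radius (the class name, sizes, and bit-list representation are all illustrative):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

class ThresholdSDM:
    """Sparse Distributed Memory with one selection threshold per hard location."""
    def __init__(self, hard_addresses, thresholds, word_len):
        self.addresses = hard_addresses
        self.thresholds = thresholds          # one Hamming radius per location
        self.counters = [[0] * word_len for _ in hard_addresses]

    def _selected(self, address):
        # a hard location takes part only if the input is within ITS radius
        return [j for j, (a, r) in enumerate(zip(self.addresses, self.thresholds))
                if hamming(address, a) <= r]

    def write(self, address, word):
        for j in self._selected(address):
            for k, bit in enumerate(word):
                self.counters[j][k] += 1 if bit else -1

    def read(self, address):
        sums = [0] * len(self.counters[0])
        for j in self._selected(address):
            for k, c in enumerate(self.counters[j]):
                sums[k] += c
        return [1 if s > 0 else 0 for s in sums]
```

Tuning each threshold to its location's local pattern density is what lets correlated inputs activate a more balanced set of hard locations.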
A survey of the state of the art and focused research in range systems, task 2
NASA Technical Reports Server (NTRS)
Yao, K.
1986-01-01
Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Optimizing such subsystems results in full-precision multiplications in the TDL. To reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to multiplications by powers of two. Since the obvious approach of rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations are given on the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions. A specific application of this methodology to the design of a linear data equalizer, and its implementation in assembly language on an 8080 microprocessor with a 12-bit A/D converter, is reported. This simple microprocessor implementation with optimized TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is fully applicable to many other microprocessor-controlled information processing systems.
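As a baseline for the coefficient quantization discussed above, here is the "obvious" rounding of each tap to the nearest power of two (nearest in the log domain), which the survey notes is usually inadequate; the optimum branch-and-bound search over exponents is not reproduced here:

```python
import math

def nearest_power_of_two(c):
    """Round a TDL coefficient to the nearest signed power of two
    (rounding the exponent log2|c|), so each multiply becomes one shift."""
    if c == 0:
        return 0.0
    sign = 1.0 if c > 0 else -1.0
    return sign * 2.0 ** round(math.log2(abs(c)))

def fir(taps, x):
    """Direct-form tapped-delay-line filter: y[n] = sum_k taps[k] * x[n-k]."""
    return [sum(taps[k] * x[n - k] for k in range(len(taps)) if n - k >= 0)
            for n in range(len(x))]
```

Quantized taps such as `[nearest_power_of_two(t) for t in taps]` can then be applied with shifts only; the paper's point is that choosing the exponents jointly (branch-and-bound) rather than tap-by-tap is what preserves equalizer performance.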
NASA Technical Reports Server (NTRS)
Deng, Yue
2014-01-01
Describes solar energy inputs contributing to ionospheric and thermospheric weather processes, including total energy amounts, distributions and the correlation between particle precipitation and Poynting flux.
Sun, Jian-Yi; Du, Jie; Qian, Li-Chun; Jing, Ming-Yan; Weng, Xiao-Yan
2007-08-01
Distribution and properties of the main digestive enzymes including protease and amylase, from stomach, pancreas and the anterior, middle and posterior intestine of the adult red-eared slider turtle Trachemys scripta elegans were studied at various pHs and temperatures. The optimum temperature and pH for protease in stomach, pancreas and the anterior, middle and posterior intestine were 40 degrees C, 2.5; 50 degrees C, 8.0; 50 degrees C, 7.0; 50 degrees C, 8.0; and 50 degrees C, 8.5; respectively. The optimum temperature and pH for amylase in stomach, pancreas and anterior, middle and posterior intestine were 40 degrees C, 8.0; 30 degrees C, 7.5; 40 degrees C, 7.0; 50 degrees C, 8.0; and 50 degrees C, 8.0; respectively. Under the optimum conditions, the order of protease activity from high to low was of pancreas, stomach and the anterior, posterior and middle intestine; the activity of amylase in descending order was of anterior intestine, pancreas, posterior intestine, middle intestine and stomach.
Influence of the UV-induced fiber loss on the distributed feedback fiber lasers
NASA Astrophysics Data System (ADS)
Fan, Wei; Chen, Bai; Qiao, Qiquan; Chen, Jialing; Lin, Zunqi
2003-06-01
It was found that the output power of distributed feedback fiber lasers improved after annealing, or after the laser was left unused for several days following fabrication, and that the output of the fundamental mode did not increase but was clamped, while the ±1 order modes became predominant as the coupling coefficient was enhanced during fabrication. This paper discusses the influence of UV-induced fiber loss on fiber phase-shifted DFB lasers. Owing to gain saturation and internal fiber loss, which includes both a temporary and a permanent component, there is an optimum coupling coefficient for DFB fiber lasers, with higher internal fiber loss corresponding to a lower optimum coupling coefficient.
NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems
NASA Technical Reports Server (NTRS)
Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)
1994-01-01
Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.
Coordination of heterogeneous nonlinear multi-agent systems with prescribed behaviours
NASA Astrophysics Data System (ADS)
Tang, Yutao
2017-10-01
In this paper, we consider a coordination problem for a class of heterogeneous nonlinear multi-agent systems with a prescribed input-output behaviour, represented by another input-driven system. In contrast to most existing multi-agent coordination results with an autonomous (virtual) leader, this formulation takes possible control inputs of the leader into consideration. First, coordination is achieved by a group of distributed observers based on conventional assumptions of the model matching problem. Then, a fully distributed adaptive extension is proposed that does not use the input of this input-output behaviour. An example is given to verify their effectiveness.
Optimum structural design based on reliability and proof-load testing
NASA Technical Reports Server (NTRS)
Shinozuka, M.; Yang, J. N.
1969-01-01
A proof-load test eliminates structures with strength less than the proof load and improves the reliability value used in analysis. It truncates the distribution function of strength at the proof load, thereby alleviating the need to verify a fitted distribution function at the lower tail, where data are usually nonexistent.
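The truncation idea above can be made concrete with a normal strength model (an illustrative choice; the abstract does not fix the distribution): after surviving a proof load q, the conditional strength CDF is (F(s) - F(q)) / (1 - F(q)) for s >= q and zero below.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def failure_prob(load, mu, sigma, proof_load=None):
    """P(strength < load) for normally distributed strength; if a proof
    load q was survived, the strength distribution is truncated at q."""
    p = normal_cdf(load, mu, sigma)
    if proof_load is None:
        return p
    if load <= proof_load:
        return 0.0          # no surviving structure is weaker than q
    pq = normal_cdf(proof_load, mu, sigma)
    return (p - pq) / (1.0 - pq)
```

Proof testing drives the failure probability to zero below q and reduces it above q, which is exactly the lower-tail effect the abstract describes.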
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the n values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
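On the unit hypercube, the stratification-plus-random-pairing scheme described above reduces to a few lines. This is a sketch of plain LHS without the library's correlation control; the function name and seeding are illustrative:

```python
import random

def latin_hypercube(n_samples, n_vars, rng=None):
    """Draw an n_samples x n_vars Latin Hypercube sample on [0, 1)^n_vars.
    Each variable's range is split into n_samples equal-probability
    intervals; one point is drawn uniformly inside each interval, and the
    interval order is shuffled independently per variable (random pairing)."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_vars):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # random pairing across variables
        columns.append(col)
    return [list(row) for row in zip(*columns)]  # one row per sample point
```

Mapping each column through the inverse CDF of the corresponding input distribution then yields samples from the target distributions; restricting the shuffle to induce specified correlations is the extra step the library provides.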
Allocation of control rights in the PPP Project: a cooperative game model
NASA Astrophysics Data System (ADS)
Zhang, Yunhua; Feng, Jingchun; Yang, Shengtao
2017-06-01
Reasonable allocation of control rights is key to the success of Public-Private Partnership (PPP) projects. PPPs are services or ventures financed and operated through cooperation between governmental and private-sector actors, and involve reasonable sharing of control rights between these two partners. Once a professional firm contributing capital and technology participates as a shareholder in the PPP project firm, the PPP project becomes diversified in its participants and input resources, and the allocation of control rights tends to become complicated. According to this diversification of participants and input resources, the key participants are divided into professional firms and pure investors. Based on the cost of repurchasing the different input resources in markets, the cooperative game relationship between these two parties is analyzed, on the basis of which a cooperative-game allocation model for control rights is constructed to determine the optimum allocation ratio of control rights and to verify that the share of control rights is in proportion to the cost of repurchase.
Multi input single output model predictive control of non-linear bio-polymerization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugasamy, Senthil Kumar; Ahmad, Z.
This paper focuses on Multi Input Single Output (MISO) model predictive control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. A state space model was used, in which the inputs were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state space model for the MISO system was created using the System Identification Toolbox of Matlab and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that MPC is able to track the reference trajectory and gives optimum movement of the manipulated variable.
NASA Astrophysics Data System (ADS)
Koliopoulos, T. C.; Koliopoulou, G.
2007-10-01
We present an input-output solution for simulating the associated behavior and optimized physical needs of an environmental system. The simulations and numerical analysis determined the accurate boundary loads and the areas required to interact for the proper physical operation of a complicated environmental system. A case study was conducted to simulate the optimum balance of an environmental system based on an artificial-intelligence multi-interacting input-output numerical scheme. The numerical results were focused on probable further environmental management techniques, with the objective of minimizing risks and the associated environmental impact so as to protect public health and the environment. Our conclusions allow the associated risks to be minimized, focusing on probable emergency cases, to protect the surrounding anthropogenic or natural environment. The lining magnitude could therefore be determined for any useful associated technical works to support the environmental system under examination, taking into account its particular boundary necessities and constraints.
Novel solutions to low-frequency problems with geometrically designed beam-waveguide systems
NASA Technical Reports Server (NTRS)
Imbriale, W. A.; Esquivel, M. S.; Manshadi, F.
1995-01-01
The poor low-frequency performance of geometrically designed beam-waveguide (BWG) antennas is shown to be caused by the diffraction phase centers being far from the geometrical optics mirror focus, resulting in substantial spillover and defocusing loss. Two novel solutions are proposed: (1) reposition the mirrors to focus low frequencies and redesign the high frequencies to utilize the new mirror positions, and (2) redesign the input feed system to provide an optimum solution for the low frequency. A novel use of the conjugate phase-matching technique is utilized to design the optimum low-frequency feed system, and the new feed system has been implemented in the JPL research and development BWG as part of a dual S-/X-band (2.3 GHz/8.45 GHz) feed system. The new S-band feed system is shown to perform significantly better than the original geometrically designed system.
Equivalent circuit and optimum design of a multilayer laminated piezoelectric transformer.
Dong, Shuxiang; Carazo, Alfredo Vazquez; Park, Seung Ho
2011-12-01
A multilayer laminated piezoelectric Pb(Zr(1-x)Ti(x))O(3) (PZT) ceramic transformer, operating in a half-wavelength longitudinal resonant mode (λ/2 mode), has been analyzed. This piezoelectric transformer is composed of one thickness-polarized section (T-section) for exciting the longitudinal mechanical vibrations, two longitudinally polarized sections (L-sections) for generating high-voltage output, and two insulating layers laminated between the T-section and L-section layers to provide insulation between the input and output sections. Based on the piezoelectric constitutive and motion equations, an electro-elasto-electric (EEE) equivalent circuit has been developed, and correspondingly, an effective EEE coupling coefficient is proposed for the optimum design of this multilayer transformer. Commercial finite element analysis software is used to confirm the validity of the developed equivalent circuit. Finally, a prototype sample was manufactured and experimental data were collected to verify the model's validity.
NASA Astrophysics Data System (ADS)
Mao, Mingzhi; Qian, Chen; Cao, Bingyao; Zhang, Qianwu; Song, Yingxiong; Wang, Min
2017-09-01
A digital-signal-processing-enabled dual-drive Mach-Zehnder modulator (DD-MZM)-based spectral converter is proposed and extensively investigated to realize dynamically reconfigurable, highly transparent spectral conversion. As a further contribution of the paper, the optimum operating conditions of the proposed converter are deduced, statistically simulated, and experimentally verified. The converter performance under these optimum conditions is verified by detailed numerical simulations and experiments in an intensity-modulation, direct-detection network, in terms of the conversion efficiency over the frequency detuning range, strict transparency to user signal characteristics, the impact of parasitic components on conversion performance, and the nearly distortion-free waveform of the converted component. It is also found that the converter is highly robust to variations in input signal power, optical signal-to-noise ratio, extinction ratio, and driving signal frequency.
Optimum design of hybrid phase locked loops
NASA Technical Reports Server (NTRS)
Lee, P.; Yan, T.
1981-01-01
The design procedure is described for phase-locked loops in which the analog loop filter is replaced by a digital computer. Specific design curves are given for step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase-locked loop. This optimization technique can be applied to the design of hybrid digital/analog loops for other applications.
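As a rough illustration of why the design collapses onto the product of sampling time and noise bandwidth, the sketch below implements a textbook second-order digital phase-locked loop whose proportional and integrator gains are set from the normalized product BnT alone; the gain formulas and damping value are standard approximations, not the specific procedure of this report.

```python
import math

def design_gains(BnT, zeta=0.707):
    # Textbook 2nd-order loop approximation: both gains depend only on the
    # product BnT of noise bandwidth and sampling time.
    wn = BnT / (zeta + 1.0 / (4.0 * zeta))   # normalized natural frequency
    kp = 2.0 * zeta * wn                     # proportional gain
    ki = wn * wn                             # integrator gain
    return kp, ki

def track(freq_offset, n=2000, BnT=0.05):
    # Linearized loop: phase detector -> PI loop filter -> NCO, fed a phase ramp.
    kp, ki = design_gains(BnT)
    phase_in = nco = integ = err = 0.0
    for _ in range(n):
        phase_in += freq_offset              # ramp input (constant frequency offset)
        err = phase_in - nco                 # phase detector (linear model)
        integ += ki * err                    # integrator path
        nco += kp * err + integ              # NCO phase update
    return err

residual = track(0.01)                       # type-2 loop drives the ramp error to zero
```

Because the loop contains an integrator (type 2), the steady-state phase error for a frequency-offset (ramp) input vanishes, regardless of the offset magnitude.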
Multiple Objective Evolution Strategies (MOES): A User’s Guide to Running the Software
2014-11-01
L2-norm distance is computed in parameter space between each pair of solutions in the elite population and tested against the tolerance Dclone, which...the most efficient solutions to the test problems in the Input_Files directory. The developers recommend using mu,kappa,lambda. The mu,kappa,lambda...be used as a sanity test for complicated multimodal problems. Whenever the optimum cannot be reached by a local search, the evolutionary results
Fan, Zhen; Calsolaro, Valeria; Atkinson, Rebecca A; Femminella, Grazia D; Waldman, Adam; Buckley, Christopher; Trigg, William; Brooks, David J; Hinz, Rainer; Edison, Paul
2016-11-01
Neuroinflammation is associated with neurodegenerative disease. PET radioligands targeting the 18-kDa translocator protein (TSPO) have been used as in vivo markers of neuroinflammation, but there is an urgent need for novel probes with improved signal-to-noise ratio. Flutriciclamide (18F-GE180) is a recently developed third-generation TSPO ligand. In this first study, we evaluated the optimum scan duration and kinetic modeling strategies for 18F-GE180 PET in older healthy controls. Ten healthy controls (6 TSPO high-affinity binders and 4 mixed-affinity binders) were recruited. All subjects underwent detailed neuropsychologic tests, MRI, and a 210-min 18F-GE180 dynamic PET/CT scan using a metabolite-corrected arterial plasma input function. We evaluated 5 different kinetic models: irreversible and reversible 2-tissue-compartment models, a reversible 1-tissue model, and 2 models with an extra irreversible vascular compartment. The minimal scan duration was established using the 210-min scan data. The feasibility of generating parametric maps was also investigated using graphical analysis. 18F-GE180 concentration was higher in plasma than in whole blood during the entire scan duration. The volume of distribution (VT) was 0.17 in high-affinity binders and 0.12 in mixed-affinity binders using the kinetic model. The model that best represented brain 18F-GE180 kinetics across regions was the reversible 2-tissue-compartment model (2TCM4k), and 90 min was the optimum scan length required to obtain stable estimates. Logan graphical analysis with an arterial input function gave a VT highly consistent with the VT from the kinetic model, and could be used for voxelwise analysis.
We report for the first time, to our knowledge, the kinetic properties of the novel third-generation TSPO PET ligand 18F-GE180 in humans: 2TCM4k is the optimal method to quantify brain uptake, 90 min is the optimal scan length, and the Logan approach could be used to generate parametric maps. Although these control subjects showed relatively low VT, the methodology presented here forms the basis for quantification in future PET studies using 18F-GE180 in different pathologies. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Can Structural Optimization Explain Slow Dynamics of Rocks?
NASA Astrophysics Data System (ADS)
Kim, H.; Vistisen, O.; Tencate, J. A.
2009-12-01
Slow dynamics is a recovery process that describes the return to an equilibrium state after some external energy input is applied and then removed. Experimental studies on many rocks have shown that a modest acoustic energy input results in slow dynamics. The recovery of the stiffness has consistently been found to be linear in log(time) for a wide range of geomaterials, and the time constants appear to be unique to the material [TenCate JA, Shankland TJ (1996), Geophys Res Lett 23, 3019-3022]. Measurements of this nonequilibrium effect in rocks (e.g. sandstones and limestones) have been linked directly to the cement holding the individual grains together [Darling TW, TenCate JA, Brown DW, Clausen B, Vogel SC (2004), Geophys Res Lett 31, L16604], also suggesting a potential link to porosity and permeability. Noting that slow dynamics consistently returns the overall stiffness of rocks to its maximum (original) state, it is hypothesized that the original state represents the global minimum strain energy state; consequently, the slow dynamics process represents a global minimization, or optimization, process. Structural optimization, which has been developed for engineering design, minimizes the total strain energy by rearranging the material distribution [Kim H, Querin OM, Steven GP, Xie YM (2002), Struct Multidiscip Optim 24, 441-448]; the optimization process effectively rearranges the way the material is cemented. One established global optimization method is simulated annealing (SA). Inspired by the cooling of metal to thermal equilibrium, SA finds an optimum solution by iteratively moving the system towards the minimum energy state while accepting 'uphill' moves with some probability. It has been established that the global optimum can be guaranteed by applying a log(time)-linear cooling schedule [Hajek B (1988), Math Ops Res, 15, 311-329]. This work presents an original study applying SA to the maximum-stiffness optimization problem.
Preliminary results indicate that the maximum-stiffness solutions are achieved when using the log(time)-linear cooling schedule. The optimization history reveals that the overall stiffness of the structure increases linearly with log(time). The results closely resemble the slow-dynamics stiffness recovery of geomaterials and support the hypothesis that slow dynamics is an optimization process for strain energy. [Work supported by the Department of Energy through the LANL/LDRD Program.]
Moon and Mars Analog Mission Activities for Mauna Kea 2012
NASA Technical Reports Server (NTRS)
Graham, Lee D.; Morris, Richard V.; Graff, Trevor G.; Yingst, R. Aileen; tenKate, I. L.; Glavin, Daniel P.; Hedlund, Magnus; Malespin, Charles A.; Mumm, Erik
2012-01-01
Rover-based 2012 Moon and Mars Analog Mission Activities (MMAMA) scientific investigations were recently completed at Mauna Kea, Hawaii. Scientific investigations, scientific input, and science operations constraints were tested in the context of an existing project, with protocols for the field activities designed to help NASA achieve the Vision for Space Exploration. Initial science operations were planned based on a model similar to the operations control used for the Mars Exploration Rovers (MER); however, the operations process evolved as the analog mission progressed. We report here on the preliminary sensor data results, a methodology for developing optimum science input based on productive engineering and science trade discussions, and the science operations approach for an investigation into the valley on the upper slopes of Mauna Kea identified as "Apollo Valley".
Attitude profile design program
NASA Technical Reports Server (NTRS)
1991-01-01
The Attitude Profile Design (APD) Program was designed to be used as a stand-alone addition to the Simplex Computation of Optimum Orbital Trajectories (SCOOT). The program uses information from a SCOOT output file and the user defined attitude profile to produce time histories of attitude, angular body rates, and accelerations. The APD program is written in standard FORTRAN77 and should be portable to any machine that has an appropriate compiler. The input and output are through formatted files. The program reads the basic flight data, such as the states of the vehicles, acceleration profiles, and burn information, from the SCOOT output file. The user inputs information about the desired attitude profile during coasts in a high level manner. The program then takes these high level commands and executes the maneuvers, outputting the desired information.
Selection of an Optimum Air Defense Weapon Package Using MAUM (Multi-Attribute Utility Measurement).
1983-06-01
SELECTION OF AN OPTIMUM AIR DEFENSE WEAPON PACKAGE USING MAUM by Wilton L. Ham, June 1983. Thesis Advisor: R. G. Nickerson. Approved for public release; distribution unlimited. ..."hold": do not fire except in self defense. 4. Firing Commands. These are commands issued regardless of the weapons control in effect. There are three
Development of Improved Design and 3D Printing Manufacture of Cross-Flow Fan Rotor
2016-06-01
the design study, each solver run was monitored. Plotting the value of the mass flows, as well as the torque on the rotor blades, allowed a simple...This study determined the optimum blade stagger angle for a cross-flow fan rotor and evaluated the...parametric study determined optimum blade stagger angle using thrust, power, and thrust-to-power ratio as desired output variables. A MarkForged Mark One 3D
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
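A minimal sketch of the underlying D-optimal design computation, assuming a one-compartment model with first-order absorption and hypothetical nominal parameters (the paper's adaptive MCMC machinery is not reproduced): candidate sets of sampling times are scored by the log-determinant of a finite-difference Fisher information matrix, and the best-scoring set is retained.

```python
import math, random

def conc(t, ka, ke, V, dose=100.0):
    # One-compartment model with first-order absorption and elimination.
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def fim(times, theta, h=1e-5):
    # Fisher information matrix built from finite-difference sensitivities,
    # assuming additive unit-variance measurement error.
    n = len(theta)
    M = [[0.0] * n for _ in range(n)]
    for t in times:
        g = []
        for i in range(n):
            up = list(theta); up[i] += h
            dn = list(theta); dn[i] -= h
            g.append((conc(t, *up) - conc(t, *dn)) / (2 * h))
        for i in range(n):
            for j in range(n):
                M[i][j] += g[i] * g[j]
    return M

def logdet3(M):
    # Log-determinant of a 3x3 matrix (the D-optimality criterion).
    a, b, c = M[0]; d, e, f = M[1]; g, h2, i = M[2]
    det = a * (e * i - f * h2) - b * (d * i - f * g) + c * (d * h2 - e * g)
    return math.log(det) if det > 0 else float("-inf")

def best_design(theta, n_pts=3, trials=2000, seed=0):
    # Crude random search over candidate sampling-time sets.
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(trials):
        times = sorted(rng.uniform(0.1, 24.0) for _ in range(n_pts))
        val = logdet3(fim(times, theta))
        if val > best_val:
            best, best_val = times, val
    return best, best_val

theta = [1.0, 0.1, 10.0]        # hypothetical nominal (ka, ke, V)
design, ld = best_design(theta)
```

The local dependence the abstract mentions is visible here: the returned design is optimal only for the assumed nominal theta, which is what motivates the robust, adaptive strategy the authors propose.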
NASA Astrophysics Data System (ADS)
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and the curve-plane intersection method are used to generate a solid model of a parametric submarine hull form, taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed using CFD. Then, using the response surface graph, candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the goal-driven optimization (GDO) method was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The calculated difference in Ct between the initial submarine and the optimum submarine is around 0.26%, the Ct values of the initial and optimum submarines being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a larger nose radius (rn) and higher L/H than the initial submarine shape, while the tail radius (rt) is smaller than that of the initial shape.
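A minimal sketch of generating a hull profile from a cubic Bezier curve driven by the three stated inputs (nose radius, tail radius, and length/height); the mapping from the radii to the control points below is an illustrative assumption, not the paper's exact construction.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # Closed-form cubic Bezier point at parameter t in [0, 1].
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def hull_profile(nose_radius, tail_radius, length, height, n=50):
    # Hypothetical parameterization: the nose and tail radii set how far the
    # interior control points sit from the ends, shaping the end tangents.
    half_h = height / 2.0
    p0 = (0.0, 0.0)                       # nose tip on the axis
    p1 = (nose_radius, half_h)            # controls nose fullness
    p2 = (length - tail_radius, half_h)   # controls tail fullness
    p3 = (length, 0.0)                    # tail tip on the axis
    return [cubic_bezier(p0, p1, p2, p3, i / (n - 1)) for i in range(n)]

pts = hull_profile(nose_radius=4.0, tail_radius=8.0, length=60.0, height=8.0)
```

Sweeping this planar profile about the axis (or intersecting it with section planes, as the paper does) then yields the solid hull; varying the three inputs regenerates the geometry without manual remodeling.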
Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah
2015-11-05
It is believed that 80% of industrial carbon dioxide emissions can be controlled by separation and storage technologies that use blended ionic-liquid absorbers. Among these blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA), and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution exhibits high viscosity, which raises the cost of the separation process. In this work, the fabrication of the blend was modeled and optimized: the blend's components and the operating temperature were treated as the input variables, and the viscosity to be minimized as the output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with its own experimental design, to obtain the optimum topology, judged by root mean squared error (RMSE), R-squared (R(2)), and absolute average deviation (AAD). The final model (QP-4-8-1), with the minimum RMSE and AAD and the highest R(2), was selected to guide the fabrication of the blended solution. The model was applied over the initial ranges of the input variables: temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. The model also ranked the relative importance of the variables as x[gua] > temperature > x[MDEA] > x[H2O]; none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum settings of the variables for minimizing the viscosity, which were validated by further experiments; the validated results confirmed the model's suitability for guiding the fabrication. Accordingly, the ANN successfully models the initial composition of blended solutions used as CO2-capture absorbers in separation technologies, in a way that is amenable to industrial scale-up. Copyright © 2015 Elsevier B.V. All rights reserved.
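The back-propagation modeling step can be sketched with a one-hidden-layer network echoing the 4-8-1 shape of the selected QP-4-8-1 topology; the training data below are synthetic stand-ins for the measured viscosities, and the target function is an arbitrary smooth map, not the paper's data.

```python
import math, random

def train_mlp(data, n_hidden=8, epochs=2000, lr=0.05, seed=0):
    # One-hidden-layer tanh network trained by stochastic gradient descent
    # on squared error (plain back-propagation).
    rng = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            out = sum(w * hi for w, hi in zip(W2, h)) + b2
            err = out - y
            for j in range(n_hidden):
                gh = err * W2[j] * (1.0 - h[j] ** 2)   # backpropagated gradient
                W2[j] -= lr * err * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * gh * x[i]
                b1[j] -= lr * gh
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * hi for w, hi in zip(W2, h)) + b2
    return predict

# Synthetic stand-in for viscosity vs (temperature, x_gua, x_MDEA, x_H2O),
# with all inputs normalized to [0, 1].
rng = random.Random(1)
def fake_viscosity(x):
    T, xg, xm, xw = x
    return 2.0 - 1.5 * T + 0.8 * xg + 0.5 * xm - 0.3 * xw

data = [([rng.random() for _ in range(4)],) for _ in range(80)]
data = [(x[0], fake_viscosity(x[0])) for x in data]
model = train_mlp(data)
```

Once trained, such a surrogate can be evaluated cheaply over the input ranges to locate the viscosity-minimizing blend, which is the role the QP-4-8-1 model plays in the paper.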
NASA Astrophysics Data System (ADS)
Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta
2013-11-01
A numerical investigation is presented of the dependence of the patterning effect (PE) of an amplified signal, in a bulk semiconductor optical amplifier (SOA) and in an SOA followed by an optical bandpass filter, on various input-signal and filter parameters, for the cases of both including and excluding intraband effects in the SOA model. The simulations show that the variation of PE with input energy has a similar characteristic shape in both cases. However, the variation of PE with pulse width differs between the two cases, PE being independent of pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment, we study the effect of amplified spontaneous emission (ASE) on PE and find that ASE has almost no effect on PE in the range of energies considered here. The optimum filter parameters are determined so as to obtain an extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energies and bit rates.
Tillage and Water Deficit Stress Effects on Corn (Zea mays, L.) Root Distribution
USDA-ARS?s Scientific Manuscript database
One goal of soil management is to provide optimum conditions for root growth. Corn root distributions were measured in 2004 from a crop rotation – tillage experiment that was started in 2000. Corn was grown either following corn or following sunflower with either no till or deep chisel tillage. Wate...
Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven
2010-05-01
Population growth and the progressive extension of urbanization in different parts of Iran are increasing the demand for primary needs. Water, this vital liquid, is the most important natural requirement for human life. Meeting this need requires the design and construction of water distribution networks, which impose enormous costs on the country's budget. Any reduction in these costs allows more members of society to gain access at least cost, so municipal investments need to maximize benefits or minimize expenditures; to achieve this, the engineering design depends on cost-optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of the water distribution network of Mahabad City (northwest Iran). By building two models and comparing the resulting costs, the capabilities of the GA were assessed. The GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that designing the water distribution network using a genetic algorithm leads to a reduction of at least 7% in project costs in comparison with the classical model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
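A compact sketch of GA-based pipe-diameter selection for cost minimization: a toy serial main with assumed per-meter diameter costs, a Hazen-Williams head-loss constraint enforced by a penalty, tournament selection, one-point crossover, and mutation. All numbers are illustrative assumptions, not data from the Mahabad network.

```python
import random

DIAMS = [0.10, 0.15, 0.20, 0.25, 0.30, 0.40]      # m, candidate pipe diameters
COST = {0.10: 25, 0.15: 40, 0.20: 60, 0.25: 85, 0.30: 115, 0.40: 170}  # $/m, assumed
LENGTHS = [500, 400, 300, 200]                     # m, a toy 4-pipe serial main
FLOW = 0.05                                        # m^3/s through every pipe
H_MAX = 30.0                                       # m, allowed total head loss

def head_loss(d, L, Q=FLOW, C=130.0):
    # Hazen-Williams head loss for one pipe (SI form).
    return 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)

def fitness(genome):
    cost = sum(COST[d] * L for d, L in zip(genome, LENGTHS))
    h = sum(head_loss(d, L) for d, L in zip(genome, LENGTHS))
    return cost + 1e6 * max(0.0, h - H_MAX)        # infeasibility penalty

def ga(pop_size=40, gens=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.choice(DIAMS) for _ in LENGTHS] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=fitness)]              # elitism
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, len(LENGTHS))
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(len(child))] = rng.choice(DIAMS)  # mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = ga()
```

A real network model replaces the serial head-loss sum with a hydraulic solver and adds nodal pressure constraints, but the encoding (one diameter gene per pipe) and the penalized cost objective carry over directly.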
Information Theory and the Earth's Density Distribution
NASA Technical Reports Server (NTRS)
Rubincam, D. P.
1979-01-01
An argument is presented for using the information theory approach as an inference technique in solid-earth geophysics. A spherically symmetric density distribution is derived as an example of the method. A simple model of the earth, plus knowledge of its mass and moment of inertia, leads to a density distribution which is surprisingly close to the optimum distribution. Future directions for the information theory approach in solid-earth geophysics, as well as its strengths and weaknesses, are discussed.
2017-03-01
power level (5-10W) transmitters. The designs are analyzed and compared with respect to non-idealities such as bondwire effects and input signal duty...Hence, sub-optimum class-E/inverse class-E designs were implemented in this work and compared with respect to reduced duty cycle performance...inverse class-E PA achieves 61.5% efficiency for medium power levels (37.7dBm) at 880MHz. The three designed PAs have been compared with respect to
Research Study Towards a MEFFV Electric Armament System
2004-01-01
CHPSPerf Inputs Parameter Setting Engine Power (kW) 500 per engine Generator Power (kW) 500/generator Traction Motors Power (kW) 500/side # Battery Pack...Cells in Parallel 2 # Motors in Drive Train 2 Max Power of Traction Motors 200 Minimum Engine Power (kW) 50 Optimum Engine Power (kW) 750 Stop... motors . Other options were examined for the energy storage system. Of particular interest in this regard is the use of the CPA flywheel as the load
The effects of Poynting-Robertson drag on solar sails
NASA Astrophysics Data System (ADS)
Abd El-Salam, F. A.
2018-06-01
In the present work, the concept of solar sailing and the development of solar-sail spacecraft are presented. The effects of Poynting-Robertson drag on solar sails are considered. Analytical control laws, with stated input constraints, are obtained for optimizing solar-sail dynamics in heliocentric orbit using Lagrange's planetary equations. The force component along a required direction is maximized by deriving the optimal sail cone angle. New control laws are obtained that maximize the thrust so as to achieve a required increase in a particular orbital element.
Correlator optical wavefront sensor COWS
NASA Astrophysics Data System (ADS)
1991-02-01
This report documents the significant upgrades and improvements made to the correlator optical wavefront sensor (COWS) optical bench during this phase of the program. Software for the experiment was reviewed and documented. Flowcharts showing the program flow are included, as well as documentation for programs written to calculate and display Zernike polynomials. The system was calibrated and aligned, and a series of experiments was conducted to determine the optimum settings for the input and output MOSLM polarizers. In addition, the design of a simple aberration generator is included.
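The Zernike-polynomial computation mentioned above can be sketched directly from the standard closed-form expression for the radial polynomial R_n^m; this is the textbook formula, not the report's specific software.

```python
from math import comb, cos, sin  # math.comb requires Python 3.8+

def zernike_radial(n, m, rho):
    # Radial polynomial R_n^m(rho) from the standard closed-form sum.
    m = abs(m)
    if (n - m) % 2:
        return 0.0                            # R vanishes when n - m is odd
    return sum(
        (-1) ** k * comb(n - k, k) * comb(n - 2 * k, (n - m) // 2 - k)
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

def zernike(n, m, rho, theta):
    # Full (unnormalized) Zernike polynomial on the unit disk:
    # cosine branch for m >= 0, sine branch for m < 0.
    return zernike_radial(n, m, rho) * (cos(m * theta) if m >= 0 else sin(-m * theta))

defocus = zernike(2, 0, 0.5, 0.0)             # R_2^0(0.5) = 2*0.25 - 1 = -0.5
```

Evaluating these terms on a grid over the pupil and summing them with fitted coefficients reconstructs the wavefront, which is the role the display programs in the report serve.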
Optimum dry-cooling sub-systems for a solar air conditioner
NASA Technical Reports Server (NTRS)
Chen, J. L. S.; Namkoong, D.
1978-01-01
Dry-cooling sub-systems for residential solar powered Rankine compression air conditioners were economically optimized and compared with the cost of a wet cooling tower. Results in terms of yearly incremental busbar cost due to the use of dry-cooling were presented for Philadelphia and Miami. With input data corresponding to local weather, energy rate and capital costs, condenser surface designs and performance, the computerized optimization program yields design specifications of the sub-system which has the lowest annual incremental cost.
Helicopter vibration suppression using simple pendulum absorbers on the rotor blade
NASA Technical Reports Server (NTRS)
Pierce, G. A.; Hanouva, M. N. H.
1982-01-01
A comprehensive analytical design procedure for the installation of simple pendulums on the blades of a helicopter rotor to suppress the root reactions is presented. A frequency response analysis is conducted of typical rotor blades excited by a harmonic variation of spanwise airload distributions as well as a concentrated load at the tip. The results presented include the effect of pendulum tuning on the minimization of the hub reactions. It is found that a properly designed flapping pendulum attenuates the root out-of-plane force and moment, whereas an optimally designed lead-lag pendulum attenuates the root in-plane reactions. For optimum pendulum tuning, the parameters to be determined are the pendulum uncoupled natural frequency, the pendulum spanwise location, and its mass. It is found that the optimum pendulum frequency is in the vicinity of the excitation frequency. For the optimum pendulum, a parametric study is conducted; the parameters varied include prepitch, pretwist, precone, and pendulum hinge offset.
Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tegen, S.
This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs are from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluation, and community development.
Weber, A J; Stanford, L R
1994-05-15
It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.
A Neural Network Aero Design System for Advanced Turbo-Engines
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1999-01-01
An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution, the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods: from properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem: finding what pressure distribution would produce the required flow conditions. Once this is done, the inverse method computes the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.
Experiments and modeling of dilution jet flow fields
NASA Technical Reports Server (NTRS)
Holdeman, James D.
1986-01-01
Experimental and analytical results of the mixing of single, double, and opposed rows of jets with an isothermal or variable-temperature main stream in a straight duct are presented. This study was performed to investigate flow and geometric variations typical of the complex, three-dimensional flow field in the dilution zone of gas-turbine-engine combustion chambers. The principal results, shown experimentally and analytically, were the following: (1) variations in orifice size and spacing can have a significant effect on the temperature profiles; (2) similar distributions can be obtained, independent of orifice diameter, if momentum-flux ratio and orifice spacing are coupled; (3) a first-order approximation of the mixing of jets with a variable-temperature main stream can be obtained by superimposing the main-stream and jets-in-an-isothermal-crossflow profiles; (4) the penetration of the jets is reduced, and the mixing is slower and asymmetric with respect to the jet centerplanes, which shift laterally with increasing downstream distance; (5) double rows of jets give temperature distributions similar to those from a single row of equally spaced, equal-area circular holes; (6) for opposed rows of jets with the orifice centerlines in line, the optimum ratio of orifice spacing to duct height is one-half the optimum value for single-side injection at the same momentum-flux ratio; and (7) for opposed rows of jets with the orifice centerlines staggered, the optimum ratio of orifice spacing to duct height is twice the optimum value for single-side injection at the same momentum-flux ratio.
Sharifi Dehsari, Hamed; Harris, Richard Anthony; Ribeiro, Anielen Halda; Tremel, Wolfgang; Asadi, Kamal
2018-06-05
Despite the great progress in the synthesis of iron oxide nanoparticles (NPs) using the thermal decomposition method, the production of NPs with a low polydispersity index is still challenging. In a thermal decomposition synthesis, oleic acid (OAC) and oleylamine (OAM) are used as surfactants. The surfactants bind to the growth species, thereby controlling the reaction kinetics and hence playing a critical role in the final size and size distribution of the NPs. Finding an optimum molar ratio between the surfactants OAC and OAM is therefore crucial. A systematic experimental and theoretical study on the role of the surfactant ratio, however, is still missing. Here, we present a detailed experimental study of the role of the surfactant ratio in the size distribution. We found an optimum OAC/OAM ratio of 3, at which the synthesis yielded truly monodisperse (polydispersity less than 7%) iron oxide NPs without employing any post-synthesis size-selective procedures. We performed molecular dynamics simulations and showed that the binding energy of oleate to the NP is maximized at an OAC/OAM ratio of 3. The optimum OAC/OAM ratio of 3 allowed the NP size to be controlled with nanometer precision by simply changing the reaction heating rate. The optimum OAC/OAM ratio has no influence on the crystallinity or the superparamagnetic behavior of the Fe3O4 NPs and can therefore be adopted for the scaled-up production of size-controlled monodisperse Fe3O4 NPs.
Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...
2017-05-17
This paper determines optimum aggregation areas for a given distribution network considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by showing during which time of the day flexibility will be needed, from which specific area, and in what amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation schemes, in which a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error) compared with the very high errors associated with forecasts of individual consumer demand.
Gilsenan, M B; Lambe, J; Gibney, M J
2003-11-01
A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study advise some level of caution about the use of a lognormal distribution as a mode of input for food consumption data in probabilistic food additive exposure assessments and the results highlight the need for further research in this area.
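The distribution-screening step described above can be sketched with scipy in place of the commercial BestFit program the authors used; the synthetic intake data, the fixed zero location, and the choice of a Kolmogorov-Smirnov test are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for consumer-only average daily intakes (g/day)
intakes = rng.lognormal(mean=4.0, sigma=0.5, size=500)

# Fit a lognormal (location fixed at 0) and test its goodness of fit
shape, loc, scale = stats.lognorm.fit(intakes, floc=0)
ks_stat, p_value = stats.kstest(intakes, "lognorm", args=(shape, loc, scale))

# A large p-value means the lognormal cannot be rejected as a model
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```

The same fit-and-test loop would be repeated over each candidate distribution family and each food category, keeping the family with the best test statistic.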
NASA Technical Reports Server (NTRS)
Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.
1998-01-01
Soft-input soft-output building blocks (modules) are presented for constructing code networks and iteratively decoding them in a distributed fashion; code networks are a new concept that includes, and generalizes, various forms of concatenated coding schemes.
Optimum Temperature for Storage of Fruit and Vegetables with Reference to Chilling Injury
NASA Astrophysics Data System (ADS)
Murata, Takao
Cold storage is an important technique for preserving fresh fruit and vegetables. Deterioration due to ripening, senescence, and microbiological disease can be retarded by storage at an optimum temperature slightly above the freezing point of the tissues of fruit and vegetables. However, some fruit and vegetables having their origins in tropical or subtropical regions of the world are subject to chilling injury during transportation, storage, and wholesale distribution at low temperatures above the freezing point, because they are usually sensitive to low temperatures in the range of 15°C to 0°C. This review focuses on recent information regarding chilling injury of fruit and vegetables and summarizes the optimum temperatures for transportation and storage of fruit and vegetables in relation to chilling injury.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
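The sampling step this module automates, drawing each distributed input from its assigned distribution and stacking the draws into batch cases, can be sketched as follows; the parameter names, distribution families, and bounds are invented for illustration and are not actual RADTRAN parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000  # number of batch cases to generate

# Hypothetical distributed inputs (names and bounds are illustrative)
release_fraction = rng.triangular(left=1e-4, mode=1e-3, right=1e-2, size=n)
wind_speed = rng.uniform(1.0, 10.0, size=n)            # m/s
shielding = rng.normal(0.9, 0.02, size=n).clip(0.0, 1.0)

# Each sampled row would become one case in a generated batch input file
batch = np.column_stack([release_fraction, wind_speed, shielding])
print(batch.shape)
```

Note that the columns are drawn independently, matching the limitation stated above that input parameters cannot be coupled in this initial application.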
Fluid Therapy and Outcome: Balance Is Best
Allen, Sara J.
2014-01-01
Abstract: The use of intravenous fluids is routine in patients undergoing surgery or critical illness; however, controversy still exists regarding optimum fluid therapy. Recent literature has examined the effects of different types, doses, and timing of intravenous fluid therapy. Each of these factors may influence patient outcomes. Crystalloids consist of isotonic saline or balanced electrolyte solutions and widely distribute across extracellular fluid compartments, whereas colloids contain high-molecular-weight molecules suspended in crystalloid carrier solution and do not freely distribute across the extracellular fluid compartments. Colloids vary in composition and associated potential adverse effects. Recent evidence has highlighted safety and ethical concerns regarding the use of colloid solutions in critically ill patients, particularly the use of synthetic starch solutions, which have been associated with increased morbidity and mortality. Crystalloid solutions with a chloride-rich composition (e.g., isotonic saline) have been associated with metabolic acidosis, hyperchloremia, increased incidence of acute kidney injury, and increased requirement for renal replacement therapy. An optimum dose of intravenous fluids remains controversial with no definitive evidence to support restrictive versus liberal approaches. Further high-quality trials are needed to elucidate the optimum fluid therapy for patients, but currently a balanced approach to type, dose, and timing of fluids is recommended. PMID:24779116
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70 to 0.75.
The boundaries of the interval with the most probable correlation values are 0.6 to 0.9 for the decay time of 240 h and 0.5 to 0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
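The accuracy measure used above, Spearman's rank correlation between the known and reconstructed spatial source distributions, can be sketched as follows; the grid size and the noise level of the "reconstruction" are synthetic stand-ins for TSM output.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# Synthetic "true" source field on a lat-lon grid and a noisy reconstruction
true_field = rng.random((20, 30))
reconstructed = true_field + 0.3 * rng.random((20, 30))  # imperfect TSM output

# Spearman's rank correlation over all grid cells quantifies how well the
# reconstruction preserves the spatial ordering of source strengths
rho, _ = spearmanr(true_field.ravel(), reconstructed.ravel())
print(f"rank correlation = {rho:.2f}")
```

Repeating this over an ensemble of random virtual source fields yields the mean correlation and most-probable-correlation interval reported in the abstract.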
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
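An influence screening in the spirit of the paper can be sketched by sampling the stochastic inputs and ranking them by the strength of their rank correlation with the model outcome; the toy fate model and input names below are invented for illustration, and the authors' actual classification method differs.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 2000
# Three stochastic inputs with deliberately different influence
inputs = {
    "half_life":   rng.lognormal(0.0, 0.8, n),   # strong influence
    "partition_k": rng.lognormal(0.0, 0.3, n),   # moderate influence
    "area":        rng.normal(1.0, 0.01, n),     # nearly fixed
}
# Toy outcome (stand-in for a multimedia fate calculation)
outcome = inputs["half_life"] ** 2 * inputs["partition_k"] * inputs["area"]

# Rank inputs by |Spearman correlation| with the outcome
influence = {k: abs(spearmanr(v, outcome)[0]) for k, v in inputs.items()}
ranked = sorted(influence, key=influence.get, reverse=True)
print(ranked)
```

Inputs at the bottom of such a ranking are candidates to be fixed at point values, leaving the small influential set to carry distributions.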
NASA Astrophysics Data System (ADS)
Song, Rui-Zhuo; Xiao, Wen-Dong; Wei, Qing-Lai
2014-05-01
We develop an online adaptive dynamic programming (ADP)-based optimal control scheme for continuous-time chaotic systems. The idea is to use the ADP algorithm to obtain the optimal control input that makes the performance index function reach an optimum. The expression of the performance index function for the chaotic system is first presented. The online ADP algorithm is then presented to achieve optimal control. In the ADP structure, neural networks are used to construct a critic network and an action network, which obtain an approximate performance index function and the control input, respectively. It is proven that the critic parameter error dynamics and the closed-loop chaotic systems are uniformly ultimately bounded exponentially. Our simulation results illustrate the performance of the established optimal control method.
Snowmelt-runoff Model Utilizing Remotely-sensed Data
NASA Technical Reports Server (NTRS)
Rango, A.
1985-01-01
Remotely sensed snow cover information is the critical data input for the Snowmelt-Runoff Model (SRM), which was developed to simulate discharge from mountain basins where snowmelt is an important component of runoff. The model has a simple structure, requiring only temperature, precipitation, and snow-covered area as inputs. SRM was run successfully on two widely separated basins. The simulations on the Kings River basin are significant because of the large basin area (4000 sq km) and the adequate performance in the most extreme drought year of record (1976). The performance of SRM on the Okutadami River basin was important because it was accomplished with the minimum snow cover data available. Tables show: optimum and minimum conditions for model application; basin sizes and elevations where SRM was applied; and SRM strengths and weaknesses. Graphs show results of discharge simulation.
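SRM's published daily recursion (after Martinec and Rango) computes the next day's discharge from degree-days, snow-covered area, and precipitation. The sketch below implements one step of that recursion; the parameter values are illustrative, uncalibrated assumptions, not values from the basins discussed.

```python
# One step of the SRM daily recursion; parameter values are illustrative.
def srm_step(Q_n, T, S, P, a=0.5, c_s=0.8, c_r=0.7, k=0.85, A=4000.0):
    """Next-day discharge (m^3/s).
    T: degree-days (deg C * d), S: snow-covered fraction, P: precip (cm),
    a: degree-day factor (cm / (deg C * d)), c_s, c_r: runoff coefficients
    for snowmelt and rain, k: recession coefficient, A: basin area (km^2)."""
    cm_km2_per_day_to_m3s = 10000.0 / 86400.0   # unit conversion factor
    water_input = c_s * a * T * S + c_r * P      # cm of water over the basin
    return water_input * A * cm_km2_per_day_to_m3s * (1.0 - k) + Q_n * k

Q = 100.0  # today's discharge, m^3/s
Q_next = srm_step(Q, T=5.0, S=0.6, P=0.2, A=4000.0)
print(round(Q_next, 1))
```

The snow-covered fraction S is exactly the quantity supplied by the remotely sensed snow cover maps, which is why satellite data is the model's critical input.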
Transversely bounded DFB lasers. [bounded distributed-feedback lasers
NASA Technical Reports Server (NTRS)
Elachi, C.; Evans, G.; Yeh, C.
1975-01-01
Bounded distributed-feedback (DFB) lasers are studied in detail. Threshold gain and field distribution for a number of configurations are derived and analyzed. More specifically, the thin-film guide, fiber, diffusion guide, and hollow channel with inhomogeneous-cladding DFB lasers are considered. Optimum points exist and must be used in DFB laser design. Feedback between different modes and the effects of the transverse boundaries are included. A number of applications are also discussed.
Design of novel dual-port tapered waveguide plasma apparatus by numerical analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D.; Zhou, R.; Yang, X. Q., E-mail: yyxxqq-mail@163.com
Microwave plasma apparatuses are often of particular interest due to their low cost, freedom from electrode contamination, and suitability for industrial production. However, conventional single-port waveguide apparatuses suffer from unstable plasma and low electron density, owing to the low strength and non-uniformity of the microwave field. This study proposes a novel dual-port tapered waveguide plasma apparatus based on a power-combining technique to improve the strength and uniformity of the microwave field for plasma applications. A 3D model of a microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. On the condition that the total input power is 500 W, simulations indicate that coherent power-combining maximizes the electric-field strength to 3.32 × 10^5 V/m and improves the uniformity of the distributed microwave field, increases of 36.7% and 47.2%, respectively, compared with a conventional single-port waveguide apparatus. To study the optimum conditions for industrial application, a 2D argon fluid model based on the above structure is presented. It demonstrates that a relatively uniform and high-density plasma is obtained at an argon flow rate of 200 ml/min. The contrasting results for electric-field distribution, electron density, and gas temperature are also valid and clearly prove the superiority of coherent power-combining over the conventional technique in the flow field.
NASA Astrophysics Data System (ADS)
Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda
2009-02-01
The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.
Vapor chamber with hollow condenser tube heat sink
NASA Astrophysics Data System (ADS)
Ong, K. S.; Haw, P. L.; Lai, K. C.; Tan, K. H.
2017-04-01
Heat pipes are heat transfer devices capable of transferring large quantities of heat effectively and efficiently. A vapor chamber (VC) is a flat heat pipe. A novel VC with hollow condenser tubes embedded on top of it is proposed. This paper reports on the experimental thermal performance of three VC devices embedded with hollow tubes and employed as heat sinks. The first device consisted of a VC with a single hollow tube, while the other two VCs had arrays of multiple tubes with different tube lengths. All three devices were tested under natural and forced air convection cooling. An electrical resistance heater was employed to provide power inputs of 10 and 40 W. Surface temperatures were measured with thermocouple probes at different locations around the devices. The results show that temperatures increased with heater input while total device thermal resistances decreased. Forced convection results in lower temperatures and lower resistance. Dry-out occurs at high input power and with too much condensing area. There appears to be an optimum fill ratio, which depends on the dimensions of the VC and on the heating power.
Dispersion of carbon nanotubes in vinyl ester polymer composites
NASA Astrophysics Data System (ADS)
Pena-Paras, Laura
This work focused on a parametric study of dispersions of different types of carbon nanotubes in a polymer resin. Single-walled (SWNTs), double-walled (DWNTs), multi-walled (MWNTs), and XD-grade carbon nanotubes (XD-CNTs) were dispersed in vinyl ester (VE) using an ultrasonic probe at a fixed frequency. The power, amplitude, and mixing-time parameters of sonication were correlated with the electrical and mechanical properties of the composite materials in order to optimize dispersion. The quality of dispersion was quantified by Raman spectroscopy and verified through optical and scanning electron microscopy. By Raman spectroscopy, CNT distribution, unroping, and damage were monitored and correlated with the composite properties for dispersion optimization. Increasing the ultrasonication energy was found to improve the distribution of all CNT materials and to decrease the size of nanotube ropes, enhancing the electrical conductivity and storage modulus. However, excessive amounts of energy were found to damage CNTs, which negatively affected the properties of the composite. Based on these results, the optimum dispersion energy inputs were determined for the different composite materials. The electrical resistivity was lowered by as much as 14, 13, 13, and 11 orders of magnitude for SWNT/VE, DWNT/VE, MWNT/VE, and XD-CNT/VE, respectively, compared with the neat resin. The storage modulus was also increased, by 77%, 82%, 45%, 40%, and 85% in SWNT, SAP-f-SWNT, DWNT, MWNT, and XD-CNT/VE composites, respectively, compared with the neat resin. This study provides a detailed understanding of how the properties of nanocomposites are determined by the composite mixing parameters and the distribution, concentration, shape, and size of the CNTs. Importantly, it highlights the need for dispersion metrics to correlate and understand these properties.
NASA Astrophysics Data System (ADS)
Yildiz, Mehmet Serhan; Celik, Murat
2017-04-01
The microwave electrothermal thruster (MET), an in-space propulsion concept, uses an electromagnetic resonant cavity as a heating chamber. In a MET system, electromagnetic energy is converted to thermal energy via a free-floating plasma inside a resonant cavity. To optimize the power deposition inside the cavity, the factors that affect the electric field distribution and the resonance conditions must be accounted for. For MET thrusters, the length of the cavity, the dielectric plate that separates the plasma zone from the antenna, the antenna length, and the formation of a free-floating plasma have direct effects on the electromagnetic wave transmission and thus on the power deposition. MET systems can be tuned by adjusting the lengths of the cavity or the antenna. This study presents the results of a 2-D axisymmetric model for investigating the effects of cavity length, antenna length, and separation plate thickness, as well as the presence of a free-floating plasma, on the power absorption. Specifically, the electric field distribution inside the resonant cavity is calculated for a prototype MET system developed at the Bogazici University Space Technologies Laboratory. Simulations are conducted for a cavity fed with a constant power input of 1 kW at 2.45 GHz using the COMSOL Multiphysics commercial software. Calculations are performed for maximum plasma electron densities ranging from 10^19 to 10^21 m^-3. It is determined that the optimum antenna length changes with changing plasma density. The calculations show that over 95% of the delivered power can be deposited into the plasma when the system is tuned by adjusting the cavity length.
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matters; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper's focus is set on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. 
Pole vector sampling is proposed as a more rigorous alternative to dip vector sampling for planar features, and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the issues cited above. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
NASA Astrophysics Data System (ADS)
Shi, Jiyong; Chen, Wu; Zou, Xiaobo; Xu, Yiwei; Huang, Xiaowei; Zhu, Yaodi; Shen, Tingting
2018-01-01
Hyperspectral images (431-962 nm) and partial least squares (PLS) were used to detect the distribution of triterpene acids within loquat (Eriobotrya japonica) leaves. 72 fresh loquat leaves in the young group, mature group and old group were collected for hyperspectral imaging; and triterpene acids content of the loquat leaves was analyzed using high performance liquid chromatography (HPLC). Then the spectral data of loquat leaf hyperspectral images and the triterpene acids content were employed to build calibration models. After spectra pre-processing and wavelength selection, an optimum calibration model (Rp = 0.8473, RMSEP = 2.61 mg/g) for predicting triterpene acids was obtained by synergy interval partial least squares (siPLS). Finally, spectral data of each pixel in the loquat leaf hyperspectral image were extracted and substituted into the optimum calibration model to predict triterpene acids content of each pixel. Therefore, the distribution map of triterpene acids content was obtained. As shown in the distribution map, triterpene acids are accumulated mainly in the leaf mesophyll regions near the main veins, and triterpene acids concentration of young group is less than that of mature and old groups. This study showed that hyperspectral imaging is suitable to determine the distribution of active constituent content in medical herbs in a rapid and non-invasive manner.
NASA Astrophysics Data System (ADS)
Biswas, R.; Kuar, A. S.; Mitra, S.
2014-09-01
Nd:YAG laser microdrilled holes on gamma-titanium aluminide, a newly developed alloy having wide applications in turbine blades, engine valves, cases, metal cutting tools, missile components, nuclear fuel and biomedical engineering, are important from the dimensional accuracy and quality of hole point of view. Keeping this in mind, a central composite design (CCD) based on response surface methodology (RSM) is employed for multi-objective optimization of pulsed Nd:YAG laser microdrilling operation on gamma-titanium aluminide alloy sheet to achieve optimum hole characteristics within existing resources. The three characteristics such as hole diameter at entry, hole diameter at exit and hole taper have been considered for simultaneous optimization. The individual optimization of all three responses has also been carried out. The input parameters considered are lamp current, pulse frequency, assist air pressure and thickness of the job. The responses at predicted optimum parameter level are in good agreement with the results of confirmation experiments conducted for verification tests.
NASA Astrophysics Data System (ADS)
López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio
2017-12-01
Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the faulty periodic signal is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) is a technique that changed the traditional de-noising process: the weak periodic fault signal can be identified by adding an expression, the potential, to the raw signal and solving a differential equation problem. However, current SR methods have some deficiencies, such as limited filtering performance, the requirement of a low-frequency input signal, and sequential search for optimum parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the optimum SR parameters by the use of particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.
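A minimal particle swarm of the kind used here to replace sequential parameter search can be sketched as follows; the quadratic objective stands in for an SR performance metric (such as negative output SNR over the FHN parameters), and the swarm constants are conventional defaults rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(p):
    # Stand-in for an SR quality metric; optimum at p = (1.0, 2.0)
    return (p[..., 0] - 1.0) ** 2 + (p[..., 1] - 2.0) ** 2

# Minimal PSO: positions, velocities, personal and global bests
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), objective(pos)
for _ in range(100):
    g = pbest[pbest_val.argmin()]              # global best so far
    vel = (w * vel
           + c1 * rng.random((n, 1)) * (pbest - pos)
           + c2 * rng.random((n, 1)) * (g - pos))
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]

best = pbest[pbest_val.argmin()]
print(best.round(2))
```

In the paper's setting the two tuned coordinates would be the SR system parameters, and each objective evaluation would run the FHN filter on the vibration record.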
Launders, J H; McArdle, S; Workman, A; Cowen, A R
1995-01-01
The significance of varying viewing conditions that may affect the perceived threshold contrast of X-ray television fluoroscopy systems has been investigated. Factors investigated include the ambient room lighting and the viewing distance. The purpose of this study is to find the optimum viewing protocol with which to measure the threshold detection index. This is a particular problem when trying to compare the image quality of television fluoroscopy systems at different input field sizes. The results show that the viewing distance makes a significant difference to the perceived threshold contrast, whereas the ambient light conditions make no significant difference. Experienced observers were found to be capable of finding the optimum viewing distance for detecting details of each size, in effect using a flexible viewing distance. This allows the results from different field sizes to be normalized to account for both the magnification and the entrance air kerma rate differences, which in turn allows a direct comparison of performance at different field sizes.
ANFIS multi criteria decision making for overseas construction projects: a methodology
NASA Astrophysics Data System (ADS)
Utama, W. P.; Chan, A. P. C.; Zulherman; Zahoor, H.; Gao, R.; Jumas, D. Y.
2018-02-01
A critical issue when a company targets a foreign market is how to make better decisions about potential project selection. Since the different attributes of information are often incomplete, imprecise and ill-defined in overseas project selection, making decisions by relying on experience and intuition alone is risky. This paper aims to demonstrate a decision support method for deciding on overseas construction projects (OCPs). An Adaptive Neuro-Fuzzy Inference System (ANFIS), an amalgamation of neural networks and fuzzy theory, was used as a decision support tool for go/no-go decisions on OCPs. Root mean square error (RMSE) and the coefficient of correlation (R) were employed to identify the ANFIS configuration giving an optimum and efficient result. The optimum result was obtained from an ANFIS network with two input membership functions, the Gaussian membership function (gaussmf), and the hybrid optimization method. The result shows that ANFIS may help the decision-making process for go/no-go decisions on OCPs.
Jäger, B
1983-09-01
The technology of composting must guarantee the material-chemical, biological and physical-technical reaction conditions essential for the rotting process. In this, the constituents of the input material and the C/N ratio play an important role. Maintaining optimum decomposition conditions is rendered difficult by the fact that the physical-technical reaction parameters partly exclude each other. These are: optimum humidity, adequate air/oxygen supply, large active surface, and a loose structure with sufficient decomposition volume. The processing of the raw refuse required to maintain the physical-technical reaction parameters can be carried out either by the conventional method of preliminary fragmentizing, sieving and mixing, or else in conjunction with separating recycling in adapted systems. The latter procedure obviates some drawbacks which mainly result from the high expenditure required for preliminary fragmentation of the raw refuse. Moreover, presorting affords the possibility of reducing the heavy-metal content of the organic compost fraction, and this approaches a solution to the pollutant disposal problem which at present stands in the way of composting being accepted as an ecological waste disposal method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, S. S.; Murtaza, Ghulam; Zakaullah, M.
Correlation of neutron emission with pinch energy for a Mather-type plasma focus energized by a single 12.5 μF, 21 kV (2.7 kJ) capacitor is investigated by employing time-resolved and time-integrated detectors for two different anode shapes. The maximum average neutron yield of about 1.3×10^8 per shot is recorded with the cylindrical anode, which increases to 1.6×10^8 per shot for the tapered anode. At optimum pressure the input energy converted to pinch energy is about 24% for the cylindrical anode, as compared to 36% for the tapered anode. It is found that the tapered anode enhances the neutron flux by about 25±5% in both the axial and radial directions and also broadens the pressure range for neutron emission as well as the pinch energy. The neutron yield and optimum gas filling pressures are found to depend strongly on the anode shape.
NASA Technical Reports Server (NTRS)
Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.
1986-01-01
Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.
NASA Astrophysics Data System (ADS)
Soomere, Tarmo; Berezovski, Mihhail; Quak, Ewald; Viikmäe, Bert
2011-10-01
We address possibilities of minimising environmental risks using statistical features of current-driven propagation of adverse impacts to the coast. The recently introduced method for finding the optimum locations of potentially dangerous activities (Soomere et al. in Proc Estonian Acad Sci 59:156-165, 2010) is expanded towards accounting for the spatial distributions of probabilities and times for reaching the coast for passively advecting particles released in different sea areas. These distributions are calculated using large sets of Lagrangian trajectories found from Eulerian velocity fields provided by the Rossby Centre Ocean Model with a horizontal resolution of 2 nautical miles for 1987-1991. The test area is the Gulf of Finland in the northeastern Baltic Sea. The potential gain of using the optimum fairways from the Baltic Proper to the eastern part of the gulf is an up to 44% decrease in the probability of coastal pollution and a similar increase in the average time for reaching the coast. The optimum fairways are mostly located to the north of the gulf axis (by 2-8 km on average) and meander substantially in some sections. The robustness of this approach is quantified as the typical root mean square deviation (6-16 km) between the optimum fairways specified from different criteria. Drastic variations in the width of the 'corridors' for almost optimal fairways (2-30 km, for an average width of 15 km) signify that the sensitivity of the results with respect to small changes in the environmental criteria varies largely in different parts of the gulf.
A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen Vernon; Moyer, Robert D.
2005-05-01
Proposed Supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate the parameters of the input distributions. We illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach is compared to the standard GUM approach for finite samples using simple non-linear measurement equations, with performance assessed in terms of the coverage probabilities of the derived confidence intervals.
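As a hedged sketch of the two-stage idea (normal input distributions and the particular parameter-resampling scheme below are assumptions for illustration, not the paper's exact procedure), the outer loop draws plausible distribution parameters given the finite samples, and the inner loop propagates them through the measurement equation:

```python
import numpy as np

def two_stage_mc(f, samples, n_outer=200, n_inner=2000, seed=0):
    """Two-stage Monte Carlo: the outer loop accounts for parameter
    uncertainty from finite samples (normal inputs assumed); the inner
    loop propagates draws through the measurement equation f."""
    rng = np.random.default_rng(seed)
    stats = [(np.mean(s), np.std(s, ddof=1), len(s)) for s in samples]
    means = np.empty(n_outer)
    for i in range(n_outer):
        draws = []
        for m, s, n in stats:
            # Plausible parameters given the finite sample:
            # variance via scaled inverse chi-square, mean given variance.
            sigma2 = s**2 * (n - 1) / rng.chisquare(n - 1)
            mu = rng.normal(m, np.sqrt(sigma2 / n))
            draws.append(rng.normal(mu, np.sqrt(sigma2), n_inner))
        means[i] = np.mean(f(*draws))
    return means  # spread reflects finite-sample uncertainty in the mean
```

The spread of the returned `means` array, absent when the input distributions are treated as exactly known, is the quantity the standard one-stage approach ignores.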
Combining control input with flight path data to evaluate pilot performance in transport aircraft.
Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney
2008-11-01
When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics which purely evaluate errors in flight path parameters. The adequacy of pilot performance is evaluated from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was subject to analysis using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.
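The abstract mentions power spectral density measures of control inputs but not a specific estimator; as a hedged sketch, an averaged-periodogram (Welch-style) PSD can be computed with NumPy alone. The segment length, Hann window, and 50% overlap here are common choices, assumed for illustration:

```python
import numpy as np

def psd_welch(x, fs, nperseg=256):
    """Averaged-periodogram (Welch) one-sided PSD estimate with a
    Hann window and 50% segment overlap."""
    x = np.asarray(x, float)
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * (win**2).sum()
    pxx = np.mean([np.abs(np.fft.rfft(s))**2 / scale for s in segs],
                  axis=0)
    pxx[1:-1] *= 2  # fold negative frequencies into the one-sided PSD
    f = np.fft.rfftfreq(nperseg, 1 / fs)
    return f, pxx
```

Applied to a recorded control-column deflection series, the shape of `pxx` distinguishes low-frequency outer-loop corrections from high-frequency stick activity, which is the kind of control-strategy difference the study reports.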
On the optimum signal constellation design for high-speed optical transport networks.
Liu, Tao; Djordjevic, Ivan B
2012-08-27
In this paper, we first describe an optimum signal constellation design algorithm, optimum in the MMSE sense and called MMSE-OSCD, for a channel-capacity-achieving source distribution. Second, we introduce a feedback channel capacity inspired optimum signal constellation design (FCC-OSCD) to further improve on the performance of MMSE-OSCD, inspired by the fact that feedback channel capacity is higher than that of systems without feedback. The constellations obtained by FCC-OSCD are, however, OSNR dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The resulting coded-modulation scheme, in combination with polarization multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.
1978-01-01
[Abstract unrecoverable: OCR interleaving has scrambled the text. Surviving fragments mention coupling energy into an excimer gas mix, the self-healing nature of a dielectric, section self-inductances and mutual inductance between module sections, and a solid copper wire 0.583 cm in diameter.]
Soliton propagation in tapered silicon core fibers.
Peacock, Anna C
2010-11-01
Numerical simulations are used to investigate soliton-like propagation in tapered silicon core optical fibers. The simulations are based on a realistic tapered structure with nanoscale core dimensions and a decreasing anomalous dispersion profile to compensate for the effects of linear and nonlinear loss. An intensity misfit parameter is used to establish the optimum taper dimensions that preserve the pulse shape while reducing temporal broadening. Soliton formation from Gaussian input pulses is also observed, further evidence of the potential for tapered silicon fibers to find use in a range of signal processing applications.
Optimization of an integrated wavelength monitor device
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Brambilla, Gilberto; Semenova, Yuliya; Wu, Qiang; Farrell, Gerald
2011-05-01
In this paper an edge filter based on multimode interference in an integrated waveguide is optimized for a wavelength monitoring application. This can also be used as a demodulation element in a fibre Bragg grating sensing system. A global optimization algorithm is presented for the optimum design of the multimode interference device, including a range of parameters of the multimode waveguide, such as length, width and position of the input and output waveguides. The designed structure demonstrates the desired spectral response for wavelength measurements. Fabrication tolerance is also analysed numerically for this structure.
NASA Technical Reports Server (NTRS)
Tedjojuwono, Ken K.; Hunter, William W., Jr.
1989-01-01
The transmission characteristics of two Ar(+) laser wavelengths through a twenty meter Panda type Polarization Preserving Single Mode Optical Fiber (PPSMOF) were measured. The measurements were done with both single and multi-longitudinal mode radiation. In the single longitudinal mode case, a degrading Stimulated Brillouin Scattering (SBS) is observed as a backward scattering loss. By choosing an optimum coupling system and manipulating the input polarization, the threshold of the SBS onset can be raised and the transmission efficiency can be increased.
A methodology based on reduced complexity algorithm for system applications using microprocessors
NASA Technical Reports Server (NTRS)
Yan, T. Y.; Yao, K.
1988-01-01
The paper considers a methodology on the analysis and design of a minimum mean-square error criterion linear system incorporating a tapped delay line (TDL) where all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer based on the dispersive and additive noise channel is presented. This microprocessor implementation with optimized power of two TDL coefficients achieves a system performance comparable to the optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
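The key constraint described above, restricting TDL tap values to powers of two so that each multiplication becomes a shift, can be sketched as follows. The rounding rule (nearest power of two in the log domain) is an assumption for illustration, not necessarily the paper's exact quantizer:

```python
import numpy as np

def quantize_pow2(h):
    """Round each tap to the nearest signed power of two, so that
    multiplication by a tap reduces to an arithmetic shift.
    Rounding is done in the log2 domain; zero taps stay zero."""
    h = np.asarray(h, float)
    out = np.zeros_like(h)
    nz = h != 0
    exp = np.round(np.log2(np.abs(h[nz])))
    out[nz] = np.sign(h[nz]) * 2.0**exp
    return out

def tdl_filter(h, x):
    """Tapped delay line output: plain FIR convolution."""
    return np.convolve(np.asarray(x, float), np.asarray(h, float))
```

Comparing `tdl_filter(h, x)` against `tdl_filter(quantize_pow2(h), x)` on a test channel gives a quick estimate of the performance loss the paper reports to be small for a 300-baud equalizer.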
Smith predictor with sliding mode control for processes with large dead times
NASA Astrophysics Data System (ADS)
Mehta, Utkal; Kaya, İbrahim
2017-11-01
The paper discusses the Smith Predictor scheme with Sliding Mode Controller (SP-SMC) for processes with large dead times. This technique gives improved load-disturbance rejection with optimum input control signal variations. A power rate reaching law is incorporated in the discontinuous part of the sliding mode control such that the overall performance improves meaningfully. The proposed scheme obtains parameter values by satisfying a new performance index based on a bi-objective constraint. In a simulation study, the efficiency of the method is evaluated for robustness and transient performance against reported techniques.
Associative memory - An optimum binary neuron representation
NASA Technical Reports Server (NTRS)
Awwal, A. A.; Karim, M. A.; Liu, H. K.
1989-01-01
Convergence mechanism of vectors in the Hopfield's neural network is studied in terms of both weights (i.e., inner products) and Hamming distance. It is shown that Hamming distance should not always be used in determining the convergence of vectors. Instead, weights (which in turn depend on the neuron representation) are found to play a more dominant role in the convergence mechanism. Consequently, a new binary neuron representation for associative memory is proposed. With the new neuron representation, the associative memory responds unambiguously to the partial input in retrieving the stored information.
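The proposed neuron representation itself is not specified in the abstract; as a baseline for comparison, a minimal Hopfield associative memory with conventional bipolar (+1/-1) neurons and Hebbian weights looks like this (pattern sizes and the synchronous update schedule are illustrative assumptions):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights from bipolar patterns;
    self-connections are zeroed as usual."""
    p = np.asarray(patterns, float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0)
    return w

def recall(w, x, n_iters=10):
    """Synchronous threshold updates until a fixed point is reached."""
    x = np.asarray(x, float)
    for _ in range(n_iters):
        x_new = np.where(w @ x >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```

The inner products `w @ x` are exactly the weights the abstract argues dominate convergence, which is why a representation that reshapes them can outperform one judged only by Hamming distance.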
Takla, Amgad; Dorotta, Ihab; Staszak, John; Tetzlaff, John E
2007-01-01
Because of increasing acuity in our patient population, the increasing complexity of the care provided, and the structure of our residency, we decided to systematically alter our participation in the hospital-wide cardiac arrest system. The need to provide optimum service in an increasingly complex clinical care system was the motivation for change. With substantive input from trainees and practitioners, we created a multi-tier system of response along with predefined criteria for the anesthesiology response. We report the result of our practice-based learning initiative.
Limit characteristics of digital optoelectronic processor
NASA Astrophysics Data System (ADS)
Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.
2018-01-01
In this article, the limiting characteristics of a digital optoelectronic processor (DOEP) are explored. These limits are set by diffraction effects and by the matrix structure of the devices used for input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The physical and mathematical model developed for the DOEP allowed the limiting characteristics of the processor to be established and the parameters of its components to be optimized. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, it is proposed to use the concept of an optimum phase, at which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.
RSM 1.0 user's guide: A resupply scheduler using integer optimization
NASA Technical Reports Server (NTRS)
Viterna, Larry A.; Green, Robert D.; Reed, David M.
1991-01-01
The Resupply Scheduling Model (RSM) is a PC based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user-defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes component properties such as mass and volume, and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.
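RSM itself solves a multi-period integer program; as a toy single-period analogue of the selection it performs (all item names, masses, and priorities below are invented for illustration), choosing which components to fly under a transport mass limit can be brute-forced:

```python
from itertools import combinations

def plan_resupply(items, mass_limit):
    """Pick the subset of components to fly this period that maximizes
    total priority subject to a transport mass limit.
    items: {name: (mass, priority)}. Brute force; fine for small sets."""
    best, best_score = set(), 0.0
    names = list(items)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            mass = sum(items[n][0] for n in combo)
            score = sum(items[n][1] for n in combo)
            if mass <= mass_limit and score > best_score:
                best, best_score = set(combo), score
    return best, best_score
```

A real scheduler replaces this enumeration with branch-and-bound over all periods at once, which is what makes due dates and crew-time constraints tractable.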
Answer or Publish - Energizing Online Democracy
NASA Astrophysics Data System (ADS)
Antal, Miklós; Mikecz, Dániel
Enhanced communication between citizens and decision makers furthering participation in public decision making is essential to ease today's democratic deficit. However, it is difficult to sort out the most important public inputs from a large number of comments and questions. We propose an online solution to the selection problem that utilizes the general publicity of the internet. In the envisioned practice, decision makers are obliged either to answer citizens' questions and initiatives or to publish the letters received on a publicly accessible web page. The list of unaddressed questions would act as a motivation to consider public inputs without placing unnecessary burdens on decision makers: because of the reliance on the public, their workload would converge to the societal optimum. The proposed method is analyzed in the context of existing Hungarian e-practices. The idea is found valuable as a restriction on representatives and a relief for some other officials.
Nonlinear model predictive control for chemical looping process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Abhinaya; Lei, Hao; Lou, Xinsheng
A control system for optimizing a chemical looping ("CL") plant includes a reduced order mathematical model ("ROM") that is designed by eliminating mathematical terms that have minimal effect on the outcome. A non-linear optimizer provides various inputs to the ROM and monitors the outputs to determine the optimum inputs, which are then provided to the CL plant. An estimator estimates the values of various internal state variables of the CL plant. The system has one structure adapted to control a CL plant that provides only pressure measurements in CL loops A and B, a second structure adapted to a CL plant that provides pressure measurements and solid levels in both loops A and B, and a third structure adapted to control a CL plant that provides full information on internal state variables. A final structure provides a neural network NMPC controller to control operation of loops A and B.
On the dynamic response at the wheel axle of a pneumatic tire
NASA Astrophysics Data System (ADS)
Kung, L. E.; Soedel, W.; Yang, T. Y.
1986-06-01
A method for calculating the steady state displacement response and force transmission at the wheel axle of a pneumatic tire-suspension system due to a steady state force or displacement excitation at the tire to ground contact point is developed. The method requires the frequency responses (or receptances) of both tire-wheel and suspension units. The frequency response of the tire-wheel unit is obtained by using the modal expansion method. The natural frequencies and mode shapes of the tire-wheel unit are obtained by using a geometrically non-linear, ring type, thin shell finite element of laminate composite. The frequency response of the suspension unit is obtained analytically. These frequency responses are used to calculate the force-input and the displacement-input responses at the wheel axle. This method allows the freedom of designing a vehicle and its tires independently and still achieving optimum dynamic performance.
Optimum employment of satellite indirect soundings as numerical model input
NASA Technical Reports Server (NTRS)
Horn, L. H.; Derber, J. C.; Koehler, T. L.; Schmidt, B. D.
1981-01-01
The characteristics of satellite-derived temperature soundings that would significantly affect their use as input for numerical weather prediction models were examined. Independent evaluations of satellite soundings were emphasized to better define error characteristics. Results of a Nimbus-6 sounding study reveal an underestimation of the strength of synoptic scale troughs and ridges, and associated gradients in isobaric height and temperature fields. The most significant errors occurred near the Earth's surface and the tropopause. Soundings from the TIROS-N and NOAA-6 satellites were also evaluated. Results again showed an underestimation of upper level trough amplitudes leading to weaker thermal gradient depictions in satellite-only fields. These errors show a definite correlation to the synoptic flow patterns. In a satellite-only analysis used to initialize a numerical model forecast, it was found that these synoptically correlated errors were retained in the forecast sequence.
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
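A minimal 1-D SOM of the kind analyzed above can be trained in a few lines; the learning-rate and neighborhood schedules below are common textbook choices, assumed for illustration rather than taken from the article:

```python
import numpy as np

def train_som_1d(data, n_units=20, n_iters=5000, seed=0):
    """Basic 1-D SOM: scalar inputs, a chain of units, Gaussian
    neighborhood, and linearly shrinking learning rate and radius."""
    rng = np.random.default_rng(seed)
    m = np.sort(rng.uniform(data.min(), data.max(), n_units))  # codebook
    idx = np.arange(n_units)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        c = np.argmin(np.abs(m - x))                 # best-matching unit
        alpha = 0.5 * (1 - t / n_iters) + 0.01       # learning rate
        sigma = max(1.0, (n_units / 4) * (1 - t / n_iters))
        h = np.exp(-0.5 * ((idx - c) / sigma) ** 2)  # neighborhood
        m += alpha * h * (x - m)                     # move toward input
    return m
```

Histogramming the spacings of the returned codebook `m` against the input density is exactly the kind of point-density comparison the article carries out analytically.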
Study on selective laser sintering of glass fiber reinforced polystyrene
NASA Astrophysics Data System (ADS)
Yang, Laixia; Wang, Bo; Zhou, Wenming
2017-12-01
In order to improve the bending strength of polystyrene (PS) parts sintered by selective laser sintering, polystyrene/glass fiber (PS/GF) composite powders were prepared by a mechanical mixing method. The size distribution of the PS/GF composite powders was characterized by a laser particle size analyzer. The optimum ratio of GF was determined by proportioning sintering experiments. The influence of process parameters on the bending strength of PS and PS/GF sintered parts was studied by orthogonal test. The results indicate that the particle size of the PS/GF composite powder is mainly distributed between 24.88 μm and 139.8 μm, and that the strengthening effect is best when the GF content is 10%. Finally, prototypes were sintered using the optimum parameters for the two materials; the PS/GF prototype is found to have good accuracy and high strength.
Optimum runway orientation relative to crosswinds
NASA Technical Reports Server (NTRS)
Falls, L. W.; Brown, S. C.
1972-01-01
Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
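The empirical procedure can be sketched as a direct search over candidate runway azimuths; the 10-degree grid and the simple crosswind decomposition below are illustrative assumptions, not the report's exact wind-rose calculation:

```python
import numpy as np

def best_runway_azimuth(speeds, directions_deg, crosswind_limit, step=10):
    """Empirical optimum: the azimuth (0-170 deg; a runway serves both
    headings) minimizing the fraction of wind observations whose
    crosswind component exceeds the limit."""
    speeds = np.asarray(speeds, float)
    theta = np.radians(np.asarray(directions_deg, float))
    best, best_p = None, 2.0
    for az in range(0, 180, step):
        # crosswind = wind speed times sine of angle off the runway
        cross = np.abs(speeds * np.sin(theta - np.radians(az)))
        p = np.mean(cross > crosswind_limit)
        if p < best_p:
            best, best_p = az, p
    return best, best_p
```

The theoretical method in the report replaces the empirical frequency `p` with a probability computed from a bivariate normal fit to the wind components, which needs far less data to be stable.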
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
Grain-size considerations for optoelectronic multistage interconnection networks.
Krishnamoorthy, A V; Marchand, P J; Kiamilev, F E; Esener, S C
1992-09-10
This paper investigates, at the system level, the performance-cost trade-off between optical and electronic interconnects in an optoelectronic interconnection network. The specific system considered is a packet-switched, free-space optoelectronic shuffle-exchange multistage interconnection network (MIN). System bandwidth is used as the performance measure, while system area, system power, and system volume constitute the cost measures. A detailed design and analysis of a two-dimensional (2-D) optoelectronic shuffle-exchange routing network with variable grain size K is presented. The architecture permits the conventional 2 x 2 switches or grains to be generalized to larger K x K grain sizes by replacing optical interconnects with electronic wires without affecting the functionality of the system. Thus the system consists of log_K N optoelectronic stages interconnected with free-space K-shuffles. When K = N, the MIN consists of a single electronic stage with optical input-output. The system design uses an efficient 2-D VLSI layout and a single diffractive optical element between stages to provide the 2-D K-shuffle interconnection. Results indicate that there is an optimum range of grain sizes that provides the best performance per cost. For the specific VLSI/GaAs multiple quantum well technology and system architecture considered, grain sizes larger than 256 x 256 result in a reduced performance, while grain sizes smaller than 16 x 16 have a high cost. For a network with 4096 channels, the useful range of grain sizes corresponds to approximately 250-400 electronic transistors per optical input-output channel. The effect of varying certain technology parameters such as the number of hologram phase levels, the modulator driving voltage, the minimum detectable power, and the VLSI minimum feature size on the optimum grain-size system is studied.
For instance, results show that using four phase levels for the interconnection hologram is a good compromise for the cost functions mentioned above. As VLSI minimum feature sizes decrease, the optimum grain size increases, whereas, if optical interconnect performance in terms of the detector power or modulator driving voltage requirements improves, the optimum grain size may be reduced. Finally, several architectural modifications to the system, such as K x K contention-free switches and sorting networks, are investigated and optimized for grain size. Results indicate that system bandwidth can be increased, but at the price of reduced performance/cost. The optoelectronic MIN architectures considered thus provide a broad range of performance/cost alternatives and offer a superior performance over purely electronic MINs.
Category Induction via Distributional Analysis: Evidence from a Serial Reaction Time Task
ERIC Educational Resources Information Center
Hunt, Ruskin H.; Aslin, Richard N.
2010-01-01
Category formation lies at the heart of a number of higher-order behaviors, including language. We assessed the ability of human adults to learn, from distributional information alone, categories embedded in a sequence of input stimuli using a serial reaction time task. Artificial grammars generated corpora of input strings containing a…
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
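The Hodrick-Prescott filter mentioned for episode detection has a simple closed-form solution; the generic sketch below (the smoothing parameter is an arbitrary example value, and no TAC-specific thresholding is included) extracts a smooth trend from a series:

```python
import numpy as np

def hodrick_prescott(y, lam=1600.0):
    """Hodrick-Prescott filter: the trend t minimizes
    ||y - t||^2 + lam * ||D2 t||^2, where D2 is the second-difference
    operator, giving the linear system (I + lam * D2'D2) t = y."""
    y = np.asarray(y, float)
    n = len(y)
    d2 = np.diff(np.eye(n), 2, axis=0)  # (n-2) x n second differences
    trend = np.linalg.solve(np.eye(n) + lam * d2.T @ d2, y)
    return trend, y - trend             # trend and cyclical residual
```

In an episode-detection setting, stretches where the smoothed trend rises above baseline would be the candidate drinking episodes; how the paper thresholds the trend is not specified in the abstract.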
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, shape selection and support design are based mainly on the designer's judgment and experience. Both problems can be viewed as material distribution problems in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved useful in solving these kinds of problems in structural design, and their application to reinforcement design around underground excavations has recently been studied by several researchers. In this paper a three-phase material model is introduced, switching between normal rock, reinforced rock, and void. With such a material model, the shape and reinforcement design problems can be solved together. A well-known topology optimization technique in structural design is bi-directional evolutionary structural optimization (BESO). Here the BESO technique is extended to simultaneously optimize the shape of the opening and the distribution of reinforcement. The validity and capability of the proposed approach are investigated through several examples.
A distributed approach to the OPF problem
NASA Astrophysics Data System (ADS)
Erseghe, Tomaso
2015-12-01
This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that the penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to ensure convergence, and a certificate of convergence to a local optimum is available when the penalty parameters remain bounded. For moderately sized networks (up to 300 nodes, even in the presence of a severe partition of the network), the approach achieves performance very close to the optimum, with appreciably fast convergence. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. Compared with the literature, which mostly focuses on convex SDP approximations, the chosen approach guarantees adherence to the reference problem and requires less local computational effort.
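The augmented-Lagrangian idea with steadily increasing penalty parameters can be illustrated on a toy consensus problem. This is only a hedged sketch of the general mechanism, not the paper's OPF solver; the quadratic local costs and the 5% per-iteration penalty growth are assumptions:

```python
import numpy as np

# Consensus ADMM sketch for min (x-1)^2 + (x-3)^2, split across two "nodes".
# Mirroring the paper's key twist, the penalty rho grows every iteration.
targets = np.array([1.0, 3.0])   # each node's local quadratic centre
z = 0.0                          # shared (consensus) variable
u = np.zeros(2)                  # scaled dual variables
rho = 1.0
for _ in range(100):
    # local solves: argmin (x - t)^2 + (rho/2)(x - z + u)^2
    x = (2 * targets + rho * (z - u)) / (2 + rho)
    z = np.mean(x + u)           # consensus (averaging) step
    u = u + x - z                # dual update
    rho *= 1.05                  # steadily increase the penalty parameter
print(round(z, 4))               # z approaches 2.0, the global optimum
```

The local steps use only each node's own cost, so in a networked setting they can run in parallel with neighbour communication only at the averaging step.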
A Neural Network Aero Design System for Advanced Turbo-Engines
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1999-01-01
An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. The neural network technique works well not only as an interpolating device but also as an extrapolating device to achieve blade designs from a given database. Two validating test cases are discussed.
NASA Astrophysics Data System (ADS)
Uenzelmann-Neben, Gabriele; Gohl, Karsten
2014-09-01
The distribution and internal architecture of seismostratigraphic sequences observed on the Antarctic continental slope and rise are results of sediment transport and deposition by bottom currents and ice sheets. Analysis of seismic reflection data makes it possible to reconstruct sediment input and sediment transport patterns and to infer past changes in climate and oceanography. We observe four seismostratigraphic units which show distinct differences in the location and shape of their depocentres and which accumulated at variable sedimentation rates. We used an age-depth model based on DSDP Leg 35 Site 324 for the Plio/Pleistocene and a correlation with seismic reflection characteristics from the Ross and Bellingshausen Seas, which unfortunately has large uncertainties. For the period before 21 Ma, we interpret low energy input of detritus via a palaeo-delta originating in an area of the Amundsen Sea shelf, where a palaeo-ice stream trough (Pine Island Trough East, PITE) is located today, and deposition of this material on the continental rise under sea ice coverage. For the period 21-14.1 Ma we postulate glacial erosion in the hinterland of this part of West Antarctica, which resulted in a larger depocentre and an increase in mass transport deposits. Warming during the Mid Miocene Climatic Optimum resulted in a polythermal ice sheet and led to a higher sediment supply along a broad front but with a focus via two palaeo-ice stream troughs, PITE and Abbot Trough (AT). Most of the glaciogenic debris was transported onto the eastern Amundsen Sea rise where it was shaped into levee-drifts by a re-circulating bottom current. Reduced sediment accumulation in the deep sea subsequent to the onset of climatic cooling after 14 Ma indicates a reduced sediment supply, probably in response to a colder and drier ice sheet. A dynamic ice sheet since 4 Ma has delivered material offshore mainly via AT and Pine Island Trough West (PITW).
Interaction of this glaciogenic detritus with a west-setting bottom current resulted in the continued formation of levee-drifts in the eastern and central Amundsen Sea.
WINGDES2 - WING DESIGN AND ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1994-01-01
This program provides a wing design algorithm based on modified linear theory which takes into account the effects of attainable leading-edge thrust. A primary objective of the WINGDES2 approach is the generation of a camber surface as mild as possible to produce drag levels comparable to those attainable with full theoretical leading-edge thrust. WINGDES2 provides both an analysis and a design capability and is applicable to both subsonic and supersonic flow. The optimization can be carried out for designated wing portions such as leading and trailing edge areas for the design of mission-adaptive surfaces, or for an entire planform such as a supersonic transport wing. This program replaces an earlier wing design code, LAR-13315, designated WINGDES. WINGDES2 incorporates modifications to improve numerical accuracy and provides additional capabilities. A means of accounting for the presence of interference pressure fields from airplane components other than the wing and a direct process for selection of flap surfaces to approach the performance levels of the optimized wing surfaces are included. An increased storage capacity allows better numerical representation of those configurations that have small chord leading-edge or trailing-edge design areas. WINGDES2 determines an optimum combination of a series of candidate surfaces rather than the more commonly used candidate loadings. The objective of the design is the recovery of unrealized theoretical leading-edge thrust of the input flat surface by shaping of the design surface to create a distributed thrust and thus minimize drag. The input consists of airfoil section thickness data, leading and trailing edge planform geometry, and operational parameters such as Mach number, Reynolds number, and design lift coefficient. Output includes optimized camber surface ordinates, pressure coefficient distributions, and theoretical aerodynamic characteristics. 
WINGDES2 is written in FORTRAN V for batch execution and has been implemented on a CDC CYBER computer operating under NOS 2.7.1 with a central memory requirement of approximately 344K (octal) of 60 bit words. This program was developed in 1984, and last updated in 1990. CDC and CYBER are trademarks of Control Data Corporation.
Sahib, Mouayad A.; Gambardella, Luca M.; Afzal, Wasif; Zamli, Kamal Z.
2016-01-01
Combinatorial test design is a test-planning technique that systematically reduces the number of test cases by choosing a subset based on combinations of input variables. The subset covers all possible combinations of a given strength and hence aims to match the effectiveness of the exhaustive set. This reduction mechanism has been used successfully in software testing research as t-way testing (where t indicates the interaction strength of combinations). Other systems may exhibit many similarities with this approach, so it could form an emerging application in other areas of research; more recently it has been applied successfully in a few such areas. In this paper, we explore the applicability of combinatorial test design to the parameter design of a fractional-order proportional-integral-derivative (FOPID) controller for an automatic voltage regulator (AVR) system. Throughout the paper, we justify this new application theoretically and practically through simulations, and we report on first experiments indicating its practical use in this field. We design different algorithms and adapt other strategies to cover all the combinations with an optimum and effective test set. Our findings indicate that combinatorial test design can find the combinations that lead to an optimum design. We also found that, by increasing the strength of combination, we approach the optimum design: with only a 4-way combinatorial set, we obtain the effectiveness of an exhaustive test set. This significantly reduces the number of tests needed and thus leads to an approach that optimizes parameter design quickly. PMID:27829025
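A greedy construction of a 2-way (pairwise) covering set illustrates how combinatorial test design shrinks an exhaustive set. The parameter levels below are purely illustrative, and the paper's own strategies differ in detail; this is a minimal sketch of the covering idea:

```python
from itertools import combinations, product

def greedy_pairwise(levels):
    """Greedy 2-way covering set: every pair of values of any two
    parameters appears together in at least one selected test."""
    k = len(levels)
    uncovered = {(i, a, j, b)
                 for i, j in combinations(range(k), 2)
                 for a in levels[i] for b in levels[j]}
    tests = []
    while uncovered:
        # pick the candidate test covering the most still-uncovered pairs
        best = max(product(*levels),
                   key=lambda t: sum((i, t[i], j, t[j]) in uncovered
                                     for i, j in combinations(range(k), 2)))
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(k), 2)}
    return tests

# Three FOPID-style parameters, each with 3 candidate levels (values illustrative)
levels = [[0.5, 1.0, 1.5], [0.1, 0.2, 0.3], [0.7, 0.9, 1.1]]
suite = greedy_pairwise(levels)
print(len(suite), "tests instead of", 3 ** 3)  # far fewer than exhaustive
```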
Khan, Mohammad Jakir Hossain; Hussain, Mohd Azlan; Mujtaba, Iqbal Mohammed
2014-01-01
Polypropylene is a type of plastic that is widely used in everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation of the process parameters of polypropylene production was conducted by applying the ANOVA (analysis of variance) method to response surface methodology (RSM). Three important process variables, i.e., reaction temperature, system pressure and hydrogen percentage, were considered as the important input factors for polypropylene production in the analysis performed. In order to examine the effect of the process parameters and their interactions, the ANOVA method was utilized along with a range of other statistical diagnostic tools, such as the correlation between actual and predicted values, the residuals and predicted response, outlier t plots, and 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model fits the experimental results well. At the optimum conditions, with a temperature of 75°C, system pressure of 25 bar and hydrogen percentage of 2%, the highest polypropylene production obtained is 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed with over a 95% confidence level for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR). PMID:28788576
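Fitting a full quadratic response surface, as RSM does for three factors, reduces to ordinary least squares on expanded features (intercept, linear, interaction, and pure quadratic terms). The sketch below uses synthetic coded factors and a toy surface, not the pilot-plant measurements:

```python
import numpy as np

def quad_features(X):
    """Full quadratic (RSM) design matrix for three factors:
    intercept, linear, two-factor interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

rng = np.random.default_rng(0)
# Hypothetical coded factors: temperature, pressure, H2 fraction, in [-1, 1]
X = rng.uniform(-1, 1, size=(40, 3))
surface = lambda X: 5 - (X[:, 0] - 0.5) ** 2 - X[:, 1] ** 2 - 0.5 * X[:, 2] ** 2
y = surface(X) + rng.normal(0, 0.01, 40)   # toy response with small noise

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
y_hat = quad_features(X) @ beta
print(np.corrcoef(y, y_hat)[0, 1])  # close to 1 for this near-noiseless toy
```

The fitted coefficients play the role of the quadratic model that ANOVA then assesses term by term.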
NASA Astrophysics Data System (ADS)
Moon, Chang-Uk; Choi, Kwang-Hwan; Yoon, Jung-In; Kim, Young-Bok; Son, Chang-Hyo; Ha, Soo-Jung; Jeon, Min-Ju; An, Sang-Young; Lee, Joon-Hyuk
2018-04-01
In this study, to investigate the performance characteristics of a vapor injection refrigeration system with an economizer at an intermediate pressure, the system was analyzed under various experimental conditions, and optimum design data were obtained. The findings can be summarized as follows. The mass flow rate through the compressor increases with intermediate pressure, and the compression power input showed an increasing trend under all test conditions. The evaporation capacity first increased and then decreased with intermediate pressure, reaching a maximum at a particular intermediate pressure: the increased mass flow rate of the bypassed refrigerant enhanced the evaporation capacity in the low intermediate pressure range, but as the intermediate pressure kept rising, the increased saturation temperature limited the subcooling of the liquid refrigerant downstream of the economizer and degraded the evaporation capacity. The coefficient of performance (COP) likewise increased and then decreased with intermediate pressure under all experimental conditions, so there was an optimum intermediate pressure for maximum COP under each condition. The optimum intermediate pressure in this study was found at -99.08 kPa, which is the theoretical standard medium pressure under all the test conditions.
NASA Technical Reports Server (NTRS)
Hajela, P.; Chen, J. L.
1986-01-01
The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed; it is shown that the MMSE estimates are then linear. The class of beta density functions is rather rich, which explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
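The linearity result for the beta/binomial case follows from conjugacy: with a Beta(a, b) prior on a rate p and binomial counts, the posterior-mean (MMSE) estimate is linear in the observed counts. A small numeric check, with illustrative prior parameters:

```python
# Beta(a, b) prior on a rate p; binomial observations x_k successes in n_k trials.
# The posterior is Beta(a + sum(x), b + sum(n) - sum(x)), so the MMSE
# (posterior-mean) estimate is linear in the counts, mirroring the paper's result.
def mmse_rate(a, b, xs, ns):
    return (a + sum(xs)) / (a + b + sum(ns))

est = mmse_rate(2.0, 3.0, xs=[4, 1, 3], ns=[10, 5, 5])
print(est)  # (2 + 8) / (2 + 3 + 20) = 0.4
```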
NASA Technical Reports Server (NTRS)
Seetharam, H. C.; Wentz, W. H., Jr.
1977-01-01
Measurements of flow fields with low speed turbulent boundary layers were made for the GA(W)-1 airfoil with a 0.30 c Fowler flap deflected 40 deg at angles of attack of 2.7 deg, 7.7 deg, and 12.8 deg, at a Reynolds number of 2.2 million, and a Mach number of 0.13. Details of velocity and pressure fields associated with the airfoil flap combination are presented for cases of narrow, optimum and wide slot gaps. Extensive flow field turbulence surveys were also conducted employing hot-film anemometry. For the optimum gap setting, the boundaries of the regions of flow reversal within the wake were determined by this technique for two angles of attack. Local skin friction distributions for the basic airfoil and the airfoil with flap (optimum gap) were obtained using the razor blade technique.
Ma, Haotong; Liu, Zejin; Jiang, Pengzhi; Xu, Xiaojun; Du, Shaojun
2011-07-04
We propose and demonstrate an improvement of the conventional Galilean refractive beam shaping system for accurately generating near-diffraction-limited flattop beams of arbitrary size. A detailed study of the refractive beam shaping system showed that the conventional Galilean beam shaper works well only for magnifying beam shaping. Taking the transformation of an input beam with a Gaussian irradiance distribution into a target beam with a high-order Fermi-Dirac flattop profile as an example, the shaper works well only when the input and target beam sizes satisfy R(0) ≥ 1.3 w(0). For the improvement, the shaper is treated as a combination of magnifying and demagnifying beam shaping systems, and the surface and phase distributions of the improved Galilean system are derived from geometric and Fourier optics. The improved Galilean beam shaper realizes an accurate transformation of a Gaussian input beam into a flattop target beam: the output irradiance distribution coincides with that of the target beam, the corresponding phase distribution is maintained, and the propagation performance of the output beam is greatly improved. Studies of the influence of beam size and beam order on the improved system show that the restriction on beam size is greatly relaxed. The improvement can also be used to redistribute an input beam with a complicated irradiance distribution into an output beam with another complicated irradiance distribution.
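The Gaussian-to-flattop transformation in a lossless reshaper rests on energy conservation between corresponding ray radii: the encircled power of the input within r must equal that of the target within R(r). This can be sketched numerically; the profile parameters below are assumptions, not the paper's design values:

```python
import numpy as np

# Energy-conservation ray mapping for a radially symmetric reshaper.
# Input: Gaussian of waist w0; target: Fermi-Dirac flattop of radius R0.
r = np.linspace(0, 4.0, 2000)
w0, R0, beta = 1.0, 1.3, 20.0                 # illustrative sizes and edge order
gauss = np.exp(-2 * (r / w0) ** 2)
flattop = 1.0 / (1.0 + np.exp(beta * (r / R0 - 1.0)))

def encircled(profile):
    """Normalized encircled power ~ cumulative integral of I(r) r dr."""
    cdf = np.cumsum(profile * r)
    return cdf / cdf[-1]

# For each input ray radius, find the output radius with equal encircled power
R_of_r = np.interp(encircled(gauss), encircled(flattop), r)
print(round(float(R_of_r[500]), 3))  # where a ray at r ~ w0 lands on the target
```

The resulting mapping R(r) is what the shaper's surface profiles must realize; the same numerical inversion works for more complicated input and target distributions.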
Evaluation of WRF Parameterizations for Air Quality Applications over the Midwest USA
NASA Astrophysics Data System (ADS)
Zheng, Z.; Fu, K.; Balasubramanian, S.; Koloutsou-Vakakis, S.; McFarland, D. M.; Rood, M. J.
2017-12-01
Reliable predictions from chemical transport models (CTMs) for air quality research require accurate gridded weather inputs. In this study, a sensitivity analysis of 17 Weather Research and Forecasting (WRF) model runs was conducted to explore the optimum configuration in six physics categories (i.e., cumulus, surface layer, microphysics, land surface model, planetary boundary layer, and longwave/shortwave radiation) for the Midwest USA. WRF runs were initially conducted over four days in May 2011 for a 12 km x 12 km domain over the contiguous USA and a nested 4 km x 4 km domain over the Midwest USA (i.e., Illinois and adjacent areas including Iowa, Indiana, and Missouri). Model outputs were evaluated statistically by comparison with meteorological observations (DS337.0, METAR data, and the Water and Atmospheric Resources Monitoring Network), and the resulting statistics were compared to benchmark values from the literature. The identified optimum configurations of physics parametrizations were then evaluated for the whole months of May and October 2011 to assess WRF model performance for Midwestern spring and fall seasons. This study demonstrated that, for the chosen physics options, WRF predicted temperature (index of agreement, IOA = 0.99), pressure (IOA = 0.99), relative humidity (IOA = 0.93), wind speed (IOA = 0.85), and wind direction (IOA = 0.97) well. However, WRF did not predict daily precipitation satisfactorily (IOA = 0.16). The developed gridded weather fields will be used as inputs to a CTM ensemble consisting of the Comprehensive Air Quality Model with Extensions to study the impacts of chemical fertilizer usage on regional air quality in the Midwest USA.
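The IOA statistics quoted above follow Willmott's index of agreement, which can be computed directly from paired predictions and observations (the sample values below are illustrative):

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement: 1 means perfect agreement, 0 none.
    IOA = 1 - sum (P-O)^2 / sum (|P - Obar| + |O - Obar|)^2"""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    obar = obs.mean()
    denom = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom

obs = [10.0, 12.0, 14.0, 13.0, 11.0]
print(index_of_agreement(obs, obs))                            # 1.0, perfect
print(round(index_of_agreement([11, 13, 15, 14, 12], obs), 3)) # 0.889, biased by +1
```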
NASA Astrophysics Data System (ADS)
Ma, Zheshu; Wu, Jieer
2011-08-01
Indirectly or externally fired gas turbines (IFGT or EFGT) are interesting technologies under development for small and medium scale combined heat and power (CHP) supply in combination with micro gas turbine technologies. The emphasis is primarily on the utilization of the waste heat from the turbine in a recuperative process and on the possibility of burning biomass or even "dirty" fuels, by employing a high-temperature heat exchanger (HTHE) to avoid passing the combustion gases through the turbine. In this paper, finite time thermodynamics is employed in the performance analysis of a class of irreversible closed IFGT cycles coupled to variable-temperature heat reservoirs. Based on the derived analytical formulae for the dimensionless power output and efficiency, the efficiency optimization is performed in two respects. The first is to find, for a fixed total heat exchanger inventory, the optimum distribution of heat conductance among the hot-side and cold-side heat exchangers of the reservoirs and the high-temperature heat exchanger. The second is to find, for a fixed ratio of the thermal capacitance rates of the two heat reservoirs, the thermal capacitance rate matching between the working fluid and the high-temperature heat reservoir that maximizes efficiency. Numerical examples show the influence of several design parameters, including the inlet temperature ratio of the two heat reservoirs, the efficiencies of the compressor and the gas turbine, and the total pressure recovery coefficient, on the optimum heat conductance distribution, the optimum thermal capacitance rate matching and the maximum power output. The power plant configuration under optimized operating conditions leads to a smaller size, including the compressor, turbine, two heat reservoirs and the HTHE.
Common input to motor units of intrinsic and extrinsic hand muscles during two-digit object hold.
Winges, Sara A; Kornatz, Kurt W; Santello, Marco
2008-03-01
Anatomical and physiological evidence suggests that common input to motor neurons of hand muscles is an important neural mechanism for hand control. To gain insight into the synaptic input underlying the coordination of hand muscles, significant effort has been devoted to describing the distribution of common input across motor units of extrinsic muscles. Much less is known, however, about the distribution of common input to motor units belonging to different intrinsic muscles and to intrinsic-extrinsic muscle pairs. To address this void in the literature, we quantified the incidence and strength of near-simultaneous discharges of motor units residing in either the same or different intrinsic hand muscles (m. first dorsal, FDI, and m. first palmar interosseus, FPI) during two-digit object hold. To extend the characterization of common input to pairs of extrinsic muscles (previous work) and pairs of intrinsic muscles (present work), we also recorded electromyographic (EMG) activity from an extrinsic thumb muscle (m. flexor pollicis longus, FPL). Motor-unit synchrony across FDI and FPI was weak (common input strength, CIS, mean +/- SE: 0.17 +/- 0.02). Similarly, motor units from extrinsic-intrinsic muscle pairs were characterized by weak synchrony (FPL-FDI: 0.25 +/- 0.02; FPL-FPI: 0.29 +/- 0.03) although stronger than FDI-FPI. Last, CIS from within FDI and FPI was more than three times stronger (0.70 +/- 0.06 and 0.66 +/- 0.06, respectively) than across these muscles. We discuss present and previous findings within the framework of muscle-pair specific distribution of common input to hand muscles based on their functional role in grasping.
Lin, Risa J; Jaeger, Dieter
2011-05-01
In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs controls the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white-noise current to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in spike pattern generation between focal and distributed input in this cell type, even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.
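At its core, dynamic clamp computes, at each time step, the current a virtual synaptic conductance would inject given the measured membrane potential. A minimal sketch of that update (the reversal potentials and conductance values below are illustrative, not the study's parameters):

```python
# Dynamic clamp update: injected current for virtual excitatory and inhibitory
# conductances, given the instantaneously measured membrane potential v_m.
# With this sign convention, a negative value is a depolarizing (inward) current.
def dynamic_clamp_current(v_m, g_exc, g_inh, e_exc=0.0, e_inh=-75.0):
    return g_exc * (v_m - e_exc) + g_inh * (v_m - e_inh)

# At -60 mV, with 2 nS excitatory and 1 nS inhibitory conductance (nS * mV -> pA):
i = dynamic_clamp_current(-60.0, 2.0, 1.0)
print(i)  # 2*(-60 - 0) + 1*(-60 + 75) = -105.0 pA
```

Because v_m is always the somatic recording, the injected current is inherently location independent, which is exactly the limitation the simulations above were designed to probe.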
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India, is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. The results show that LGP models are superior to ANN and ANFIS models, especially in predicting peak inflows for both daily and hourly time-steps. A detailed comparison of overall performance indicated that the combined input model (rainfall together with inflow) performed better in both lumped and distributed input data modelling. The lumped input data models performed slightly better, owing to reduced noise in the data as well as to the modelling choices: the training approach, appropriate selection of network architecture, required inputs, and the training-testing ratios of the data set. The slightly poorer performance with distributed data is due to large variations and a smaller number of observed values.
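The combined cause-effect models described above use lagged rainfall and lagged inflow together as inputs to predict the next inflow. Assembling such an input matrix can be sketched as follows (the lag count and toy series are assumptions, not the Koyna data):

```python
import numpy as np

def lagged_matrix(rain, inflow, lags=3):
    """Build a combined-input design matrix: for each time t, the features are
    the previous `lags` rainfall values and the previous `lags` inflow values,
    and the target is the inflow at t (the 'combined model' idea)."""
    X, y = [], []
    for t in range(lags, len(inflow)):
        X.append(list(rain[t - lags:t]) + list(inflow[t - lags:t]))
        y.append(inflow[t])
    return np.array(X), np.array(y)

rain = np.arange(10.0)          # toy daily rainfall series
inflow = np.arange(10.0) * 2    # toy daily inflow series
X, y = lagged_matrix(rain, inflow)
print(X.shape, y.shape)         # (7, 6) (7,)
```

Any of the compared learners (ANN, ANFIS, LGP) can then be trained on X and y; a pure time-series model would simply drop the rainfall columns.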
Determination of neutron flux distribution in an Am-Be irradiator using the MCNP.
Shtejer-Diaz, K; Zamboni, C B; Zahn, G S; Zevallos-Chávez, J Y
2003-10-01
A neutron irradiator has been assembled at IPEN facilities to perform qualitative-quantitative analysis of many materials using thermal and fast neutrons outside the nuclear reactor premises. To establish the prototype specifications, the neutron flux distribution and the absorbed dose rates were calculated using the MCNP computer code. These theoretical predictions then allow one to discuss the optimum irradiator design and its performance.
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of the left ventricular signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models to single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers, and we validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels against gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used; in single bolus analysis, the arterial input function was extracted from the main bolus. We also analyzed, using both models, single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as a two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter myocardial blood flow was observed in these volunteers between single and dual bolus analysis.
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
Fate of return activated sludge after ozonation: an optimization study for sludge disintegration.
Demir, Ozlem; Filibeli, Ayse
2012-09-01
The effects of ozonation on sludge disintegration should be investigated before the application of ozone during biological treatment, in order to minimize excess sludge production. In this study, changes in sludge and supernatant after ozonation of return activated sludge were investigated for seven different ozone doses. The optimum ozone dose to avoid inhibition of ozonation and high ozone cost was determined in terms of disintegration degree as 0.05 g O3/gTS. Suspended solid and volatile suspended solid concentrations of the sludge decreased by 77.8% and 71.6%, respectively, at the optimum ozone dose. Ozonation significantly decomposed sludge flocs. The release of cell contents was evidenced by the increase in supernatant total nitrogen (TN) and total phosphorus (TP): while TN increased from 7 mg/L to 151 mg/L, TP increased from 8.8 to 33 mg/L at the optimum ozone dose. The dewaterability and filterability characteristics of the ozonated sludge were also examined. Capillary suction time increased with increasing ozone dosage, but specific resistance to filtration increased up to a certain dose and then decreased dramatically. The particle size distribution changed significantly as a result of floc disruption at the optimum dose of 0.05 gO3/gTS.
Multiple response optimization for higher dimensions in factors and responses
Lu, Lu; Chapman, Jessica L.; Anderson-Cook, Christine M.
2016-07-19
When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses to seek optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by looking for a Pareto front of superior solutions, graphical tools can be used to identify a final solution in the second subjective stage to compare options and match with user priorities. Until now, there have been limitations on the number of response variables and input factors that could effectively be visualized with existing graphical summaries. We present novel graphical tools that can be more easily scaled to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries, and that multiple aspects of the solutions can be simultaneously considered. As a result, recommendations are made about how to evaluate the impact of the uncertainty associated with the estimated response surfaces on decision making with higher dimensions.
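The two-stage strategy described in this abstract can be illustrated with a minimal sketch (hypothetical response data, both responses minimized; the paper's graphical tools themselves are not reproduced here):

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points (all responses minimized)."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

# Stage 1: objectively eliminate non-contenders.
responses = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
front = pareto_front(responses)
# [3.0, 3.0] is dominated by [2.0, 2.0]; the other three form the front.

# Stage 2 (subjective): rank the survivors by user-supplied priorities.
weights = np.array([0.7, 0.3])          # hypothetical user priorities
scores = responses[front] @ weights
best = front[int(np.argmin(scores))]
```

The weighted-sum ranking in stage 2 stands in for the interactive graphical comparison the authors describe.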
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approaches proposed by Mulvey et al. (1995), which utilize probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies: some electricity distribution companies may not report clear and accurate statistics to the government. Thus, a robust approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.
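For context, the deterministic model that the robust formulation extends can be sketched as a small linear program; this is a minimal input-oriented CCR DEA efficiency computation on toy data (not the paper's scenario-based RDEA):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  X @ lam <= theta * x_o   ->  -x_o*theta + X@lam <= 0
    A_in = np.c_[-X[:, [o]], X]
    b_in = np.zeros(m)
    # Outputs: Y @ lam >= y_o           ->  -Y@lam <= -y_o
    A_out = np.c_[np.zeros((s, 1)), -Y]
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out], bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[2.0, 4.0]])   # one input, two DMUs (toy data)
Y = np.array([[2.0, 2.0]])   # one output
theta0 = ccr_efficiency(X, Y, 0)  # on the efficient frontier
theta1 = ccr_efficiency(X, Y, 1)  # twice the input for the same output
```

The robust variant replaces the single crisp (X, Y) with a set of weighted scenarios, but the per-scenario building block is this LP.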
Characterization of Radial Curved Fin Heat Sink under Natural and Forced Convection
NASA Astrophysics Data System (ADS)
Khadke, Rishikesh; Bhole, Kiran
2018-02-01
Heat exchangers are important structures widely used in power plants, food industries, refrigeration, and air conditioning, and are now widely used in computing systems. The finned type of heat sink is widely used in computing systems. The main aim of heat sink design is to maintain the optimum temperature level, and many geometrical configurations have been implemented to achieve this goal. This paper presents a characterization of a radially curved fin heat sink under natural and forced convection. Forced convection is studied to optimize temperature for better efficiency. The alternatives considered in the characterization are heat intensity, fin height, and fan speed. Using these alternatives, the heat sink is characterized for the heat flux typically generated in high-end PCs. The temperature drop characteristics across the height and radial direction are presented for constant heat input and air flow in the heat sink. The effects of dimensionless elevation height (0 ≤ Z* ≤ 1) and Elenbaas number (0.4 ≤ El ≤ 2.8) of the heat sink were investigated to study the Nusselt number. Based on the experimental characterization, a process plan has been developed for the selection of similar heat sinks for a desired output (heat dissipation and temperature distribution).
Laser Powder Cladding of Ti-6Al-4V α/β Alloy
Al-Sayed Ali, Samar Reda; Hussein, Abdel Hamid Ahmed; Nofal, Adel Abdel Menam Saleh; Elgazzar, Haytham Abdelrafea; Sabour, Hassan Abdel
2017-01-01
Laser cladding process was performed on a commercial Ti-6Al-4V (α + β) titanium alloy by means of tungsten carbide-nickel based alloy powder blend. Nd:YAG laser with a 2.2-KW continuous wave was used with coaxial jet nozzle coupled with a standard powder feeding system. Four-track deposition of a blended powder consisting of 60 wt % tungsten carbide (WC) and 40 wt % NiCrBSi was successfully made on the alloy. The high content of the hard WC particles is intended to enhance the abrasion resistance of the titanium alloy. The goal was to create a uniform distribution of hard WC particles that is crack-free and nonporous to enhance the wear resistance of such alloy. This was achieved by changing the laser cladding parameters to reach the optimum conditions for favorable mechanical properties. The laser cladding samples were subjected to thorough microstructure examinations, microhardness and abrasion tests. Phase identification was obtained by X-ray diffraction (XRD). The obtained results revealed that the best clad layers were achieved at a specific heat input value of 59.5 J·mm−2. An increase by more than three folds in the microhardness values of the clad layers was achieved and the wear resistance was improved by values reaching 400 times. PMID:29036935
NONLINEAR FORCE-FREE FIELD MODELING OF A SOLAR ACTIVE REGION USING SDO/HMI AND SOLIS/VSM DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thalmann, J. K.; Wiegelmann, T.; Pietarila, A.
2012-08-15
We use SDO/HMI and SOLIS/VSM photospheric magnetic field measurements to model the force-free coronal field above a solar active region, assuming magnetic forces dominate. We take measurement uncertainties caused by, e.g., noise and the particular inversion technique, into account. After searching for the optimum modeling parameters for the particular data sets, we compare the resulting nonlinear force-free model fields. We show the degree of agreement of the coronal field reconstructions from the different data sources by comparing the relative free energy content, the vertical distribution of the magnetic pressure, and the vertically integrated current density. Though the longitudinal and transverse magnetic flux measured by the VSM and HMI is clearly different, we find considerable similarities in the modeled fields. This indicates the robustness of the algorithm we use to calculate the nonlinear force-free fields against differences and deficiencies of the photospheric vector maps used as an input. We also depict how much the absolute values of the total force-free, virial, and the free magnetic energy differ and how the orientation of the longitudinal and transverse components of the HMI- and VSM-based model volumes compare to each other.
High linearity current commutating passive mixer employing a simple resistor bias
NASA Astrophysics Data System (ADS)
Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan
2013-03-01
A high linearity current commutating passive mixer comprising a mixing cell and a transimpedance amplifier (TIA) is introduced. It employs a resistor in the TIA to reduce the source voltage and the gate voltage of the mixing cell. The optimum linearity and the maximum symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. Measurements show that it achieves an input third-order intercept point of 13.32 dBm, a conversion gain of 5.52 dB, and a single sideband noise figure of 20 dB.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
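The effect of detector curvature on a straight-line calibration can be sketched numerically; this uses hypothetical power-series coefficients and a least-squares line rather than the paper's optimum (minimax) fit:

```python
import numpy as np

# Illustrative detector characteristic (hypothetical coefficients):
# output voltage vs. input noise power P, with the 4th-order term of the
# detection law appearing as quadratic curvature a2*P**2 in power.
a1, a2 = 1.0, 0.05
P = np.linspace(0.0, 10.0, 201)
v = a1 * P + a2 * P**2

# Straight-line fit to the calibration curve (least squares here).
slope, intercept = np.polyfit(P, v, 1)
residual = v - (slope * P + intercept)
calib_error = np.max(np.abs(residual))   # residual radiometric error

# With a2 = 0 the characteristic is linear and the error vanishes.
v_lin = a1 * P
r_lin = v_lin - np.polyval(np.polyfit(P, v_lin, 1), P)
```

The residual error scales with the curvature coefficient a2, mirroring the paper's point that the calibration error is expressible in terms of the detector's power-series coefficients.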
Blind adaptive equalization of polarization-switched QPSK modulation.
Millar, David S; Savory, Seb J
2011-04-25
Coherent detection in combination with digital signal processing has recently enabled significant progress in the capacity of optical communications systems. This improvement has enabled detection of optimum constellations for optical signals in four dimensions. In this paper, we propose and investigate an algorithm for the blind adaptive equalization of one such modulation format: polarization-switched quaternary phase shift keying (PS-QPSK). The proposed algorithm, which includes both blind initialization and adaptation of the equalizer, is found to be insensitive to the input polarization state and demonstrates highly robust convergence in the presence of PDL, DGD and polarization rotation.
A versatile computer package for mechanism analysis, part 2: Dynamics and balance
NASA Astrophysics Data System (ADS)
Davies, T.
The algorithms required for the shaking force components, the shaking moment about the crankshaft axis, and the input torque and bearing load components are discussed using the textile machine as a focus for the discussion. The example is also used to provide illustrations of the output for options on the hodograph of the shaking force vector. This provides estimates of the optimum contrarotating masses and their locations for a generalized primary Lanchester balancer. The suitability of generalized Lanchester balancers particularly for textile machinery, and the overall strategy used during the development of the package are outlined.
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
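A simple sketch of the weight-estimation idea, assuming the optimum combining weights for a single broadband source in white receiver noise are taken as the dominant eigenvector of the sample covariance (illustrative values only, not the PC-based algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_feeds, n_samples = 7, 20000

# Hypothetical array response to the distant thermal source, plus noise.
s = np.exp(1j * 2 * np.pi * rng.random(n_feeds))     # unknown steering vector
s /= np.linalg.norm(s)
source = rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)
x = 3.0 * np.outer(s, source) + (
    rng.standard_normal((n_feeds, n_samples))
    + 1j * rng.standard_normal((n_feeds, n_samples)))

# Sample covariance; for one source in white noise the max-SNR combining
# weights are its dominant eigenvector.
R = x @ x.conj().T / n_samples
eigvals, eigvecs = np.linalg.eigh(R)
w = eigvecs[:, -1]                  # eigh sorts eigenvalues ascending

alignment = abs(w.conj() @ s)       # near 1 when weights match the source
```

Only products and an eigendecomposition are required, which is consistent with the abstract's claim that the estimate needs simple operations implementable on a PC-based system.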
Er3+-doped BaY2F8 crystal waveguides for broadband optical amplification at 1.5 μm
NASA Astrophysics Data System (ADS)
Toccafondo, V.; Cerqueira S., A.; Faralli, S.; Sani, E.; Toncelli, A.; Tonelli, M.; Di Pasquale, F.
2007-01-01
Integrated waveguide amplifiers based on high-concentration Er3+ doped BaY2F8 crystals are numerically studied by combining a full-vectorial finite-element-based modal analysis and propagation-rate equations. Using realistic input data, such as the absorption/emission cross sections and Er level lifetimes measured on grown crystal samples, we investigate the amplifier performance by optimizing the total Er concentration. We predict an optimum gain coefficient of up to 5 dB/cm and a broad amplification bandwidth exceeding 80 nm with 1480 nm pumping.
Hu, Xue Qiong; Xu, Meng Ying; He, Yu Qin; Zhang, Ming da; Ji, Wen Juan; Zhu, Yong
2016-04-22
The climatic suitability distribution of flue-cured tobacco planting in Yunnan will be profoundly affected by climate change. According to three key factors influencing climatic suitability of flue-cured tobacco planting in Yunnan, namely, average temperature in July, sunshine duration from July to August, precipitation from April to September, the variations of climatic suitability distribution of flue-cured tobacco planting in Yunnan respectively in 1986-2005, 2021-2040 and 2041-2060 under RCP4.5 and RCP8.5 climate scenarios were investigated by using the climatic simulation data in 1981-2060 and the meteorological observation data during 1986-2005. The results showed that climatic suitability region would expand northward and eastward and plantable area of flue-cured tobacco would gradually increase. The increment of plantable area was more in 2041-2060 than in 2021-2040, and under RCP8.5 scenario than under RCP4.5 scenario. The optimum climatic area and sub-suitable climatic area were expanded considerably, while the suitable climatic area was not much changed. In the future, the north-central Yunnan such as Kunming, Qujing, Dali, Chuxiong, Lijiang would have a big increase in both the optimum climatic area and the cultivable area, meanwhile, the southern Yunnan including Wenshan, Honghe, Puer and Xishuangbanna would have a big decrease in both the optimum climatic area and the cultivable area.
Parameter optimization and stretch enhancement of AISI 316 sheet using rapid prototyping technique
NASA Astrophysics Data System (ADS)
Moayedfar, M.; Rani, A. M.; Hanaei, H.; Ahmad, A.; Tale, A.
2017-10-01
Incremental sheet forming is a flexible manufacturing process which uses point-to-point indenter force to shape a sheet metal workpiece into manufactured parts in batch production series. However, a problem sometimes arising in this process is the material's low plastic point in the stress-strain diagram, which limits the amount of stretching achievable before the ultimate tensile strain point. Hence, a set of experiments was designed to find the forming parameters giving the optimum sheet thickness distribution, while both sides of the sheet were considered for surface quality improvement. A five-axis high-speed CNC milling machine was employed to deliver the proper motion based on the programming system, while the clamping system for holding the sheet metal was a blank mould. Finally, an electron microscope and a roughness machine were utilized to evaluate the surface structure of the final parts, illustrate any defects that may arise during the forming process, and examine the roughness of the final part surface accordingly. The best interaction between parameters was obtained with the optimum values, which gave a maximum sheet thickness distribution of 4.211e-01 logarithmic elongation at a depth of 24 mm with respect to the design. This study demonstrates that this rapid forming method offers an alternative solution for surface quality improvement of 65%, with a low probability of cracks and of crystal structure changes.
Kyriacou, Andreas; Li Kam Wa, Matthew E; Pabari, Punam A; Unsworth, Beth; Baruah, Resham; Willson, Keith; Peters, Nicholas S; Kanagaratnam, Prapa; Hughes, Alun D; Mayet, Jamil; Whinnett, Zachary I; Francis, Darrel P
2013-08-10
In atrial fibrillation (AF), VV optimization of biventricular pacemakers can be examined in isolation. We used this approach to evaluate internal validity of three VV optimization methods by three criteria. Twenty patients (16 men, age 75 ± 7) in AF were optimized, at two paced heart rates, by LVOT VTI (flow), non-invasive arterial pressure, and ECG (minimizing QRS duration). Each optimization method was evaluated for: singularity (unique peak of function), reproducibility of optimum, and biological plausibility of the distribution of optima. The reproducibility (standard deviation of the difference, SDD) of the optimal VV delay was 10 ms for pressure, versus 8 ms (p=ns) for QRS and 34 ms (p<0.01) for flow. Singularity of optimum was 85% for pressure, 63% for ECG and 45% for flow (Chi(2)=10.9, p<0.005). The distribution of pressure optima was biologically plausible, with 80% LV pre-excited (p=0.007). The distributions of ECG (55% LV pre-excitation) and flow (45% LV pre-excitation) optima were no different to random (p=ns). The pressure-derived optimal VV delay is unaffected by the paced rate: SDD between slow and fast heart rate is 9 ms, no different from the reproducibility SDD at both heart rates. Using non-invasive arterial pressure, VV delay optimization by parabolic fitting is achievable with good precision, satisfying all 3 criteria of internal validity. VV optimum is unaffected by heart rate. Neither QRS minimization nor LVOT VTI satisfy all validity criteria, and therefore seem weaker candidate modalities for VV optimization. AF, unlinking interventricular from atrioventricular delay, uniquely exposes resynchronization concepts to experimental scrutiny.
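The parabolic-fitting step used for the pressure-based optimization can be sketched as follows (synthetic pressure data with an assumed true optimum; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noninvasive pressure response vs. tested VV delay (ms):
# a concave parabola peaking at the true optimum, plus beat-to-beat noise.
vv = np.arange(-80, 81, 20)                    # tested VV delays
true_opt = -30.0                               # assumed: LV pre-excited 30 ms
sbp = 120.0 - 0.002 * (vv - true_opt) ** 2 + rng.normal(0, 0.2, vv.size)

# Quadratic fit; the estimated optimum is the vertex -b/(2a).
a, b, c = np.polyfit(vv, sbp, 2)
vv_opt = -b / (2 * a)

# Singularity check: a concave fit (a < 0) has a unique peak.
```

Fitting a parabola through all tested delays averages out measurement noise, which is one reason a curve-fit optimum can be more reproducible than simply picking the single best-measured setting.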
Seifali Abbas-Abadi, Mehrdad
2017-01-01
In previous studies, several halocarbons (HC) were tested as promoters for a Ti-based Ziegler-Natta (ZN) catalyst at different polymerization conditions. The results showed that chlorocyclohexane performed best in terms of catalyst activity, polymer particle size growth, hydrogen response and wax reduction. For the first time, this study considers the effect of the Al/Ti ratio on the optimum HC/Ti ratio, and the results showed that the optimum HC/Ti ratio depends directly on the Al/Ti ratio. At the optimum HC/Ti ratio, the catalyst activity and the hydrogen response of the catalyst increased by up to 125% and 55%, respectively. Acceptable growth of the polymer powder by up to 46%, a flow rate ratio (FRR) lower by up to 19%, and a wax amount decreased by up to 12% completed the promotion results. Furthermore, in the next part of this study, and notably, a small dose of halocarbon was used in the catalyst preparation to produce special catalysts with dual active sites. In the catalyst preparation, the concentration of each active site depends on the halocarbon amount, and it can control the molecular weight distribution of the produced polyethylene, because each active site responds differently to hydrogen. The halocarbon-based catalysts showed a remarkable effect on the catalyst activity, the molecular weight, and especially the molecular weight distribution (MWD). The flow rate ratio and MWD could be increased by up to 77% and 88%, respectively, as the main result of halocarbon addition during the catalyst preparation.
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
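A minimal STC sketch with correlated Gaussian stimuli follows; the whitening step here plays the same corrective role as the orthogonal-subspace analysis described above, though it is not the paper's method (synthetic model neuron, assumed parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n = 20, 50000

# Correlated Gaussian stimuli (exponentially decaying correlations, assumed).
C = 0.3 ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))
L = np.linalg.cholesky(C)
stim = rng.standard_normal((n, dim)) @ L.T

# Model neuron: one hidden suppressive dimension k modulates spiking.
k = np.zeros(dim); k[5] = 1.0
proj = stim @ k
spikes = rng.random(n) < np.exp(-proj ** 2)

# Spike-triggered covariance: spike-conditional minus prior covariance.
C_prior = np.cov(stim.T)
C_spike = np.cov(stim[spikes].T)

# With correlated stimuli, eigenvectors of the raw difference are biased
# toward C @ k; whitening by C_prior^(-1/2) removes that bias.
w_p, V_p = np.linalg.eigh(C_prior)
C_isqrt = V_p @ np.diag(w_p ** -0.5) @ V_p.T
dC_w = C_isqrt @ C_spike @ C_isqrt - np.eye(dim)
evals, evecs = np.linalg.eigh(dC_w)
k_est = C_isqrt @ evecs[:, 0]        # most negative eigenvalue: variance drop
k_est /= np.linalg.norm(k_est)

alignment = abs(k_est @ k)
```

With strongly correlated natural stimuli, the whitened covariance acquires outstanding sampling-noise modes, which is exactly the situation the authors address by testing significance in the subspace orthogonal to those modes.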
Standard Transistor Array (STAR). Volume 1: Placement technique
NASA Technical Reports Server (NTRS)
Cox, G. W.; Caroll, B. D.
1979-01-01
A large-scale integration (LSI) technology, the standard transistor array uses a prefabricated understructure of transistors and a comprehensive library of digital logic cells to allow efficient fabrication of semicustom digital LSI circuits. The cell placement technique for this technology involves formation of a one-dimensional cell layout and "folding" of the one-dimensional placement onto the chip. It was found that, by use of various folding methods, high quality chip layouts can be achieved. Methods developed to measure the "goodness" of the generated placements include efficient means for estimating channel usage requirements and for via counting. The placement and rating techniques were incorporated into a placement program (CAPSTAR). By means of repetitive use of the folding methods and simple placement improvement strategies, this program provides near optimum placements in a reasonable amount of time. The program was tested on several typical LSI circuits to provide performance comparisons both with respect to input parameters and with respect to the performance of other placement techniques. The results of this testing indicate that near optimum placements can be achieved by use of these procedures without incurring severe time penalties.
NASA Astrophysics Data System (ADS)
Dadbakhsh, Sasan; Verbelen, Leander; Vandeputte, Tom; Strobbe, Dieter; Van Puyvelde, Peter; Kruth, Jean-Pierre
This work investigates the influence of powder size/shape on selective laser sintering (SLS) of a thermoplastic polyurethane (TPU) elastomer. It examines a TPU powder which had been cryogenically milled to two different sizes: a coarse powder (D50∼200μm) with rough surfaces, compared with a fine powder (D50∼63μm) with extremely fine flow additives. It is found that the coarse powder coalesces at lower temperatures and smokes excessively during SLS processing. In comparison, the fine powder with flow additives is better processable at significantly higher powder bed temperatures, allowing a lower optimum laser energy input which minimizes smoking and degradation of the polymer. In terms of mechanical properties, good coalescence of both powders leads to parts with acceptable shear-punch strengths compared to injection molded parts. However, porosity and degradation at the optimum SLS parameters of the coarse powder drastically reduce the tensile properties to about one-third of those of parts made from the fine powder, as well as those made by injection molding (IM).
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid state process that has received considerable attention due to many advantages over other conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from premixed basic powders of cobalt (Co), iron (Fe) and aluminum (Al) in a stoichiometry of 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process. Experimental results were used for response surface methodology. The interaction between the input process parameters and the response has been established with the help of regression analysis. The analysis of variance technique was further applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same range. Using response surface methodology, the process parameters were optimized to obtain improved magnetic properties. The optimum process parameters were then identified using numerical and graphical optimization techniques.
Pervasive Radio Mapping of Industrial Environments Using a Virtual Reality Approach
Nedelcu, Adrian-Valentin; Machedon-Pisu, Mihai; Talaba, Doru
2015-01-01
Wireless communications in industrial environments are seriously affected by reliability and performance issues, due to the multipath nature of obstacles within such environments. Special attention needs to be given to planning a wireless industrial network, so as to find the optimum spatial position for each of the nodes within the network, and especially for key nodes such as gateways or cluster heads. The aim of this paper is to present a pervasive radio mapping system which captures (senses) data regarding the radio spectrum, using low-cost wireless sensor nodes. This data is the input of radio mapping algorithms that generate electromagnetic propagation profiles. Such profiles are used for identifying obstacles within the environment and optimum propagation pathways. With the purpose of further optimizing the radio planning process, the authors propose a novel human-network interaction (HNI) paradigm that uses 3D virtual environments in order to display the radio maps in a natural, easy-to-perceive manner. The results of this approach illustrate its added value to the field of radio resource planning of industrial communication systems. PMID:26167533
Disc piezoelectric ceramic transformers.
Erhart, Jiří; Půlpán, Petr; Doleček, Roman; Psota, Pavel; Lédl, Vít
2013-08-01
In this contribution, we present our study on disc-shaped and homogeneously poled piezoelectric ceramic transformers working in planar-extensional vibration modes. Transformers are designed with electrodes divided into wedge, axisymmetrical ring-dot, moonie, smile, or yin-yang segments. Transformation ratio, efficiency, and input and output impedances were measured for low-power signals. Transformer efficiency and transformation ratio were measured as a function of frequency and impedance load in the secondary circuit. Optimum impedance for the maximum efficiency has been found. Maximum efficiency and no-load transformation ratio can reach almost 100% and 52 for the fundamental resonance of ring-dot transformers and 98% and 67 for the second resonance of 2-segment wedge transformers. Maximum efficiency was reached at optimum impedance, which is in the range from 500 Ω to 10 kΩ, depending on the electrode pattern and size. Fundamental vibration mode and its overtones were further studied using frequency-modulated digital holographic interferometry and by the finite element method. Complementary information has been obtained by the infrared camera visualization of surface temperature profiles at higher driving power.
Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems
NASA Astrophysics Data System (ADS)
Lang, Hans-Dieter; Sarris, Costas D.
2017-11-01
An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures physical realizability of MISO WPT systems designed via convex optimization -- a robust, semi-analytical and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g. genetic algorithms), where convergence to the true global optimum cannot be ensured or tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.
Optimum Water Chemistry in radiation field buildup control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Chien C.
1995-03-01
Nuclear utilities continue to face the challenge of reducing exposure of plant maintenance personnel. GE Nuclear Energy has developed the concept of Optimum Water Chemistry (OWC) to reduce radiation field buildup and minimize radioactive waste production. It is believed that reduction of radioactive sources and improvement of water chemistry quality should significantly reduce both radiation exposure and radwaste production. The most important source of radioactivity is cobalt, and replacement of cobalt-containing alloys in the core region as well as in the entire primary system is considered the first priority in achieving the goal of low exposure and minimized waste production. A plant-specific computerized cobalt transport model has been developed to evaluate various options in a BWR system under specific conditions. Reduction of iron input and maintaining low ionic impurities in the coolant have been identified as two major tasks for operators. Addition of depleted zinc is a proven technique to reduce Co-60 in reactor water and on out-of-core piping surfaces. The effect of HWC on Co-60 transport in the primary system is also discussed.
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. By using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple-MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimension matrices. Since the weighted matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
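The general idea of minor component extraction can be sketched with a plain projected-gradient minimization of the Rayleigh quotient of the autocorrelation matrix. This is a generic illustration under assumed parameters, not the paper's weighted-criterion algorithm:

```python
import numpy as np

# Sketch: extract the minor component (eigenvector of the smallest eigenvalue)
# of an autocorrelation matrix R by gradient descent on the Rayleigh quotient,
# renormalizing onto the unit sphere after each step. Learning rate and
# iteration count are illustrative assumptions.
def minor_component(R, lr=0.1, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(R.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        grad = R @ w - (w @ R @ w) * w   # Rayleigh-quotient gradient on the sphere
        w -= lr * grad
        w /= np.linalg.norm(w)           # project back to unit norm
    return w

# Toy autocorrelation matrix with a known smallest-eigenvalue direction (axis 3).
R = np.diag([5.0, 3.0, 0.5])
w = minor_component(R)
# w aligns with the third axis (eigenvalue 0.5), up to sign.
```

The same loop run on several deflated copies of R would extract multiple MCs; the paper's weighted criterion does this in parallel instead.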
Optimum Construction of Heating Coil for Domestic Induction Cooker
NASA Astrophysics Data System (ADS)
Sinha, Dola; Bandyopadhyay, Atanu; Sadhu, Pradip Kumar; Pal, Nitai
2010-10-01
The design and optimization of the heating coil parameters are very important for the analytical analysis of a high-frequency inverter-fed induction cooker. Moreover, accurate prediction of high-frequency winding loss (i.e., losses due to skin and proximity effects) is necessary, as the induction cooker is used in power electronics applications. At high frequency, current penetration into the conducting wire of the induction coil circuit is limited by the skin effect. To mitigate the skin effect, the heating coil is made of a bundled conductor, i.e., litz wire. In this paper the inductance and AC resistance of a litz-wire coil are calculated and optimized by considering input parameters such as wire type, shape, number of strands, number of spiral turns, number of twists per foot of the heating coil, and operating frequency. A high-frequency half-bridge series-resonant mirror inverter circuit is used, and taking the optimum values of inductance and AC resistance, the circuit is simulated through PSPICE. The results are feasible enough for real implementation.
NASA Technical Reports Server (NTRS)
Schmer, F. A. (Principal Investigator); Isakson, R. E.; Eidenshink, J. C.
1977-01-01
The author has identified the following significant results. Visual interpretation of 1:125,000 color LANDSAT prints produced timely level 1 maps of accuracies in excess of 80% for agricultural land identification. Accurate classification of agricultural land via digital analysis of LANDSAT CCT's required precise timing of the date of data collection with mid to late June optimum for western South Dakota. The LANDSAT repetitive nine day cycle over the state allowed the surface areas of stockdams and small reservoir systems to be monitored to provide a timely approximation of surface water conditions on the range. Combined use of DIRS, K-class, and LANDSAT CCT's demonstrated the ability to produce aspen maps of greater detail and timeliness than was available using US Forest Service maps. Visual temporal analyses of LANDSAT imagery improved highway map drainage information and were used to prepare a seven county drainage network. An optimum map of flood-prone areas was developed, utilizing high altitude aerial photography and USGS maps.
Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar
2015-09-01
In the present study, artificial neural network (ANN) modelling coupled with a particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low-density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. Further, on application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient ([Formula: see text]). Significant enhancement of LDPE bio-degradation using C. lunata SG1, by about 48 %, was achieved under optimum conditions. Thus, the novelty of the work lies in the application of the combined ANN-PSO optimization strategy to enhance the bio-degradation of LDPE.
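The ANN-PSO coupling can be sketched generically: PSO searches the input space of a surrogate model for its minimizer. The surrogate below is a toy quadratic standing in for the trained ANN, and all constants (swarm size, inertia, bounds) are illustrative assumptions:

```python
import random

# Minimal particle swarm optimization sketch. f is the objective (here a toy
# stand-in for the negative ANN-predicted degradation), bounds give per-variable
# search ranges (e.g. pH and temperature).
def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_val = [f(p) for p in pos]
    gbest = pbest[pbest_val.index(min(pbest_val))][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the feasible range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy surrogate with optimum at pH 7.6, 38.0 degC (illustrative, not the ANN).
surrogate = lambda x: (x[0] - 7.6) ** 2 + 0.01 * (x[1] - 38.0) ** 2
best = pso(surrogate, [(4.0, 10.0), (20.0, 50.0)])
```

In the paper's setting the surrogate would be the trained ANN evaluated on (temperature, pH, contact time, agitation).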
Multivariable control theory applied to hierarchical attitude control for planetary spacecraft
NASA Technical Reports Server (NTRS)
Boland, J. S., III; Russell, D. W.
1972-01-01
Multivariable control theory is applied to the design of a hierarchical attitude control system for the CARD space vehicle. The system selected uses reaction control jets (RCJ) and control moment gyros (CMG). The RCJ system uses linear signal mixing and a no-fire region similar to that used on the Skylab program; the y-axis and z-axis systems, which are coupled, use a sum and difference feedback scheme. The CMG system uses the optimum steering law and the same feedback signals as the RCJ system. When both systems are active the design is such that the torques from each system are never in opposition. A state-space analysis was made of the CMG system to determine the general structure of the input matrices (steering law) and feedback matrices that will decouple the axes. It is shown that the optimum steering law and proportional-plus-rate feedback are special cases. A derivation of the disturbing torques on the space vehicle due to the motion of the on-board television camera is presented. A procedure for computing an upper bound on these torques (given the system parameters) is included.
NASA Astrophysics Data System (ADS)
Prasetya, A.; Mawadati, A.; Putri, A. M. R.; Petrus, H. T. B. M.
2018-01-01
Comminution is one of the crucial steps in gold ore processing, used to liberate the valuable minerals from the gangue. This research finds the particle size distribution of gold ore after it has been treated through the comminution process in a rod mill with various rod numbers and rotational speeds, resulting in one optimum milling condition. For the initial step, Sumbawa gold ore was crushed and then sieved to pass the 2.5 mesh and be retained on the 5 mesh (this condition was taken to mimic real application in artisanal gold mining). After inserting the prepared sample into the rod mill, the effects of rod number and rotational speed were observed by varying the rod number between 7 and 10 while the rotational speed was varied over 60, 85, and 110 rpm. To estimate the particle distribution for every condition, comminution kinetics were applied by taking samples at 15, 30, 60, and 120 minutes for size distribution analysis. The change of particle distribution of the top and bottom product as a time series was then treated using the Rosin-Rammler distribution equation. The results show that the homogeneity of particle size and the particle size distribution are affected by rod number and rotational speed. The particle size distribution becomes more homogeneous with increasing milling time, regardless of rod number and rotational speed. The mean particle size does not change significantly after 60 minutes of milling time. Experimental results showed that the optimum condition was achieved at a rotational speed of 85 rpm using 7 rods.
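The Rosin-Rammler treatment mentioned here can be sketched as follows; the fitting routine linearizes the distribution, and the parameter values below are illustrative, not the study's data:

```python
import math

# Rosin-Rammler cumulative size distribution: R(d) = exp(-(d/d63)**n), where
# R is the mass fraction retained above size d, d63 the characteristic size
# (63.2% passing) and n the uniformity index. Values here are illustrative.
def rosin_rammler_retained(d, d63, n):
    return math.exp(-((d / d63) ** n))

def fit_rosin_rammler(sizes, retained):
    # Linearize: ln(-ln R) = n*ln d - n*ln d63, then ordinary least squares.
    xs = [math.log(d) for d in sizes]
    ys = [math.log(-math.log(r)) for r in retained]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    d63 = math.exp(xbar - ybar / n)   # from intercept -n*ln(d63)
    return d63, n

# Round-trip check on synthetic sieve data (sizes in mm, say).
sizes = [0.1, 0.2, 0.5, 1.0, 2.0]
retained = [rosin_rammler_retained(d, 0.8, 1.4) for d in sizes]
d63, n = fit_rosin_rammler(sizes, retained)
```

A larger n means a more homogeneous (narrower) size distribution, which is how the abstract's homogeneity trend with milling time would show up in the fitted parameters.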
A Step-Wise Approach to Elicit Triangular Distributions
NASA Technical Reports Server (NTRS)
Greenberg, Marc W.
2013-01-01
Adapt/combine known methods to demonstrate an expert judgment elicitation process that: 1. models the expert's inputs as a triangular distribution, 2. incorporates techniques to account for expert bias, and 3. is structured in a way that helps justify the expert's inputs. This paper will show one way of "extracting" expert opinion for estimating purposes. Nevertheless, as with most subjective methods, there are many ways to do this.
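As an illustrative sketch (the numbers and variable names are assumptions, not from the paper), an expert's low/most-likely/high judgment can be turned into a triangular distribution whose moments and Monte Carlo samples follow from the standard formulas:

```python
import random

# Moments of a triangular distribution elicited as (min a, mode m, max b).
def triangular_moments(a, m, b):
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var

# Hypothetical elicited judgment, e.g. a cost estimate in $K.
a, m, b = 100.0, 130.0, 200.0
mean, var = triangular_moments(a, m, b)

# Monte Carlo draws; note random.triangular's argument order is (low, high, mode).
random.seed(42)
samples = [random.triangular(a, b, m) for _ in range(50_000)]
sample_mean = sum(samples) / len(samples)
```

The sample mean converges to the analytic mean (a + m + b)/3, which is one quick consistency check when using elicited triangles in a cost-risk simulation.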
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively.
As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
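A minimal numerical sketch of this style of Monte Carlo uncertainty propagation, with illustrative placeholder values for the nitrogen input and EF distributions (not the study's data):

```python
import math
import random

# Emissions = N input * EF, with a normal N input and a lognormal EF.
# All distribution parameters below are assumed for illustration only.
random.seed(0)
N_MEAN, N_CV = 1.0e6, 0.10        # nitrogen input (kg N/yr) and its CV
EF_MEDIAN, EF_GSD = 0.01, 1.8     # EF median and geometric standard deviation

def draw_emission():
    n_in = random.gauss(N_MEAN, N_CV * N_MEAN)
    ef = EF_MEDIAN * math.exp(random.gauss(0.0, math.log(EF_GSD)))
    return n_in * ef

draws = sorted(draw_emission() for _ in range(100_000))
lo, med, hi = draws[2_500], draws[50_000], draws[97_500]
# The lognormal EF makes the 95% interval asymmetric about the median:
# the upper half-width exceeds the lower one, mirroring the paper's
# "-32% and +88%" style confidence limits.
```

Extending this with a correlated (N input, EF) draw reproduces the paper's second scenario; the asymmetry is driven almost entirely by the EF term, consistent with EF dominating the emissions uncertainty.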
Selection on skewed characters and the paradox of stasis
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2018-01-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold’s (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate samples size. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date – repeatedly described as more evolutionarily stable than expected –, so this skewness should be accounted for when investigating evolutionary dynamics in the wild. PMID:28921508
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trianti, Nuri, E-mail: nuri.trianti@gmail.com; Nurjanah; Su’ud, Zaki
Thermal hydraulics of a reactor core is the study of the fluids within the core, i.e., analysis of the transfer of the thermal energy produced by the fission reaction from the fuel to the reactor coolant. This study includes the coolant temperature and reactor power density distributions. The purposes of this analysis in the design of a nuclear power plant are to calculate the coolant temperature distribution and the chimney height at which natural circulation can occur. This study used a boiling water reactor (BWR) with a cylindrical reactor core. Several reactor core properties such as linear power density, mass flow rate, coolant density and inlet temperature were taken into account to obtain the distributions of coolant density, flow rate and pressure drop. The results of the calculation are as follows. Thermal hydraulic calculations give a uniform pressure drop of 1.1 bar for each channel. The optimum mass flow rate to obtain the uniform pressure drop is 217 g/s. Furthermore, the calculated outlet temperature is 288°C, which is the saturated fluid temperature within the system. The optimum chimney height for natural circulation within the system is 14.88 m.
NASA Astrophysics Data System (ADS)
Feng, Chenchen; Jiao, Zhengbo; Li, Shaopeng; Zhang, Yan; Bi, Yingpu
2015-12-01
We demonstrate a facile method for the rational fabrication of pore-size controlled nanoporous BiVO4 photoanodes, and confirmed that the optimum pore-size distributions could effectively absorb visible light through light diffraction and confinement functions. Furthermore, in situ X-ray photoelectron spectroscopy (XPS) reveals more efficient photoexcited electron-hole separation than conventional particle films, induced by light confinement and rapid charge transfer in the inter-crossed worm-like structures. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06584d
Liquid sprays and flow studies in the direct-injection diesel engine under motored conditions
NASA Technical Reports Server (NTRS)
Nguyen, Hung Lee; Carpenter, Mark H.; Ramos, Juan I.; Schock, Harold J.; Stegeman, James D.
1988-01-01
A two dimensional, implicit finite difference method of the control volume variety, a two equation model of turbulence, and a discrete droplet model were used to study the flow field, turbulence levels, fuel penetration, vaporization, and mixing in diesel engine environments. The model was also used to study the effects of engine speed, injection angle, spray cone angle, droplet distribution, and intake swirl angle on the flow field, spray penetration and vaporization, and turbulence in motored two-stroke diesel engines. It is shown that there are optimum conditions for injection, which depend on droplet distribution, swirl, spray cone angle, and injection angle. The optimum conditions result in good spray penetration and vaporization and in good fuel mixing. The calculation presented clearly indicates that internal combustion engine models can be used to assess, at least qualitatively, the effects of injection characteristics and engine operating conditions on the flow field and on the spray penetration and vaporization in diesel engines.
NASA Astrophysics Data System (ADS)
Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang
2016-04-01
This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
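The flavor of such a replacement-schedule optimization can be illustrated with the classical single-unit age replacement model, a much simpler stand-in for the paper's multi-state formulation; all parameter values below are assumed:

```python
import math

# Age replacement: replace on failure (cost CF) or at age T (cost CP < CF).
# With Weibull lifetimes, minimize the mean cost rate
#   C(T) = (CF*F(T) + CP*R(T)) / integral_0^T R(t) dt,   R = 1 - F.
SHAPE, SCALE = 2.5, 10.0   # Weibull parameters (assumed; increasing hazard)
CF, CP = 50.0, 10.0        # failure vs preventive replacement cost (assumed)

def reliability(t):
    return math.exp(-((t / SCALE) ** SHAPE))

def cost_rate(T, steps=2000):
    dt = T / steps
    # left-rectangle approximation of the expected cycle length
    mean_uptime = sum(reliability(i * dt) * dt for i in range(steps))
    return (CF * (1 - reliability(T)) + CP * reliability(T)) / mean_uptime

# Grid search for the optimum replacement age.
grid = [0.5 + 0.05 * i for i in range(400)]
T_opt = min(grid, key=cost_rate)
```

Because the hazard increases (shape > 1) and CF > CP, the cost rate has an interior minimum: replacing too early wastes preventive cost, too late incurs failures. The paper's recursive multi-state computation plays the role of F(t) here.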
NASA Astrophysics Data System (ADS)
Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi
2004-06-01
We have proposed an all-optical authentic memory with a two-wave encryption method. In the recording process, the image data are encrypted into a white noise by the random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into a white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Therefore our memory has the merit that unlawful access can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays important roles in transforming the input image into a white noise and in preventing decryption of the white noise back to the input image by the blind deconvolution method. Without this mask, if unauthorized users observe the output beam with a CCD during readout with a plane wave, exactly the same intensity distribution as that of the Fourier transform of the input image is obtained, and the encrypted image can then be decrypted easily by the blind deconvolution method. With this mask, even if unauthorized users observe the output beam using the same method, the encrypted image cannot be decrypted because the observed intensity distribution is dispersed at random by the mask, so the robustness is increased. In this report, we compare the correlation coefficients between the output image and the input image, which represent the degree to which the output is a white noise, with and without this mask. We show that the robustness of this encryption method is increased, as the correlation coefficient is improved from 0.3 to 0.1 by using this mask.
Optimum Operating Conditions for PZT Actuators for Vibrotactile Wearables
NASA Astrophysics Data System (ADS)
Logothetis, Irini; Matsouka, Dimitra; Vassiliadis, Savvas; Vossou, Clio; Siores, Elias
2018-04-01
Recently, vibrotactile wearables have received much attention in fields such as medicine, psychology, athletics and video gaming. The electrical components presently used to generate vibration are rigid; hence, the design and creation of ergonomical wearables are limited. Significant advances in piezoelectric components have led to the production of flexible actuators such as piezoceramic lead zirconate titanate (PZT) film. To verify the functionality of PZT actuators for use in vibrotactile wearables, the factors influencing the electromechanical conversion were analysed and tested. This was achieved through theoretical and experimental analyses of a monomorph clamped-free structure for the PZT actuator. The research performed for this article is a three-step process. First, a theoretical analysis presents the equations governing the actuator. In addition, the eigenfrequency of the film was analysed preceding the experimental section. For this stage, by applying an electric voltage and varying the stimulating electrical characteristics (i.e., voltage, electrical waveform and frequency), the optimum operating conditions for a PZT film were determined. The tip displacement was measured referring to the mechanical energy converted from electrical energy. From the results obtained, an equation for the mechanical behaviour of PZT films as actuators was deduced. It was observed that the square waveform generated larger tip displacements. In conjunction with large voltage inputs at the predetermined eigenfrequency, the optimum operating conditions for the actuator were achieved. To conclude, PZT films can be adapted to assist designers in creating comfortable vibrotactile wearables.
Multi-KW dc distribution system technology research study
NASA Technical Reports Server (NTRS)
Dawson, S. G.
1978-01-01
The Multi-KW DC Distribution System Technology Research Study is the third phase of the NASA/MSFC study program. The purpose of this contract was to complete the design of the integrated technology test facility, provide test planning, support test operations and evaluate test results. The subject of this study is a continuation of this contract. The purpose of this continuation is to study and analyze high voltage system safety, to determine optimum voltage levels versus power, to identify power distribution system components which require development for higher voltage systems and finally to determine what modifications must be made to the Power Distribution System Simulator (PDSS) to demonstrate 300 Vdc distribution capability.
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
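The matrix permanent at the heart of these hardness results can be computed exactly for tiny matrices with Ryser's inclusion-exclusion formula; this sketch also shows why exact computation is infeasible at scale, since the sum ranges over all 2^n column subsets:

```python
from itertools import combinations

# Ryser's formula: perm(M) = sum over nonempty column subsets S of
# (-1)^(n-|S|) * prod_i (sum of row i over S). Exponential time by design.
def permanent(M):
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

# The permanent of the all-ones 3x3 matrix counts permutations: 3! = 6.
ones = [[1.0] * 3 for _ in range(3)]
```

For the positive-semidefinite Hermitian matrices arising from thermal input states, the paper's point is precisely that an approximation (via the efficient sampling algorithm) sidesteps this exponential exact computation.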
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
Social optimum for evening commute in a single-entry traffic corridor with no early departures
NASA Astrophysics Data System (ADS)
Li, Chuan-Yao; Xu, Guang-Ming; Tang, Tie-Qiao
2018-07-01
In this paper, we investigate evening commute behaviors under the social optimum (SO) state in a single-entry traffic corridor with no early departures. Differing from previous studies on the evening commute, the dynamic properties of traffic flow are analyzed with the LWR (Lighthill-Whitham-Richards) model. The properties of the optimum cumulative inflow curve with a general desired departure time distribution curve are deduced, and then the analytic solutions for a common desired departure time in SO are obtained. Three numerical examples are carried out to capture the characteristics of evening commuting behaviors under different values of time. The analytic and numerical results both indicate that the rarefaction wave originating from the first entry point influences the whole or part of the outflow curve. No shock wave exists through the commuting process. In addition, the cost curves show that the trip cost increases and the departure delay cost decreases with departure time, whereas the travel time cost first increases then decreases with departure time under the SO principle.
Simulation of speckle patterns with pre-defined correlation distributions.
Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S
2016-03-01
We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
New method for calculating the coupling coefficient in graded index optical fibers
NASA Astrophysics Data System (ADS)
Savović, Svetislav; Djordjevich, Alexandar
2018-05-01
A simple method is proposed for determining the mode coupling coefficient D in graded index multimode optical fibers. It only requires observation of the output modal power distribution P(m, z) for one fiber length z as the Gaussian launch modal power distribution changes, with the Gaussian input light distribution centered along the graded index optical fiber axis (θ0 = 0) without radial offset (r0 = 0). We previously proposed a similar method for calculating the coupling coefficient D in step-index multimode optical fibers, which requires knowledge of the output angular power distribution P(θ, z) for one fiber length z with the Gaussian input light distribution launched centrally along the step-index optical fiber axis (θ0 = 0).
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
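One step of this partitioning can be sketched directly: the gray-level intersection of two weighted 1-D Gaussian components reduces to a quadratic obtained by equating the two weighted densities in log space. This is a generic illustration of the idea with assumed parameters, not the authors' full algorithm:

```python
import math

# Intersection(s) of w1*N(x; mu1, s1) and w2*N(x; mu2, s2): equate the
# log-densities to get a*x^2 + b*x + c = 0 and solve.
def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = mu1 / s1 ** 2 - mu2 / s2 ** 2
    c = (mu2 ** 2 / (2 * s2 ** 2) - mu1 ** 2 / (2 * s1 ** 2)
         + math.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:              # equal variances: the equation is linear
        return [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# Two equal-weight, equal-spread components around gray levels 60 and 180
# intersect midway, giving 120 as the boundary between the two input intervals.
cuts = gaussian_intersections(0.5, 60.0, 15.0, 0.5, 180.0, 15.0)
```

In the full algorithm these intersection points, computed for adjacent fitted components, partition the image's dynamic range into the input gray-level intervals that are then remapped.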
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
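For context, the Lawrence Berkeley National Laboratory infiltration model on which such algorithms build is commonly written as Q = ELA * sqrt(Cs*|ΔT| + Cw*U^2), with Q the infiltration airflow, ELA the effective leakage area, ΔT the indoor-outdoor temperature difference, and U the wind speed. A minimal sketch of the resulting air exchange rate, using placeholder stack (Cs) and wind (Cw) coefficients rather than the study's inputs:

```python
import math

def lbl_air_exchange_rate(ela_m2, volume_m3, dT_K, wind_ms,
                          Cs=0.000145, Cw=0.000104):
    """Air exchange rate (1/h) from the LBL infiltration model's standard
    form Q = ELA * sqrt(Cs*|dT| + Cw*U^2).

    Cs and Cw defaults are placeholder values for illustration, not the
    shielding/terrain-specific coefficients used in the paper.
    """
    q = ela_m2 * math.sqrt(Cs * abs(dT_K) + Cw * wind_ms ** 2)  # m^3/s
    return q * 3600.0 / volume_m3  # air changes per hour
```

As expected from the formula, the modeled AER grows with both the temperature difference (stack effect) and the wind speed.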
All-weld-metal design for AWS E10018M, E11018M and E12018M type electrodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surian, E.S.; Vedia, L.A. de
This paper presents the results of a research program conducted to design the all-weld metal deposited with AWS A5.5-81 E10018M, E11018M and E12018M SMAW-type electrodes. The role that different alloying elements such as manganese, carbon and chromium play on the tensile properties, hardness and toughness as well as on the microstructure was studied. Criteria for selecting the weld metal composition leading to optimum combination of tensile strength and toughness are suggested. The effect of the variation of heat input, within the requirements of the AWS standard, on the mentioned properties was also analyzed. It was found that the E11018M and E12018M all-weld-metal tensile properties are very sensitive to variations in heat input. For certain values of chemical composition, welding parameter ranges suitable to guarantee the fulfillment of AWS requirements were determined.
Prediction of municipal solid waste generation using nonlinear autoregressive network.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A
2015-12-01
Most developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, owing to population growth and the new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables such as population, gross domestic product, electricity demand per capita, and employment and unemployment numbers. In addition, variable selection procedures are developed to select significant explanatory variables. Model evaluation was performed using the coefficient of determination (R(2)) and mean square error (MSE). The optimum model, which produced the lowest testing MSE (2.46) and the highest R(2) (0.97), had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
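A NARX model's defining step is the construction of regressors from lagged outputs and lagged exogenous drivers. The sketch below is a generic illustration of that step, not the paper's implementation; the variable names are placeholders.

```python
import numpy as np

def narx_design_matrix(y, X, lag=1):
    """Build NARX regressors: lagged target plus lagged exogenous inputs.

    y : (T,) target series, e.g. annual waste generation
    X : (T, k) exogenous drivers, e.g. GDP, population, employment
    Returns an (T-lag, lag + k*lag) feature matrix and (T-lag,) targets.
    """
    rows = []
    for t in range(lag, len(y)):
        past_y = y[t - lag:t]          # autoregressive terms
        past_x = X[t - lag:t].ravel()  # exogenous terms, flattened
        rows.append(np.concatenate([past_y, past_x]))
    return np.array(rows), y[lag:]
```

The resulting matrix is what the feedforward network (here, one trained with a conjugate-gradient method) would be fitted on.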
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off among transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations positively assert the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Shanthi, M; Rajesh Banu, J; Sivashanmugam, P
2018-05-15
The present study explored the disintegration potential of fruit and vegetable residue through sodium dodecyl sulphate (SDS) assisted sonic pretreatment (SSP). In the SSP method, the biomass barrier (lignin) was first removed using SDS at different dosages, and the residue was subsequently disintegrated sonically. The effects of SSP were assessed based on the dissolved organic release (DOR) of fruit and vegetable waste and the specific energy input. The SSP method achieved a higher DOR and suspended solids reduction (26% and 16%, respectively) at an optimum SDS dosage of 0.035 g/g SS with a least specific energy input of 5400 kJ/kg TS, compared with ultrasonic pretreatment (UP) (16% and 10%). Fermentation and biomethane potential assays revealed the highest volatile fatty acid production and methane yield in SSP (1950 mg/L and 0.6 g/g COD) compared with UP. The energy ratio obtained was 0.9 for SSP, indicating that the proposed method is energetically efficient. Copyright © 2018 Elsevier Ltd. All rights reserved.
Allken, Vaneeda; Chepkoech, Joy-Loi; Einevoll, Gaute T; Halnes, Geir
2014-01-01
Inhibitory interneurons (INs) in the lateral geniculate nucleus (LGN) provide both axonal and dendritic GABA output to thalamocortical relay cells (TCs). Distal parts of the IN dendrites often enter into complex arrangements known as triadic synapses, where the IN dendrite plays a dual role as postsynaptic to retinal input and presynaptic to TC dendrites. Dendritic GABA release can be triggered by retinal input, in a highly localized process that is functionally isolated from the soma, but can also be triggered by somatically elicited Ca(2+)-spikes and possibly by backpropagating action potentials. Ca(2+)-spikes in INs are predominantly mediated by T-type Ca(2+)-channels (T-channels). Due to the complex nature of the dendritic signalling, the function of the IN is likely to depend critically on how T-channels are distributed over the somatodendritic membrane (T-distribution). To study the relationship between the T-distribution and several IN response properties, we here run a series of simulations where we vary the T-distribution in a multicompartmental IN model with a realistic morphology. We find that the somatic response to somatic current injection is facilitated by a high T-channel density in the soma-region. Conversely, a high T-channel density in the distal dendritic region is found to facilitate dendritic signalling in both the outward direction (increases the response in distal dendrites to somatic input) and the inward direction (the soma responds stronger to distal synaptic input). The real T-distribution is likely to reflect a compromise between several neural functions, involving somatic response patterns and dendritic signalling.
Optimum design of Geodesic dome’s jointing system
NASA Astrophysics Data System (ADS)
Tran, Huy. T.
2018-04-01
This study attempts to create a new design for the joint connector of a Geodesic dome. A new type of joint connector is proposed for a flexible rotating connection; compared with existing designs, it is cheaper and workable. After calculating the bearing capacity of the sample according to EC3 and the Vietnamese standard TCVN 5575-2012, an FEM model of the design sample is analysed in many specific situations to examine the stress distribution, deformation, and local failure in the connector. The analytical results and the FE data are consistent. The FE analysis also reveals the behaviour of some details that simple calculation cannot show. Hence, we can choose the optimum design of the joint connector.
Capacity of the generalized PPM channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Klimesh, Matt; McEliece, Bob; Moision, Bruce
2004-01-01
We show that the capacity of a generalized pulse-position-modulation (PPM) channel, in which the input vectors may be any set that admits a transitive group of coordinate permutations, is achieved by a uniform input distribution.
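As a simpler special case than the generalized channel treated here, the capacity of an M-ary symmetric channel (e.g. hard-decision PPM with symbol error probability p, errors uniform over the other M-1 symbols) under the uniform input distribution has the standard closed form, evaluated below:

```python
import math

def ppm_symmetric_capacity(M, p):
    """Capacity in bits/symbol of an M-ary symmetric channel with symbol
    error probability p, achieved by the uniform input distribution:
        C = log2(M) + (1-p)*log2(1-p) + p*log2(p/(M-1)).
    """
    if p == 0.0:
        return math.log2(M)
    return (math.log2(M)
            + (1 - p) * math.log2(1 - p)
            + p * math.log2(p / (M - 1)))
```

At p = 0 the capacity is log2(M) bits per symbol, and it falls to zero when the output is independent of the input (p = (M-1)/M).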
Spatially Distributed Dendritic Resonance Selectively Filters Synaptic Input
Segev, Idan; Shamma, Shihab
2014-01-01
An important task performed by a neuron is the selection of relevant inputs from among thousands of synapses impinging on the dendritic tree. Synaptic plasticity enables this by strengthening a subset of synapses that are, presumably, functionally relevant to the neuron. A different selection mechanism exploits the resonance of the dendritic membranes to preferentially filter synaptic inputs based on their temporal rates. A widely held view is that a neuron has one resonant frequency and thus can pass through one rate. Here we demonstrate through mathematical analyses and numerical simulations that dendritic resonance is inevitably a spatially distributed property; and therefore the resonance frequency varies along the dendrites, and thus endows neurons with a powerful spatiotemporal selection mechanism that is sensitive both to the dendritic location and the temporal structure of the incoming synaptic inputs. PMID:25144440
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-01-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. PMID:24945075
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
Mass resolution of linear quadrupole ion traps with round rods.
Douglas, D J; Konenkov, N V
2014-11-15
Auxiliary dipole excitation is widely used to eject ions from linear radio-frequency quadrupole ion traps for mass analysis. Linear quadrupoles are often constructed with round rod electrodes. The higher multipoles introduced to the electric potential by round rods might be expected to change the ion ejection process. We have therefore investigated the optimum ratio of rod radius, r, to field radius, r0, for excitation and ejection of ions. Trajectory calculations are used to determine the excitation contour, S(q), the fraction of ions ejected when trapped at q values close to the ejection (or excitation) q. Initial conditions are randomly selected from Gaussian distributions of the x and y coordinates and a thermal distribution of velocities. The N = 6 (12 pole) and N = 10 (20 pole) multipoles are added to the quadrupole potential. Peak shapes and resolution were calculated for ratios r/r0 from 1.09 to 1.20 with an excitation time of 1000 cycles of the trapping radio-frequency. Ratios r/r0 in the range 1.140 to 1.160 give the highest resolution and peaks with little tailing. Ratios outside this range give lower resolution and peaks with tails on either the low-mass side or the high-mass side of the peaks. This contrasts with the optimum ratio of 1.126-1.130 for a quadrupole mass filter operated conventionally at the tip of the first stability region. With the optimum geometry the resolution is 2.7 times greater than with an ideal quadrupole field. Adding only a 2.0% hexapole field to a quadrupole field increases the resolution by a factor of 1.6 compared with an ideal quadrupole field. Addition of a 2.0% octopole lowers resolution and degrades peak shape. With the optimum value of r/r0 , the resolution increases with the ejection time (measured in cycles of the trapping rf, n) approximately as R0.5 = 6.64n, in contrast to a pure quadrupole field where R0.5 = 1.94n. 
Adding weak nonlinear fields to a quadrupole field can improve the resolution with mass-selective ejection of ions by up to a factor of 2.7. The optimum ratio r/r0 is 1.14 to 1.16, which differs from the optimum ratio for a mass filter of 1.128-1.130. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. Corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant, and the vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
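The vertical layer loop at the heart of such a model can be sketched as follows. This toy version assumes simple Beer-Lambert extinction per layer with a single absorption coefficient, whereas the actual model uses band-model transmission functions for each gas integrated over the radiometer's spectral response:

```python
import numpy as np

def transmittance_to_space(layer_amounts, k):
    """Transmittance from each atmospheric level to the top of the atmosphere.

    layer_amounts[i] is the absorber amount in layer i (surface first);
    k is an illustrative absorption coefficient. Optical depth above level i
    is the sum of k*u over layers i..top, so the surface sees the most
    attenuation.
    """
    u = np.asarray(layer_amounts, dtype=float)
    tau_above = np.cumsum((k * u)[::-1])[::-1]
    return np.exp(-tau_above)
```

In the full model these per-level transmittances weight the Planck emission of each layer to produce the correction to the surface radiance.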
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
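The core trade-off, that each variable's contribution to the variance of the output estimate shrinks roughly as 1/n_i with its sample count, can be illustrated with a greedy allocator. This is a sketch of the idea only; the paper itself samples population means and covariances via multivariate t and Wishart distributions and optimizes with particle swarm.

```python
def allocate_experiments(sens, sigma, n0, budget):
    """Greedily assign additional experiments to whichever input variable
    currently contributes the most variance to the output estimate.

    sens  : output sensitivity to each variable (illustrative local slopes)
    sigma : standard deviation of each input variable
    n0    : initial sample counts per variable
    Contribution of variable i with n_i samples: (sens_i * sigma_i)**2 / n_i.
    """
    n = list(n0)
    for _ in range(budget):
        contrib = [(s * sd) ** 2 / ni for s, sd, ni in zip(sens, sigma, n)]
        n[contrib.index(max(contrib))] += 1
    return n
```

As the abstract notes, the variables with large variance, high influence on the output, and little initial data attract the additional experiments.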
Rice, Amber; Fuglevand, Andrew J; Laine, Christopher M; Fregosi, Ralph F
2011-05-01
The respiratory central pattern generator distributes rhythmic excitatory input to phrenic, intercostal, and hypoglossal premotor neurons. The degree to which this input shapes motor neuron activity can vary across respiratory muscles and motor neuron pools. We evaluated the extent to which respiratory drive synchronizes the activation of motor unit pairs in tongue (genioglossus, hyoglossus) and chest-wall (diaphragm, external intercostals) muscles using coherence analysis. This is a frequency domain technique, which characterizes the frequency and relative strength of neural inputs that are common to each of the recorded motor units. We also examined coherence across the two tongue muscles, as our previous work shows that, despite being antagonists, they are strongly coactivated during the inspiratory phase, suggesting that excitatory input from the premotor neurons is distributed broadly throughout the hypoglossal motoneuron pool. All motor unit pairs showed highly correlated activity in the low-frequency range (1-8 Hz), reflecting the fundamental respiratory frequency and its harmonics. Coherence of motor unit pairs recorded either within or across the tongue muscles was similar, consistent with broadly distributed premotor input to the hypoglossal motoneuron pool. Interestingly, motor units from diaphragm and external intercostal muscles showed significantly higher coherence across the 10-20-Hz bandwidth than tongue-muscle units. We propose that the lower coherence in tongue-muscle motor units over this range reflects a larger constellation of presynaptic inputs, which collectively lead to a reduction in the coherence between hypoglossal motoneurons in this frequency band. This, in turn, may reflect the relative simplicity of the respiratory drive to the diaphragm and intercostal muscles, compared with the greater diversity of functions fulfilled by muscles of the tongue.
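Magnitude-squared coherence from segment-averaged periodograms can be sketched in a few lines. This is a bare-bones Welch-style estimate on generic signals; the study's motor-unit analysis will differ in spike-train preprocessing and windowing choices.

```python
import numpy as np

def coherence(x, y, seg=256):
    """Magnitude-squared coherence |Sxy|^2 / (Sxx*Syy) from Hann-windowed,
    segment-averaged periodograms. Returns one value per rfft frequency bin.
    Averaging over many segments is essential: with a single segment the
    estimate is identically 1."""
    win = np.hanning(seg)
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for i in range(len(x) // seg):
        X = np.fft.rfft(x[i * seg:(i + 1) * seg] * win)
        Y = np.fft.rfft(y[i * seg:(i + 1) * seg] * win)
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)
```

Two units sharing a common rhythmic drive show coherence near 1 at the drive frequency and near chance level elsewhere, which is the signature the study exploits in the 1-8 Hz and 10-20 Hz bands.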
Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity
Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon
2011-01-01
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impact the post-synaptic firing of a conductance-based integrate and fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
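Gamma renewal inputs of the kind used here are straightforward to generate: with shape factor 1 the process is Poisson, and larger shape factors give more regular trains, since the coefficient of variation of the inter-spike intervals is 1/sqrt(shape). A minimal sketch:

```python
import numpy as np

def gamma_spike_train(rate, shape, n_spikes, rng):
    """Renewal spike train with gamma-distributed inter-spike intervals.

    Mean ISI is 1/rate regardless of shape; shape = 1 reproduces a Poisson
    process, larger shape gives more regular (lower-CV) firing.
    """
    isis = rng.gamma(shape, scale=1.0 / (shape * rate), size=n_spikes)
    return np.cumsum(isis)
```

Varying the shape factor while holding the rate fixed is exactly the manipulation used to separate the effect of input auto-structure from the effect of input rate.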
Zeroth-order phase-contrast technique.
Pizolato, José Carlos; Cirino, Giuseppe Antonio; Gonçalves, Cristhiane; Neto, Luiz Gonçalves
2007-11-01
What we believe to be a new phase-contrast technique is proposed to recover intensity distributions from phase distributions modulated by spatial light modulators (SLMs) and binary diffractive optical elements (DOEs). The phase distribution is directly transformed into intensity distributions using a 4f optical correlator and an iris centered in the frequency plane as a spatial filter. No phase-changing plates or phase dielectric dots are used as a filter. This method allows the use of twisted nematic liquid-crystal televisions (LCTVs) operating in the real-time phase-mostly regime between 0 and π to generate high-intensity multiple beams for optical trap applications. It is also possible to use these LCTVs as input SLMs for optical correlators to obtain high-intensity Fourier transform distributions of input amplitude objects.
Millimeter-wave/infrared rectenna development at Georgia Tech
NASA Technical Reports Server (NTRS)
Gouker, Mark A.
1989-01-01
The key design issues of the Millimeter Wave/Infrared (MMW/IR) monolithic rectenna have been resolved. The work at Georgia Tech in the last year has focused on increasing the power received by the physically small MMW rectennas in order to increase the rectification efficiency. The solution to this problem is to place a focusing element on the back side of the substrate. The size of the focusing element can be adjusted to help maintain the optimum input power density not only for different power densities called for in various mission scenarios, but also for the nonuniform power density profile of a narrow EM-beam.
System identification of an unmanned quadcopter system using MRAN neural
NASA Astrophysics Data System (ADS)
Pairan, M. F.; Shamsudin, S. S.
2017-12-01
This project presents a performance analysis of the radial basis function (RBF) neural network trained with the Minimal Resource Allocating Network (MRAN) algorithm for real-time identification of a quadcopter. MRAN's performance is compared with that of an RBF network trained with the Constant Trace algorithm on 2500 sampled input-output data pairs. MRAN uses an adding-and-pruning hidden neuron strategy to obtain an optimum RBF structure, increase prediction accuracy and reduce training time. The results indicate that the MRAN algorithm produces faster training and more accurate predictions than the standard RBF network. The model proposed in this paper is capable of identifying and modelling a nonlinear representation of the quadcopter flight dynamics.
Design of a CO2 Twin Rotary Compressor for a Heat Pump Water Heater
NASA Astrophysics Data System (ADS)
Ahn, Jong Min; Kim, Woo Young; Kim, Hyun Jin; Cho, Sung Oug; Seo, Jong Cheun
2010-06-01
A one-stage twin rotary compressor has been designed for a CO2 heat pump water heater. As a design tool, a computer simulation program for the compressor performance was developed. The simulation program was validated against a bench model compressor in a compressor calorimeter. Cooling capacity and compressor input power agreed reasonably well between the simulation and the calorimeter test, and good agreement on the P-V diagram was also obtained. With this validated compressor simulation program, a parametric study was performed to arrive at optimum dimensions for the compression chamber.
Fundamental studies on a heat driven lamp
NASA Technical Reports Server (NTRS)
Lawless, J. L.
1985-01-01
A detailed theoretical study of a heat-driven lamp has been performed. This lamp uses a plasma produced in a thermionic diode. The light is produced by the resonance transition of cesium. An important result of this study is that up to 30% of the input heat is predicted to be converted to light in this device. This is a major improvement over ordinary thermionic energy converters in which only approx. 1% is converted to resonance radiation. Efficiencies and optimum inter-electrode spacings have been found as a function of cathode temperature and the radiative escape factor. The theory developed explains the operating limits of the device.
Frequency comb generation in a silicon ring resonator modulator.
Demirtzioglou, Iosif; Lacava, Cosimo; Bottrill, Kyle R H; Thomson, David J; Reed, Graham T; Richardson, David J; Petropoulos, Periklis
2018-01-22
We report on the generation of an optical comb of frequency lines that are highly uniform in power (variation less than 0.7 dB) using a silicon ring resonator modulator. A characterization involving the measurement of the complex transfer function of the ring is presented, and five frequency tones with a 10-GHz spacing are produced using a dual-frequency electrical input at 10 and 20 GHz. A comb shape comparison is conducted for different modulator bias voltages, indicating optimum operation at a small forward-bias voltage. A time-domain measurement confirmed that the comb signal was highly coherent, forming 20.3-ps-long pulses.
Young, Michelle N; Links, Mikaela J; Popat, Sudeep C; Rittmann, Bruce E; Torres, César I
2016-12-08
A microbial peroxide producing cell (MPPC) for H2O2 production at the cathode was systematically optimized with minimal energy input. First, the stability of H2O2 was evaluated using different catholytes, membranes, and catalyst materials. On the basis of these results, a flat-plate MPPC fed continuously with a 200 mM NaCl catholyte at a 4 h hydraulic retention time was designed and operated, producing H2O2 for 18 days. An H2O2 concentration of 3.1 g L-1 was achieved in the MPPC with a power input of 1.1 Wh g-1 H2O2. The high H2O2 concentration was a result of the optimum materials selected. The small energy input was largely the result of the 0.5 cm distance between the anode and cathode, which reduced ionic transport losses. However, >50% of operational overpotentials were due to the 4.5-5 pH unit difference between the anode and cathode chambers. The results demonstrate that a MPPC can continuously produce H2O2 at high concentration by selecting compatible materials and appropriate operating conditions. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Third order intermodulation distortion in HTS Josephson Junction downconverter at 12GHz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Katsumi; Hayashi, Kunihiko; Fujimoto, Manabu
1994-12-31
Here the authors report for the first time on the microwave characteristics of third-order intermodulation distortion (IMD3) in a High-Tc Superconductor (HTS) Josephson Junction (JJ) downconverter at 12 GHz. They have developed high-quality nonlinear YBCO microbridge Josephson junctions for an active MMIC mixer with RF, LO, IF and bias filters, fabricated on (100) MgO substrates with 20 mm x 20 mm x 0.5 mm dimensions. The minimum conversion loss of the JJ mixer is 11 dB at a very small local-oscillator input power of LO = −20 dBm, two orders of magnitude less than a Schottky diode mixer. Consequently, this small optimum LO power gives a small RF input power at which the output IF power of the YBCO mixer saturates. Two-tone third-order intercept point (IP3) performance is an important figure of merit typically used to define the linearity of devices and circuits. An RF input power of −15 dBm at the IP3 point is obtained for the YBCO mixer at 15 K and LO = 10.935 GHz with −22 dBm. They have also measured the dependence of IMD3 on temperature, bias current and LO power.
The human role in space (THURIS) applications study. Final briefing
NASA Technical Reports Server (NTRS)
Maybee, George W.
1987-01-01
The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.
Wing flapping with minimum energy. [minimize the drag for a bending moment at the wing root]
NASA Technical Reports Server (NTRS)
Jones, R. T.
1980-01-01
For slow flapping motions it is found that the minimum energy loss occurs when the vortex wake moves as a rigid surface that rotates about the wing root - a condition analogous to that determined for a slow-turning propeller. The optimum circulation distribution determined by this condition differs from the elliptic distribution, showing a greater concentration of lift toward the tips. It appears that very high propulsive efficiencies are obtained by flapping.
Bai, Yun; Zhang, Jiahua; Zhang, Sha; ...
2017-01-04
Here, recent studies have shown that global Penman-Monteith equation based (PM-based) models poorly simulate water stress when estimating evapotranspiration (ET) in areas having a Mediterranean climate (AMC). In this study, we propose a novel approach using precipitation, vertical root distribution (VRD), and satellite-retrieved vegetation information to simulate water stress in a PM-based model (RS-WBPM) to address this issue. A multilayer water balance module is employed to simulate the soil water stress factor (SWSF) of multiple soil layers at different depths. The water stress factor (WSF) for surface evapotranspiration is determined by VRD information and SWSF in each layer. Additionally, four older PM-based models (PMOV) are evaluated at 27 flux sites in AMC. Results show that PMOV fails to estimate the magnitude or capture the variation of ET in summer at most sites, whereas RS-WBPM is successful. The daily ET resulting from RS-WBPM incorporating recommended VI (NDVI for shrub and EVI for other biomes) agrees well with observations, with R2 = 0.60 (RMSE = 18.72 W m-2) for all 27 sites and R2 = 0.62 (RMSE = 18.21 W m-2) for 25 nonagricultural sites. However, combined results from the optimum older PM-based models at specific sites show R2 values of only 0.50 (RMSE = 20.74 W m-2) for all 27 sites. RS-WBPM is also found to outperform other ET models that also incorporate a soil water balance module. As all inputs of RS-WBPM are globally available, the results from RS-WBPM are encouraging and imply the potential of its implementation on regional and global scales.
NASA Astrophysics Data System (ADS)
Štolc, Svorad; Bajla, Ivan
2010-01-01
In this paper we describe basic functions of the Hierarchical Temporal Memory (HTM) network, based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is a systematic exploration of how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using the permutation test, which employs a randomization distribution of the training and testing data. Based on a notion of the homogeneous usage of input image pixels, a methodology for HTM parameter optimization is proposed. In order to study the effects of two substantial parameters of the architecture: the
NASA Astrophysics Data System (ADS)
Bai, Yun; Zhang, Jiahua; Zhang, Sha; Koju, Upama Ashish; Yao, Fengmei; Igbawua, Tertsea
2017-03-01
Recent studies have shown that global Penman-Monteith equation based (PM-based) models poorly simulate water stress when estimating evapotranspiration (ET) in areas having a Mediterranean climate (AMC). In this study, we propose a novel approach using precipitation, vertical root distribution (VRD), and satellite-retrieved vegetation information to simulate water stress in a PM-based model (RS-WBPM) to address this issue. A multilayer water balance module is employed to simulate the soil water stress factor (SWSF) of multiple soil layers at different depths. The water stress factor (WSF) for surface evapotranspiration is determined by VRD information and SWSF in each layer. Additionally, four older PM-based models (PMOV) are evaluated at 27 flux sites in AMC. Results show that PMOV fails to estimate the magnitude or capture the variation of ET in summer at most sites, whereas RS-WBPM is successful. The daily ET resulting from RS-WBPM incorporating recommended VI (NDVI for shrub and EVI for other biomes) agrees well with observations, with R2=0.60 (RMSE = 18.72 W m-2) for all 27 sites and R2=0.62 (RMSE = 18.21 W m-2) for 25 nonagricultural sites. However, combined results from the optimum older PM-based models at specific sites show R2 values of only 0.50 (RMSE = 20.74 W m-2) for all 27 sites. RS-WBPM is also found to outperform other ET models that also incorporate a soil water balance module. As all inputs of RS-WBPM are globally available, the results from RS-WBPM are encouraging and imply the potential of its implementation on a regional and global scale.
Life and reliability models for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Knorr, R. J.; Coy, J. J.
1982-01-01
Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
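The series-system Weibull model described above lends itself to a short numerical check. The sketch below is illustrative only, not the report's computer model: the component characteristic lives and the shared Weibull slope are hypothetical values chosen to demonstrate that, with a common slope, the product of component Weibull reliabilities is itself Weibull.

```python
import math

def weibull_reliability(life, eta, beta):
    """Two-parameter Weibull reliability at a given life."""
    return math.exp(-((life / eta) ** beta))

def system_reliability(life, components):
    """Series-system reliability: product of independent component reliabilities."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(life, eta, beta)
    return r

# Hypothetical component characteristic lives (eta) with a common slope (beta).
beta = 2.5
components = [(9000.0, beta), (12000.0, beta), (15000.0, beta)]

# With a common slope, the system is Weibull with the same slope and
# characteristic life eta_sys = (sum(eta_i ** -beta)) ** (-1 / beta).
eta_sys = sum(e ** -beta for e, _ in components) ** (-1.0 / beta)

r_direct = system_reliability(5000.0, components)
r_weibull = weibull_reliability(5000.0, eta_sys, beta)
print(round(r_direct, 6), round(r_weibull, 6))  # the two computations agree
```

The same product-of-reliabilities structure is what lets the system's load-life behaviour stay a power relationship, as stated in the abstract.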
NASA Astrophysics Data System (ADS)
Wulan, D. R.; Cahyaningsih, S.; Djaenudin
2017-03-01
In the medium-capacity electroplating industry, wastewater treatment typically handles up to 5 m3 per day. Heavy metal content is a concern that should be reduced. Previous studies performed the electrocoagulation method on laboratory scale, either batch or continuous. This study aimed to compare the influence of voltage input variation on heavy metal removal in electroplating wastewater treatment using a laboratory-scale electrocoagulation process, in order to determine the optimum condition for scaling up the reactor to pilot scale. The laboratory study was performed in a 1.5 L glass reactor in batch mode using wastewater from the electroplating industry, with the voltage input varied at 20, 30, and 40 V. The electrodes consisted of 32 cm2 of aluminium as sacrificial anode and 32 cm2 of copper as cathode. During the 120 min electrocoagulation process, the pH value was measured using a pH meter, whereas the chromium, copper, iron, and zinc concentrations were analysed using an Atomic Absorption Spectrophotometer (AAS). Results showed that removal of heavy metals from wastewater increased with increasing voltage input. Different initial concentrations of heavy metals in the wastewater resulted in different detention times. In the pilot-scale reactor with 30 V input, chromium, iron, and zinc reached removal efficiencies of 89-98%, while copper reached 79%. At 40 V, removal efficiencies increased at the same detention time: chromium, iron, and zinc reached 89-99%, whereas copper reached 85%. These removal efficiencies complied with the government standard except for copper, which had a higher initial concentration in the wastewater. The kinetic rate was also calculated in this study as a basic factor for scaling up the process.
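The abstract reports a calculated kinetic rate without stating the rate model. A minimal sketch, assuming first-order removal C(t) = C0·exp(-kt) (a common choice for electrocoagulation kinetics, but an assumption here), shows how a rate constant and a required detention time could be estimated; all concentrations and times below are hypothetical.

```python
import math

def first_order_k(c0, c_t, t):
    """Rate constant from two concentration readings, assuming C(t) = C0*exp(-k t)."""
    return math.log(c0 / c_t) / t

def detention_time(c0, c_target, k):
    """Time needed to reach a target concentration under the same model."""
    return math.log(c0 / c_target) / k

# Hypothetical readings: 50 mg/L falling to 5 mg/L over the 120 min run.
k = first_order_k(c0=50.0, c_t=5.0, t=120.0)
# Detention time to reach a stricter hypothetical 1 mg/L limit.
t_req = detention_time(c0=50.0, c_target=1.0, k=k)
print(round(k, 4), round(t_req, 1))  # t_req exceeds the 120 min run
```

A fitted k of this kind, one per metal and voltage, is the sort of scale-up factor the study alludes to.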
Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.
Hsu, Anne; Griffiths, Thomas L
2016-01-01
A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
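The strong/weak sampling contrast can be made concrete with a toy Bayesian example in the spirit of the size principle. Everything below is invented for illustration (two hypothetical "languages" and a uniform prior), not the authors' experimental materials: under strong sampling, never observing construction "c" increasingly favours the smaller language; under weak sampling, its absence carries no weight.

```python
# H_small licenses constructions {a, b}; H_large licenses {a, b, c}.
# Strong sampling: each observation is drawn from the language, so the
# likelihood of one compatible observation is 1/|language| (size principle).
# Weak sampling: the likelihood of a compatible observation is constant.

def posterior_small(n_obs, strong):
    """Posterior probability of the smaller language after n_obs
    observations of constructions compatible with both hypotheses."""
    prior = 0.5
    like_small = (1.0 / 2.0) ** n_obs if strong else 1.0
    like_large = (1.0 / 3.0) ** n_obs if strong else 1.0
    p_small = prior * like_small
    p_large = (1.0 - prior) * like_large
    return p_small / (p_small + p_large)

print(posterior_small(0, strong=True))    # 0.5 before any data
print(posterior_small(10, strong=True))   # near 1: absence of "c" is evidence
print(posterior_small(10, strong=False))  # 0.5: weak sampling ignores absence
```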
Cabaraban, Maria Theresa I; Kroll, Charles N; Hirabayashi, Satoshi; Nowak, David J
2013-05-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature and LAI fields, and CMAQ provided NO2 concentrations. A base case simulation was conducted using built-in distributed i-Tree Eco tools, and simulations using different inputs were compared against this base case. Differences in land cover classification and tree cover between the distributed i-Tree Eco and WRF resulted in changes in estimated LAI, which in turn resulted in variations in simulated NO2 dry deposition. Estimated NO2 removal decreased when CMAQ-derived concentration was applied to the distributed i-Tree Eco simulation. Discrepancies in temperature inputs did little to affect estimates of NO2 removal by dry deposition to trees in Baltimore. Copyright © 2013 Elsevier Ltd. All rights reserved.
Selection on skewed characters and the paradox of stasis.
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2017-11-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
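The skew effect described above can be reproduced in a few lines. The sketch below is a minimal simulation under assumed parameters (lognormal phenotypes and a Gaussian stabilizing fitness function with a hypothetical width), not the authors' model: with the mean phenotype held exactly at the optimum, the Robertson-Price covariance between relative fitness and trait is still nonzero, and it points opposite to the positive skew.

```python
import numpy as np

rng = np.random.default_rng(1)

# Positively skewed phenotypes, shifted so the mean sits exactly at the
# optimum (0): any directional selection detected below is due to skew alone.
z = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)
z -= z.mean()

# Gaussian stabilizing fitness with optimum at 0 (width 2.0 is hypothetical).
w = np.exp(-z**2 / (2 * 2.0**2))
w_rel = w / w.mean()

# Robertson-Price identity: directional differential = cov(relative fitness, trait).
# Since mean(z) = 0 and mean(w_rel) = 1, the covariance reduces to this mean.
s = np.mean(w_rel * z)
print(s)  # negative: selection points opposite to the positive skew
```

The long right tail is penalized more heavily than the bounded left side, which is why the differential comes out opposite to the skew even at equilibrium.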
Distributional Language Learning: Mechanisms and Models of Category Formation.
Aslin, Richard N; Newport, Elissa L
2014-09-01
In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for any distributional learning mechanism to operate effectively within the linguistic domain. In particular, how does a naive learner determine the number of categories that are present in a corpus of linguistic input and what distributional cues enable the learner to assign individual lexical items to those categories? Contrary to the hypothesis that distributional learning and category (or rule) learning are separate mechanisms, the present article argues that these two seemingly different processes---acquiring specific structure from linguistic input and generalizing beyond that input to novel exemplars---actually represent a single mechanism. Evidence in support of this single-mechanism hypothesis comes from a series of artificial grammar-learning studies that not only demonstrate that adults can learn grammatical categories from distributional information alone, but that the specific patterning of distributional information among attested utterances in the learning corpus enables adults to generalize to novel utterances or to restrict generalization when unattested utterances are consistently absent from the learning corpus. Finally, a computational model of distributional learning that accounts for the presence or absence of generalization is reviewed and the implications of this model for linguistic-category learning are summarized.
The cholinergic basal forebrain in the ferret and its inputs to the auditory cortex.
Bajo, Victoria M; Leach, Nicholas D; Cordery, Patricia M; Nodal, Fernando R; King, Andrew J
2014-09-01
Cholinergic inputs to the auditory cortex can modulate sensory processing and regulate stimulus-specific plasticity according to the behavioural state of the subject. In order to understand how acetylcholine achieves this, it is essential to elucidate the circuitry by which cholinergic inputs influence the cortex. In this study, we described the distribution of cholinergic neurons in the basal forebrain and their inputs to the auditory cortex of the ferret, a species used increasingly in studies of auditory learning and plasticity. Cholinergic neurons in the basal forebrain, visualized by choline acetyltransferase and p75 neurotrophin receptor immunocytochemistry, were distributed through the medial septum, diagonal band of Broca, and nucleus basalis magnocellularis. Epipial tracer deposits and injections of the immunotoxin ME20.4-SAP (monoclonal antibody specific for the p75 neurotrophin receptor conjugated to saporin) in the auditory cortex showed that cholinergic inputs originate almost exclusively in the ipsilateral nucleus basalis. Moreover, tracer injections in the nucleus basalis revealed a pattern of labelled fibres and terminal fields that resembled acetylcholinesterase fibre staining in the auditory cortex, with the heaviest labelling in layers II/III and in the infragranular layers. Labelled fibres with small en-passant varicosities and simple terminal swellings were observed throughout all auditory cortical regions. The widespread distribution of cholinergic inputs from the nucleus basalis to both primary and higher level areas of the auditory cortex suggests that acetylcholine is likely to be involved in modulating many aspects of auditory processing. © 2014 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
A Review of Distributed Control Techniques for Power Quality Improvement in Micro-grids
NASA Astrophysics Data System (ADS)
Zeeshan, Hafiz Muhammad Ali; Nisar, Fatima; Hassan, Ahmad
2017-05-01
A micro-grid is typically visualized as a small-scale local power supply network dependent on distributed energy resources (DERs) that can operate in parallel with the grid as well as in standalone manner. The distributed generator of a micro-grid system is usually a converter-inverter topology acting as a non-linear load and injecting harmonics into the distribution feeder. Hence, negative effects on power quality from the use of distributed generation sources and components are clearly observed. In this paper, a review of distributed control approaches for power quality improvement is presented, encompassing harmonic compensation, loss mitigation, and optimum power sharing in a multi-source, multi-load distributed power network. The decentralized subsystems for harmonic compensation and active-reactive power sharing accuracy are analysed in detail. Results have been validated to be consistent with IEEE standards.
Hendrickson, Phillip J; Yu, Gene J; Song, Dong; Berger, Theodore W
2016-01-01
This paper describes a million-plus granule cell compartmental model of the rat hippocampal dentate gyrus, including excitatory, perforant path input from the entorhinal cortex, and feedforward and feedback inhibitory input from dentate interneurons. The model includes experimentally determined morphological and biophysical properties of granule cells, together with glutamatergic AMPA-like EPSP and GABAergic GABAA-like IPSP synaptic excitatory and inhibitory inputs, respectively. Each granule cell was composed of approximately 200 compartments having passive and active conductances distributed throughout the somatic and dendritic regions. Modeling excitatory input from the entorhinal cortex was guided by axonal transport studies documenting the topographical organization of projections from subregions of the medial and lateral entorhinal cortex, plus other important details of the distribution of glutamatergic inputs to the dentate gyrus. Information contained within previously published maps of this major hippocampal afferent were systematically converted to scales that allowed the topographical distribution and relative synaptic densities of perforant path inputs to be quantitatively estimated for inclusion in the current model. Results showed that when medial and lateral entorhinal cortical neurons maintained Poisson random firing, dentate granule cells expressed, throughout the million-cell network, a robust nonrandom pattern of spiking best described as a spatiotemporal "clustering." To identify the network property or properties responsible for generating such firing "clusters," we progressively eliminated from the model key mechanisms, such as feedforward and feedback inhibition, intrinsic membrane properties underlying rhythmic burst firing, and/or topographical organization of entorhinal afferents. 
Findings conclusively identified topographical organization of inputs as the key element responsible for generating a spatiotemporal distribution of clustered firing. These results uncover a functional organization of perforant path afferents to the dentate gyrus not previously recognized: topography-dependent clusters of granule cell activity as "functional units" or "channels" that organize the processing of entorhinal signals. This modeling study also reveals for the first time how a global signal processing feature of a neural network can evolve from one of its underlying structural characteristics.
NASA Astrophysics Data System (ADS)
Sano, Kimikazu; Nagatani, Munehiko; Mutoh, Miwa; Murata, Koichi
This paper reports a high-ESD-breakdown-voltage InP HBT transimpedance amplifier (TIA) IC for optical video distribution systems. To raise the ESD breakdown voltage, we designed ESD protection circuits integrated in the TIA IC using base-collector/base-emitter diodes of InP HBTs and resistors. These components already existed in the employed InP HBT IC process, so no process modifications were needed. Furthermore, to meet requirements for use in optical video distribution systems, we studied circuit design techniques to obtain good input-output linearity and a low-noise characteristic. The fabricated InP HBT TIA IC exhibited high human-body-model ESD breakdown voltages (±1000 V for power supply terminals, ±200 V for high-speed input/output terminals), good input-output linearity (less than 2.9% duty-cycle distortion), and a low noise characteristic (10.7 pA/√Hz averaged input-referred noise current density) with a -3 dB bandwidth of 6.9 GHz. To the best of our knowledge, this is the first report of InP ICs with high ESD breakdown voltages.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crop, management schedule, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the user-defined number of CPU threads divides the EPIC simulation into jobs. Using the input data formatters, the raw database is formatted into EPIC input data, and the formatted data feeds the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the output files of interest are parsed and passed to the output analyzers. We applied scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers process all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data.
For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
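The job-splitting pattern described above can be sketched with Python's standard multiprocessing module. This is an illustrative skeleton, not the authors' framework: run_epic is a hypothetical stand-in for the real steps of formatting input files, invoking the EPIC executable, and parsing its outputs for one grid cell and scenario.

```python
from multiprocessing import Pool

def run_epic(job):
    """Stand-in for one EPIC run: format inputs for a grid cell/scenario,
    invoke the model, and parse only the outputs of interest."""
    cell_id, scenario = job
    # ... a real framework would write EPIC input files here, call the EPIC
    # executable (e.g. via subprocess), then parse its output files ...
    return cell_id, scenario, f"yield_for_{cell_id}_{scenario}"

def simulate_grid(cells, scenarios, workers=4):
    """Expand (cell, scenario) combinations into jobs and map them over a pool."""
    jobs = [(c, s) for c in cells for s in scenarios]
    with Pool(processes=workers) as pool:
        return pool.map(run_epic, jobs)

if __name__ == "__main__":
    # 8 hypothetical cells x 2 hypothetical scenarios = 16 jobs across the pool.
    results = simulate_grid(cells=range(8), scenarios=["low_fert", "high_fert"])
    print(len(results))  # 16
```

Because each grid cell's run is independent, throughput scales with worker count, which is the source of the roughly 95% reduction in wall-clock time reported above.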
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding (FSW) has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process for joining hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using experimental results for FSW of AA7075, AA2524, AA6061, and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimated optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine its load-bearing ability. The load-bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of a commercially pure tungsten tool during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs take tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed, and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress, and bending stress are the outputs of the ANN models. These output parameters are selected because they define the thermomechanical conditions around the tool during FSW.
The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine a tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW; the strain and strain-rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque, and peak temperature. The material velocity fields are computed by adopting an analytical method for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated from a non-dimensional correlation with dimensionless heat input, which is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict the output parameters successfully.
A Method for the Selection of Exploration Areas for Unconformity Uranium Deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, DeVerle P.; Zaluski, Gerard; Marlatt, James
2009-06-15
The method we propose employs two analyses: (1) exploration simulation and risk valuation and (2) portfolio optimization. The first analysis, implemented by the investment worth system (IWS), uses Monte Carlo simulation to integrate a wide spectrum of uncertain and varied components into a relative frequency histogram for the net present value of the exploration investment, which is converted to a risk-adjusted value (RAV). Iterative rerunning of the IWS enables the mapping of the relationship of RAV to the magnitude of exploration expenditure, X. The second major analysis uses RAV vs. X maps to identify the subset (portfolio) of areas that maximizes the RAV of the firm's multiyear exploration budget. The IWS, which is demonstrated numerically, consists of six components based on the geologic description of a hypothetical basin and project area (PA) and a mix of hypothetical and actual conditions of an unidentified area. The geology is quantified and processed by Bayesian belief networks to produce the geology-based inputs required by the IWS. An exploration investment of $60 M produced a highly skewed distribution of net present value (NPV), having mean and median values of $4,160 M and $139 M, respectively. For the hypothetical mining firm Minex, the RAV of the exploration investment of $60 M is only $110.7 M. An RAV that is less than 3% of mean NPV reflects the aversion of Minex to risk, as well as the magnitude of risk implicit in the highly skewed NPV distribution and the probability of 0.45 for capital loss. Potential benefits of initiating exploration of a portfolio of areas, as contrasted with one area, include increased marginal productivity of exploration as well as a reduced probability of nondiscovery. For an exogenously determined multiyear exploration budget, a conceptual framework for portfolio optimization is developed based on marginal RAV exploration products for candidate PAs.
PORTFOLIO, a software package developed to implement the optimization, allocates exploration to PAs so that the RAV of the exploration budget is maximized. Moreover, PORTFOLIO provides a means to examine the impact of the magnitude of the budget on the composition of the exploration portfolio and the optimum allocation of exploration to the PAs that comprise the portfolio. Using fictitious data for five PAs, a numerical demonstration is provided of the use of PORTFOLIO to identify those PAs that comprise the optimum exploration portfolio and to optimally allocate the multiyear budget across portfolio PAs.
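The conversion of a simulated NPV histogram into an RAV can be sketched with a certainty-equivalent under exponential utility, a standard way to compute risk-adjusted values. This is a minimal sketch, not the IWS itself: the outcome model (45% chance of losing the $60 M budget, lognormal gains otherwise) and the corporate risk tolerance are invented for illustration, chosen only to reproduce the qualitative pattern in the abstract (RAV far below a highly skewed mean NPV).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical exploration outcome model (all numbers illustrative): with
# probability 0.45 the $60 M budget is lost; otherwise NPV is drawn from a
# heavy-tailed lognormal distribution. Units are $M.
loss = rng.random(n) < 0.45
npv = np.where(loss, -60.0, rng.lognormal(mean=5.0, sigma=1.5, size=n))

risk_tolerance = 200.0  # $M, assumed corporate risk tolerance
r = 1.0 / risk_tolerance

# Certainty-equivalent (risk-adjusted) value under exponential utility:
# RAV = -(1/r) * ln( E[ exp(-r * NPV) ] )
rav = -np.log(np.mean(np.exp(-r * npv))) / r
```

Because the utility is concave, the RAV always falls below the mean NPV, and the gap widens with skewness and with the probability of capital loss, mirroring the Minex example.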
Asakawa, Takashi; Kanno, Nozomu; Tonokura, Kenichi
2010-01-01
We have investigated the pressure dependence of the detection sensitivity of CO2, N2O and CH4 using wavelength modulation spectroscopy (WMS) with distributed-feedback diode lasers in the near-infrared region. The spectral line shapes and the background noise of the second-harmonic (2f) detection of the WMS were analyzed theoretically. We determined the optimum pressure conditions for the detection of CO2, N2O and CH4 by taking into consideration the background noise in the WMS. At the optimum total pressure for the detection of CO2, N2O and CH4, the limits of detection of the present system were determined.
Performance of European chemistry transport models as function of horizontal resolution
NASA Astrophysics Data System (ADS)
Schaap, M.; Cuvelier, C.; Hendriks, C.; Bessagnet, B.; Baldasano, J. M.; Colette, A.; Thunis, P.; Karam, D.; Fagerli, H.; Graff, A.; Kranenburg, R.; Nyiri, A.; Pay, M. T.; Rouïl, L.; Schulz, M.; Simpson, D.; Stern, R.; Terrenoire, E.; Wind, P.
2015-07-01
Air pollution causes adverse effects on human health as well as ecosystems and crop yield, and also has an impact on climate change through short-lived climate forcers. To design mitigation strategies for air pollution, 3D Chemistry Transport Models (CTMs) have been developed to support the decision process. Increases in model resolution may provide more accurate and detailed information, but computational costs increase roughly cubically with resolution, and high-resolution input data pose additional challenges. The motivation for the present study was therefore to explore the impact of using finer horizontal grid resolution for policy support applications of the European Monitoring and Evaluation Programme (EMEP) model within the Long Range Transboundary Air Pollution (LRTAP) convention. The goal was to determine the "optimum resolution" at which additional computational effort does not provide increased model performance using presently available input data. Five regional CTMs performed four runs for 2009 over Europe at different horizontal resolutions. The models' responses to an increase in resolution are broadly consistent: the largest response was found for NO2, followed by PM10 and O3. Model resolution does not affect model performance for rural background conditions. However, increasing model resolution improves performance at stations in and near large conurbations. The statistical evaluation showed that increased resolution better reproduces the spatial gradients in pollution regimes, but does not significantly improve the reproduction of observed temporal variability. This study clearly shows that increasing model resolution is advantageous, and that moving from a resolution of 50 km to one between 10 and 20 km is practical and worthwhile.
As about 70% of the model response to grid resolution is determined by the difference in the spatial emission distribution, improved emission allocation procedures at high spatial and temporal resolution are a crucial factor for further model resolution improvements.
Proceedings of the 1st Army Installation Energy Security and Independence Conference
2007-03-01
robustness of the Transmission and Distribution system, and that promotes the use of demand response, CHP, and renewable intermit- ... ERDC/CERL TR... charged during low load periods. • Generation is run at optimum level during high loads. • Storage follows load and provides fast power balance during
Line Lengths and Starch Scores.
ERIC Educational Resources Information Center
Moriarty, Sandra E.
1986-01-01
Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)
Digital Communications in Spatially Distributed Interference Channels.
1982-12-01
July 1980 through 31 March 1981. This report is organized into five parts. Part I describes an optimum receiver structure for digital communication in spatially distributed interference (over... Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate", IEEE Trans. Inform. Theory, Vol. IT-20, pp. 284-287, March 1974
NASA Astrophysics Data System (ADS)
Zhou, Y.; Gu, H.; Williams, C. A.
2017-12-01
Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance, and how they combine, has received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics in the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and disturbance (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories using time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS and ADS). Finally, we apply monthly climatic scalars derived from default CASA to distribute annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, namely maximum light use efficiency, temperature sensitivity of soil respiration (Q10) and optimum temperature, identified using EFAST (Extended Fourier Amplitude Sensitivity Testing). We quantify model uncertainties from each source and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.
Optimum size of nanorods for heating application
NASA Astrophysics Data System (ADS)
Seshadri, G.; Thaokar, Rochish; Mehra, Anurag
2014-08-01
Magnetic nanoparticles (MNPs) have become increasingly important in heating applications such as hyperthermia treatment of cancer due to their ability to release heat when a remote external alternating magnetic field is applied. It has been shown that the heating capability of such particles varies significantly with the size of the particles used. In this paper, we theoretically evaluate the heating capability of rod-shaped MNPs and identify conditions under which these particles display the highest efficiency. For optimally sized monodisperse particles, the power generated by rod-shaped particles is found to be equal to that generated by spherical particles. However, for particles which are not monodisperse, rod-shaped particles are found to be more effective in heating as a result of the greater spread in the power density distribution curve. Additionally, for rod-shaped particles, a dispersion in the radius of the particle contributes more to the reduction in loss power than a dispersion in the length. We further identify the optimum size, i.e., the radius and length of nanorods, given a bivariate log-normal distribution of particle size in two dimensions.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
Khajeh, Mostafa; Sarafraz-Yazdi, Ali; Natavan, Zahra Bameri
2016-03-01
The aim of this research was to develop a low-cost, environmentally friendly, and abundantly available adsorbent to remove methylene blue (MB) from water samples. Sawdust solid-phase extraction coupled with high-performance liquid chromatography was used for the extraction and determination of MB. In this study, an artificial neural network model based on experimental data is constructed to describe the performance of the sawdust solid-phase extraction method for various operating conditions. The pH, time, amount of sawdust, and temperature were the input variables, while the percentage of extraction of MB was the output. The optimum operating condition was then determined by a genetic algorithm. The optimized conditions were obtained as follows: 11.5, 22.0 min, 0.3 g, and 26.0°C for pH of the solution, extraction time, amount of adsorbent, and temperature, respectively. Under these optimum conditions, the detection limit and relative standard deviation were 0.067 μg L−1 and <2.4%, respectively. The Langmuir and Freundlich adsorption models were applied to describe the isotherm constants for the removal and determination of MB from water samples. © The Author(s) 2013.
Guiomar, Fernando P; Reis, Jacklyn D; Carena, Andrea; Bosco, Gabriella; Teixeira, António L; Pinto, Armando N
2013-01-14
Employing 100G polarization-multiplexed quaternary phase-shift keying (PM-QPSK) signals, we experimentally demonstrate a dual-polarization Volterra series nonlinear equalizer (VSNE) applied in the frequency domain to mitigate intra-channel nonlinearities. The performance of the dual-polarization VSNE is assessed in both single-channel and wavelength-division multiplexing (WDM) scenarios, providing direct comparisons with its single-polarization version and with the widely studied back-propagation split-step Fourier (SSF) approach. In single-channel transmission, the optimum power is increased by about 1 dB relative to the single-polarization equalizers, and by up to 3 dB over linear equalization, with a corresponding bit error rate (BER) reduction of up to 63% and 85%, respectively. Despite the impact of inter-channel nonlinearities, we show that intra-channel nonlinear equalization is still able to provide approximately 1 dB improvement in the optimum power and a BER reduction of ~33%, considering a 66 GHz WDM grid. By means of simulation, we demonstrate that the performance of nonlinear equalization can be substantially enhanced if both optical and electrical filtering are optimized, enabling the VSNE technique to outperform its SSF counterpart at high input powers.
Kopp, Michael; Hermisson, Joachim
2009-01-01
We consider a population that adapts to a gradually changing environment. Our aim is to describe how ecological and genetic factors combine to determine the genetic basis of adaptation. Specifically, we consider the evolution of a polygenic trait that is under stabilizing selection with a moving optimum. The ecological dynamics are defined by the strength of selection, σ̃, and the speed of the optimum, ṽ; the key genetic parameters are the mutation rate Θ and the variance of the effects of new mutations, ω. We develop analytical approximations within an "adaptive-walk" framework and describe how selection acts as a sieve that transforms a given distribution of new mutations into the distribution of adaptive substitutions. Our analytical results are complemented by individual-based simulations.
We find that (i) the ecological dynamics have a strong effect on the distribution of adaptive substitutions, and their impact depends largely on a single composite measure γ = ṽ/(σ̃Θω³), which combines the ecological and genetic parameters; (ii) depending on γ, we can distinguish two distinct adaptive regimes: for large γ the adaptive process is mutation limited and dominated by genetic constraints, whereas for small γ it is environmentally limited and dominated by the external ecological dynamics; (iii) deviations from the adaptive-walk approximation occur for large mutation rates, when different mutant alleles interact via linkage or epistasis; and (iv) in contrast to predictions from previous models assuming constant selection, the distribution of adaptive substitutions is generally not exponential. PMID:19805820
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating, given that the probability that any one component is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to ensure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
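The quantity CUMBIN computes, the k-out-of-n system reliability, can be sketched directly from the binomial distribution. This is a minimal reimplementation of the formula, not CUMBIN's C code, which uses its own numerics for large n.

```python
from math import comb

def cumbin(n, k, p):
    """P(at least k of n independent components operate), each with
    reliability p -- the k-out-of-n system reliability CUMBIN computes."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))
```

For example, a 2-out-of-3 system with component reliability 0.9 has reliability `cumbin(3, 2, 0.9)` = 0.729 + 3 x 0.81 x 0.1 = 0.972.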
NASA Astrophysics Data System (ADS)
Kushnir, A. F.; Troitsky, E. V.; Haikin, L. M.; Dainty, A.
1999-06-01
A semi-automatic procedure has been developed to achieve statistically optimum discrimination between earthquakes and explosions at local or regional distances based on a learning set specific to a given region. The method is used for step-by-step testing of candidate discrimination features to find the optimum subset (combination) of features, with the decision taken on a rigorous statistical basis. Linear (LDF) and Quadratic (QDF) Discriminant Functions based on Gaussian distributions of the discrimination features are implemented and statistically grounded; the features may be transformed by the Box-Cox transformation z = (y^α − 1)/α to make them more Gaussian. Tests of the method were successfully conducted on seismograms from the Israel Seismic Network using features consisting of spectral ratios between and within phases. Results showed that the QDF was more effective than the LDF and required five features out of 18 candidates for the optimum set. It was found that discrimination improved with increasing distance within the local range, and that eliminating transformation of the features and failing to correct for noise led to degradation of discrimination.
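The two ingredients of this procedure can be sketched as follows: the Box-Cox transformation applied to each feature, and a quadratic discriminant that compares the Gaussian log-likelihood of a feature vector under each class. This is a generic sketch of those standard techniques, not the authors' code; the class means and covariances below would in practice be estimated from the regional learning set, and the example values are hypothetical.

```python
import numpy as np

def box_cox(y, alpha):
    # z = (y**alpha - 1) / alpha; tends to log(y) in the limit alpha -> 0
    return np.log(y) if alpha == 0 else (y**alpha - 1.0) / alpha

def gaussian_score(x, mean, cov):
    # Log-likelihood of feature vector x under one class's Gaussian model,
    # up to an additive constant; the QDF compares the two class scores.
    d = x - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

def classify(x, mean_eq, cov_eq, mean_ex, cov_ex):
    # Assign the class (earthquake vs. explosion) with the larger score
    s_eq = gaussian_score(x, mean_eq, cov_eq)
    s_ex = gaussian_score(x, mean_ex, cov_ex)
    return "earthquake" if s_eq > s_ex else "explosion"
```

The LDF is the special case where both classes share one covariance matrix, which makes the decision boundary linear in the features.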
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
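The inverse-variance weighted fit that the paper identifies with the MLE can be sketched as a weighted linear least-squares problem: the slope method fits the log of the range-corrected signal as a straight line in range, whose slope is −2 times the extinction coefficient. This is a minimal sketch of that standard fit, not the paper's full estimator; the synthetic noiseless data in the usage example are for illustration only.

```python
import numpy as np

def slope_fit(r, y, var):
    """Inverse-variance weighted linear fit of y = b - 2*sigma*r.

    r   -- ranges
    y   -- log of range-corrected lidar signal at each range
    var -- noise variance of each y sample (weights are 1/var)
    """
    w = 1.0 / np.asarray(var)
    A = np.column_stack([np.ones_like(r), r])
    # Solve the weighted normal equations (A^T W A) c = A^T W y
    lhs = A.T @ (w[:, None] * A)
    rhs = A.T @ (w * y)
    b, slope = np.linalg.solve(lhs, rhs)
    return b, -0.5 * slope  # zero-range intercept, extinction coefficient
```

With equal variances this reduces to ordinary least squares; unequal variances reproduce the weighting the paper shows to be optimum for Gaussian noise.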
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly, in real time, via available networks and network services.
NASA Technical Reports Server (NTRS)
1973-01-01
This user's manual describes the FORTRAN IV computer program developed to compute the total vertical load, normal concentrated pressure loads, and the center of pressure of typical SRB water impact slapdown pressure distributions specified in the baseline configuration. The program prepares the concentrated pressure load information in punched card format suitable for input to the STAGS computer program. In addition, the program prepares for STAGS input the inertia reacting loads to the slapdown pressure distributions.
Aling, Joanna; Podczeck, Fridrun
2012-11-20
The aim of this work was to investigate the plug formation and filling properties of powdered herbal leaves using hydrogenated cotton seed oil as an alternative lubricant. In a first step, unlubricated and lubricated herbal powders were studied on a small scale using a plug simulator, and low-force compression physics and parameterization techniques were used to narrow down the range in which the optimum amount of lubricant required would be found. In a second step these results were complemented with investigations into the flow properties of the powders based on packing (tapping) experiments to establish the final optimum lubricant concentration. Finally, capsule filling of the optimum formulations was undertaken using an instrumented tamp filling machine. This work has shown that hydrogenated cotton seed oil can be used advantageously for the lubrication of herbal leaf powders. Stickiness as observed with magnesium stearate did not occur, and the optimum lubricant concentration was found to be less than that required for magnesium stearate. In this work, lubricant concentrations of 1% or less hydrogenated cotton seed oil were required to fill herbal powders into capsules on the instrumented tamp-filling machine. It was found that in principle all powders could be filled successfully, but that for some powders the use of higher compression settings was disadvantageous. Relationships between the particle size distributions of the powders, their flow and consolidation as well as their filling properties could be identified by multivariate statistical analysis. The work has demonstrated that a combination of the identification of plug formation and powder flow properties is helpful in establishing the optimum lubricant concentration required using a small quantity of powder and a powder plug simulator. On an automated tamp-filling machine, these optimum formulations produced satisfactory capsules in terms of coefficient of fill weight variability and capsule weight. 
Copyright © 2012 Elsevier B.V. All rights reserved.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt & Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
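The core trade-off in the cost function, a weighted sum of fuel cost and time cost, can be sketched for the cruise segment alone: at a fixed altitude, the optimum cruise speed minimizes direct operating cost per unit distance. This is a minimal illustration of that idea, not the program's optimal-control formulation; the quadratic fuel-flow model below is invented for the example and is not the JT8D-7 model the program contains.

```python
import numpy as np

def cost_per_distance(v, fuel_flow, cost_fuel, cost_time):
    # Direct operating cost per unit distance at speed v:
    # (fuel cost rate + time cost rate) divided by speed
    return (cost_fuel * fuel_flow(v) + cost_time) / v

# Hypothetical fuel-flow model: flow rises quadratically with speed
fuel_flow = lambda v: 1.0 + 0.001 * v**2

v = np.linspace(10.0, 100.0, 9001)
costs = cost_per_distance(v, fuel_flow, cost_fuel=1.0, cost_time=1.0)
v_opt = v[np.argmin(costs)]
```

For this model the optimum has the closed form v* = sqrt((cf*a + ct)/(cf*b)) with fuel_flow(v) = a + b*v**2: flying faster burns more fuel per distance, flying slower costs more time per distance, and the weights cf and ct set the balance, which is exactly the trade-off the program resolves along the whole trajectory.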
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wack, L. J., E-mail: linda-jacqueline.wack@med.uni
Purpose: To compare a dedicated simulation model for hypoxia PET against tumor microsections stained for different parameters of the tumor microenvironment. The model can readily be adapted to a variety of conditions, such as different human head and neck squamous cell carcinoma (HNSCC) xenograft tumors. Methods: Nine different HNSCC tumor models were transplanted subcutaneously into nude mice. Tumors were excised and immunofluorescently labeled with pimonidazole, Hoechst 33342, and CD31, providing information on hypoxia, perfusion, and vessel distribution, respectively. Hoechst and CD31 images were used to generate maps of perfused blood vessels on which tissue oxygenation and the accumulation of the hypoxia tracer FMISO were mathematically simulated. The model includes a Michaelis–Menten relation to describe the oxygen consumption inside tissue. The maximum oxygen consumption rate M0 was chosen as the parameter for a tumor-specific optimization, as it strongly influences tracer distribution. M0 was optimized on each tumor slice to reach optimum correlations between FMISO concentration 4 h postinjection and pimonidazole staining intensity. Results: After optimization, high pixel-based correlations up to R² = 0.85 were found for individual tissue sections. Experimental pimonidazole images and FMISO simulations showed good visual agreement, confirming the validity of the approach. Median correlations per tumor model varied significantly (p < 0.05), with R² ranging from 0.20 to 0.54. The optimum maximum oxygen consumption rate M0 differed significantly (p < 0.05) between tumor models, ranging from 2.4 to 5.2 mm Hg/s. Conclusions: It is feasible to simulate FMISO distributions that match the pimonidazole retention patterns observed in vivo. Good agreement was obtained for multiple tumor models by optimizing the oxygen consumption rate, M0, whose optimum value differed significantly between tumor models.
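The Michaelis–Menten relation for oxygen consumption that the model optimizes over M0 can be sketched as follows. This is a generic form of that relation, not the authors' implementation; the half-maximum constant k50 is an assumed illustrative value, as the abstract does not state it.

```python
def oxygen_consumption(p_o2, m0, k50=2.5):
    """Michaelis-Menten oxygen consumption rate, in the units of m0 (mmHg/s).

    p_o2 -- local oxygen partial pressure, mmHg
    m0   -- maximum consumption rate (the tumor-specific parameter optimized
            in the study, reported between 2.4 and 5.2 mmHg/s)
    k50  -- pO2 at half-maximal consumption; 2.5 mmHg is an assumed value
    """
    return m0 * p_o2 / (p_o2 + k50)
```

Consumption saturates at m0 for well-oxygenated tissue and falls toward zero in hypoxic regions, which is why M0 so strongly shapes the simulated FMISO accumulation pattern.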
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. 
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. The actual forecast distributions are thus computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how similar methods might be used to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators to learn complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
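The emulator idea can be made concrete with a small sketch: a Gaussian-process interpolator is fitted to a handful of runs of a toy one-dimensional "simulator", and an input distribution is then pushed through the cheap emulator by Monte Carlo. The simulator, kernel settings, and input distribution below are illustrative assumptions, not taken from this work.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def fit_emulator(X_train, y_train, length_scale=1.0, variance=1.0, jitter=1e-8):
    """Precompute the GP interpolation weights for the training design."""
    K = rbf_kernel(X_train, X_train, length_scale, variance)
    alpha = np.linalg.solve(K + jitter * np.eye(len(X_train)), y_train)
    return X_train, alpha, length_scale, variance

def emulate(emulator, X_new):
    """Posterior-mean prediction of the simulator at unobserved inputs."""
    X_train, alpha, ls, var = emulator
    return rbf_kernel(X_new, X_train, ls, var) @ alpha

# Toy "simulator": a stand-in for an expensive model run
simulator = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

X = np.linspace(-2, 2, 15)[:, None]          # experimental design (training runs)
em = fit_emulator(X, simulator(X), length_scale=0.5)

# Monte Carlo uncertainty analysis: push an input distribution through the emulator
samples = np.random.default_rng(0).normal(0.0, 0.5, size=(5000, 1))
output_dist = emulate(em, samples)
print(output_dist.mean(), output_dist.std())
```

Once the fifteen "simulator" runs are paid for, the 5000-sample uncertainty analysis costs only kernel evaluations, which is the efficiency argument made in the abstract.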
Short term forecasting for HFSWR sea surface current mapping using artificial neural network
NASA Astrophysics Data System (ADS)
Lai, J. W.; Lu, Y. C.; Hsieh, C. M.; Liau, J. M.; Yang, W. C.
2016-02-01
Taiwan Ocean Research Institute (TORI) established the Taiwan Ocean Radar Observing System (TOROS) based on the CODAR high frequency surface wave radar (HFSWR). TOROS is the first network in the world with complete, contiguous HFSWR coverage of a nation's coastline. The network consists of 17 SeaSonde radars and covers an area of approximately 190,000 square kilometers, over five times the size of Taiwan's entire land mass. In the southernmost and narrowest part of Taiwan, two 13 MHz radars and one 24 MHz radar have been operating along NanWan Bay since June 2014. NanWan Bay, at the southern tip of Taiwan, is a southward semi-enclosed basin bounded by two capes and open to the Luzon Strait. The distance between the two capes is around 12 km, and the distances from the northernmost point of the bay to the capes are 5 and 11 km, respectively. Strong tidal currents dominate the ocean circulation in NanWan Bay and induce pronounced upwelling of cold water that intrudes onto the shallow regions of the bay around spring tides. From late fall to early spring, the seaward wind dominated by the northeast monsoon often destratifies the water column and decreases the sea surface temperature inside the bay (Lee et al., 1997). Furthermore, NanWan Bay is famous for its well-developed fringing reefs distributed along the shoreline: 230 species of scleractinian corals, nine species of non-scleractinian reef-building corals, and 40 species of alcyonacean corals have been recorded in this area (Dai, 1991). NanWan, in the shape of a beautiful arch, attracts large crowds of people for all kinds of beach and water activities every summer. In order to improve the applicability of HFSWR ocean surface current data to search-and-rescue operations and to the evaluation of coral spawn dispersal, a short-term forecasting model using an artificial neural network (ANN) was developed in this study.
Adding ocean surface current vectors obtained from tidal theory as inputs to the artificial neural network model is found to improve its ability to predict current vectors. The optimum structure of the present ANN model for each ocean current grid is established by examining the learning rate, momentum factor, input parameters, number of hidden layers, learning times, and input length. Results show that the ANN model has good accuracy for short-term forecasting.
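The idea of feeding both past observations and a tidal-theory prediction into a small feed-forward network can be sketched as follows. The synthetic current series, network size, and training settings are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic surface-current series: an M2 tidal harmonic plus noise (illustrative)
t = np.arange(2000, dtype=float)
current = 0.8 * np.sin(2 * np.pi * t / 12.42) + 0.05 * rng.standard_normal(t.size)
tide = np.sin(2 * np.pi * t / 12.42)      # tidal-theory prediction used as an extra input

L = 6                                     # input length: past L observations
X = np.array([np.r_[current[i:i + L], tide[i + L]] for i in range(t.size - L)])
y = current[L:]                           # one-step-ahead target
Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

H, lr = 8, 0.1                            # hidden units, learning rate
W1 = 0.5 * rng.standard_normal((X.shape[1], H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal(H)
b2 = 0.0

for _ in range(1000):                     # plain batch gradient descent on MSE
    h = np.tanh(Xtr @ W1 + b1)
    err = h @ W2 + b2 - ytr
    gh = np.outer(err, W2) * (1.0 - h**2) # backpropagate through tanh
    W2 -= lr * h.T @ err / len(ytr)
    b2 -= lr * err.mean()
    W1 -= lr * Xtr.T @ gh / len(ytr)
    b1 -= lr * gh.mean(axis=0)

rmse = np.sqrt(np.mean((np.tanh(Xte @ W1 + b1) @ W2 + b2 - yte) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

Because the tidal input carries the deterministic part of the signal, the network only has to correct the residual, which is the benefit the abstract reports.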
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Studies of transverse momentum dependent parton distributions and Bessel weighting
Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...
2015-03-01
In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
Liu, Xiaodong; Lou, Chuangneng; Xu, Liqiang; Sun, Liguang
2012-09-01
Total cadmium (Cd) concentrations in four ornithogenic coral-sand sedimentary profiles displayed a strong positive correlation with guano-derived phosphorus, but had no correlation with plant-originated organic matter in the top sediments. These results indicate that the total Cd distributions were predominantly controlled by guano input. Bioavailable Cd and zinc (Zn) had a greater input rate in the top sediments with respect to total Cd and total Zn, and a positive correlation with total organic carbon (TOC) derived from plant humus. Multi-regression analysis showed that the total Cd and TOC explained over 80% of the variation of bioavailable Cd, suggesting that both guano and plant inputs could significantly influence the distribution of bioavailable Cd, and that plant biocycling processes contribute more to the recent increase of bioavailable Cd. A pollution assessment indicates that the Yongle archipelago is moderately to strongly polluted with guano-derived Cd. Copyright © 2012 Elsevier Ltd. All rights reserved.
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
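The reduced basis ingredient, taken on its own, can be illustrated with a proper-orthogonal-decomposition sketch: snapshots of a cheap parameterised model are compressed with an SVD, and an unseen solution is approximated in the span of the leading modes. The model, parameter range, and energy tolerance below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Toy parameterised "low-fidelity model": solution profiles over a spatial grid
x = np.linspace(0, 1, 200)
def low_fidelity(mu):
    """Cheap model output for parameter mu (illustrative stand-in)."""
    return np.exp(-mu * x) + 0.3 * np.sin(np.pi * mu * x)

# Snapshot matrix assembled from training parameters
train_mu = np.linspace(0.5, 3.0, 20)
S = np.stack([low_fidelity(m) for m in train_mu], axis=1)   # (grid, snapshots)

# Reduced basis: leading left singular vectors of the snapshots (POD)
U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999)) + 1                # modes for 99.99% energy
V = U[:, :r]

# Reduced approximation of an unseen parameter: project and reconstruct
u_new = low_fidelity(1.7)
u_rb = V @ (V.T @ u_new)
rel_err = np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)
print(r, rel_err)
```

The 200-dimensional profile is reproduced from a handful of modes, which is what lets the posterior output estimate in the paper be evaluated inexpensively.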
NASA Astrophysics Data System (ADS)
Baranov, G. A.; Efremov, Yu V.; Smirnov, A. S.; Frolov, K. S.; Shevchenko, Yu I.
1989-02-01
An investigation was made of the distributions of the gain and input energy per unit volume along the discharge chamber length in a CO2-N2-He mixture stream excited by an rf discharge. The dependences of the gain and discharge luminescence intensity on the coordinate x were determined along the direction of the gas flow. The discharge luminescence intensity was shown to characterize the input energy distribution along the X axis. Calculations were made of the small-signal gain in the rf discharge. Experimental data on the distributions of the input energy and of the electric field in the discharge and the average values of the kinetic coefficients were used in the calculations. The efficiency of pumping CO2 lasers with an rf discharge was found to be close to the dc pumping efficiency. The results obtained provide evidence of promising prospects for using an rf discharge in fast-flow industrial lasers.
Critical analysis of the condensation of water vapor at external surface of the duct
NASA Astrophysics Data System (ADS)
Kumar, Dileep; Memon, Rizwan Ahmed; Memon, Abdul Ghafoor; Ali, Intizar; Junejo, Awais
2018-01-01
In this paper, the effects of contraction of the insulation of the air duct of a heating, ventilation, and air conditioning (HVAC) system are investigated. Compression of the insulation contracts it at joints, turns, and other points of the duct. The energy loss and the condensation resulting from this contraction are also estimated. A mathematical model is developed to simulate the effects of this contraction on the heat gain, supply air temperature, and external surface temperature of the duct. The simulation uses preliminary data obtained from an HVAC system installed in a pharmaceutical company while varying the operating conditions. The results reveal that the insulation thickness should be kept greater than 30 mm and the volume flow rate of the selected air distribution system should be lower than 1.4 m3/s to prevent condensation on the external surface of the duct. Additionally, the optimum insulation thickness was determined by considering natural gas as the energy source and fiberglass as the insulation material. The optimum insulation thickness determined for different duct sizes varies from 28 to 45 mm, which is greater than the critical insulation thickness; therefore, condensation on the external surface of the duct can be avoided at the optimum insulation thickness. Moreover, the effect of the pressure loss coefficient of the duct fittings of the air distribution system is estimated. The electricity consumption of the air handling unit (AHU) decreases from 2.1 to 1.5 kW when the pressure loss coefficient decreases from 1.5 to 0.5.
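The condensation criterion behind results like these can be sketched with a one-dimensional resistance network: the duct outer-surface temperature follows from the series resistances of the inside film, the insulation, and the outside film, and condensation occurs when that temperature drops below the ambient dew point. All numerical values below (temperatures, film coefficients, conductivity) are illustrative assumptions under a flat-wall approximation, not the paper's data.

```python
# Assumed conditions (illustrative, not from the paper)
T_air_in = 14.0     # supply air temperature inside the duct, degC
T_amb = 35.0        # ambient air temperature, degC
T_dew = 30.0        # ambient dew-point temperature, degC
h_in = 30.0         # inside convective coefficient, W/m^2.K
h_out = 5.0         # outside film coefficient, W/m^2.K
k_ins = 0.04        # fibreglass conductivity, W/m.K

def surface_temperature(t_ins_mm):
    """Outer-surface temperature of a flat duct wall for a given insulation thickness."""
    R_in = 1.0 / h_in                      # inside film resistance, m^2.K/W
    R_ins = (t_ins_mm / 1000.0) / k_ins    # conduction resistance of insulation
    R_out = 1.0 / h_out                    # outside film resistance
    q = (T_amb - T_air_in) / (R_in + R_ins + R_out)   # heat gain per unit area, W/m^2
    return T_amb - q * R_out               # temperature drop across the outside film

for t in (10, 20, 30, 45):
    Ts = surface_temperature(t)
    print(f"{t:2d} mm -> surface {Ts:5.1f} degC, condensation: {Ts < T_dew}")
```

With these assumed conditions the surface stays below the dew point for thin insulation and rises above it somewhere between 20 and 30 mm, qualitatively matching the paper's recommendation of thicknesses above 30 mm.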
Zou, An-Min; Kumar, Krishna Dev
2012-07-01
This brief considers the attitude coordination control problem for spacecraft formation flying when only a subset of the group members has access to the common reference attitude. A quaternion-based distributed attitude coordination control scheme is proposed with consideration of the input saturation and with the aid of the sliding-mode observer, separation principle theorem, Chebyshev neural networks, smooth projection algorithm, and robust control technique. Using graph theory and a Lyapunov-based approach, it is shown that the distributed controller can guarantee the attitude of all spacecraft to converge to a common time-varying reference attitude when the reference attitude is available only to a portion of the group of spacecraft. Numerical simulations are presented to demonstrate the performance of the proposed distributed controller.
Hierarchical resilience with lightweight threads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Kyle Bruce
2011-10-01
This paper proposes a methodology for providing robustness and resilience in a highly threaded distributed- and shared-memory environment, based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes is toward self-contained functions that mimic functional programming. Software designers are moving toward a model of software design in which their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
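A minimal sketch of the ensemble idea - one base classifier per feature group, combined by majority voting - is shown below on synthetic data. The feature groups, the nearest-centroid base classifier, and the data are illustrative stand-ins for the paper's dermoscopic features and optimum-path forest classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "lesion" data: three feature groups (shape, colour, texture), two classes
n, n_tr = 400, 300
y = rng.integers(0, 2, n)
shift = np.array([1.2, 1.0, 0.0, 0.8, 0.0, 1.4, 0.0, 1.0, 0.6])  # class separation
X = rng.standard_normal((n, 9)) + y[:, None] * shift
groups = {'shape': slice(0, 3), 'colour': slice(3, 6), 'texture': slice(6, 9)}

def nearest_centroid_fit(X, y):
    """Store the mean feature vector of each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    """Assign each sample to the class with the closest centroid."""
    cs = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in cs])
    return np.array(cs)[d.argmin(axis=0)]

# One base classifier per feature group; combine by majority vote
votes = []
for sl in groups.values():
    m = nearest_centroid_fit(X[:n_tr, sl], y[:n_tr])
    votes.append(nearest_centroid_predict(m, X[n_tr:, sl]))
ensemble = (np.mean(votes, axis=0) > 0.5).astype(int)
acc = (ensemble == y[n_tr:]).mean()
print(f"ensemble accuracy: {acc:.2f}")
```

Training each member on a different feature subset is what generates the diversity the ensemble needs; with three voters a majority always exists, so no tie-breaking rule is required.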
Development and fabrication of S-band chip varactor parametric amplifier
NASA Technical Reports Server (NTRS)
Kramer, E.
1974-01-01
A noncryogenic, S-band parametric amplifier operating in the 2.2 to 2.3 GHz band and having an average input noise temperature of less than 30 K was built and tested. The parametric amplifier module occupies a volume of less than 1-1/4 cubic feet and weighs less than 60 pounds. The module is designed for use in various NASA ground stations to replace larger, more complex cryogenic units which require considerably more maintenance because of the cryogenic refrigeration system employed. The amplifier can be located up to 15 feet from the power supply unit. Optimum performance was achieved through the use of high-quality unpackaged (chip) varactors in the amplifier design.
Rocket ascent G-limited moment-balanced optimization program (RAGMOP)
NASA Technical Reports Server (NTRS)
Lyons, J. T.; Woltosz, W. S.; Abercrombie, G. E.; Gottlieb, R. G.
1972-01-01
This document describes the RAGMOP (Rocket Ascent G-limited Moment-balanced Optimization Program) computer program for parametric ascent trajectory optimization. RAGMOP computes optimum polynomial-form attitude control histories, launch azimuth, engine burn time, and gross liftoff weight for space-shuttle-type vehicles using a search-accelerated, gradient projection parameter optimization technique. The trajectory model available in RAGMOP includes a rotating oblate earth model, the option of input wind tables, discrete and/or continuous throttling for the purposes of limiting the thrust acceleration and/or the maximum dynamic pressure, limitation of the structural load indicators (the product of dynamic pressure with angle of attack and sideslip angle), and a wide selection of intermediate and terminal equality constraints.
Chemical scavenging of post-consumed clothes.
Barot, Amit A; Sinha, Vijay Kumar
2015-12-01
Aiming to address the accumulation of fiber-grade PET waste and to provide a technically viable recycling route that preserves natural resources and the environment, post-consumed polyester clothes were chemically recycled. The post-consumed polyester clothes were recycled into bis(2-hydroxyethyl) terephthalate (BHET) monomer in the presence of ethylene glycol as the depolymerising agent and zinc acetate as the catalyst. The depolymerized product was characterized by chemical as well as analytical techniques. The fiber-grade PET was converted into BHET monomer with nearly 90% yield by employing a 1% catalyst concentration at an optimum temperature of 180°C without mechanical stirring. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Eberle, W. R.
1981-01-01
A computer program to calculate the wake downwind of a wind turbine was developed. Turbine wake characteristics are useful for determining optimum arrays for wind turbine farms. The analytical model is based on the characteristics of a turbulent coflowing jet with modification for the effects of atmospheric turbulence. The program calculates overall wake characteristics, wind profiles, and power recovery for a wind turbine directly in the wake of another turbine, as functions of distance downwind of the turbine. The calculation procedure is described in detail, and sample results are presented to illustrate the general behavior of the wake and the effects of principal input parameters.
Information transfer in verbal presentations at scientific meetings
NASA Astrophysics Data System (ADS)
Flinn, Edward A.
The purpose of this note is to suggest a quantitative approach to deciding how much time to give a speaker at a scientific meeting. The elementary procedure is to use the preacher's rule of thumb that no souls are saved after the first 20 minutes. This is in qualitative agreement with the proverb that one cannot listen to a single voice for more than an hour without going to sleep. A refinement of this crude approach can be made by considering the situation from the point of view of a linear physical system with an input, a transfer function, and an output. We attempt here to derive an optimum speaking time through these considerations.
Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fiber powder
Su, Jing; Vielnascher, Robert; Silva, Carla; Cavaco-Paulo, Artur; Guebitz, Georg M.
2018-01-01
Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fibre powder was investigated. The effect of ultrasonic probe depth and power input parameters on the type and amount of products extracted was assessed. The results for input energy and radical formation correlated with the calculated values for the anti-nodal point (λ/4; 16.85 mm, maximum amplitude) of the ultrasonic wave in aqueous medium. Ultrasonic treatment at the optimum probe depth of 15 mm improved the extraction efficiencies of hemicellulose and phenolic lignin compounds from bamboo bast fibre powder 2.6-fold. LC-MS-ToF (liquid chromatography-mass spectrometry-time of flight) analysis indicated that ultrasound led to the extraction of coniferyl alcohol, sinapyl alcohol, vanillic acid, and cellobiose, which were not obtained by boiling water extraction alone. At optimized conditions, ultrasound caused the formation of radicals, confirmed by the presence of (+)-pinoresinol, which resulted from the radical coupling of coniferyl alcohol. Ultrasound proved to be an efficient methodology for the extraction of hemicellulosic and phenolic compounds from woody bamboo without the addition of harmful solvents. PMID:29856764
Monolithic acoustic graphene transistors based on lithium niobate thin film
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, B.-H.; Zhang, H.-X.; Zhang, H.; Zhang, M.-L.; Zhang, D.-H.; Pang, W.
2018-05-01
This paper introduces an on-chip acoustic graphene transistor based on lithium niobate thin film. The graphene transistor is embedded in a microelectromechanical systems (MEMS) acoustic wave device, and surface acoustic waves generated by the resonator induce a macroscopic current in the graphene due to the acousto-electric (AE) effect. The acoustic resonator and the graphene share the lithium niobate film, and a gate voltage is applied through the back side of the silicon substrate. The AE current induced by the Rayleigh and Sezawa modes was investigated, and the transistor outputs a larger current in the Rayleigh mode because of a larger coupling to velocity ratio. The output current increases linearly with the input radiofrequency power and can be effectively modulated by the gate voltage. The acoustic graphene transistor realized a five-fold enhancement in the output current at an optimum gate voltage, outperforming its counterpart with a DC input. The acoustic graphene transistor demonstrates a paradigm for more-than-Moore technology. By combining the benefits of MEMS and graphene circuits, it opens an avenue for various system-on-chip applications.
[System analytical approach of lung function and hemodynamics].
Naszlady, Attila; Kiss, Lajos
2009-02-15
The authors critically analyse the traditional views in physiology and complement them with new statements based on computer model simulations of lung function and of hemodynamics. Conclusions are derived for clinical practice as follows: the four-dimensional function curves are similar in both systems; there is a "waterfall" zone in the pulmonary blood perfusion; the various time constants of pulmonary regions can modify the blood gas values; pulmonary capillary pressure is equal to pulmonary arterial diastolic pressure; the heart is not a pressure pump but a flow source; the ventricles are loaded by the input impedance of the arterial systems and not by the total vascular (ohm-like) resistance; the optimum heart rate at rest depends on the length of the aorta; this law of heart rate, based on the principle of resonance, is valid along the mammalian allometric line; tachycardia decreases the input impedance; when using positive end-expiratory pressure respirators, the blood gas of the pulmonary artery should be followed; coronary circulation should be assessed in milliliters per beat, since milliliters per minute may be misleading. These statements are compared to related references.
NASA Astrophysics Data System (ADS)
Bijanrostami, Kh.; Barenji, R. Vatankhah; Hashemipour, M.
2017-02-01
The tensile behavior of underwater dissimilar friction stir welded AA6061 and AA7075 aluminum alloy joints was investigated for the first time. To this end, joints were welded under different conditions and tensile tests were conducted to measure their strength and elongation. In addition, the microstructure of the joints was characterized by means of optical and transmission electron microscopy. Scanning electron microscopy was used for fractography of the joints. Furthermore, the process parameters and tensile properties of the joints were correlated and optimized. The results revealed that a maximum tensile strength of 237.3 MPa and an elongation of 41.2% could be obtained at a rotational speed of 1853 rpm and a traverse speed of 50 mm/min. In comparison with the optimum condition, higher heat inputs caused grain growth and a reduction in dislocation density and hence led to lower strength. The higher elongations of the joints welded at higher heat inputs were due to the lower dislocation density inside the grains, which was consistent with their more ductile fracture.
Autostereoscopic display based on two-layer lenticular lenses.
Zhao, Wu-Xiang; Wang, Qiong-Hua; Wang, Ai-Hong; Li, Da-Hai
2010-12-15
An autostereoscopic display based on two-layer lenticular lenses is proposed. The two-layer lenticular lenses comprise one layer of conventional lenticular lenses and an additional layer of light-concentrating lenticular lenses. Two prototypes, of the proposed and of the conventional autostereoscopic display, are developed. At the optimum three-dimensional viewing distance, the luminance distribution of the prototypes along the horizontal direction is measured, and from this distribution the crosstalk of the prototypes is obtained. Compared with the conventional autostereoscopic display, the proposed autostereoscopic display has less crosstalk, a wider viewing angle, and higher efficiency of light utilization.
Marine Mammal Habitat in Ecuador: Seasonal Abundance and Environmental Distribution
2010-06-01
Nutrient enrichment is initiated by subsurface-derived macronutrients and enhanced by iron inputs derived from the island platform, at the confluence of the Equatorial Undercurrent and the Peru Current.
Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting
NASA Astrophysics Data System (ADS)
Gamberg, Leonard
2015-04-01
We present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. Advantages of employing Bessel weighting are that transverse momentum weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
Maximally informative pairwise interactions in networks
Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.
2010-01-01
Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
ERIC Educational Resources Information Center
Bachman, C. H.
1988-01-01
Presents examples to show the ubiquitous nature of geometry. Illustrates the relationship between the perimeter and area of two-dimensional objects and between the area and volume of three-dimensional objects. Provides examples of distribution systems, optimum shapes, structural strength, biological heat engines, man's size, and reflection and…
Design of helicopter rotor blades for optimum dynamic characteristics
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ko, T.; Korn, A.; Rossow, M. P.
1985-01-01
The mass and stiffness distributions of helicopter rotor blades are tailored to give a predetermined placement of blade natural frequencies. The optimal design is pursued with respect to minimum weight, sufficient inertia, and reasonable dynamic characteristics. Finite element techniques are used as a tool. Rotor types include hingeless, articulated, and teetering.
Optimum target sizes for a sequential sawing process
H. Dean Claxton
1972-01-01
A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by Fluid Dynamists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
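As a rough sketch of this kind of small-sample uncertainty estimate (not the authors' code; the function name, 95% level, and sample values are illustrative), the Student-t distribution gives a confidence half-width from repeated CFD runs with perturbed inputs:

```python
import math
import statistics

# Two-sided 95% Student-t critical values for small samples (df -> t)
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def t_uncertainty(samples, t_table=T95):
    """Mean and 95% uncertainty half-width of repeated CFD outputs,
    using the Student-t distribution for the small-sample case."""
    n = len(samples)
    mean = statistics.mean(samples)
    s = statistics.stdev(samples)       # sample standard deviation
    t = t_table[n - 1]                  # critical value, df = n - 1
    return mean, t * s / math.sqrt(n)   # half-width of the interval

# Hypothetical centreline velocities from runs with perturbed inputs
runs = [1.02, 0.98, 1.01, 0.97, 1.03]
m, u = t_uncertainty(runs)
```

The exact solution is then said to be "encompassed" when it falls inside the interval m ± u.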
Satellite-derived potential evapotranspiration for distributed hydrologic runoff modeling
NASA Astrophysics Data System (ADS)
Spies, R. R.; Franz, K. J.; Bowman, A.; Hogue, T. S.; Kim, J.
2012-12-01
Distributed models can incorporate spatially variable data, especially high-resolution forcing inputs such as precipitation, temperature, and evapotranspiration, in hydrologic modeling. Use of distributed hydrologic models for operational streamflow prediction has been partially hindered by a lack of readily available, spatially explicit input observations. Potential evapotranspiration (PET), for example, is currently accounted for through PET input grids that are based on monthly climatological values. The goal of this study is to assess the use of satellite-based PET estimates that represent the temporal and spatial variability, as input to the National Weather Service (NWS) Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). Daily PET grids are generated for six watersheds in the upper Mississippi River basin using a method that applies only MODIS satellite-based observations and the Priestley-Taylor formula (MODIS-PET). The use of MODIS-PET grids will be tested against the use of the current climatological PET grids for simulating basin discharge. Gridded surface temperature forcing data are derived by applying the inverse-distance weighting spatial prediction method to point-based station observations from the Automated Surface Observing System (ASOS) and Automated Weather Observing System (AWOS). Precipitation data are obtained from the Climate Prediction Center's (CPC) Climatology-Calibrated Precipitation Analysis (CCPA). A priori gridded parameters for the Sacramento Soil Moisture Accounting Model (SAC-SMA), Snow-17 model, and routing model are initially obtained from the Office of Hydrologic Development and further calibrated using an automated approach. The potential of the MODIS-PET to be used in an operational distributed modeling system will be assessed with the long-term goal of promoting research-to-operations transfers and advancing the science of hydrologic forecasting.
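The Priestley-Taylor formula referenced above has a compact textbook form; the following is a generic sketch (the constants and function name are standard assumptions, not taken from the MODIS-PET implementation):

```python
import math

def priestley_taylor_pet(t_c, rn, g=0.0, alpha=1.26, gamma=0.066):
    """Potential evapotranspiration (mm/day) from the Priestley-Taylor
    formula. t_c: air temperature (deg C); rn: net radiation and
    g: ground heat flux (both MJ m-2 day-1)."""
    # Saturation vapour pressure (kPa) and slope of its curve (kPa/degC)
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    delta = 4098.0 * es / (t_c + 237.3) ** 2
    lam = 2.45  # latent heat of vaporization, MJ/kg
    return alpha * delta / (delta + gamma) * (rn - g) / lam

# A warm day with moderate net radiation (values are illustrative)
pet = priestley_taylor_pet(t_c=25.0, rn=15.0)
```

With satellite-derived temperature and radiation fields, the same formula is applied cell by cell to produce the daily PET grids.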
NASA Astrophysics Data System (ADS)
Zhang, Yan; Liu, Hong; Chen, Bin; Zheng, Hongmei; Li, Yating
2014-06-01
Discovering ways to increase the sustainability of the metabolic processes involved in urbanization has become an urgent task for urban design and management in China. As cities are analogous to living organisms, the disorders of their metabolic processes can be regarded as the causes of "urban disease", so identifying these causes through metabolic process analysis and through the distribution of ecological elements among the urban ecosystem's compartments will be helpful. Using Beijing as an example, we compiled monetary input-output tables for 1997, 2000, 2002, 2005, and 2007 and calculated the intensities of the embodied ecological elements to compile the corresponding implied physical input-output tables. We then divided Beijing's economy into 32 compartments and analyzed the direct and indirect ecological intensities embodied in the flows of ecological elements through urban metabolic processes. By combining input-output tables with ecological network analysis, we refined the description of multiple ecological elements transferred among Beijing's industrial compartments and their distribution. This hybrid approach can provide a more scientific basis for the management of urban resource flows. In addition, the distribution characteristics of the ecological elements may provide a basic data platform for exploring the metabolic mechanism of Beijing.
Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I
2011-01-01
The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement was documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Logistical constraints lead to an intermediate optimum in outbreak response vaccination
Shea, Katriona; Ferrari, Matthew
2018-01-01
Dynamic models in disease ecology have historically evaluated vaccination strategies under the assumption that they are implemented homogeneously in space and time. However, this approach fails to formally account for operational and logistical constraints inherent in the distribution of vaccination to the population at risk. Thus, feedback between the dynamic processes of vaccine distribution and transmission might be overlooked. Here, we present a spatially explicit, stochastic Susceptible-Infected-Recovered-Vaccinated model that highlights the density-dependence and spatial constraints of various diffusive strategies of vaccination during an outbreak. The model integrates an agent-based process of disease spread with a partial differential process of vaccination deployment. We characterize the vaccination response in terms of a diffusion rate that describes the distribution of vaccination to the population at risk from a central location. This generates an explicit trade-off between slow diffusion, which concentrates effort near the central location, and fast diffusion, which spreads a fixed vaccination effort thinly over a large area. We use stochastic simulation to identify the optimum vaccination diffusion rate as a function of population density, interaction scale, transmissibility, and vaccine intensity. Our results show that, conditional on a timely response, the optimal strategy for minimizing outbreak size is to distribute vaccination resource at an intermediate rate: fast enough to outpace the epidemic, but slow enough to achieve local herd immunity. If the response is delayed, however, the optimal strategy for minimizing outbreak size changes to a rapidly diffusive distribution of vaccination effort. The latter may also result in significantly larger outbreaks, thus suggesting a benefit of allocating resources to timely outbreak detection and response. PMID:29791432
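A minimal deterministic caricature of the model described above (not the authors' stochastic, agent-based code; every parameter value here is invented) couples local SIR dynamics to a vaccination-effort field that diffuses from a central release point on a 1D periodic grid:

```python
import numpy as np

def outbreak_size(diff, n=101, steps=2000, dt=0.05,
                  beta=0.4, gamma=0.1, nu=0.3, di=0.5):
    """Final outbreak size for SIR dynamics on a 1D periodic grid where
    a fixed total vaccination effort diffuses from the central cell."""
    S = np.ones(n); I = np.zeros(n); R = np.zeros(n)
    I[n // 2] = 0.01; S[n // 2] -= 0.01      # seed infection at centre
    V = np.zeros(n); V[n // 2] = 1.0         # vaccination effort field
    for _ in range(steps):
        lap_v = np.roll(V, 1) + np.roll(V, -1) - 2 * V
        lap_i = np.roll(I, 1) + np.roll(I, -1) - 2 * I
        inf = beta * S * I                   # local mass-action infection
        vac = nu * V * S                     # vaccination of susceptibles
        V = V + dt * diff * lap_v            # effort spreads spatially
        S, I, R = (S - dt * (inf + vac),
                   I + dt * (inf - gamma * I + di * lap_i),
                   R + dt * (gamma * I + vac))  # recovered or vaccinated
    return float(np.sum(I + R))              # no longer susceptible

# Slow, intermediate, and fast diffusion of the vaccination effort
sizes = {d: outbreak_size(d) for d in (0.01, 0.5, 5.0)}
```

Sweeping `diff` over a finer grid is how one would probe the slow/fast trade-off the paper describes; the stochastic, density-dependent features of the actual model are omitted here.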
Bahrieh, Garsha; Özgür, Ebru; Koyuncuoğlu, Aziz; Erdem, Murat; Gündüz, Ufuk; Külah, Haluk
2015-08-01
This is a study of the in-plane and out-of-plane distributions of rotational torque (ROT-T) and effective electric field (EEF) in electrorotation (ER) devices with 3D electrodes, using finite element modeling (FEM) and experiments. The objective of this study is to investigate the electrical characteristics of ER devices with five different electrode geometries and to obtain an optimum structure for ER experiments. It also provides a comparison between the characteristics of 3D electrodes and traditionally used 2D electrodes. 3D distributions of EEF were studied by time-variant FEM. FEM results were verified experimentally by studying the rotation of biological cells. The results show that the variations of ROT-T and EEF over the measurement area of the devices are considerably large, which can potentially lead to misinterpretation of recorded data. Therefore, it is essential to specify the boundaries of the measurement area with minimum deviation from the central EEF. For this purpose, FE analyses were utilized to specify the optimal region; by confining measurements to this region, the dependency of ROT-T on the spatial position of the particles can be eliminated. Comparisons were made of the sustainability of the EEF and ROT-T distributions for each device to find an optimum design. Analyses of the devices prove that utilization of the 3D electrodes eliminates irregularities of EEF and ROT-T along the z-axis. The results show that triangular electrodes provide the highest sustainability for the in-plane ROT-T and EEF distributions, while the oblate elliptical and circular electrodes have the lowest variances along the z-axis. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauges alone as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to the rainfall-runoff process in distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river; thus, a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using inverse-distance weighting (IDW), inverse-distance and elevation weighting (IDEW) methods, and average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of the interpolated datasets was examined using cross-validation assessment.
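Plain IDW, the first of the interpolation methods named above, can be sketched in a few lines (the station layout and rainfall values are invented; IDEW would additionally weight by elevation difference):

```python
import math

def idw(x, y, stations, power=2.0):
    """Inverse-distance-weighted rainfall estimate at (x, y) from
    station tuples (sx, sy, rain). Closer gauges get larger weights."""
    num = den = 0.0
    for sx, sy, rain in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return rain            # query point sits exactly on a gauge
        w = 1.0 / d ** power
        num += w * rain
        den += w
    return num / den

# Two hypothetical gauges 10 km apart with 10 mm and 30 mm of rain
gauges = [(0.0, 0.0, 10.0), (10.0, 0.0, 30.0)]
mid = idw(5.0, 0.0, gauges)        # equidistant point -> simple average
```

The `power` exponent controls how quickly a gauge's influence decays with distance, which is the parameter varied in the sensitivity analysis.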
Real-time PCR probe optimization using design of experiments approach.
Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F
2016-03-01
Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of statistical design of experiments (DOE) approach as a general guideline for probe optimization and more specifically focus on design optimization of label-free hydrolysis probes that are designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.
Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions
1992-09-01
Excerpt (recovered from scanned text): the input data are translated into the associated analytic signal using the techniques discussed in Chapter Four. A MATLAB routine, PS = wvd(data, winlen, step, begin, theend), returns the Wigner-Ville time-frequency distribution for the input data. Chapter IV covers fixed kernel distributions, beginning with the Wigner-Ville distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yanping Guo; Abhishek Yadav; Tanju Karanfil
Adsorption of trichloroethylene (TCE) and atrazine, two synthetic organic contaminants (SOCs) having different optimum adsorption pore regions, by four activated carbons and an activated carbon fiber (ACF) was examined. Adsorbents included two coconut-shell based granular activated carbons (GACs), two coal-based GACs (F400 and HD4000) and a phenol formaldehyde-based activated carbon fiber. The selected adsorbents had a wide range of pore size distributions but similar surface acidity and hydrophobicity. Single solute and preloading (with a dissolved organic matter (DOM)) isotherms were performed. Single solute adsorption results showed that (i) the adsorbents having higher amounts of pores with sizes about the dimensions of the adsorbate molecules exhibited higher uptakes, (ii) there were some pore structure characteristics, not completely captured by pore size distribution analysis, that also affected the adsorption, and (iii) the BET surface area and total pore volume were not the primary factors controlling the adsorption of SOCs. The preloading isotherm results showed that for TCE, adsorbing primarily in pores <10 Å, the highly microporous ACF and GACs, acting like molecular sieves, exhibited the highest uptakes. For atrazine, with an optimum adsorption pore region of 10-20 Å that overlaps with the adsorption region of some DOM components, the GACs with a broad pore size distribution and high pore volumes in the 10-20 Å region had the least impact of DOM on the adsorption. 25 refs., 3 figs., 3 tabs.
Theoretical and subjective bit assignments in transform picture
NASA Technical Reports Server (NTRS)
Jones, H. W., Jr.
1977-01-01
It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
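The "well-known rule" for a Gaussian input with mean-square error allocates bits according to the log of each transform coefficient's variance relative to their geometric mean; a standard textbook form (not code from the paper) is:

```python
import math

def bit_assignment(variances, avg_bits):
    """Classical high-rate bit-allocation rule for transform coefficients:
    b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(variances)).
    The allocations sum to avg_bits * len(variances) by construction."""
    logs = [math.log2(v) for v in variances]
    gmean_log = sum(logs) / len(logs)      # log2 of the geometric mean
    return [avg_bits + 0.5 * (lv - gmean_log) for lv in logs]

# Hypothetical coefficient variances for a 4-point transform, 2 bits/sample
bits = bit_assignment([16.0, 4.0, 1.0, 1.0], avg_bits=2.0)
```

In practice the real-valued allocations are rounded and negative values clipped to zero, which is one reason subjective assignments for small blocks can depart from the rule.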
The Partitioning of Triclosan between Aqueous and Particulate Phases in the Hudson River Estuary
The distribution of Triclosan within the Hudson River Estuary can be explained by a balance among the overall effluent inputs from municipal sewage treatment facilities, dilution of Triclosan concentrations in the water column with freshwater and seawater inputs, removal of Tricl...
VERA and VERA-EDU 3.5 Release Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sieger, Matt; Salko, Robert K.; Kochunas, Brendan M.
The Virtual Environment for Reactor Applications components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for physics integration with data transfer and coupled-physics iterative solution algorithms. Neutronics analysis can be performed for 2D lattice, 2D core and 3D core problems for pressurized water reactor geometries and can be used to calculate criticality and pin-wise fission rate distributions for input fuel compositions. MPACT uses the Method of Characteristics (MOC) transport approach for 2D problems. For 3D problems, MPACT uses the 2D/1D method, which applies 2D MOC in the radial plane and diffusion or SPn in the axial direction. MPACT includes integrated cross section capabilities that provide problem-specific cross sections generated using the subgroup methodology. The code can run both 2D and 3D problems in parallel to reduce overall run time. A thermal-hydraulics capability is provided with CTF (an updated version of COBRA-TF) that allows thermal-hydraulics analyses for single and multiple assemblies using the simplified VERA common input. This distribution also includes coupled neutronics/thermal-hydraulics capabilities to allow calculations using MPACT coupled with CTF. The VERA fuel rod performance component BISON calculates, on a 2D or 3D basis, fuel rod temperature, fuel rod internal pressure, free gas volume, clad integrity and fuel rod waterside diameter. These capabilities allow simulation of power cycling, fuel conditioning and deconditioning, high-burnup performance, power uprate scoping studies, and accident performance.
Input/Output capabilities include the VERA Common Input (VERAIn) script, which converts the ASCII common input file to the intermediate XML used to drive all of the physics codes in the VERA Core Simulator (VERA-CS). VERA component codes either read the VERA XML format directly or provide a preprocessor which can convert the XML into native input. VERAView is an interactive graphical interface for the visualization and engineering analysis of output data from VERA. The Python-based software is easy to install and intuitive to use, and provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations. Testing within CASL has focused primarily on Westinghouse four-loop reactor geometries and conditions, with example problems included in the distribution.
Arefi-Oskoui, Samira; Khataee, Alireza; Vatanpour, Vahid
2017-07-10
In this research, MgAl-CO3^2- nanolayered double hydroxide (NLDH) was synthesized through a facile coprecipitation method, followed by a hydrothermal treatment. The prepared NLDHs were used as a hydrophilic nanofiller for improving the performance of PVDF-based ultrafiltration membranes. The main objective of this research was to obtain the optimized formula of the NLDH/PVDF nanocomposite membrane presenting the best performance, using computational techniques as a cost-effective method. To this aim, an artificial neural network (ANN) model was developed for modeling and expressing the relationship between the performance of the nanocomposite membrane (pure water flux, protein flux and flux recovery ratio) and the affecting parameters, including the NLDH, PVP 29000 and polymer concentrations. The effects of the mentioned parameters and the interactions between them were investigated using the contour plots predicted with the developed model. Scanning electron microscopy (SEM), atomic force microscopy (AFM), and water contact angle techniques were applied to characterize the nanocomposite membranes and to interpret the predictions of the ANN model. The developed ANN model was introduced to a genetic algorithm (GA) as a bioinspired optimizer to determine the optimum values of the input parameters leading to high pure water flux, protein flux, and flux recovery ratio. The optimum values for the NLDH, PVP 29000 and PVDF concentrations were determined to be 0.54, 1, and 18 wt %, respectively. The performance of the nanocomposite membrane prepared using the optimum values proposed by GA was investigated experimentally, and the results were in good agreement with the values predicted by the ANN model, with error lower than 6%. This good agreement confirmed that the nanocomposite membrane performance could be successfully modeled and optimized by the ANN-GA system.
NASA Astrophysics Data System (ADS)
Tahir, Abdullah Mohd; Lair, Noor Ajian Mohd; Wei, Foo Jun
2018-05-01
Shielded Metal Arc Welding (SMAW), or stick welding, is defined as a welding process that melts and joins metals with an arc between a welding filler (electrode rod) and the workpieces. The main objective was to study the mechanical properties of welded metal under different types of welding filler and current for SMAW. This project utilized Design of Experiments (DOE), adopting a full factorial design. The independent variables were the type of welding filler and the welding current, whereas the other welding parameters were fixed at their optimum values. The levels for welding filler type were the filler models used (E6013, E7016 and E7018), and the levels for welding current were 80 A and 90 A. The responses were the mechanical properties of the welded material, including tensile strength and hardness. The experiment was analyzed using two-way ANOVA. The results prove that there are significant effects of welding filler type and current level on the tensile strength and hardness of the welded metal. At the same time, the ANOVA results and interaction plots indicate that there are significant interactions between welding filler type and welding current on both the hardness and tensile strength of the welded metals, which has never been reported before. This project found that as the amount of heat input increases, mechanical properties such as tensile strength and hardness decrease. The optimum tensile strength of the welded metal is produced by welding filler E7016, and the optimum hardness by welding filler E7018, at a welding current of 80 A.
Delmas, Henri; Le, Ngoc Tuan; Barthe, Laurie; Julcour-Lebigue, Carine
2015-07-01
This work aims at investigating for the first time the key sonication (US) parameters - power density (DUS), intensity (IUS), and frequency (FS), down to the audible range - under varied hydrostatic pressure (Ph) and low-temperature isothermal conditions (to avoid any thermal effect). The selected application was activated sludge disintegration, a major industrial US process. For a rational approach, all comparisons were made at the same specific energy input (ES, US energy per solid weight), which is also the relevant economic criterion. The decoupling of power density and intensity was obtained either by changing the sludge volume or, most often, by changing the probe diameter, all other characteristics being unchanged. Comprehensive results were obtained by varying the hydrostatic pressure at given power density and intensity. In all cases, marked maxima of sludge disintegration appeared at optimum pressures, whose values increased with increasing power intensity and density. Such an optimum was expected due to the opposing effects of increasing hydrostatic pressure: a higher cavitation threshold, hence smaller and fewer bubbles, but higher temperature and pressure at the end of collapse. In addition, the first attempt to lower the US frequency into the audible range was very successful: under all operating conditions (DUS, IUS, Ph, sludge concentration and type), higher sludge disintegration was obtained at 12 kHz than at 20 kHz. The same optimum pressures were observed at 12 and 20 kHz. At the same energy consumption, the best conditions - obtained at 12 kHz, maximum power density 720 W/L and 3.25 bar - provided about 100% improvement with respect to usual conditions (1 bar, 20 kHz). Important energy savings and equipment size reductions may then be expected. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimum Laser Beam Characteristics for Achieving Smoother Ablations in Laser Vision Correction.
Verma, Shwetabh; Hesser, Juergen; Arba-Mosquera, Samuel
2017-04-01
Controversial opinions exist regarding the optimum laser beam characteristics for achieving smoother ablations in laser-based vision correction. The purpose of the study was to outline a rigorous simulation model of the shot-by-shot ablation process. The impact of laser beam characteristics such as super-Gaussian order, truncation radius, spot geometry, spot overlap, and lattice geometry was tested on ablation smoothness. Given the super-Gaussian order, the theoretical beam profile was determined following the Lambert-Beer model. The intensity profile of the beam originating from an excimer laser was measured with a beam profiler camera. For both the measured and theoretical beam profiles, two spot geometries (round and square spots) were considered, and two types of lattices (reticular and triangular) were simulated with varying spot overlaps and ablated materials (cornea or polymethylmethacrylate [PMMA]). The roughness in ablation was quantified by the root-mean-square per square root of layer depth. Truncating the beam profile increases the roughness in ablation; Gaussian profiles theoretically result in smoother ablations; round spot geometries produce lower roughness than square geometries; triangular lattices theoretically produce lower roughness than the reticular lattice; theoretically modeled beam profiles show lower roughness than the measured beam profile; and the simulated roughness on PMMA tends to be lower than on human cornea. For the given input parameters, optimum parameters minimizing the roughness were found. Theoretically, the proposed model can be used for achieving smoothness with laser systems used for ablation processes at relatively low cost. This model may improve the quality of results and could be directly applied to improving postoperative surface quality.
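The super-Gaussian spot profile at the heart of such simulations can be sketched as follows; the normalization, the 1/e² width convention, and the hard truncation are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def super_gaussian(r, w=1.0, n=2, r_trunc=None):
    """Normalized super-Gaussian fluence profile of order n (n = 1 is an
    ordinary Gaussian; larger n gives a flatter top and steeper edges).
    Values beyond an optional truncation radius are clipped to zero."""
    f = np.exp(-2.0 * (r / w) ** (2 * n))
    if r_trunc is not None:
        f = np.where(r <= r_trunc, f, 0.0)   # hard beam truncation
    return f

r = np.linspace(0.0, 2.0, 201)
gauss = super_gaussian(r, n=1)   # ordinary Gaussian profile
flat = super_gaussian(r, n=4)    # high-order: flat top, abrupt edge
```

Stacking many such single-shot craters on a reticular or triangular lattice and evaluating the residual surface is what produces the roughness comparisons reported above.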
iSEDfit: Bayesian spectral energy distribution modeling of galaxies
NASA Astrophysics Data System (ADS)
Moustakas, John
2017-08-01
iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone. After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.
Particle identification with neural networks using a rotational invariant moment representation
NASA Astrophysics Data System (ADS)
Sinkus, Ralph; Voss, Thomas
1997-02-01
A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
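The PCA-plus-rescaling preprocessing described above amounts to whitening the moment vector; a generic NumPy sketch (the dimensions and synthetic data are invented, not the calorimeter moments) is:

```python
import numpy as np

def whiten(X, eps=1e-8):
    """Decorrelate features with PCA and rescale each component by its
    standard deviation, so every network input is of order one."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # principal axes
    Z = Xc @ vecs                            # rotate onto those axes
    return Z / np.sqrt(vals + eps)           # unit variance per component

# Synthetic "moments" with very different scales and correlations
rng = np.random.default_rng(0)
mix = np.array([[3.0, 0.0, 0.0],
                [1.0, 0.5, 0.0],
                [0.0, 0.0, 0.1]])
X = rng.normal(size=(500, 3)) @ mix
W = whiten(X)
```

After this transform the sample covariance of the inputs is (numerically) the identity, which is what equalizes the sensitivity of the network across moments of very different magnitude.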
Competition and Cooperation of Distributed Generation and Power System
NASA Astrophysics Data System (ADS)
Miyake, Masatoshi; Nanahara, Toshiya
Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to a power system, coordination and competition between distributed generators and large-scale power sources will be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best response correspondence of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Secondly, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitpas, Guillaume; Whitesides, Russel
UQHCCI_2 propagates the uncertainties of mass-averaged quantities (temperature, heat capacity ratio) and of the output performance metrics (IMEP, heat release, CA50, and RI) of an HCCI engine test bench, using the pressure trace, the intake and exhaust molar fractions, and the IVC temperature distributions as inputs (those inputs may be computed using another code or entered independently).
Distributional Effects and Individual Differences in L2 Morphology Learning
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kwoka, Nicole; Kempe, Vera
2017-01-01
Second language (L2) learning outcomes may depend on the structure of the input and learners' cognitive abilities. This study tested whether less predictable input might facilitate learning and generalization of L2 morphology while evaluating contributions of statistical learning ability, nonverbal intelligence, phonological short-term memory, and…
NASA Astrophysics Data System (ADS)
Miyake, Y.; Usui, H.; Kojima, H.
2010-12-01
In a tenuous space plasma environment, photoelectrons emitted due to solar illumination produce a high-density photoelectron cloud localized in the vicinity of a spacecraft body and an electric field sensor. The photoelectron current emitted from the sensor has also received considerable attention because it becomes a primary factor in determining the floating potentials of the sunlit spacecraft and sensor bodies. Since an asymmetric photoelectron distribution between the sunlit and shaded sides of the spacecraft occasionally causes a spurious sunward electric field, quantitative evaluation of the photoelectron distribution around the spacecraft, and of its influence on electric field measurements, is required by means of a numerical approach. In the current study, we applied Particle-in-Cell (PIC) plasma simulation to the analysis of the photoelectron environment around spacecraft. PIC modeling self-consistently captures the plasma kinetics, which enables us to simulate the formation of the photoelectron cloud as well as the spacecraft and sensor charging in a self-consistent manner. We report the progress of an analysis of the photoelectron environment around MEFISTO, an electric field instrument for the BepiColombo/MMO spacecraft to Mercury’s magnetosphere. The photoelectron guard electrode is a key technology for ensuring an optimum photoelectron environment. We show simulation results on the guard electrode's effects on surrounding photoelectrons and discuss a guard operation condition for producing the optimum photoelectron environment. We also address another important issue: how the guard electrode can mitigate the undesirable influence of an asymmetric photoelectron distribution on electric field measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, F.T.; Young, M.L.; Miller, L.A.
Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data, along with short biographies of the 16 experts who participated in the project.
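The propagation step can be illustrated with a minimal Monte Carlo sketch: sample inputs from assumed distributions and push them through a simple ground-level, centerline Gaussian plume formula. All parameter values and distributions here are illustrative assumptions, not the elicited ones:

```python
import numpy as np

def plume_conc(Q, u, sig_y, sig_z, H):
    """Ground-level centerline concentration of a Gaussian plume with ground reflection."""
    return Q / (np.pi * u * sig_y * sig_z) * np.exp(-H**2 / (2 * sig_z**2))

rng = np.random.default_rng(1)
n = 100_000
# Assumed (illustrative) uncertainty distributions for the input variables:
Q = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)  # source term (kg/s)
u = rng.normal(5.0, 1.0, size=n).clip(0.5)              # wind speed (m/s)
sig_y = rng.lognormal(np.log(80.0), 0.4, size=n)        # lateral dispersion (m)
sig_z = rng.lognormal(np.log(40.0), 0.4, size=n)        # vertical dispersion (m)

C = plume_conc(Q, u, sig_y, sig_z, H=50.0)              # propagated output sample
lo, med, hi = np.percentile(C, [5, 50, 95])             # resulting output distribution
print(f"5th/50th/95th percentile concentration: {lo:.2e} {med:.2e} {hi:.2e}")
```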
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases.
Highlights
• Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
• Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
• Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
• A new method of determining the FIR filter order for processing pressure traces was proposed
• The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
Stretchable Conductive Elastomers for Soldier Biosensing Applications: Final Report
2016-03-01
Approved for public release; distribution is unlimited. … the electrical impedance tunability that we required. Representative data for resistance versus volume … Technology Directorate's (VTD) electric field mediated morphing wing research effort. Fig. 5 Resistance values of EEG electrodes as a function of … extend the resistance range of the developed polymer EEG electrodes to potentially provide insight into defining an optimum electrical performance for …
NASA Astrophysics Data System (ADS)
Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang
2018-05-01
In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logical systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady performance of synchronisation errors are guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperatively uniformly ultimately boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
Reliability Based Design for a Raked Wing Tip of an Airframe
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2011-01-01
A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. Design is formulated for an accepted level of risk or reliability. The design variables, weight and the constraints became functions of reliability. Uncertainties in the load, strength and the material properties, as well as the design variables, were modeled as random parameters with specified distributions, like normal, Weibull or Gumbel functions. The objective function and constraint, or a failure mode, became derived functions of the risk-level. Solution to the problem produced the optimum design with weight, variables and constraints as a function of the risk-level. Optimum weight versus reliability traced out an inverted-S shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.
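The reliability evaluation underlying such a design loop can be illustrated with a direct Monte Carlo sketch for a single failure mode; the Gumbel load and Weibull strength distributions below are assumed for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
# Assumed (illustrative) random parameters, echoing the distribution types in the study:
load = rng.gumbel(100.0, 10.0, n)        # applied load (kN), Gumbel extreme-value
strength = rng.weibull(8.0, n) * 160.0   # member strength (kN), Weibull with scale 160

# Reliability for one failure mode = P(strength exceeds load), estimated directly
reliability = np.mean(strength > load)
print(f"estimated reliability: {reliability:.4f}")
```

Sweeping a design variable (e.g. a section area scaling the strength) and repeating this estimate would trace out the weight-versus-reliability curve described in the abstract.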
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
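A minimal NumPy implementation of the canonical DE/rand/1/bin scheme analyzed in the paper, run on a sphere function with a unique global optimum (the parameter values are typical choices, not taken from the paper):

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=300, seed=0):
    """Canonical DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                 # binomial crossover mask
            cross[rng.integers(dim)] = True              # ensure at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                             # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))                 # unique global optimum at the origin
x_best, f_best = de_rand_1_bin(sphere, bounds=[(-5, 5)] * 5)
print(f"best objective value: {f_best:.3e}")
```

Consistent with the paper's result, the population collapses toward the unique optimum; the final trial-solution distribution is narrowly concentrated around the origin.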
Berg, H C; Purcell, E M
1977-01-01
Statistical fluctuations limit the precision with which a microorganism can, in a given time T, determine the concentration of a chemoattractant in the surrounding medium. The best a cell can do is to monitor continually the state of occupation of receptors distributed over its surface. For nearly optimum performance only a small fraction of the surface need be specifically adsorbing. The probability that a molecule that has collided with the cell will find a receptor is Ns/(Ns + pi a), if N receptors, each with a binding site of radius s, are evenly distributed over a cell of radius a. There is ample room for many independent systems of specific receptors. The adsorption rate for molecules of moderate size cannot be significantly enhanced by motion of the cell or by stirring of the medium by the cell. The least fractional error attainable in the determination of a concentration c is approximately (TcaD)^(-1/2), where D is the diffusion constant of the attractant. The number of specific receptors needed to attain such precision is about a/s. Data on bacteriophage adsorption, bacterial chemotaxis, and chemotaxis in a cellular slime mold are evaluated. The chemotactic sensitivity of Escherichia coli approaches that of the cell of optimum design. PMID:911982
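The two key expressions, the capture probability Ns/(Ns + pi a) and the precision limit ~(TcaD)^(-1/2), are easy to evaluate numerically; the cell and attractant parameters below are rough order-of-magnitude assumptions, not values from the paper:

```python
import math

def capture_probability(N, s, a):
    """Probability that a molecule hitting the cell finds one of N receptors: Ns/(Ns + pi a)."""
    return N * s / (N * s + math.pi * a)

def min_fractional_error(T, c, a, D):
    """Lower bound on the fractional error of a concentration estimate, ~(T c a D)^(-1/2)."""
    return (T * c * a * D) ** -0.5

a = 1e-4   # cell radius, cm (~1 micron)
s = 1e-7   # receptor binding-site radius, cm (~1 nm)
D = 1e-5   # small-molecule diffusion constant, cm^2/s
c = 6e11   # concentration, molecules/cm^3 (~1 micromolar)
T = 1.0    # averaging time, s

# Half-maximal capture needs only N = pi*a/s receptors, covering a tiny area fraction:
N_half = math.pi * a / s
area_fraction = N_half * s**2 / (4 * a**2)   # N * pi*s^2 disks over 4*pi*a^2 surface
print(f"N for 50% capture: {N_half:.0f}, occupying {area_fraction:.2e} of the surface")
print(f"least fractional error: {min_fractional_error(T, c, a, D):.1e}")
```

With these numbers a few thousand receptors suffice for half-maximal capture while covering well under 1% of the cell surface, which is the "ample room" argument in the abstract.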
Geoscience technology application to optimize field development, Seligi Field, Malay Basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, M.S.; Wiggins, B.D.
1994-07-01
Integration of well log, core, 3-D seismic, and engineering data within a sequence stratigraphic framework has enabled prediction of reservoir distribution and optimum development of Seligi field. Seligi is the largest field in the Malay Basin, with half of the reserves within lower Miocene Group J reservoirs. These reservoirs consist of shallow marine sandstones and estuarine sandstones predominantly within an incised valley. Variation in reservoir quality has been a major challenge in developing Seligi. Recognizing and mapping four sequences within the Group J incised valley fill has resulted in a geologic model for predicting the distribution of good quality estuarine reservoir units and intercalated low-permeability sand/shale units deposited during marine transgressions. These low-permeability units segregate the reservoir fluids, causing differential contact movement in response to production, thus impacting completion strategy and well placement. Seismic calibration shows that a large impedance contrast exists between the low-permeability rock and adjacent good quality oil sand. Application of sequence stratigraphic/facies analysis, coupled with the ability to identify the low-permeability units seismically, is enabling optimum development of each of the four sequences at Seligi.
NASA Astrophysics Data System (ADS)
Gajda, Jerzy K.; Niesterowicz, Andrzej; Mazurkiewicz, Henryk
1995-03-01
A high number of osseous diseases, particularly of the backbone and hip-joint regions, results in a need for their overall treatment and prevention. Two basic treatment methods are used: physical exercises at an early stage of the illness, and surgical treatment in an advanced stage. Recently, in operational treatment of coxarthrosis the elements of the joint (acetabulum and capitellum) have been replaced by their artificial counterparts, despite some drawbacks and unknowns related to this kind of treatment. In order to check the effectiveness of this treatment and to eliminate its drawbacks we have tested the joint by means of the speckle photography method. The objective of this paper is an attempt to evaluate stress and displacement distributions in a system consisting of an artificial acetabulum and capitellum and a natural bone, in order to determine an optimum fitting of the artificial elements that guarantees a uniform distribution of stresses corresponding to the anatomical and physiological parameters of the hip-joint. The speckle photographs have been analyzed point by point with the help of an algorithm for striped-image processing.
NASA Astrophysics Data System (ADS)
Bodin, P.; Olin, S.; Pugh, T. A. M.; Arneth, A.
2014-12-01
Food security can be defined as stable access to food of good nutritional quality. In Sub-Saharan Africa access to food is strongly linked to local food production and the capacity to generate enough calories to sustain the local population. It is therefore important in these regions not only to generate sufficiently high yields but also to reduce interannual variability in food production. Traditionally, climate impact simulation studies have focused on factors that underlie maximum productivity, ignoring the variability in yield. Using Modern Portfolio Theory, a method stemming from economics, we here calculate optimum current and future crop selections that maintain current yield while minimizing variance, vs. maintaining variance while maximizing yield. Based on simulated yields using the LPJ-GUESS dynamic vegetation model, the results show that the current cropland distribution for many crops is close to these optimum distributions. Even so, the optimizations displayed substantial potential to either increase food production and/or decrease its variance regionally. Our approach can also be seen as a method to create future scenarios for the sown areas of crops in regions where local food production is important for food security.
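The minimum-variance side of such an optimization can be sketched with the classical closed-form Markowitz solution, w proportional to the inverse covariance matrix times a vector of ones, on toy yield series. The yields are invented (the study used LPJ-GUESS simulations), and a real cropland allocation would also require nonnegative area shares, which calls for quadratic programming rather than this closed form:

```python
import numpy as np

def min_variance_weights(yields):
    """Fully-invested area shares minimizing portfolio yield variance (no sign constraint):
    w proportional to inv(Cov) @ 1, the minimum-variance corner of the Markowitz frontier."""
    cov = np.cov(yields, rowvar=False)
    inv_one = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_one / inv_one.sum()

rng = np.random.default_rng(2)
years, crops = 30, 4
# Toy simulated annual yields (t/ha) for four crops with different variability:
base = np.array([3.0, 2.5, 4.0, 1.8])
sd = np.array([0.6, 0.2, 0.9, 0.3])
Y = base + sd * rng.normal(size=(years, crops))

w = min_variance_weights(Y)
port_var = w @ np.cov(Y, rowvar=False) @ w   # variance of the mixed-crop "portfolio"
print(w.round(3), f"portfolio sd {np.sqrt(port_var):.3f}")
```

By construction the mixed allocation is never more variable than the least-variable single crop, which is the stabilizing effect the abstract describes.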
Climate applications for NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature
NASA Astrophysics Data System (ADS)
Boyer, T.; Banzon, P. V. F.; Liu, G.; Saha, K.; Wilson, C.; Stachniewicz, J. S.
2015-12-01
Few sea surface temperature (SST) datasets from satellites have the long temporal span needed for climate studies. The NOAA Daily Optimum Interpolation Sea Surface Temperature (DOISST) on a 1/4° grid, produced at National Centers for Environmental Information, is based primarily on SSTs from the Advanced Very High Resolution Radiometer (AVHRR), available from 1981 to the present. AVHRR data can contain biases, particularly when aerosols are present. Over the three decade span, the largest departure of AVHRR SSTs from buoy temperatures occurred during the Mt Pinatubo and El Chichon eruptions. Therefore, in DOISST, AVHRR SSTs are bias-adjusted to match in situ SSTs prior to interpolation. This produces a consistent time series of complete SST fields that is suitable for modelling and investigating local climate phenomena like El Nino or the Pacific warm blob in a long term context. Because many biological processes and animal distributions are temperature dependent, there are also many ecological uses of DOISST (e.g., coral bleaching thermal stress, fish and marine mammal distributions), thereby providing insights into resource management in a changing ocean. The advantages and limitations of using DOISST for different applications will be discussed.
Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.
Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo
2013-06-20
A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by dynamic PER threshold coupling intensity (TCI) and nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and original distribution coupling intensity, TCI and ISL can be made self-adaptive to determine contributing coupling points inside the polarizing devices. Distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with Thorlabs commercial instrument are also conducted, and results show high consistency. In addition, the optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.
NASA Astrophysics Data System (ADS)
Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.
2015-03-01
The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed state of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem, as many switching combinations are possible in a distribution system. Hence optimization techniques are finding greater importance in reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DG). The algorithm is tested on the IEEE 33-bus system installed with DGs, and the results are compared with a binary genetic algorithm. It is found that the binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, and hence in enhancing the performance of the radial distribution system. Results are found to be optimum when DGs are added to the test system, which demonstrates the impact of DGs on the distribution system.
Formation of propagation invariant laser beams with anamorphic optical systems
NASA Astrophysics Data System (ADS)
Soskind, Y. G.
2015-03-01
Propagation invariant structured laser beams play an important role in several photonics applications. A majority of propagation invariant beams are usually produced in the form of laser modes emanating from stable laser cavities. This work shows that anamorphic optical systems can be effectively employed to transform input propagation invariant laser beams and produce a variety of alternative propagation invariant structured laser beam distributions with different shapes and phase structures. This work also presents several types of anamorphic lens systems suitable for transforming the input laser modes into a variety of structured propagation invariant beams. The transformations are applied to different laser mode types, including Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian field distributions. The influence of the relative azimuthal orientation between the input laser modes and the anamorphic optical systems on the resulting transformed propagation invariant beams is presented as well.
Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip.
Lentine, A L; Reiley, D J; Novotny, R A; Morrison, R L; Sasian, J M; Beckman, M G; Buchholz, D B; Hinterlong, S J; Cloonan, T J; Richards, G W; McCormick, F B
1997-03-10
We describe a new optoelectronic switching system demonstration that implements part of the distribution fabric for a large asynchronous transfer mode (ATM) switch. The system uses a single optoelectronic VLSI modulator-based switching chip with more than 4000 optical input-outputs. The optical system images the input fibers from a two-dimensional fiber bundle onto this chip. A new optomechanical design allows the system to be mounted in a standard electronic equipment frame. A large section of the switch was operated as a 208-Mbits/s time-multiplexed space switch, which can serve as part of an ATM switch by use of an appropriate out-of-band controller. A larger section with 896 input light beams and 256 output beams was operated at 160 Mbits/s as a slowly reconfigurable space switch.
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
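NEWTONP's core task, solving for the component probability p that yields a k-out-of-n system reliability V by Newton's method, can be sketched as follows (a Python illustration of the algorithm, not the original C program):

```python
import math

def koutofn_reliability(p, k, n):
    """P(at least k of n components work), each working independently with probability p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def solve_p(V, k, n, tol=1e-12):
    """Newton's method for the component reliability p giving system reliability V."""
    # The k-out-of-n tail is a regularized incomplete beta function of p, so its
    # derivative has the closed form n*C(n-1, k-1) * p^(k-1) * (1-p)^(n-k).
    p = 0.5
    for _ in range(100):
        f = koutofn_reliability(p, k, n) - V
        df = n * math.comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)
        step = f / df
        p = min(max(p - step, 1e-12), 1 - 1e-12)   # keep the iterate inside (0, 1)
        if abs(step) < tol:
            break
    return p

p = solve_p(V=0.999, k=2, n=3)   # 2-out-of-3 system with 99.9% required reliability
print(f"required component reliability: {p:.6f}")
```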
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
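The synthetic-data experiment described above can be miniaturized: convolve a random tracer input with a known travel-time distribution, corrupt the output with noise, and re-estimate the parameter in the time domain. Here a single-parameter exponential travel-time distribution and least-squares fitting stand in for the study's fuller setup:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_ttd(t, tau):
    """Exponential travel-time distribution with mean residence time tau (days)."""
    return np.exp(-t / tau) / tau

def stream_tracer(precip, tau, dt=1.0):
    """Stream tracer signal = convolution of the precipitation tracer input with the TTD."""
    t = np.arange(len(precip)) * dt
    h = exp_ttd(t, tau) * dt
    return np.convolve(precip, h)[: len(precip)]

rng = np.random.default_rng(3)
n, tau_true = 2000, 60.0                           # daily record, 60-day mean residence time
precip = rng.normal(0.0, 1.0, n)                   # synthetic tracer input (anomalies)
stream = stream_tracer(precip, tau_true)           # damped, mixed stream output
stream_noisy = stream + rng.normal(0.0, 0.01, n)   # added measurement error

# Time-domain estimation: choose tau to maximize fit between modeled and "measured" output
fit = lambda x, tau: stream_tracer(precip, tau)
tau_hat, _ = curve_fit(fit, np.arange(n), stream_noisy, p0=[30.0], bounds=(1.0, 1000.0))
print(f"true tau = {tau_true}, recovered tau = {tau_hat[0]:.1f}")
```

Re-running this with shorter records, coarser sampling, or larger errors (and with a frequency-domain fit for comparison) reproduces the kind of sensitivity analysis the abstract outlines.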
Wrapping Python around MODFLOW/MT3DMS based groundwater models
NASA Astrophysics Data System (ADS)
Post, V.
2008-12-01
Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data, such as heads, fluxes and concentrations. Typically, all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
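A minimal sketch of the kind of post-processing this approach enables, using only NumPy. The file layout, grid dimensions, and physical constants below are assumptions for illustration; they are not the library's actual formats or API:

```python
import numpy as np

# Hypothetical grid dimensions of a single MODFLOW-style model layer.
nrow, ncol = 40, 60
dx = dy = 25.0                         # cell size in metres (assumed)

# Write a synthetic concentration array to a raw binary file, standing in
# for an MT3DMS-style unformatted output record.
conc = np.random.default_rng(1).uniform(0.0, 5.0, size=(nrow, ncol))
conc.astype("float32").tofile("conc.bin")

# Read it back the way a post-processing routine might.
data = np.fromfile("conc.bin", dtype="float32").reshape(nrow, ncol)

# Derived quantity: solute mass within a sub-region of the model domain
# (porosity and saturated thickness are illustrative constants).
porosity, thickness = 0.3, 10.0
region = data[10:20, 15:30]
mass = float(region.sum() * dx * dy * thickness * porosity)
```

The same `data` array can be passed straight to matplotlib's `contourf` and `quiver` for the contour-plus-flow-vector plots mentioned in the abstract.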
Coaxial prime focus feeds for paraboloidal reflectors
NASA Technical Reports Server (NTRS)
Collin, R. E.; Schilling, H.; Hebert, L.
1982-01-01
A TE11-TM11 dual-mode coaxial feed for use in prime focus paraboloidal antenna systems is investigated. The scattering matrix parameters of the internal bifurcation junction were determined by the residue calculus technique. The scattering parameters and radiation fields of the aperture were found from the Weinstein solution. The optimum moding ratio for minimum cross-polarization was determined along with the corresponding optimum feed dimensions. A peak cross-polarization level of -58 dB is predicted. The frequency characteristics were also investigated, and a bandwidth of 5% is predicted over which the cross-polarization remains below -30 dB, the input VSWR is below 1.15, and the phase error is less than 10 deg. Theoretical radiation patterns and efficiency curves for a paraboloidal reflector illuminated by this feed were computed. The predicted sidelobe level is below -30 dB and aperture efficiencies greater than 70% are possible. Experimental results are also presented that substantiate the theoretical results. In addition, experimental results for a 'short-cup' coaxial feed are given. The report includes extensive design data for the dual-mode feed along with performance curves showing cross-polarization as a function of feed parameters. The feed is useful for low-cost ground-based receiving antennas for use in the direct television satellite broadcasting service.
OFF-DESIGN PERFORMANCE OF RADIAL INFLOW TURBINES
NASA Technical Reports Server (NTRS)
Wasserbauer, C. A.
1994-01-01
This program calculates the off-design performance of radial inflow turbines. The program uses a one-dimensional solution of flow conditions through the turbine along the main streamline. The loss model accounts for stator, rotor, incidence, and exit losses. Program features include consideration of stator and rotor trailing-edge blockage and computation of performance to limiting load. Stator loss (the loss in kinetic energy across the stator) is proportional to the average kinetic energy in the blade row and is represented in the program by an equation that includes a stator loss coefficient determined from design-point performance and then assumed to be constant for the off-design calculations. Minimum incidence loss does not occur at zero incidence angle with respect to the rotor blade, but at some optimum flow angle. At high pressure ratios the level of rotor inlet velocity was found to have an excessive influence on the loss; using the component of velocity in the direction of the optimum flow angle gave better correlation with experimental results. Overall turbine geometry and design-point values of efficiency, pressure ratio, and mass flow are needed as input. The output includes performance and velocity diagram parameters for any number of given speeds over a range of turbine pressure ratios. The program has been implemented on the IBM 7094 and operates in batch mode.
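The two loss terms described above can be sketched as follows. The functional forms and the coefficient values are illustrative stand-ins, not the NASA program's actual equations:

```python
import math

def stator_loss(c_avg, k_stator):
    """Stator kinetic-energy loss: proportional to the average kinetic
    energy in the blade row, with a loss coefficient k_stator fixed from
    design-point performance and held constant off-design."""
    return k_stator * 0.5 * c_avg ** 2

def incidence_loss(w_in, beta_deg, beta_opt_deg):
    """Incidence loss based on the component of relative velocity w_in
    normal to the optimum flow direction, so the loss is zero at the
    optimum flow angle rather than at zero geometric incidence
    (illustrative form only)."""
    off_angle = math.radians(beta_deg - beta_opt_deg)
    return 0.5 * (w_in * math.sin(off_angle)) ** 2
```

With this form, sweeping `beta_deg` over a speed line reproduces the qualitative behaviour the abstract describes: the loss bottoms out at `beta_opt_deg`, not at zero incidence.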
NASA Astrophysics Data System (ADS)
van der Ploeg, R.; Selby, D. S.; Cramwinckel, M.; Bohaty, S. M.; Sluijs, A.; Middelburg, J. J.
2016-12-01
The Middle Eocene Climatic Optimum (MECO) represents a 500 kyr period of global warming 40 million years ago associated with a rise in atmospheric CO2 concentrations, but its cause remains enigmatic. Moreover, on the timescale of the MECO, an increase in silicate weathering rates on the continents is expected to balance carbon input and restore the alkalinity of the oceans, but this is in sharp disagreement with observations of extensive carbonate dissolution. Here we show, based on osmium isotope ratios of marine sediments from three different sites, that CO2 rise and warming did not lead to enhanced continental weathering during the MECO, in contrast to expectations from carbon cycle theory. Remarkably, a minor shift to lower, more unradiogenic osmium isotope ratios rather indicates an episode of increased volcanism or reduced continental weathering. This disproves silicate weathering as a geologically constant feedback to CO2 variations. Rather, we suggest that global Early and Middle Eocene warmth diminished the weatherability of continental rocks, ultimately leading to CO2 accumulation during the MECO, and show the plausibility of this scenario using carbon cycle modeling simulations. We surmise a dynamic weathering feedback might explain multiple enigmatic phases of coupled climate and carbon cycle change in the Cretaceous and Cenozoic.
NASA Astrophysics Data System (ADS)
Lertwiram, Namzilp; Tran, Gia Khanh; Mizutani, Keiichi; Sakaguchi, Kei; Araki, Kiyomichi
Deploying relays can address the shadowing problem between a transmitter (Tx) and a receiver (Rx). Moreover, the Multiple-Input Multiple-Output (MIMO) technique has been introduced to improve wireless link capacity, and it can be applied in relay networks to enhance system performance. However, the efficiency of relaying schemes and relay placement has not been well investigated in experiment-based studies. This paper provides a propagation measurement campaign of a MIMO two-hop relay network in the 5 GHz band in an L-shaped corridor environment with various relay locations. Furthermore, this paper proposes a Relay Placement Estimation (RPE) scheme to identify the optimum relay location, i.e. the point at which the network performance is highest. Analysis of channel capacity shows that the relaying technique is beneficial over direct transmission in strong-shadowing environments, while it is ineffective in non-shadowing environments. In addition, the optimum relay location estimated with the RPE scheme agrees with the location where the network achieves the highest performance as identified by network capacity. Finally, the capacity analysis shows that two-way MIMO relaying employing network coding has the best performance, while the cooperative relaying scheme is not effective because the shadowing weakens the signal strength of the direct link.
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method.
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
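The core idea of such a distribution method, pushing the known distribution of a scalar random field through a deterministic monotone mapping, can be sketched generically as follows. The mapping and the lognormal field below are invented for illustration; they are not the paper's actual Buckley-Leverett mapping:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monotone decreasing mapping from a scalar random field
# (think: a streamline travel time tau) to saturation.
def saturation(tau):
    return 1.0 / (1.0 + tau)

# Distribution of the underlying scalar; in the method this is estimated
# from a few MC runs, here it is simply prescribed as lognormal.
tau = rng.lognormal(mean=0.0, sigma=0.8, size=200_000)
s = saturation(tau)

# Because the mapping is monotone decreasing,
#   P(S <= s0) = P(tau >= tau0)  with  tau0 = 1/s0 - 1,
# so the saturation CDF follows directly from the tau distribution.
s0 = 0.4
cdf_mapping = float((tau >= 1.0 / s0 - 1.0).mean())
cdf_empirical = float((s <= s0).mean())

# Quantiles (P10, P50, P90) transfer through the mapping the same way.
s_p50 = float(saturation(np.quantile(tau, 0.5)))
```

The point of the method is that `cdf_mapping` needs only the distribution of the scalar field, not a full MC ensemble of flow simulations.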
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation is critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy as well as time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669.
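The hill-climbing idea can be sketched in miniature: start from a random binary population matrix and keep only bit flips that move the column allele frequencies toward their targets. This is illustrative of the best-fit principle only; SimBA's actual algorithm also matches linkage disequilibrium and subpopulation structure:

```python
import numpy as np

def fit_population(target_freqs, n_indiv, n_iter=20_000, seed=0):
    """Hill-climb a 0/1 population matrix so that its column means
    (allele frequencies) approach the target frequencies."""
    rng = np.random.default_rng(seed)
    target = np.asarray(target_freqs)
    n_loci = len(target)
    pop = rng.integers(0, 2, size=(n_indiv, n_loci))

    def error(p):
        return float(((p.mean(axis=0) - target) ** 2).sum())

    best = error(pop)
    for _ in range(n_iter):
        i = rng.integers(n_indiv)
        j = rng.integers(n_loci)
        pop[i, j] ^= 1                 # propose flipping one allele
        e = error(pop)
        if e < best:
            best = e                   # keep improving moves
        else:
            pop[i, j] ^= 1             # revert otherwise
    return pop, best

pop, best = fit_population([0.1, 0.5, 0.9], n_indiv=50)
```

Because the error is separable over loci, this simple climber has no local minima and converges to the exact target frequencies whenever they are achievable at the chosen population size.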
Diet shift of lentic dragonfly larvae in response to reduced terrestrial prey subsidies
Kraus, Johanna M.
2010-01-01
Inputs of terrestrial plant detritus and nutrients play an important role in aquatic food webs, but the importance of terrestrial prey inputs in determining aquatic predator distribution and abundance has been appreciated only recently. I examined the numerical, biomass, and diet responses of a common predator, dragonfly larvae, to experimental reduction of terrestrial arthropod input into ponds. I distributed paired enclosures (n = 7), one with a screen between the land and water (reduced subsidy) and one without a screen (ambient subsidy), near the shoreline of 2 small fishless ponds and sampled each month during the growing season in the southern Appalachian Mountains, Virginia (USA). Screens between water and land reduced the number of terrestrial arthropods that fell into screened enclosures relative to the number that fell into unscreened enclosures and open reference plots by 36%. The δ13C isotopic signatures of dragonfly larvae shifted towards those of aquatic prey in reduced-subsidy enclosures, a result suggesting that dragonflies consumed fewer terrestrial prey when fewer were available (ambient subsidy: 30%, reduced subsidy: 19% of diet). Overall abundance and biomass of dragonfly larvae did not change in response to reduced terrestrial arthropod inputs, despite the fact that enclosures permitted immigration/emigration. These results suggest that terrestrial arthropods can provide resources to aquatic predators in lentic systems, but that their effects on abundance and distribution might be subtle and confounded by in situ factors.
Optimization and design of pigments for heat-insulating coatings
NASA Astrophysics Data System (ADS)
Wang, Guang-Hai; Zhang, Yue
2010-12-01
This paper reports that the heat-insulating property of infrared-reflective coatings is obtained through the use of pigments that scatter near-infrared thermal radiation. A suitable structure and size distribution of the pigments attain maximum diffuse infrared reflection and reduce the pigment volume concentration required. The optimum structure and size range of pigments for infrared-reflective coatings are studied using Kubelka-Munk theory, the Mie model and the independent-scattering approximation. Taking titania particles as the pigment embedded in an inorganic coating, the computational results show that core-shell particles present excellent scattering ability, more so than solid or hollow spherical particles. The optimum radius range of core-shell particles is around 0.3-1.6 μm. Furthermore, the influence of shell thickness on the optical parameters of the coating is also pronounced, and the optimal shell thickness is 100-300 nm.
New constraints in absorptive capacity and the optimum rate of petroleum output
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Mallakh, R
1980-01-01
Economic policy in four oil-producing countries is analyzed within a framework that combines a qualitative assessment of the policy-making process with an empirical formulation based on historical and current trends in these countries. The concept of absorptive capacity is used to analyze the optimum rates of petroleum production in Iran, Iraq, Saudi Arabia, and Kuwait. A control solution with an econometric model is developed, which is then modified for alternative development strategies based on analysis of factors influencing production decisions. The study shows the consistencies and inconsistencies between the goals of economic growth, oil production, and exports, and the constraints on economic development. Simulation experiments incorporated a number of the constraints on absorptive capacity. The impact of other constraints, such as income distribution and political stability, is considered qualitatively. (DLC)
Fleet Sizing of Automated Material Handling Using Simulation Approach
NASA Astrophysics Data System (ADS)
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling tends to be chosen over human power for material handling activity on the production floors of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that the material handling activity is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production to ensure an optimum situation, which here means minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queuing network and the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore solving the fleet sizing problem with multiple objectives in flow shop production using a simulation approach with the ARENA software.
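ARENA is the commercial simulator used in the paper; as a language-neutral illustration of the underlying idea, the following minimal event-based sketch sizes a fleet by simulating non-exponential (uniform) arrivals against a fixed transport time. All the numbers are invented for illustration:

```python
import heapq
import random

def mean_flow_time(n_vehicles, n_jobs=2000, seed=42):
    """Simulate transport requests served by a fleet of identical
    vehicles: uniform inter-arrival times (non-exponential), fixed
    transport time, first-come-first-served dispatching."""
    rng = random.Random(seed)
    free_at = [0.0] * n_vehicles       # next time each vehicle is free
    heapq.heapify(free_at)
    t = 0.0
    total = 0.0
    for _ in range(n_jobs):
        t += rng.uniform(0.5, 1.5)     # job arrival
        start = max(t, heapq.heappop(free_at))
        finish = start + 2.5           # fixed transport time
        heapq.heappush(free_at, finish)
        total += finish - t            # flow time = waiting + transport
    return total / n_jobs

# Sweep candidate fleet sizes; the smallest fleet whose mean flow time
# stays near the bare transport time (2.5) is the candidate optimum.
sizes = {k: mean_flow_time(k) for k in (2, 3, 4, 5)}
```

With these numbers a two-vehicle fleet is overloaded (flow time grows without bound over the run), while adding vehicles beyond the stable point yields diminishing returns, which is exactly the trade-off a fleet-sizing study quantifies.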
Zadpoor, Amir A
2015-03-01
Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing this type of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. For the cartilage experiments, FMMs indicate that at least three mixture components are needed to describe the measured histogram. While the mechanical properties of the softer mixture component, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. the collagen network) changed considerably depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use.
Copyright © 2014 Elsevier B.V. All rights reserved.
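The FMM workflow described above, fitting mixtures of increasing order and selecting the optimum count with an objective criterion, can be sketched with a basic EM fit and the BIC. This is a minimal stand-in for the paper's dedicated routines, with synthetic data in place of nanoindentation moduli:

```python
import numpy as np

def gmm_bic(x, k, n_iter=200, seed=0):
    """Fit a k-component 1-D Gaussian mixture by EM; return its BIC."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations.
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
        sigma = np.maximum(sigma, 1e-3)        # guard against collapse
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    loglik = np.log((w * dens).sum(axis=1)).sum()
    n_params = 3 * k - 1                       # k means, k sigmas, k-1 free weights
    return float(n_params * np.log(len(x)) - 2.0 * loglik)

# Synthetic "histogram" of elastic moduli from two material constituents.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.5, 0.1, 400), rng.normal(2.0, 0.3, 600)])
bics = {k: gmm_bic(x, k) for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
```

For well-separated constituents the single-component fit is heavily penalized, and the BIC-optimal `best_k` plays the role of the paper's objective criterion for the number of mixture components.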
Audio distribution and Monitoring Circuit
NASA Technical Reports Server (NTRS)
Kirkland, J. M.
1983-01-01
A versatile circuit accepts and distributes TV audio signals. The three-meter audio distribution and monitoring circuit provides flexibility in monitoring, mixing, and distributing audio inputs and outputs at various signal and impedance levels. Program material is simultaneously monitored on three channels, or a single-channel version can be built to monitor transmitted or received signal levels, drive speakers, interface to building communications, and drive long-line circuits.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)
1998-01-01
A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views are provided of network elements to aid in network management.
Comparative study of pulsed Nd:YAG laser welding of AISI 304 and AISI 316 stainless steels
NASA Astrophysics Data System (ADS)
Kumar, Nikhil; Mukherjee, Manidipto; Bandyopadhyay, Asish
2017-02-01
Laser welding is a potentially useful technique for joining two pieces of similar or dissimilar materials with high precision. In the present work, comparative studies on laser welding of the similar metals AISI 304SS and AISI 316SS have been conducted, forming butt joints. A robotically controlled 600 W pulsed Nd:YAG laser source has been used for welding. The effects of laser power, scanning speed and pulse width on the ultimate tensile strength and weld width have been investigated using empirical models developed by RSM. The results of ANOVA indicate that the developed models predict the responses adequately within the limits of the input parameters. 3-D response surface and contour plots have been developed to find the combined effects of the input parameters on the responses. Furthermore, microstructural analysis as well as hardness and tensile testing of selected welds of 304SS and 316SS have been carried out to understand the metallurgical and mechanical behavior of the welds. The selection criteria are based on the maximum and minimum strength achieved by the respective welds. It has been observed that the current pulsation, base metal composition and variation in heat input have significant influence in controlling the microstructural constituents (i.e. phase fraction, grain size, etc.). The results suggest that low-energy-input pulsation generally produces a finer grain structure and better mechanical properties than high-energy-input pulsation, irrespective of base material composition. However, between the base materials, 304SS exhibits better microstructural and mechanical properties than 316SS for a given parametric condition. Finally, desirability function analysis has been applied for multi-objective optimization: maximization of ultimate tensile strength and minimization of weld width simultaneously. Confirmatory tests have been conducted at the optimum parametric conditions to validate the optimization technique.
Torshabi, Ahmad Esmaili; Nankali, Saber
2016-01-01
In external beam radiotherapy, one of the most common and reliable methods for patient geometrical setup and/or predicting the tumor location is the use of external markers. In this study, the main challenge is increasing the accuracy of patient setup by investigating external marker locations. Since the location of each external marker may yield a different patient setup accuracy, it is important to assess different locations of external markers using appropriate selection algorithms. To do this, two commercially available algorithms, (a) canonical correlation analysis (CCA) and (b) principal component analysis (PCA), were proposed as input selection algorithms. They work on the basis of the maximum correlation coefficient and the minimum variance between given datasets. The proposed input selection algorithms work in combination with an adaptive neuro-fuzzy inference system (ANFIS) as a correlation model that gives patient positioning information as output, and they provide the input file of the ANFIS correlation model accurately. The required dataset for this study was prepared by means of a NURBS-based 4D XCAT anthropomorphic phantom, which can model the shape and structure of complex organs in the human body along with motion information of dynamic organs. Moreover, a database of four real patients undergoing radiation therapy for lung cancers was utilized for validation of the proposed strategy. The final analyzed results demonstrate that the input selection algorithms can reasonably select specific external markers from those areas of the thorax region where the root mean square error (RMSE) of the ANFIS model has minimum values. It is also found that the selected marker locations lie closely in those areas where surface point motion has a large amplitude and a high correlation. PACS number(s): 87.55.km, 87.55.N PMID:27929479
Loss resilience for two-qubit state transmission using distributed phase sensitive amplification
Dailey, James; Agarwal, Anjali; Toliver, Paul; ...
2015-11-12
We transmit phase-encoded non-orthogonal quantum states through a 5-km long fibre-based distributed optical phase-sensitive amplifier (OPSA) using telecom-wavelength photonic qubit pairs. The gain is set to equal the transmission loss to probabilistically preserve input states during transmission. While neither state is optimally aligned to the OPSA, each input state is equally amplified with no measurable degradation in state quality. These results promise a new approach to reduce the effects of loss by encoding quantum information in a two-qubit Hilbert space which is designed to benefit from transmission through an OPSA.
Particle identification with neural networks using a rotational invariant moment representation
NASA Astrophysics Data System (ADS)
Sinkus, R.; Voss, T.
1997-02-01
A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of order one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
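The preprocessing step, a principal component transform followed by variance rescaling so every network input is of order one, can be sketched as follows. The random matrix here merely stands in for a table of Zernike-moment features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for Zernike-moment features of calorimeter showers (n x p),
# deliberately correlated across columns.
features = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))

# Principal component transform: rotate onto the eigenvectors of the
# feature covariance matrix ...
centered = features - features.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
components = centered @ eigvec

# ... then rescale each component by its standard deviation, so the
# transformed inputs are decorrelated with unit variance.
scaled = components / np.sqrt(eigval)
```

Feeding `scaled` instead of `features` to the network removes the scale imbalance between moments that would otherwise dominate the early training.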
Learning Vowel Categories from Maternal Speech in Gurindji Kriol
ERIC Educational Resources Information Center
Jones, Caroline; Meakins, Felicity; Muawiyath, Shujau
2012-01-01
Distributional learning is a proposal for how infants might learn early speech sound categories from acoustic input before they know many words. When categories in the input differ greatly in relative frequency and overlap in acoustic space, research in bilingual development suggests that this affects the course of development. In the present…
Security evaluation of the quantum key distribution system with two-mode squeezed states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osaki, M.; Ban, M.
2003-08-01
The quantum key distribution (QKD) system with two-mode squeezed states has been demonstrated by Pereira et al. [Phys. Rev. A 62, 042311 (2000)]. They evaluated the security of the system based on the signal-to-noise ratio attained by a homodyne detector. In this paper, we discuss its security based on the error probability under individual attacks by an eavesdropper using the unambiguous or the error-optimum detection. The influence of energy loss in the transmission channels is also taken into account. It is shown that the QKD system is secure under these conditions.
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes' theorem to infer parameter values for a Pake doublet spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. Finally, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature, which affects the parameter uncertainty estimates, particularly for spectra with low signal-to-noise ratio.
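A Metropolis-Hastings sampler of the kind described can be sketched in a few lines. The single-parameter Gaussian likelihood below is a toy stand-in for the dipolar-splitting parameter of the Pake doublet; the real spectrum model is considerably more involved:

```python
import math
import random

def log_likelihood(splitting, data):
    """Gaussian log-likelihood of noisy splitting measurements for one
    lineshape parameter (unit noise variance assumed, flat prior)."""
    return -0.5 * sum((d - splitting) ** 2 for d in data)

def metropolis_hastings(data, n_samples=5000, step=0.5, seed=7):
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, data)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)      # symmetric proposal
        ll_prop = log_likelihood(prop, data)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2]            # synthetic measurements
samples = metropolis_hastings(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The spread of `samples` after burn-in is the parameter uncertainty estimate; for the paper's full spectrum model the same loop runs over the whole parameter set.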
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. These data are then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs; rather, we formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
Regional climate model downscaling may improve the prediction of alien plant species distributions
NASA Astrophysics Data System (ADS)
Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.
2014-12-01
Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) and the future (2046-2055). RCM-based species distribution models replicated observed distributions markedly better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum, or cheatgrass, and Tamarix spp., or tamarisk), GCM-based models failed to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.
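The abstract cites "four common statistical metrics" without naming them. AUC (area under the ROC curve) is the standard metric for evaluating MaxEnt species distribution models, and a minimal sketch of it, hypothetical here rather than taken from the paper, is:

```python
def auc(presence_scores, background_scores):
    # Mann-Whitney formulation: the probability that a randomly chosen
    # presence site receives a higher model score than a randomly chosen
    # background (absence) site; ties count half.
    wins = sum(
        1.0 if p > b else 0.5 if p == b else 0.0
        for p in presence_scores
        for b in background_scores
    )
    return wins / (len(presence_scores) * len(background_scores))
```

An AUC of 1.0 means the model ranks every presence above every background point; 0.5 is no better than chance, which is the kind of separation the GCM-based versus RCM-based comparison would quantify.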
Yan, Zheng-Yu; Du, Qing-Qing; Qian, Jing; Wan, Dong-Yu; Wu, Sheng-Mei
2017-01-01
In this paper, a green and efficient biosynthetic technique for preparing cadmium sulfide (CdS) quantum dots is reported, in which Escherichia coli (E. coli) was chosen as the biomatrix. Fluorescence emission spectra and fluorescence micrographs revealed that the as-produced CdS quantum dots had an optimum fluorescence emission peak located at 470 nm and emitted blue-green fluorescence under ultraviolet excitation. After the nanocrystals were extracted from the bacterial cells and their foci located in vivo, the CdS quantum dots showed a uniform size distribution under transmission electron microscopy. Through systematic investigation of the biosynthesis conditions, including culture medium replacement, the input time point of the cadmium source, the working concentrations of the raw inorganic ions, and the co-culture time spans of bacteria and metal ions, the results revealed that CdS quantum dots with the strongest fluorescence emission were obtained when E. coli cells were in stationary phase, with replacement of the culture medium followed by incubation with a 1.0×10⁻³ mol/L cadmium source for 2 days. Antimicrobial susceptibility testing indicated that the sensitivities of E. coli to eight types of antibiotics were barely changed before and after CdS quantum dot preparation in the mild temperature environment, though a slight fall in antibiotic resistance was observed, suggesting that the proposed technique is a promising, environmentally low-risk protocol for producing quantum dots. Copyright © 2016 Elsevier Inc. All rights reserved.
Validating Inertial Confinement Fusion (ICF) predictive capability using perturbed capsules
NASA Astrophysics Data System (ADS)
Schmitt, Mark; Magelssen, Glenn; Tregillis, Ian; Hsu, Scott; Bradley, Paul; Dodd, Evan; Cobble, James; Flippo, Kirk; Offerman, Dustin; Obrey, Kimberly; Wang, Yi-Ming; Watt, Robert; Wilke, Mark; Wysocki, Frederick; Batha, Steven
2009-11-01
Achieving ignition on NIF is a monumental step on the path toward utilizing fusion as a controlled energy source. Obtaining robust ignition requires accurate ICF models to predict the degradation of ignition caused by heterogeneities in capsule construction and irradiation. LANL has embarked on a project to induce controlled defects in capsules to validate our ability to predict their effects on fusion burn. These efforts include the validation of feature-driven hydrodynamics and mix in a convergent geometry. This capability is needed to determine the performance of capsules imploded under less-than-optimum conditions on future IFE facilities. LANL's recently initiated Defect Implosion Experiments (DIME) conducted at Rochester's Omega facility are providing input for these efforts. Recent simulation and experimental results will be shown.
NASA Astrophysics Data System (ADS)
Takahashi, Tsuyoshi; Sato, Masaru; Nakasha, Yasuhiro; Hara, Naoki
2012-09-01
Backward diodes consisting of a p-GaAs0.51Sb0.49/n-InP heterojunction, lattice-matched to an InP substrate, were fabricated for the first time and their characteristics investigated. The lattice-matched heterojunction is effective in preventing surface defects after crystal growth of the diodes. The backward diodes exhibited a curvature coefficient of -17.6 V-1, which is sufficiently large for zero-bias operation. A voltage sensitivity of 338 V/W was obtained at 94 GHz using a circular mesa diode 2.0 µm in diameter. An optimum voltage sensitivity of 1603 V/W was estimated for the case where the input impedance is completely matched to the diode.
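The two quoted sensitivities (338 V/W measured, 1603 V/W under complete matching) can be related by the standard detector mismatch-loss factor 1 - |Γ|². This correction is an assumption here, since the abstract does not state how the matched value was estimated:

```python
import math

def implied_gamma(beta_measured, beta_matched):
    # Assumed mismatch-loss relation: beta_matched = beta_measured / (1 - |Gamma|^2),
    # i.e. only the fraction (1 - |Gamma|^2) of incident power reaches the diode.
    return math.sqrt(1.0 - beta_measured / beta_matched)

gamma = implied_gamma(338.0, 1603.0)  # reflection magnitude implied by the quoted values
```

Under this assumption the quoted pair implies a reflection coefficient magnitude of roughly 0.89, i.e. most of the incident 94 GHz power was reflected in the unmatched measurement.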
Analysis and design of insulation systems for LH2-fueled aircraft
NASA Technical Reports Server (NTRS)
Cunnington, G. R., Jr.
1979-01-01
An analytical program was conducted to evaluate the performance of 15 potential insulations for the fuel tanks of a subsonic LH2-fueled transport aircraft intended for airline service in the 1990-1995 time period. As a result, two candidate insulation systems are proposed for subsonic transport aircraft applications. Both candidates are judged to be the optimum available and should meet the design requirements. However, because of the long-life cyclic nature of the application and the cost sensitivity of airline operations, an experimental tank/insulation development or proof-of-concept program is recommended. This program should be carried out with a nearly full-scale system which would be subjected to the cyclic thermal and mechanical inputs anticipated in aircraft service.
NASA Astrophysics Data System (ADS)
Galizzi, Gustavo E.; Cuadrado-Laborde, Christian
2015-10-01
In this work we study the joint transform correlator setup, deriving two analytical expressions for the extents of the joint power spectrum and its inverse Fourier transform. We found that optimum efficiency is reached when the bandwidth of the key code equals the sum of the bandwidths of the image and the random phase mask (RPM). The quality of the decryption is also affected by the ratio between the bandwidths of the RPM and the input image, improving as this ratio increases. In addition, we analyzed the effect on the decrypted image when the detection area is smaller than the encrypted signal extent. We illustrate these results through several numerical examples.
NASA Technical Reports Server (NTRS)
1983-01-01
The program generates a profile of altitude, airspeed, and flight path angle as a function of range between a given set of origin and destination points for particular models of transport aircraft provided by NASA. Inputs to the program include the vertical wind profile, the aircraft takeoff weight, the costs of time and fuel, and certain constraint parameters and control flags. The profile can be near optimum in the sense of minimizing: (1) fuel, (2) time, or (3) a combination of fuel and time (direct operating cost (DOC)). The user can also, as an option, specify the length of time the flight is to span. The theory behind the technical details of this program is also presented.
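The fuel/time/DOC trade-off described above can be sketched as a cost function over candidate profiles. The function form (a weighted sum of fuel and time) follows the usual DOC definition, and the numbers are illustrative, not taken from the program:

```python
def direct_operating_cost(fuel_kg, time_hr, cost_fuel, cost_time):
    # DOC combines fuel burned and block time, each at its unit cost
    return cost_fuel * fuel_kg + cost_time * time_hr

def best_profile(profiles, cost_fuel, cost_time):
    # profiles: iterable of (name, fuel_kg, time_hr); pick the minimum-DOC one
    return min(profiles,
               key=lambda p: direct_operating_cost(p[1], p[2], cost_fuel, cost_time))

# Illustrative candidates: a slower, fuel-efficient profile vs a faster one
candidates = [("min-fuel", 9500.0, 5.6), ("min-time", 11000.0, 5.1)]
```

With the cost of time set to zero the minimum-fuel profile wins; raising the cost of time shifts the optimum toward the faster profile, which is exactly the (1)/(2)/(3) selection the program exposes.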
NASA Astrophysics Data System (ADS)
Yoshida, Tetsuya; Maekawa, Keiichi; Tsuda, Shibun; Shimizu, Tatsuo; Ogasawara, Makoto; Aono, Hideki; Yamaguchi, Yasuo
2018-04-01
We investigate the effect of fluorine implanted in the polycrystalline silicon (poly-Si) gate and source/drain (S/D) region on negative bias temperature instability (NBTI) improvement. It is found that the implantation-energy dependence of NBTI shows a trade-off between fluorine in the poly-Si gate and fluorine in the S/D region. Fluorine implanted in the poly-Si gate contributes to NBTI improvement at low implantation energy; on the other hand, NBTI is improved by fluorine implanted in the S/D region at high energy. We propose a two-step implantation process with high and low energies as the optimum condition for NBTI improvement.
NASA Technical Reports Server (NTRS)
Mcknight, R. D.; Blalock, T. V.; Kennedy, E. J.
1974-01-01
The design, analysis, and experimental evaluation of an optimum-performance torque current generator for use with strapdown gyroscopes is presented. Among the criteria used to evaluate the design were: (1) steady-state accuracy; (2) margins of stability against self-oscillation; (3) temperature variations; (4) aging; (5) static, drift, and transient errors; (6) classical frequency- and time-domain characteristics; and (7) the equivalent noise at the input of the comparator operational amplifier. The DC feedback loop of the torque current generator was approximated as a second-order system. Stability calculations for gain margins are discussed. Circuit diagrams are shown, and block diagrams showing the implementation of the torque current generator are discussed.
NASA Astrophysics Data System (ADS)
Natarajan, S.; Pitchandi, K.; Mahalakshmi, N. V.
2018-02-01
The performance and emission characteristics of a PPCCI engine fuelled with ethanol and diesel blends were investigated on a single-cylinder, air-cooled CI engine. In order to achieve the optimal process response with a limited number of experimental cycles, multi-objective grey relational analysis was applied to solve the multiple-response optimization problem. Using the grey relational grade and signal-to-noise ratio as a performance index, a combination of input parameters was identified that achieves optimum response characteristics. It was observed that a 20% premixed ratio of blend was most suitable for use in a PPCCI engine without significantly affecting the engine performance and emission characteristics.
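The grey relational grade used above can be sketched as follows. The formulation (normalization to an ideal sequence, deviation-based coefficient with distinguishing coefficient ζ = 0.5) is the textbook one, and the sample data are illustrative, not the paper's engine measurements:

```python
import numpy as np

def grey_relational_grade(data, larger_better, zeta=0.5):
    data = np.asarray(data, float)  # rows: experiments, cols: responses
    # Normalize each response to [0, 1], 1 being the ideal value
    norm = np.empty_like(data)
    for j in range(data.shape[1]):
        col = data[:, j]
        if larger_better[j]:            # e.g. brake thermal efficiency
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:                           # e.g. NOx or smoke emissions
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                  # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)           # grey relational grade per experiment

# Illustrative: 3 experiments, response 1 larger-better, response 2 smaller-better
grades = grey_relational_grade([[10, 1], [5, 3], [1, 5]], [True, False])
```

The experiment with the highest grade is the best compromise across all responses, which is how a single "optimum" parameter combination is read off a multi-response experiment table.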
Radio-frequency power-assisted performance improvement of a magnetohydrodynamic power generator
NASA Astrophysics Data System (ADS)
Murakami, Tomoyuki; Okuno, Yoshihiro; Yamasaki, Hiroyuki
2005-12-01
We describe a radio-frequency (rf) electromagnetic-field-assisted magnetohydrodynamic power generation experiment in which an inductively coupled rf field (13.56 MHz, 5.2 kW) is continuously supplied to the disk generator. The rf power assists precise plasma ignition, stabilizing otherwise irregular plasma behavior. The rf heating suppresses the ionization instability and homogenizes the nonuniform plasma structures. The power-generating performance is significantly improved with the aid of the rf power under a wide range of seeding conditions: insufficient, optimum, and excessive seed fractions. The increment of the enthalpy extraction ratio, around 2%, is significantly greater than the fraction of the net rf power to the thermal input, 0.16%.
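The closing comparison is a one-line calculation from the quoted figures: the rf-assisted gain in enthalpy extraction versus the net rf power fraction, showing the improvement is far larger than a simple energy-accounting credit for the added rf power would give.

```python
gain_pts = 2.0        # increase in enthalpy extraction ratio, percentage points
rf_fraction = 0.16    # net rf power as a percentage of the thermal input
leverage = gain_pts / rf_fraction  # performance gain per unit of rf power added
```

A leverage of about 12.5 indicates the rf field acts by stabilizing the plasma, not by directly adding extractable energy.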
Modeling Polyvinyl Chloride Plasma Modification by Neural Networks
NASA Astrophysics Data System (ADS)
Wang, Changquan
2018-03-01
A neural network model was constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using uniform design. Discharge voltage, discharge gas gap, and treatment time were the input-layer parameters of the neural network, and the measured values of contact angle were the output-layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based upon the neural network. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for discharge plasma surface modification analysis.
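A model of this kind can be sketched as a small one-hidden-layer network mapping (voltage, gap, time) to contact angle, trained by plain gradient descent. The data below are synthetic stand-ins, since the abstract does not give the measured values, and the architecture is an assumption rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: inputs are normalized discharge voltage, gas gap,
# and treatment time; the "contact angle" has a smooth nonlinear dependence.
X = rng.uniform(0.0, 1.0, (60, 3))
y = 90.0 - 30.0 * X[:, 0] * X[:, 2] - 10.0 * np.sin(np.pi * X[:, 1])
t = (y[:, None] - y.mean()) / y.std()         # standardized target

# One hidden layer of 8 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.2, []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    pred = h @ W2 + b2
    err = pred - t
    losses.append(float((err ** 2).mean()))
    # backward pass (gradients of half-MSE)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, such a network serves as the paper's "nonlinear mathematical model": the fitted surface can be queried at untried parameter combinations to locate optimum treatment conditions.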