2014-12-12
AFRL-RV-PS-TR-2015-0005: Estimate of Solar Maximum Using the 1–8 Å Geostationary Operational Environmental Satellites X-Ray Measurements (Postprint)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, L. M.; Balasubramaniam, K. S., E-mail: lwinter@aer.com
We present an alternate method of determining the progression of the solar cycle through an analysis of the solar X-ray background. Our results are based on the NOAA Geostationary Operational Environmental Satellites (GOES) X-ray data in the 1-8 Å band from 1986 to the present, covering solar cycles 22, 23, and 24. The X-ray background level tracks the progression of the solar cycle through its maximum and minimum. Using the X-ray data, we can therefore make estimates of the solar cycle progression and the date of solar maximum. Based upon our analysis, we conclude that the Sun reached its hemisphere-averaged maximum in solar cycle 24 in late 2013. This is within six months of the NOAA prediction of a maximum in spring 2013.
SNAP 10A ESTIMATED ELECTRICAL CHARACTERISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, J.C.
1961-06-01
The electrical power characteristics of a SNAP 10A converter are estimated for given fractions of power degradation. Graphs are included showing the power characteristics for instantaneous transients from stabilized operation at the maximum efficiency point, and after system temperature stabilization at the operating point. Open-circuit emf's of the converter are estimated for instantaneous and temperature-stabilized cases. (D.L.C.)
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Trial Results of Ship Motions and Their Influence on Aircraft Operations for ISCS GUAM
1975-12-01
...provide an estimate of the relative frequency and thus importance of ship motions as a source of Harrier operation cancellations. It may be seen that of the ... example. If wind speed is considered to be the only source of restrictions in aircraft operations, estimates of the maximum total number of operational days ... for various components of ship motion is directly related
NASA Astrophysics Data System (ADS)
Chen, B.; Su, J. H.; Guo, L.; Chen, J.
2017-06-01
This paper puts forward a maximum power estimation method based on the photovoltaic array (PVA) model to solve the optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). This method uses an improved genetic algorithm (GA) to estimate and identify the model parameters from multiple P-V characteristic curves of a PVA model, and then corrects the identification results through a least-squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and non-disturbance MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
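A minimal sketch of the identification-then-lookup idea described in this abstract. SciPy's differential evolution stands in for the paper's improved GA, and a toy exponential I-V model with made-up parameters stands in for the full PVA model; only the overall flow (fit the model to measured curve samples, then read the MPP off the identified model without disturbing the array) follows the abstract.

```python
import numpy as np
from scipy.optimize import differential_evolution

# simplified single-diode-style PV model (placeholder for the paper's PVA model)
def pv_current(v, i_ph, i_0, a):
    return i_ph - i_0 * (np.exp(v / a) - 1.0)

# synthetic "measured" I-V samples; real data would be measured P-V/I-V curves
v_meas = np.linspace(0, 38, 50)
i_meas = pv_current(v_meas, 8.2, 1e-7, 1.9) + 0.02 * np.random.default_rng(0).normal(size=50)

def cost(params):
    return np.mean((pv_current(v_meas, *params) - i_meas) ** 2)

# evolutionary search standing in for the improved GA of the paper
res = differential_evolution(cost, bounds=[(1, 12), (1e-9, 1e-5), (0.5, 4.0)], seed=1)
i_ph, i_0, a = res.x

# maximum power point of the identified model (no perturbation of the array needed)
v_grid = np.linspace(0, 40, 2000)
p_grid = v_grid * np.clip(pv_current(v_grid, i_ph, i_0, a), 0, None)
print("estimated MPP:", v_grid[p_grid.argmax()], "V,", p_grid.max(), "W")
```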
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G P; Logan, C M
We have estimated interference from external background radiation for a computed tomography (CT) scanner. Our intention is to estimate the interference that would be expected for the high-resolution SkyScan 1072 desk-top x-ray microtomography system. The SkyScan system uses a Microfocus x-ray source capable of a 10-μm focal spot at a maximum current of 0.1 mA and a maximum energy of 130 kVp. All predictions made in this report assume using the x-ray source at the smallest spot size, maximum energy, and maximum current. Some of the system's basic geometry used for these estimates is: (1) Source-to-detector distance: 250 mm, (2) Minimum object-to-detector distance: 40 mm, and (3) Maximum object-to-detector distance: 230 mm. This is a first-order, rough estimate of the quantity of interference expected at the system detector caused by background radiation. The amount of interference is expressed as the ratio of exposure expected at the detector of the CT system. The exposure values for the SkyScan system are determined by scaling the measured values of an x-ray source and the background radiation, adjusting for the difference in source-to-detector distance and current. The x-ray source that was used for these measurements was not the SkyScan Microfocus x-ray tube. Measurements were made using an x-ray source that was operated at the same applied voltage but higher current for better statistics.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
NASA Technical Reports Server (NTRS)
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimund, Kevin K.; McCutcheon, Jeffrey R.; Wilson, Aaron D.
A general method was developed for estimating the volumetric energy efficiency of pressure retarded osmosis via pressure-volume analysis of a membrane process. The resulting model requires only the osmotic pressure, π, and mass fraction, w, of water in the concentrated and dilute feed solutions to estimate the maximum achievable specific energy density, u, as a function of operating pressure. The model is independent of any membrane or module properties. This method utilizes equilibrium analysis to specify the volumetric mixing fraction of concentrated and dilute solution as a function of operating pressure, and provides results for the total volumetric energy density of similar order to more complex models for the mixing of seawater and river water. Within the framework of this analysis, the total volumetric energy density is maximized, for an idealized case, when the operating pressure is π/(1+√w⁻¹), which is lower than the maximum power density operating pressure, Δπ/2, derived elsewhere, and is a function of the solute osmotic pressure at a given mass fraction. It was also found that a minimum of 1.45 kmol of ideal solute is required to produce 1 kWh of energy, while a system operating at the "maximum power density operating pressure" requires at least 2.9 kmol. Utilizing this methodology, it is possible to examine the effects of volumetric solution cost, operation of a module at various pressures, and operation of a constant pressure module with various feeds.
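A small numerical illustration of the pressure comparison stated in the abstract, using assumed seawater-like values (π ≈ 27 bar for the concentrated feed, water mass fraction w ≈ 0.965, and a dilute feed of negligible osmotic pressure so that Δπ ≈ π). The numbers are placeholders, not values from the paper.

```python
import numpy as np

pi_c = 27.0   # osmotic pressure of the concentrated feed, bar (assumed)
w_c = 0.965   # mass fraction of water in the concentrated feed (assumed)

p_energy = pi_c / (1.0 + np.sqrt(1.0 / w_c))  # pressure maximizing energy density (paper's result)
p_power = pi_c / 2.0                          # classical maximum-power-density pressure, Δπ/2
print(f"energy-optimal pressure {p_energy:.1f} bar vs power-optimal {p_power:.1f} bar")
```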
MoS2-based passively Q-switched diode-pumped Nd:YAG laser at 946 nm
NASA Astrophysics Data System (ADS)
Lin, Haifeng; Zhu, Wenzhang; Xiong, Feibing; Cai, Lie
2017-06-01
We demonstrate a passively Q-switched Nd:YAG quasi-three-level laser operating at 946 nm using MoS2 as a saturable absorber. A maximum average output power of 210 mW is achieved at an absorbed pump power of 6.67 W with a slope efficiency of about 5.8%. The shortest pulse width and maximum pulse repetition frequency are measured to be 280 ns and 609 kHz, respectively. The maximum pulse energy and maximum pulse peak power are therefore estimated to be about 0.35 μJ and 1.23 W, respectively. This work represents the first MoS2-based Q-switched laser operating in the 0.9 μm spectral region.
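The quoted pulse energy and peak power follow from the standard relations between average power, repetition rate, and pulse width, consistent with the figures above:

```latex
E_{\text{pulse}} = \frac{P_{\text{avg}}}{f_{\text{rep}}} = \frac{210\ \text{mW}}{609\ \text{kHz}} \approx 0.345\ \mu\text{J},
\qquad
P_{\text{peak}} \approx \frac{E_{\text{pulse}}}{\tau} = \frac{0.345\ \mu\text{J}}{280\ \text{ns}} \approx 1.23\ \text{W}.
```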
ANN based Real-Time Estimation of Power Generation of Different PV Module Types
NASA Astrophysics Data System (ADS)
Syafaruddin; Karatepe, Engin; Hiyama, Takashi
Distributed generation is expected to become more important in future generation systems. Utilities need to find solutions that help manage resources more efficiently. Effective smart grid solutions use real-time data to help refine and pinpoint inefficiencies while maintaining secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required in this case because the relationship between the maximum power of a PV module and its open-circuit voltage and temperature is nonlinear and cannot easily be expressed by an analytical expression for each technology. The proposed ANN method uses open-circuit voltage and cell temperature as input signals, instead of irradiance and ambient temperature, to determine the estimated maximum power generation of PV modules. It is important for the utility to be able to perform this estimation for optimal operating points and for diagnostic purposes that may be an early indicator of a need for maintenance and optimal energy management. The proposed method is verified to be accurate through a developed real-time simulator driven by daily irradiance and cell-temperature changes.
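A minimal sketch of the regression described above, assuming scikit-learn and a made-up smooth mapping from (open-circuit voltage, cell temperature) to maximum power as stand-in training data; real data would come from module measurements for each technology.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# synthetic training pairs: (Voc, cell temperature) -> Pmax, placeholder relationship
voc = rng.uniform(30.0, 45.0, 2000)                       # V, hypothetical range
temp = rng.uniform(10.0, 70.0, 2000)                      # deg C
pmax = 4.5 * (voc - 28.0) * (1 - 0.004 * (temp - 25.0))   # W, invented mapping

X = np.column_stack([voc, temp])
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X, pmax)

print(model.predict([[40.0, 45.0]]))   # estimated maximum power for one new reading
```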
DOT National Transportation Integrated Search
1977-02-01
The limitations of currently used estimation procedures in socio-economic modeling have been highlighted in the ongoing work of Senge, in which it is shown where more sophisticated estimation procedures may become necessary. One such advanced method ...
NASA Technical Reports Server (NTRS)
Grove, R. D.; Mayhew, S. C.
1973-01-01
A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.
Hydrostatic Bearing Pad Maximum Load and Overturning Conditions for the 70-meter Antenna
NASA Technical Reports Server (NTRS)
Mcginness, H. D.
1985-01-01
The reflector diameters of the 64-m antennas were increased to 70 m. In order to evaluate the minimum film thickness of the hydrostatic bearing which supports the antenna weight, it is first necessary to have a good estimate of the maximum operational load on the most heavily loaded bearing pad. The maximum hydrostatic bearing load is shown to be sufficiently small, and the ratios of stabilizing to overturning moments are ample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.
This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimates - the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. The SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon is a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
40 CFR 60.46c - Emission monitoring for sulfur dioxide.
Code of Federal Regulations, 2011 CFR
2011-07-01
... potential SO2 emission rate of the fuel combusted, and the span value of the SO2 CEMS at the outlet from the SO2 control device shall be 50 percent of the maximum estimated hourly potential SO2 emission rate of... estimated hourly potential SO2 emission rate of the fuel combusted. (d) As an alternative to operating a...
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
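The modified Newton-Raphson (Gauss-Newton) iteration described above can be sketched in a few lines of Python. This toy version fits two parameters of a first-order discrete system to noisy output measurements; finite-difference sensitivities stand in for the program's analytic sensitivity equations, and the a-priori-parameter penalty and starting technique are omitted, so this is an illustration of the idea rather than the FORTRAN program.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy first-order system x[k+1] = a*x[k] + b*u[k], measured output y[k] = x[k+1] + noise
a_true, b_true = 0.85, 0.5
u = rng.normal(size=200)
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k]
y = x[1:] + 0.05 * rng.normal(size=200)

def simulate(theta):
    a, b = theta
    xs = np.zeros(201)
    for k in range(200):
        xs[k + 1] = a * xs[k] + b * u[k]
    return xs[1:]

theta = np.array([0.5, 0.1])                # deliberately poor initial guess
for _ in range(20):                         # Gauss-Newton / modified Newton-Raphson
    r = y - simulate(theta)
    # sensitivities d(output)/d(theta) by finite differences (quasilinearization stand-in)
    J = np.column_stack([(simulate(theta + dp) - simulate(theta)) / 1e-6
                         for dp in np.eye(2) * 1e-6])
    step = np.linalg.solve(J.T @ J, J.T @ r)
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break
print("estimated (a, b):", theta)
```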
Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles
NASA Astrophysics Data System (ADS)
Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.
2016-03-01
A thermoelectric generator (TEG) system demonstrator for diesel electric locomotives with the objective of reducing the mechanical load on the thermoelectric modules (TEM) is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and basis for the optimization of the TEG's geometry by a genetic multi objective algorithm. The best solution has a maximum power output of approx. 2.7 kW and does not exceed the maximum back pressure of the diesel engine nor the maximum TEM hot side temperature. To maximize the reduction of the fuel consumption, an operating strategy regarding the system power output for the TEG system is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Relation of Fuel-Air Ratio to Engine Performance
NASA Technical Reports Server (NTRS)
Sparrow, Stanwood W
1925-01-01
The tests upon which this report is based were made at the Bureau of Standards between October 1919 and May 1923. From these it is concluded that: (1) with gasoline as a fuel, maximum power is obtained with fuel-air mixtures of from 0.07 to 0.08 pound of fuel per pound of air; (2) maximum power is obtained with approximately the same ratio over the range of air pressures and temperatures encountered in flight; (3) nearly minimum specific fuel consumption is secured by decreasing the fuel content of the charge until the power is 95 per cent of its maximum value. Presumably this information is of most direct value to the carburetor engineer. A carburetor should supply the engine with a suitable mixture. This report discusses what mixtures have been found suitable for various engines. It also furnishes the engine designer with a basis for estimating how much greater piston displacement an engine operating with a maximum economy mixture should have than one operating with a maximum power mixture in order for both to be capable of the same power development.
NASA Astrophysics Data System (ADS)
Watanabe, Takashi; Yoshida, Toshiya; Ohniwa, Katsumi
This paper discusses a new control strategy for photovoltaic power generation systems that takes into account the dynamic characteristics of the photovoltaic cells. The controller estimates the internal currents of an equivalent circuit for the cells. This estimated, or virtual, current and the actual voltage of the cells are fed to a conventional Maximum-Power-Point-Tracking (MPPT) controller. Consequently, this MPPT controller still tracks the optimum point even though it is designed so that the seeking speed of the operating point is extremely high. This system may suit applications installed under rapidly changing insolation and temperature conditions, e.g., automobiles, trains, and airplanes. The proposed method is verified by experiment with a combination of this estimating function and the modified Boehringer's MPPT algorithm.
Browns Ferry-1 single-loop operation tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, J.; Wood, R.T.; Otaduy, P.J.
1985-09-01
This report documents the results of the stability tests performed on February 9, 1985, at the Browns Ferry Nuclear Power Plant Unit 1 under single-loop operating conditions. The observed increase in neutron noise during single-loop operation is solely due to an increase in flow noise. The Browns Ferry-1 reactor has been found to be stable in all modes of operation attained during the present tests. The most unstable test plateau corresponded to minimum recirculation pump speed in single-loop operation (test BFTP3). This operating condition had the minimum flow and maximum power-to-flow ratio. The estimated decay ratio in this plateau is 0.53. The decay ratio decreased as the flow was increased during single-loop operation (down to 0.34 for test plateau BFTP6). This observation implies that the core-wide reactor stability follows the same trends in single-loop as it does in two-loop operation. Finally, no local or higher mode instabilities were found in the data taken from local power range monitors. The decay ratios estimated from the local power range monitors were not significantly different from those estimated from the average power range monitors.
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Measured energy savings and performance of power-managed personal computers and monitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, B.; Piette, M.A.; Kinney, K.
1996-08-01
Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a "sleep" or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the "As-operated," "Standardized," and "Maximum" savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, and about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulty in properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
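The mode-hours times mode-power bookkeeping described above reduces to a short calculation; the wattages and schedule fractions below are illustrative placeholders, not the paper's measured values.

```python
# back-of-the-envelope estimate of power-management savings for one PC + monitor
HOURS_PER_YEAR = 8760
full_w, low_w, off_w = 55.0, 25.0, 0.0      # assumed demand per mode (W), hypothetical

def annual_kwh(frac_full, frac_low, frac_off):
    return (frac_full * full_w + frac_low * low_w + frac_off * off_w) * HOURS_PER_YEAR / 1000

baseline = annual_kwh(0.40, 0.00, 0.60)     # power management disabled: on or off only
managed  = annual_kwh(0.25, 0.15, 0.60)     # part of the on-time spent in a sleep mode
print(f"estimated savings: {baseline - managed:.0f} kWh/year")
```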
Libya, Algeria and Egypt: crude oil potential from known deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dietzman, W.D.; Rafidi, N.R.; Ross, T.A.
1982-04-01
An analysis is presented of the discovered crude oil resources, reserves, and estimated annual production from known fields of the Republics of Libya, Algeria, and Egypt. Proved reserves are defined as the remaining producible oil as of a specified date under operating practice in effect at that time and include estimated recoverable oil in undrilled portions of a given structure or structures. Also included in the proved reserve category are the estimated indicated additional volumes of recoverable oil from the entire oil reservoir where fluid injection programs have been started in a portion, or portions, of the reservoir. The indicated additional reserves (probable reserves) reported herein are the volumes of crude oil that might be obtained with the installation of secondary recovery or pressure maintenance operations in reservoirs where none have been previously installed. The sum of cumulative production, proved reserves, and probable reserves is defined as the ultimate oil recovery from known deposits; and resources are defined as the original oil in place (OOIP). An assessment was made of the availability of crude oil under three assumed sustained production rates for each country; an assessment was also made of each country's capability of sustaining production at, or near, the 1980 rates assuming different limiting reserve to production ratios. Also included is an estimate of the potential maximum producing capability from known deposits that might be obtained from known accumulations under certain assumptions, using a simple time series approach. The theoretical maximum oil production capability from known fields at any time is the maximum deliverability rate assuming there are no equipment, investment, market, or political constraints.
Estimating the maximum potential revenue for grid-connected electricity storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the maximum potential revenue benchmark. We conclude with a sensitivity analysis with respect to key parameters.
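A compact sketch of the arbitrage-only part of the linear-programming formulation described above (the regulation-market offers are omitted). Prices, power and energy limits, and efficiency are invented placeholders rather than CAISO data.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 15.0, 30.0, 60.0, 45.0, 25.0])   # $/MWh, hypothetical hourly prices
T = len(prices)
p_max, e_max, eta = 1.0, 4.0, 0.85    # MW, MWh, charging efficiency (assumed)
soc0 = 2.0                            # initial stored energy, MWh (assumed)

# decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
# revenue = sum(price_t * (discharge_t - charge_t)); linprog minimizes, so negate it
c = np.concatenate([prices, -prices])

# state of charge after hour t: soc0 + eta*cumsum(charge) - cumsum(discharge), kept in [0, e_max]
lower = np.tril(np.ones((T, T)))
A_soc = np.hstack([eta * lower, -lower])
A_ub = np.vstack([A_soc, -A_soc])
b_ub = np.concatenate([np.full(T, e_max - soc0), np.full(T, soc0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T), method="highs")
print("maximum arbitrage revenue ($):", -res.fun)
```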
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with the existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
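A stripped-down sketch of the boundary-periodicity idea: apply the 2×2 cross-difference filter, then score candidate block sizes by how much artifact energy falls exactly on the corresponding grid. The morphological cleaning and the maximum-likelihood stage of the paper are omitted, so this is only the skeleton of the method.

```python
import numpy as np

def estimate_block_size(img, candidates=range(2, 33)):
    """Crude blind block-size estimate from blocking-artifact periodicity."""
    f = img.astype(float)
    # 2x2 cross-difference: large at block boundaries, small inside smooth blocks
    d = np.abs(f[:-1, :-1] + f[1:, 1:] - f[1:, :-1] - f[:-1, 1:])
    col, row = d.sum(axis=0), d.sum(axis=1)

    def best(profile):
        scores = {}
        for b in candidates:
            idx = np.arange(b - 1, len(profile), b)     # positions just before block boundaries
            scores[b] = profile[idx].mean() - profile.mean()
        return max(scores, key=scores.get)

    return best(col), best(row)

# example usage (hypothetical file): estimate_block_size(np.array(Image.open("test.jpg").convert("L")))
```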
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
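Under the conditional-independence assumption, the combined ROC described above can be generated by ranking the joint test outcomes on their likelihood ratio (Neyman-Pearson ordering) and accumulating the outcome probabilities. A minimal sketch for binary classifiers characterized only by sensitivity and specificity, with illustrative values:

```python
import numpy as np
from itertools import product

def combined_roc(sens, spec):
    """Combined ROC points for conditionally independent binary classifiers."""
    sens, spec = np.asarray(sens), np.asarray(spec)
    pts = []
    for outcome in product([0, 1], repeat=len(sens)):   # all joint positive/negative outcomes
        o = np.array(outcome)
        p_pos = np.prod(np.where(o, sens, 1 - sens))    # P(outcome | diseased)
        p_neg = np.prod(np.where(o, 1 - spec, spec))    # P(outcome | healthy)
        pts.append((p_pos / max(p_neg, 1e-12), p_pos, p_neg))
    pts.sort(key=lambda t: -t[0])                       # decreasing likelihood ratio
    tpr, fpr, roc = 0.0, 0.0, [(0.0, 0.0)]
    for _, p_pos, p_neg in pts:
        tpr += p_pos
        fpr += p_neg
        roc.append((fpr, tpr))
    return roc

print(combined_roc([0.8, 0.7], [0.9, 0.85]))            # hypothetical sensitivities/specificities
```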
An Extensive Unified Thermo-Electric Module Characterization Method
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-01-01
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Often, manufacturers provide some essential parameters under determined conditions, like for example, maximum temperature difference between the surfaces of the TEM or for maximum heat absorption, but in many cases, a TEM-based system is operated under the best conditions only for a fraction of the time, thus, when dynamic working conditions occur, the performance estimation of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent model and investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data to obtain all parameters in a single test. Two different current profiles are proposed as possible stimuli, which use depends on the available test instrumentation, and relative performance are compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. Obtained results, besides agreeing with both technical literature and a further estimation method based on module specifications, also provides the designer a detailed description of the module behavior, useful to simulate its performance in different scenarios. PMID:27983575
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Map synchronization in optical communication systems
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.; Mohanty, N.
1973-01-01
The time synchronization problem in an optical communication system is approached as a problem of estimating the arrival time (delay variable) of a known transmitted field. Maximum a posteriori (MAP) estimation procedures are used to generate optimal estimators, with emphasis placed on their interpretation as a practical system device. Estimation variances are used to aid in the design of the transmitter signals for best synchronization. Extension is made to systems that perform separate acquisition and tracking operations during synchronization. The closely allied problem of maintaining timing during pulse position modulation is also considered. The results have obvious application to optical radar and ranging systems, as well as the time synchronization problem.
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed
NASA Astrophysics Data System (ADS)
Kumar, V.; Sen, S.
2016-12-01
Various water resource projects developed on rivers originating in the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors due to incorrect discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the rating curve parameters that best fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the Lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. The extrapolation analysis indicated that extrapolating beyond about 15% of the maximum observed discharge or 5% of the minimum discharge is not recommended for these mountainous gauging sites.
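A minimal sketch of the power-law rating curve and a covariance-based uncertainty at a given stage. The stage-discharge pairs are invented stand-ins for the Aglar gaugings, and nonlinear least squares (equivalent to maximum likelihood under Gaussian errors) replaces whatever likelihood formulation the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Standard power-law rating curve Q = a*(h - h0)**b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# hypothetical stage (m) / discharge (m^3/s) pairs
h_obs = np.array([0.20, 0.28, 0.35, 0.47, 0.60, 0.78, 0.95])
q_obs = np.array([0.05, 0.12, 0.22, 0.45, 0.80, 1.55, 2.60])

popt, pcov = curve_fit(rating, h_obs, q_obs, p0=[5.0, 0.1, 2.0], maxfev=20000)

def q_with_uncertainty(h, eps=1e-6):
    """Delta-method standard error of the predicted discharge at stage h."""
    q = rating(h, *popt)
    grad = np.array([(rating(h, *(popt + dp)) - q) / eps for dp in np.eye(3) * eps])
    return q, np.sqrt(grad @ pcov @ grad)

print(q_with_uncertainty(1.2))   # extrapolated stage -> noticeably larger standard error
```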
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, a 34% gain compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
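A minimal sketch of the modelling contrast described above: a logistic regression fitted by maximum a posteriori estimation under a weakly informative Gaussian prior (the paper's priors may differ) versus an essentially unpenalized maximum-likelihood fit. Features and labels are synthetic placeholders for the episode-level predictors (original DRG, coder, day of coding).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                  # synthetic episode features
y = (rng.random(500) < expit(X @ np.array([1.0, -0.5, 0.8, 0.0]))).astype(float)
Xb = np.hstack([np.ones((500, 1)), X])                         # add intercept column

def neg_log_posterior(beta, prior_sd=2.5):
    eta = Xb @ beta
    log_lik = np.sum(y * eta - np.logaddexp(0.0, eta))         # Bernoulli log-likelihood
    log_prior = -0.5 * np.sum((beta / prior_sd) ** 2)          # weakly informative Gaussian prior
    return -(log_lik + log_prior)

map_fit = minimize(neg_log_posterior, np.zeros(Xb.shape[1]), method="BFGS")
mle_fit = minimize(lambda b: neg_log_posterior(b, prior_sd=1e6),
                   np.zeros(Xb.shape[1]), method="BFGS")       # near-flat prior ~ ML
print("MAP coefficients:", np.round(map_fit.x, 3))
print("ML  coefficients:", np.round(mle_fit.x, 3))
```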
Plastic Muscles TM as lightweight, low voltage actuators and sensors
NASA Astrophysics Data System (ADS)
Bennett, Matthew; Leo, Donald; Duncan, Andrew
2008-03-01
Using proprietary technology, Discover Technologies has developed ionomeric polymer transducers that are capable of long-term operation in air. These "Plastic Muscle TM" transducers are useful as soft distributed actuators and sensors and have a wide range of applications in the aerospace, robotics, automotive, electronics, and biomedical industries. Discover Technologies is developing novel fabrication methods that allow the Plastic Muscles TM to be manufactured on a commercial scale. The Plastic Muscle TM transducers are capable of generating more than 0.5% bending strain at a peak strain rate of over 0.1 %/s with a 3 V input. Because the Plastic Muscles TM use an ionic liquid as a replacement solvent for water, they are able to operate in air for long periods of time. Also, the Plastic Muscles TM do not exhibit the characteristic "back relaxation" phenomenon that is common in water-swollen devices. The elastic modulus of the Plastic Muscle TM transducers is estimated to be 200 MPa and the maximum generated stress is estimated to be 1 MPa. Based on these values, the maximum blocked force at the tip of a 6 mm wide, 35 mm long actuator is estimated to be 19 mN. Modeling of the step response with an exponential series reveals nonlinearity in the transducers' behavior.
Wind Velocity and Position Sensor-less Operation for PMSG Wind Generator
NASA Astrophysics Data System (ADS)
Senjyu, Tomonobu; Tamaki, Satoshi; Urasaki, Naomitsu; Uezato, Katsumi; Funabashi, Toshihisa; Fujita, Hideki
Electric power generation using non-conventional sources is receiving considerable attention throughout the world. Wind energy is one of the available non-conventional energy sources. Electrical power generation using wind energy is possible in two ways, viz. constant speed operation and variable speed operation using power electronic converters. Variable speed power generation is attractive, because maximum electric power can be generated at all wind velocities. However, this system requires a rotor speed sensor for vector control purposes, which increases the cost of the system. To alleviate the need for a rotor speed sensor in vector control, we propose a new sensor-less control of a PMSG (Permanent Magnet Synchronous Generator) based on the flux linkage. We can estimate the rotor position using the estimated flux linkage. We use a first-order lag compensator to obtain the flux linkage. Furthermore, we estimate wind velocity and rotation speed using an observer. The effectiveness of the proposed method is demonstrated through simulation results.
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
Characterization of food waste-recycling wastewater as biogas feedstock.
Shin, Seung Gu; Han, Gyuseong; Lee, Joonyeob; Cho, Kyungjin; Jeon, Eun-Jeong; Lee, Changsoo; Hwang, Seokhwan
2015-11-01
A set of experiments was carried out to characterize food waste-recycling wastewater (FRW) and to investigate annual and seasonal variations in composition, which are related to process operation in different seasons. Year-round samplings (n=31) showed that FRW contained high chemical oxygen demand (COD; 148.7±30.5 g/L) with carbohydrate (15.6%), protein (19.9%), lipid (41.6%), ethanol (14.0%), and volatile fatty acids (VFAs; 4.2%) as major constituents. FRW was partly (62%) solubilized, possibly due to partial fermentation of organics including carbohydrate. The biodegradable portions of carbohydrate and protein were estimated from an acidogenesis test by first-order kinetics: 72.9±4.6% and 37.7±0.3%, respectively. A maximum of 50% of the initial organics were converted to three major VFAs: acetate, propionate, and butyrate. The methane potential was estimated as 0.562 L CH4/g VSfeed, accounting for 90.0% of the theoretical maximum estimated by elemental analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
1983-06-15
List-of-figures excerpt: Runit Island radiological safety survey results following GREENHOUSE, DOG; estimate of maximum possible exposure at Parry ...; Enjebi Island radiological safety survey results following GREENHOUSE, EASY; GREENHOUSE, EASY flight patterns; surface radex area and ship positions during GREENHOUSE, GEORGE; GREENHOUSE, GEORGE flight patterns; Eleleron, Aomon, and Bijire Island radiological safety survey ...
Blakely, Richard J.
1981-01-01
Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
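The depth-estimation step itself reduces to fitting a line to the log power spectrum, since for sources at depth d the anomaly power spectrum decays roughly as exp(-2|k|d) (Spector-Grant style). The sketch below uses an ordinary FFT periodogram for illustration; the paper's contribution is to replace it with a maximum-entropy spectrum so that short data windows can be used.

```python
import numpy as np

def depth_from_spectrum(profile, dx):
    """Depth to magnetic source from the slope of the log power spectrum of one window.
    P(k) ~ exp(-2*k*d) with k in radians per unit distance, so d = -slope/2."""
    m = profile - profile.mean()
    P = np.abs(np.fft.rfft(m)) ** 2
    k = 2 * np.pi * np.fft.rfftfreq(len(m), d=dx)        # radian wavenumber
    keep = (k > 0) & (P > 0)
    slope, _ = np.polyfit(k[keep], np.log(P[keep]), 1)   # least-squares line through ln P vs k
    return -slope / 2.0
```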
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
Determination of the performance of the Kaplan hydraulic turbines through simplified procedure
NASA Astrophysics Data System (ADS)
Pădureanu, I.; Jurcu, M.; Campian, C. V.; Haţiegan, C.
2018-01-01
A simplified procedure has been developed, as an alternative to the complex one recommended by IEC 60041 (i.e. index samples), for measuring the performance of hydraulic turbines. The simplified procedure determines the minimum and maximum powers, the efficiency at maximum power, the evolution of power with head and flow, and the correct relationship between runner/impeller blade angle and guide vane opening for the most efficient operation of double-regulated machines. The simplified procedure can be used for a rapid and partial estimation of the performance of hydraulic turbines for repair and maintenance work.
Twenty-five years of maximum-entropy principle
NASA Astrophysics Data System (ADS)
Kapur, J. N.
1983-04-01
The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution
NASA Astrophysics Data System (ADS)
Wymeersch, Henk; Moeneclaey, Marc
2005-12-01
As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
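A hedged sketch of the basic soft-decision-directed step that such code-aided estimators build on is shown below for BPSK: soft symbol estimates from the decoder (tanh of half the LLR) are correlated with the received samples, and the angle of the sum gives the phase estimate. The paper's algorithms iterate this kind of update together with the MAP decoder and also resolve phase ambiguity, which is not shown; the fake decoder LLRs here are purely illustrative.

```python
import numpy as np

def code_aided_phase_estimate(r, llr):
    """One soft-decision-directed phase update for BPSK: soft symbols from the
    decoder correlate with the received samples; the angle of the sum is the
    carrier phase estimate."""
    soft_symbols = np.tanh(llr / 2.0)
    return np.angle(np.sum(r * soft_symbols))

# Toy example: BPSK symbols rotated by 0.3 rad in noise, with "decoder" LLRs
# faked from the true symbols for illustration.
rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 500)
symbols = 1.0 - 2.0 * bits
noise = 0.3 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
r = symbols * np.exp(1j * 0.3) + noise
llr = 8.0 * symbols                     # stand-in for MAP decoder soft output
print(code_aided_phase_estimate(r, llr))   # close to 0.3
```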
An estimation of global solar p-mode frequencies from IRIS network data: 1989-1996
NASA Astrophysics Data System (ADS)
Serebryanskiy, A.; Ehgamberdiev, Sh.; Kholikov, Sh.; Fossat, E.; Gelly, B.; Schmider, F. X.; Grec, G.; Cacciani, A.; Palle, P. L.; Lazrek, M.; Hoeksema, J. T.
2001-06-01
The IRIS network has accumulated full disk helioseismological data since July 1989, i.e. a complete 11-year solar cycle. Since the last paper publishing a frequency list [A&A 317 (1997) L71], the network has not only acquired new data but has also developed new co-operative programs with compatible instruments [Abstr. SOHO 6/GONG 98 Workshop (1998) 51], so that merging IRIS files with these co-operative program data sets has made it possible to improve the overall duty cycle. This paper presents new estimations of low degree p-mode frequencies obtained from this IRIS++ data bank covering the period 1989-1996, as well as the variation of their main parameters along the total range of magnetic activity, from before the last maximum to the very minimum. A preliminary estimation of the peak profile asymmetries is also included.
14 CFR 23.1527 - Maximum operating altitude.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established for...
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Cicone, R.; Crist, E.; Kauth, R. J.; Lambeck, P.; Malila, W. A.; Richardson, W.
1979-01-01
The author has identified the following significant results. An outgrowth of research and development activities in support of LACIE was a multicrop area estimation procedure, Procedure M. This procedure was a flexible, modular system that could be operated within the LACIE framework. Its distinctive features were refined preprocessing (including spatially varying correction for atmospheric haze), definition of field-like spatial features for labeling, spectral stratification, unbiased selection of samples to label, and crop area estimation without conventional maximum likelihood classification.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
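The original program is FORTRAN and is not reproduced here; the following Python sketch shows only the maximum-likelihood half of the problem, using an EM iteration in which "incomplete" observations are counts known only to fall within a subset of cells. The function name, data layout, and the example numbers are illustrative assumptions, not taken from the report.

```python
import numpy as np

def em_multinomial(complete_counts, partial_counts, n_iter=200):
    """Maximum-likelihood estimate of multinomial cell probabilities p when
    some observations are only partially classified.

    complete_counts : length-K array of fully classified counts per cell
    partial_counts  : list of (cells, count) pairs, where `cells` indexes the
                      set of cells an incompletely classified count may occupy
    """
    K = len(complete_counts)
    p = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: allocate each partial count across its admissible cells in
        # proportion to the current probability estimates.
        expected = np.asarray(complete_counts, dtype=float).copy()
        for cells, count in partial_counts:
            w = p[cells] / p[cells].sum()
            expected[cells] += count * w
        # M-step: re-normalize the expected counts.
        p = expected / expected.sum()
    return p

# Example: 3 cells, 50 fully classified observations plus 20 observations
# known only to lie in cells {0, 1} (hypothetical numbers for illustration).
p_hat = em_multinomial([20, 15, 15], [([0, 1], 20)])
print(p_hat)
```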
Rock Cutting Depth Model Based on Kinetic Energy of Abrasive Waterjet
NASA Astrophysics Data System (ADS)
Oh, Tae-Min; Cho, Gye-Chun
2016-03-01
Abrasive waterjets are widely used in the fields of civil and mechanical engineering for cutting a great variety of hard materials, including rocks and metals. Cutting depth is an important index for estimating operating time and cost, but it is very difficult to predict because there are a number of influential variables (e.g., energy, geometry, material, and nozzle system parameters). In this study, the cutting depth is correlated to the maximum kinetic energy expressed in terms of energy (i.e., water pressure, water flow rate, abrasive feed rate, and traverse speed), geometry (i.e., standoff distance), material (i.e., α and β), and nozzle system parameters (i.e., nozzle size, shape, and jet diffusion level). The maximum-kinetic-energy cutting depth model is verified with experimental test data obtained using one type of hard granite specimen for various parameters. The results show a unique curve for a specific rock type in the form of a power function between cutting depth and maximum kinetic energy. The cutting depth model developed here can be very useful for estimating the process time when cutting rock with an abrasive waterjet.
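The abstract's key empirical finding is a power-function relation between cutting depth and maximum kinetic energy for a given rock type. A hedged sketch of fitting such a curve is shown below; the functional form d = a·E^b and the data values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np
from scipy.optimize import curve_fit

def cutting_depth(E, a, b):
    """Power-law form d = a * E**b relating cutting depth to the maximum
    kinetic energy of the abrasive waterjet."""
    return a * np.power(E, b)

# Hypothetical (energy, depth) pairs standing in for measured data.
E = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # normalized kinetic energy
d = np.array([3.1, 5.2, 8.8, 15.0, 24.9])    # cutting depth, mm

(a_hat, b_hat), _ = curve_fit(cutting_depth, E, d, p0=(5.0, 0.8))
print(f"d ~ {a_hat:.2f} * E^{b_hat:.2f}")
```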
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 3 2013-10-01 2013-10-01 false Maximum allowable operating pressure: Steel or... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 3 2011-10-01 2011-10-01 false Maximum allowable operating pressure: Steel or... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 3 2012-10-01 2012-10-01 false Maximum allowable operating pressure: Steel or... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 3 2014-10-01 2014-10-01 false Maximum allowable operating pressure: Steel or... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
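The bound described above translates into a one-line calculation: the maximum seismic moment is the injected volume times the modulus of rigidity, which can then be converted to a moment magnitude. The sketch below assumes the standard conversion Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m and a typical crustal rigidity; the example numbers are illustrative, not from the paper.

```python
import math

def max_induced_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on seismic moment (N*m) and moment magnitude from the
    relation M0_max = G * dV described in the abstract, using the standard
    moment-magnitude conversion Mw = (2/3) * (log10(M0) - 9.1)."""
    m0_max = shear_modulus_pa * injected_volume_m3
    mw_max = (2.0 / 3.0) * (math.log10(m0_max) - 9.1)
    return m0_max, mw_max

# Example: 10,000 m^3 of injected fluid and a crustal rigidity of 3e10 Pa
# (illustrative values) bound the largest induced event near Mw 3.6.
print(max_induced_magnitude(1.0e4))
```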
Blind operation of optical astronomical interferometers options and predicted performance
NASA Astrophysics Data System (ADS)
Beckers, Jacques M.
1991-01-01
Maximum sensitivity for optical interferometers is achieved only when the optical path lengths between the different arms can be equalized without using interference fringes on the research object itself. This is called 'blind operation' of the interferometer. This paper examines different options to achieve this, focusing on the application to the Very Large Telescope Interferometer (VLTI). It is proposed that blind operation should be done using a so-called coherence autoguider, working on an unresolved star of magnitude V = 11-13 within the isoplanatic patch for coherencing, which has a diameter of about 1 deg. Estimates of limiting magnitudes for the VLTI are also derived.
Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons.
Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J; Mitter, Sanjoy K
2014-10-01
We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum-work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.
On the maximum principle for complete second-order elliptic operators in general domains
NASA Astrophysics Data System (ADS)
Vitolo, Antonio
This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains such as infinite strips and open connected cones with closure different from the whole space. Now we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, such as the cut plane. The methods developed here allow us to deal with complete second-order equations, where the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities, both for specific test conditions and for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given, along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
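As a rough illustration of the approach, per-flight maxima can be fitted with a Gumbel extreme-value distribution and the fitted tail used to estimate how often a given load level is exceeded. The sketch below uses synthetic data and scipy's gumbel_r; the 1.5 g threshold and the distribution parameters are assumptions for illustration, not values from the report.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical per-flight maximum gust loads (in g) standing in for measured data.
rng = np.random.default_rng(0)
flight_maxima = rng.gumbel(0.8, 0.15, size=200)

# Fit the Gumbel (extreme-value type I) distribution to the observed maxima.
loc, scale = gumbel_r.fit(flight_maxima)

# Expected number of flights between exceedances of a 1.5 g maximum gust load.
p_exceed = gumbel_r.sf(1.5, loc=loc, scale=scale)
print(f"loc={loc:.3f} g, scale={scale:.3f} g, flights per exceedance ~ {1.0 / p_exceed:.0f}")
```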
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In particular, the estimator is consistent as the sample size increases to infinity, which means it is asymptotically unbiased, and the parameter estimates obtained by maximum likelihood have the smallest variance among standard statistical methods as the sample size increases. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all of the selected countries.
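A generic sketch of maximum-likelihood fitting of a two-component mixture by expectation-maximization is shown below. It fits a simple univariate Gaussian mixture, which is only a stand-in for the specific price/exchange-rate mixture model of the paper; starting values and iteration count are arbitrary.

```python
import numpy as np

def em_two_gaussian(x, n_iter=100):
    """Maximum-likelihood fit of a two-component Gaussian mixture by EM;
    a generic sketch of the estimation approach, not the paper's model."""
    x = np.asarray(x, dtype=float)
    w = 0.5
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point.
        p1 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0])**2) / sd[0]
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1])**2) / sd[1]
        r = p1 / (p1 + p2)
        # M-step: weighted maximum-likelihood updates of the parameters.
        w = r.mean()
        mu[0] = np.average(x, weights=r)
        mu[1] = np.average(x, weights=1 - r)
        sd[0] = np.sqrt(np.average((x - mu[0])**2, weights=r))
        sd[1] = np.sqrt(np.average((x - mu[1])**2, weights=1 - r))
    return w, mu, sd
```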
14 CFR 25.1505 - Maximum operating limit speed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not be...
14 CFR 25.1505 - Maximum operating limit speed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not be...
Domestic embedded reporter program: saving lives and securing tactical operations
2017-03-01
Advances in technology have provided journalists the tools to obtain and share real-time information during domestic terrorist and mass-shooting incidents. This real-time information-sharing compromises the safety of first responders, victims, and reporters...
Assessing operating characteristics of CAD algorithms in the absence of a gold standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.
2010-04-15
Purpose: The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin-section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): free-search markings of four radiologists were compared to markings from four different CAD-assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both the reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates considered by the LIDC, the LCA-estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD-assisted readers (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than for CAD-assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
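For intuition, a bare-bones two-class latent class analysis under the conditional-independence assumption can be written as a short EM loop: the E-step computes the posterior probability that each candidate is a true lesion, and the M-step updates prevalence and each protocol's sensitivity and specificity. The sketch below is a generic illustration of that idea, not the authors' estimation code, and it omits the bootstrap testing and most numerical safeguards a real analysis would need.

```python
import numpy as np

def lca_em(detections, n_iter=200):
    """Two-class latent class analysis by EM for binary reader detections,
    assuming conditional independence given true lesion status.

    detections : (n_candidates, n_readers) array of 0/1 marks.
    Returns prevalence, per-reader sensitivity, per-reader specificity."""
    y = np.asarray(detections, dtype=float)
    n, r = y.shape
    prev, sens, spec = 0.5, np.full(r, 0.7), np.full(r, 0.7)
    for _ in range(n_iter):
        # E-step: posterior probability each candidate is a true lesion.
        log_pos = np.log(prev) + (y * np.log(sens) + (1 - y) * np.log(1 - sens)).sum(axis=1)
        log_neg = np.log(1 - prev) + ((1 - y) * np.log(spec) + y * np.log(1 - spec)).sum(axis=1)
        m = np.maximum(log_pos, log_neg)
        post = np.exp(log_pos - m) / (np.exp(log_pos - m) + np.exp(log_neg - m))
        # M-step: maximum-likelihood updates of prevalence and operating points.
        prev = np.clip(post.mean(), 1e-6, 1 - 1e-6)
        sens = np.clip((post[:, None] * y).sum(axis=0) / post.sum(), 1e-6, 1 - 1e-6)
        spec = np.clip(((1 - post)[:, None] * (1 - y)).sum(axis=0) / (1 - post).sum(), 1e-6, 1 - 1e-6)
    return prev, sens, spec
```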
14 CFR 27.1527 - Maximum operating altitude.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 27.1527 Section 27.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... § 27.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as limited...
14 CFR 29.1527 - Maximum operating altitude.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 29.1527 Section 29.1527 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT... Limitations § 29.1527 Maximum operating altitude. The maximum altitude up to which operation is allowed, as...
49 CFR 195.406 - Maximum operating pressure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195...
49 CFR 195.406 - Maximum operating pressure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a... 49 Transportation 3 2012-10-01 2012-10-01 false Maximum operating pressure. 195.406 Section 195...
49 CFR 195.406 - Maximum operating pressure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a... 49 Transportation 3 2014-10-01 2014-10-01 false Maximum operating pressure. 195.406 Section 195...
49 CFR 195.406 - Maximum operating pressure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a... 49 Transportation 3 2011-10-01 2011-10-01 false Maximum operating pressure. 195.406 Section 195...
49 CFR 195.406 - Maximum operating pressure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a... 49 Transportation 3 2013-10-01 2013-10-01 false Maximum operating pressure. 195.406 Section 195...
Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm
NASA Astrophysics Data System (ADS)
Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi
2017-11-01
In this paper we design the following two-step scheme to estimate the model parameter ω0 of a quantum system: first we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008
Thermal effects of dams in the Willamette River basin, Oregon
Rounds, Stewart A.
2010-01-01
Methods were developed to assess the effects of dams on streamflow and water temperature in the Willamette River and its major tributaries. These methods were used to estimate the flows and temperatures that would occur at 14 dam sites in the absence of upstream dams, and river models were applied to simulate downstream flows and temperatures under a no-dams scenario. The dams selected for this study include 13 dams built and operated by the U.S. Army Corps of Engineers (USACE) as part of the Willamette Project, and 1 dam on the Clackamas River owned and operated by Portland General Electric (PGE). Streamflows in the absence of upstream dams for 2001-02 were estimated for USACE sites on the basis of measured releases, changes in reservoir storage, a correction for evaporative losses, and an accounting of flow effects from upstream dams. For the PGE dam, no-project streamflows were derived from a previous modeling effort that was part of a dam-relicensing process. Without-dam streamflows were characterized by higher peak flows in winter and spring and much lower flows in late summer, as compared to with-dam measured flows. Without-dam water temperatures were estimated from measured temperatures upstream of the reservoirs (the USACE sites) or derived from no-project model results (the PGE site). When using upstream data to estimate without-dam temperatures at dam sites, a typical downstream warming rate based on historical data and downstream river models was applied over the distance from the measurement point to the dam site, but only for conditions when the temperature data indicated that warming might be expected. Regressions with measured temperatures from nearby or similar sites were used to extend the without-dam temperature estimates to the entire 2001-02 time period. Without-dam temperature estimates were characterized by a more natural seasonal pattern, with a maximum in July or August, in contrast to the measured patterns at many of the tall dam sites where the annual maximum temperature typically occurred in September or October. Without-dam temperatures also tended to have more daily variation than with-dam temperatures. Examination of the without-dam temperature estimates indicated that dam sites could be grouped according to the amount of streamflow derived from high-elevation, spring-fed, and snowmelt-driven areas high in the Cascade Mountains (Cougar, Big Cliff/Detroit, River Mill, and Hills Creek Dams: Group A), as opposed to flow primarily derived from lower-elevation rainfall-driven drainages (Group B). Annual maximum temperatures for Group A ranged from 15 to 20 degree(s)C, expressed as the 7-day average of the daily maximum (7dADM), whereas annual maximum 7dADM temperatures for Group B ranged from 21 to 25 degrees C. Because summertime stream temperature is at least somewhat dependent on the upstream water source, it was important when estimating without-dam temperatures to use correlations to sites with similar upstream characteristics. For that reason, it also is important to maintain long-term, year-round temperature measurement stations at representative sites in each of the Willamette River basin's physiographic regions. Streamflow and temperature estimates downstream of the major dam sites and throughout the Willamette River were generated using existing CE-QUAL-W2 flow and temperature models. 
These models, originally developed for the Willamette River water-temperature Total Maximum Daily Load process, required only a few modifications to allow them to run under the greatly reduced without-dam flow conditions. Model scenarios both with and without upstream dams were run. Results showed that Willamette River streamflow without upstream dams was reduced to levels much closer to historical pre-dam conditions, with annual minimum streamflows approximately one-half or less of dam-augmented levels. Thermal effects of the dams varied according to the time of year, from cooling in mid-summer to warm
Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems
NASA Technical Reports Server (NTRS)
Lustig, P. H.; Holms, A. G.; Davison, H. W.
1973-01-01
The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, gave a probability of 0.015 that this pin exceeds the temperature limit according to the distribution-free Chebyshev inequality, and a probability that is virtually nil assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
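The two probability statements can be reproduced with a short calculation. Taking the nominal 1370 K as the mean and inferring a standard deviation of roughly 31 K from the quoted 3-sigma temperature of 1463 K (an assumption made here for illustration; the report's synthesized value is not restated in the abstract), the 1622 K limit lies about 8 standard deviations away, so the Chebyshev bound 1/k² gives about 0.015 while the normal tail probability is negligible:

```python
from scipy.stats import norm

mean_k, limit_k = 1370.0, 1622.0
sigma_k = (1463.0 - 1370.0) / 3.0         # ~31 K, inferred from the 3-sigma value

k = (limit_k - mean_k) / sigma_k
chebyshev_bound = 1.0 / k**2              # distribution-free Chebyshev bound
normal_tail = norm.sf(k)                  # probability assuming a normal distribution

print(f"k = {k:.2f}, Chebyshev bound = {chebyshev_bound:.3f}, normal tail = {normal_tail:.1e}")
```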
Mid-infrared InAs/AlGaSb superlattice quantum-cascade lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohtani, K.; Fujita, K.; Ohno, H.
2005-11-21
We report on the demonstration of mid-infrared InAs/AlGaSb superlattice quantum-cascade lasers operating at 10 μm. The laser structures are grown on an n-InAs (100) substrate by solid-source molecular-beam epitaxy. An InAs/AlGaSb chirped superlattice structure providing a large oscillator strength and fast carrier depopulation is employed as the active part. The observed minimum threshold current density at 80 K is 0.7 kA/cm², and the maximum operation temperature in pulse mode is 270 K. The waveguide loss of an InAs plasmon waveguide is estimated, and the factors that determine the operation temperature are discussed.
PREMIX: PRivacy-preserving EstiMation of Individual admiXture.
Chen, Feng; Dow, Michelle; Ding, Sijie; Lu, Yao; Jiang, Xiaoqian; Tang, Hua; Wang, Shuang
2016-01-01
In this paper we propose a framework, PRivacy-preserving EstiMation of Individual admiXture (PREMIX), using Intel software guard extensions (SGX). SGX is a suite of software and hardware architectures that enable efficient and secure computation over confidential data. PREMIX enables multiple sites to securely collaborate on estimating individual admixture within a secure enclave inside Intel SGX. We implemented a feature selection module to identify the most discriminative Single Nucleotide Polymorphisms (SNPs) based on informativeness, and an Expectation Maximization (EM)-based Maximum Likelihood estimator to estimate individual admixture. Experimental results based on both simulated and 1000 Genomes data demonstrate the efficiency and accuracy of the proposed framework. PREMIX ensures a high level of security as all operations on sensitive genomic data are conducted within a secure enclave using SGX.
Kinetic study on the effect of temperature on biogas production using a lab scale batch reactor.
Deepanraj, B; Sivasubramanian, V; Jayaraj, S
2015-11-01
In the present study, biogas production from food waste through anaerobic digestion was carried out in a 2 L laboratory-scale batch reactor operating at different temperatures with a hydraulic retention time of 30 days. The reactors were operated with a solid concentration of 7.5% total solids and pH 7. The food wastes used in this experiment were subjected to characterization studies before and after digestion. The modified Gompertz model and the logistic model were used for the kinetic study of biogas production. The kinetic parameters, namely the biogas yield potential of the substrate (B), the maximum biogas production rate (Rb) and the duration of the lag phase (λ), together with the coefficient of determination (R²) and the root mean square error (RMSE), were estimated in each case. The effect of temperature on biogas production was evaluated experimentally and compared with the results of the kinetic study. The results demonstrate that the reactor operating at 50°C achieved the maximum cumulative biogas production of 7556 mL with better biodegradation efficiency. Copyright © 2015 Elsevier Inc. All rights reserved.
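A hedged sketch of fitting the modified Gompertz model to cumulative biogas data with scipy is given below. The functional form B(t) = B·exp(-exp(Rb·e/B·(λ - t) + 1)) is the one commonly used for batch digestion kinetics; the data points and starting values are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, B0, Rb, lam):
    """Cumulative biogas B(t) with yield potential B0, maximum production
    rate Rb, and lag phase lam (form commonly used for batch digestion)."""
    return B0 * np.exp(-np.exp(Rb * np.e / B0 * (lam - t) + 1.0))

# Hypothetical cumulative-biogas observations (mL) over a 30-day run.
t = np.arange(0, 31, 3, dtype=float)
B = np.array([0, 150, 900, 2500, 4300, 5700, 6600, 7100, 7350, 7480, 7540])

(B0, Rb, lam), _ = curve_fit(modified_gompertz, t, B, p0=(7500.0, 400.0, 2.0))
print(f"B0 = {B0:.0f} mL, Rb = {Rb:.0f} mL/day, lag = {lam:.1f} days")
```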
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-08-13
We estimate the capacity value of concentrating solar power (CSP) plants with thermal energy storage (TES) in the southwestern U.S. Our results show that incorporating TES in CSP plants significantly increases their capacity value. While CSP plants without TES have capacity values ranging between 60% and 86% of maximum capacity, plants with TES can have capacity values between 79% and 92%. Here, we demonstrate the effect of location and configuration on the operation and capacity value of CSP plants. Finally, we also show that using a capacity payment mechanism can increase the capacity value of CSP, since the capacity value of CSP is highly sensitive to operational decisions and energy prices are not a perfect indicator of scarcity of supply.
Reusable Reentry Satellite (RRS) system design study: System cost estimates document
NASA Technical Reports Server (NTRS)
1991-01-01
The Reusable Reentry Satellite (RRS) program was initiated to provide life science investigators relatively inexpensive, frequent access to space for extended periods of time with eventual satellite recovery on earth. The RRS will provide an on-orbit laboratory for research on biological and material processes, be launched from a number of expendable launch vehicles, and operate in Low-Altitude Earth Orbit (LEO) as a free-flying unmanned laboratory. SAIC's design will provide independent atmospheric reentry and soft landing in the continental U.S., orbit for a maximum of 60 days, and will sustain three flights per year for 10 years. The Reusable Reentry Vehicle (RRV) will be 3-axis stabilized with artificial gravity up to 1.5g's, be rugged and easily maintainable, and have a modular design to accommodate a satellite bus and separate modular payloads (e.g., rodent module, general biological module, ESA microgravity botany facility, general botany module). The purpose of this System Cost Estimate Document is to provide a Life Cycle Cost Estimate (LCCE) for a NASA RRS Program using SAIC's RRS design. The estimate includes development, procurement, and 10 years of operations and support (O&S) costs for NASA's RRS program. The estimate does not include costs for other agencies which may track or interface with the RRS program (e.g., Air Force tracking agencies or individual RRS experimenters involved with special payload modules (PM's)). The life cycle cost estimate extends over the 10 year operation and support period FY99-2008.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Li, DongXu; Liu, ZhiZhen; Liu, Liang
2013-03-01
The dexterous upper limb serves as the most important tool for astronauts implementing in-orbit experiments and operations. This study developed a simulated weightlessness experiment and new measuring equipment to quantitatively evaluate the muscle ability of the upper limb. Isometric maximum voluntary contractions (MVCs) and surface electromyography (sEMG) signals of right-handed pushing at the three positions were measured for eleven subjects. In order to enhance the comprehensiveness and accuracy of muscle force assessment, the study focused on signal processing techniques. We applied a combination method consisting of time-, frequency-, and bi-frequency-domain analyses. The time- and frequency-domain analyses estimated the root mean square (RMS) and median frequency (MDF) of the sEMG signals, respectively. Higher order spectra (HOS) in the bi-frequency domain evaluated the maximum bispectrum amplitude (Bmax), Gaussianity level (Sg) and linearity level (Sl) of the sEMG signals. Results showed that Bmax, Sl, and RMS values all increased as force increased, while MDF and Sg values both declined as force increased. The research demonstrated that the combination method is superior to the conventional time- and frequency-domain analyses: it not only describes sEMG signal amplitude and power spectrum, but also characterizes phase-coupling information and the non-Gaussianity and non-linearity levels of the sEMG more deeply than the two conventional analyses. The findings from the study can aid ergonomists in estimating astronaut muscle performance, so as to optimize in-orbit operation efficacy and minimize musculoskeletal injuries.
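As a small illustration of the time- and frequency-domain indices mentioned above, the RMS and median frequency of an sEMG segment can be computed as follows; the bispectral (HOS) indices of the paper are not attempted here, and the periodogram-based MDF is a simplification of what a careful analysis would use.

```python
import numpy as np

def rms_and_mdf(emg, fs):
    """Time-domain RMS and frequency-domain median frequency (MDF) of one
    sEMG segment, using a plain periodogram for the power spectrum."""
    emg = np.asarray(emg, dtype=float)
    rms = np.sqrt(np.mean(emg**2))
    spec = np.abs(np.fft.rfft(emg - emg.mean()))**2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cum = np.cumsum(spec)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]   # frequency splitting spectral power in half
    return rms, mdf

# Toy usage with a synthetic burst-like signal sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
emg = rng.standard_normal(t.size) * np.sin(2 * np.pi * 1.5 * t) ** 2
print(rms_and_mdf(emg, fs))
```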
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
NASA Astrophysics Data System (ADS)
Chen, Xi Lin; De Santis, Valerio; Esai Umenei, Aghuinyue
2014-07-01
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
NASA Astrophysics Data System (ADS)
Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John
2011-12-01
Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.
30 Brigade Combat Teams: Is the Army too Small
2016-12-01
are described in Table 3. Each entity travels through five servers. Deploy: this server represents the time it takes to deploy from home station... The purpose of this thesis is to determine the impact of a contingency operation on Army dwell time. The Department of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
... Estimated respondent burden: Pretest, 5 respondents, 1 response each, 1.5 hours per response (7.5 total hours); Interviews, 135 respondents, 1 response each, 1.5 hours per response (202.5 total hours); total burden 210 hours. There are no capital costs or operating and maintenance costs associated with this collection of information. ERG will conduct a pretest of... complete the pretest, for a total of a maximum of 7.5 hours. We estimate that up to 135 respondents will...
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
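The shrink-the-ML-update idea can be illustrated with the simplest case: for a Gaussian noise model and a Laplacian prior, the MAP estimate of each voxel is obtained by soft-thresholding the unregularized ML estimate. The paper derives different (inverse-quadratic and inverse-cubic) shrinkage functions for its generalized priors and embeds the shrinkage inside an iterative reconstruction; the toy sketch below only shows the one-step shrinkage pattern on synthetic data.

```python
import numpy as np

def soft_threshold(x, lam):
    """MAP shrinkage of an unregularized ML estimate x under a Laplacian
    prior with Gaussian noise: the familiar soft-thresholding rule."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy example: a mostly-empty "scattering density" vector observed in noise.
rng = np.random.default_rng(2)
truth = np.zeros(50)
truth[[5, 20, 33]] = [4.0, 6.0, 3.0]                 # a few high-Z-like voxels
ml_estimate = truth + 0.8 * rng.standard_normal(truth.size)   # noisy ML image

map_estimate = soft_threshold(ml_estimate, lam=1.5)   # shrink the ML update
print(np.nonzero(map_estimate)[0])
```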
An Eight-Month Sample of Marine Stratocumulus Cloud Fraction, Albedo, and Integrated Liquid Water.
NASA Astrophysics Data System (ADS)
Fairall, C. W.; Hare, J. E.; Snider, J. B.
1990-08-01
As part of the First International Satellite Cloud Climatology Regional Experiment (FIRE), a surface meteorology and shortwave/longwave irradiance station was operated in a marine stratocumulus regime on the northwest tip of San Nicolas island off the coast of Southern California. Measurements were taken from March through October 1987, including a FIRE Intensive Field Operation (IFO) held in July. Algorithms were developed to use the longwave irradiance data to estimate fractional cloudiness and to use the shortwave irradiance to estimate cloud albedo and integrated cloud liquid water content. Cloud base height is estimated from computations of the lifting condensation level. The algorithms are tested against direct measurements made during the IFO; a 30% adjustment was made to the liquid water parameterization. The algorithms are then applied to the entire database. The stratocumulus clouds over the island are found to have a cloud base height of about 400 m, an integrated liquid water content of 75 gm2, a fractional cloudiness of 0.95, and an albedo of 0.55. Integrated liquid water content rarely exceeds 350 g m2 and albedo rarely exceeds 0.90 for stratocumulus clouds. Over the summer months, the average cloud fraction shows a maximum at sunrise of 0.74 and a minimum at sunset of 0.41. Over the same period, the average cloud albedo shows a maximum of 0.61 at sunrise and a minimum of 0.31 a few hours after local noon (although the estimate is more uncertain because of the extreme solar zenith angle). The use of joint frequency distributions of fractional cloudiness with solar transmittance or cloud base height to classify cloud types appears to be useful.
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R
2012-11-01
The 180 ship recycling yards located on the Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest cluster engaged in ship dismantling. About 350 ships are dismantled yearly (averaging 10,000 tons of steel per ship) with the involvement of about 60,000 workers. Cutting and scrapping of plates, or scraping of painted metal surfaces, is the most commonly performed operation during ship breaking. The pollutants released from a typical plate-cutting operation can potentially either affect workers directly by contaminating the breathing zone (air pollution) or add to the pollution load of the intertidal zone and contaminate sediments when pollutants are emitted in the secondary working zone and subjected to tidal forces. There was a two-pronged purpose behind the mathematical modeling exercise performed in this study: first, to estimate the zone of influence up to which the effect of the plume would extend; second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could be placed between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) for Pb (0.5 μg/Nm³). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships and of an ideal procedure with corresponding health, safety, and environment provisions. The insights obtained from this research are also being used in developing appropriate technologies for minimizing exposure to workers and minimizing the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India. Copyright © 2012 Elsevier B.V. All rights reserved.
Theoretical and experimental researches on the operating costs of a wastewater treatment plant
NASA Astrophysics Data System (ADS)
Panaitescu, M.; Panaitescu, F.-V.; Anton, I.-A.
2015-11-01
Purpose of the work: The total cost of a sewage treatment plant is often determined by the present value method. All of the annual operating costs for each process are converted to their present value and added to the investment costs for each process, which yields the net present value. The costs of sewage treatment plants are, in general, subdivided into investment and operating costs. The latter can be fixed (normal operation and maintenance, the establishment of power) or variable (chemicals and power, sludge treatment and disposal, effluent charges). Cost functions can be used to evaluate preliminary costs so that a choice can be made between different alternatives in the early phase of a project. In this paper the operational cost is calculated for several scenarios in order to optimize it. The total operational cost (fixed and variable) depends on the global parameters of the wastewater treatment plant. Research and methodology: The wastewater treatment plant costs are subdivided into investment and operating costs. Different cost functions can be used to estimate fixed and variable operating costs; in this study statistical formulas for the cost functions were used. The method applied to study the impact of the influent characteristics on the costs is economic analysis. Optimization of the plant design consists, firstly, in assessing the ability of the smallest design to treat the maximum loading rates to a given effluent quality and, secondly, in comparing the cost of the two alternatives for average and maximum loading rates. Results: The paper presents the statistical values of the investment cost functions and of the fixed and variable operational cost functions for the wastewater treatment plant, together with their graphical representations. All costs were compared as net present values. It is observed that it is more economical to build a larger plant, especially if maximum loading rates are reached. The eventual target of operational management is to implement the presented cost functions directly in a software tool in which the design of a plant and the simulation of its behaviour are evaluated simultaneously.
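A minimal sketch of the present value comparison described above is given below: each alternative's annual operating cost is discounted as an ordinary annuity and added to its investment cost, and the alternative with the lower total is preferred. The discount rate, horizon, and cost figures are invented for illustration.

```python
def present_value_cost(investment, annual_operating_cost, discount_rate, years):
    """Total cost of one design alternative: investment plus the present
    value of its annual operating costs (ordinary annuity)."""
    pv_factor = (1.0 - (1.0 + discount_rate) ** -years) / discount_rate
    return investment + annual_operating_cost * pv_factor

# Hypothetical comparison of a smaller and a larger plant design.
small = present_value_cost(investment=2.0e6, annual_operating_cost=350_000,
                           discount_rate=0.05, years=20)
large = present_value_cost(investment=2.6e6, annual_operating_cost=280_000,
                           discount_rate=0.05, years=20)
print(f"small: {small:,.0f}  large: {large:,.0f}")
```

With these made-up numbers the larger plant comes out cheaper over the planning horizon, mirroring the qualitative conclusion of the paper.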
NASA Astrophysics Data System (ADS)
Chazarra, Manuel; Pérez-Díaz, Juan I.; García-González, Javier
2017-04-01
This paper analyses the economic viability of pumped-storage hydropower plants equipped with ternary units and considering hydraulic short-circuit operation. The analysed plant is assumed to participate in the day-ahead energy market and in the secondary regulation service of the Spanish power system. A deterministic day-ahead energy and reserve scheduling model is used to estimate the maximum theoretical income of the plant assuming perfect information of the next day prices and the residual demand curves of the secondary regulation reserve market. Results show that the pay-back periods with and without the hydraulic short-circuit operation are significantly lower than their expected lifetime and that the pay-back periods can be reduced with the inclusion of the hydraulic short-circuit operation.
Barreto, Savio G; Singh, Amanjeet; Perwaiz, Azhar; Singh, Tanveer; Singh, Manish Kumar; Chaudhary, Adarsh
2017-04-01
Unnecessary preoperative ordering of blood and blood products results in wastage of a valuable life-saving resource and poses a significant financial burden on healthcare systems. The aims were to determine patient-specific factors associated with intra-operative transfusions and whether intra-operative blood transfusions impact postoperative morbidity. We analysed consecutive patients undergoing pancreatoduodenectomy (PD) for pancreatic tumors. A total of 384 patients underwent a classical PD, with an estimated median blood loss of 200 cc and 9.6% of patients transfused. Pre-existing hypertension, synchronous vascular resection, end-to-side pancreaticojejunostomy and nodal disease burden were significantly associated with the need for intra-operative transfusions. Intra-operative blood transfusion was not associated with postoperative morbidity. Optimization of MSBOS protocols for PD is required for more judicious use of blood products.
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
Code of Federal Regulations, 2010 CFR
2010-10-01
... plastic pipelines. 192.619 Section 192.619 Transportation Other Regulations Relating to Transportation... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...
Nitrogen nucleation in a cryogenic supersonic nozzle
NASA Astrophysics Data System (ADS)
Bhabhe, Ashutosh; Wyslouzil, Barbara
2011-12-01
We follow the vapor-liquid phase transition of N2 in a cryogenic supersonic nozzle apparatus using static pressure measurements. Under our operating conditions, condensation always occurs well below the triple point. Mean field kinetic nucleation theory (MKNT) does a better job of predicting the conditions corresponding to the estimated maximum nucleation rates, Jmax = 10^(17±1) cm^-3 s^-1, than two variants of classical nucleation theory. Combining the current results with the nucleation pulse chamber measurements of Iland et al. [J. Chem. Phys. 130, 114508-1 (2009)], we use nucleation theorems to estimate the critical cluster properties. Both the theories overestimate the size of the critical cluster, but MKNT does a good job of estimating the excess internal energy of the clusters.
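The nucleation-theorem step mentioned above amounts, in its simplest form, to reading the critical cluster size from the isothermal slope of ln J versus ln S (first nucleation theorem, slope ≈ n* + 1). A hedged finite-difference sketch is given below; the supersaturation/rate pairs are invented for illustration, and the second theorem (excess internal energy from the temperature dependence of J) is not shown.

```python
import numpy as np

def critical_cluster_size(lnS, lnJ):
    """First nucleation theorem: the slope of ln J versus ln S at constant
    temperature gives roughly n* + 1 molecules in the critical cluster."""
    slope = np.polyfit(lnS, lnJ, 1)[0]
    return slope - 1.0

# Hypothetical (supersaturation, rate) pairs at one nozzle temperature.
S = np.array([8.0, 9.0, 10.0, 11.0])
J = np.array([1e15, 1e16, 5e16, 2e17])        # cm^-3 s^-1
print(critical_cluster_size(np.log(S), np.log(J)))
```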
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, E-E; Pan, Shu-Yuan; Yang, Liuhanzi
2015-09-15
Highlights: • Carbonation was performed using CO2, wastewater and bottom ash in a slurry reactor. • A maximum capture capacity of 102 g CO2 per kg BA was achieved at mild conditions. • A maximum carbonation conversion of MSWI-BA was predicted to be 95% by RSM. • The CO2 emission from the Bali incinerator could be expected to reduce by 6480 ton/y. • The process energy consumption per ton CO2 captured was estimated to be 180 kW h. - Abstract: Accelerated carbonation of alkaline wastes including municipal solid waste incinerator bottom ash (MSWI-BA) and cold-rolling wastewater (CRW) was investigated for carbon dioxide (CO2) fixation under different operating conditions, i.e., reaction time, CO2 concentration, liquid-to-solid ratio, particle size, and CO2 flow rate. The MSWI-BA before and after the carbonation process was analyzed by thermogravimetry and differential scanning calorimetry, X-ray diffraction, and scanning electron microscopy equipped with energy dispersive X-ray spectroscopy. The MSWI-BA exhibits a high carbonation conversion of 90.7%, corresponding to a CO2 fixation capacity of 102 g per kg of ash. Meanwhile, the carbonation kinetics was evaluated by the shrinking core model. In addition, the effect of different operating parameters on carbonation conversion of MSWI-BA was statistically evaluated by response surface methodology (RSM) using experimental data to predict the maximum carbonation conversion. Furthermore, the amount of CO2 reduction and energy consumption for operating the proposed process in a refuse incinerator were estimated. Capsule abstract: A CO2 fixation process using alkaline wastes including bottom ash and cold-rolling wastewater was developed, which should be a viable method due to its high conversion.
Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm
Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.
2010-01-01
A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
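The abstract does not spell out the update rule, but the general idea can be sketched as follows: evaluate a log-likelihood on a coarse lattice, re-center on the best point, contract the lattice, and repeat. The toy Gaussian-response log-likelihood and the grid settings below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def contracting_grid_search(loglike, center, width, grid=5, iterations=6):
    """Repeatedly evaluate loglike on a grid x grid lattice, re-center on the best
    point, and shrink the lattice, approximating the maximum-likelihood location."""
    cx, cy = center
    for _ in range(iterations):
        xs = np.linspace(cx - width / 2, cx + width / 2, grid)
        ys = np.linspace(cy - width / 2, cy + width / 2, grid)
        scores = np.array([[loglike(x, y) for x in xs] for y in ys])
        iy, ix = np.unravel_index(np.argmax(scores), scores.shape)
        cx, cy = xs[ix], ys[iy]
        width /= 2.0            # contract the grid around the current best estimate
    return cx, cy

# Toy example: true interaction location at (1.2, -0.7) with a quadratic log-likelihood.
true_pos = np.array([1.2, -0.7])
def loglike(x, y):
    return -np.sum((np.array([x, y]) - true_pos) ** 2)

print(contracting_grid_search(loglike, center=(0.0, 0.0), width=8.0))
```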
Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R
2010-01-01
We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.
Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.
2011-01-01
We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
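The recommended extrapolation can be illustrated with a minimal sketch on synthetic data (not the paper's measurements): fit only the linear, high-intensity part of a quantum-requirement curve and evaluate the fit at zero light intensity to estimate the minimum quantum requirement.

```python
import numpy as np

# Synthetic quantum-requirement curve: non-linear at low intensity, roughly linear above ~40 units.
intensity = np.linspace(5, 200, 40)
quantum_requirement = 9.0 + 0.01 * intensity + 8.0 * np.exp(-intensity / 15.0)

linear_part = intensity > 40                      # keep only the linear, high-intensity region
slope, intercept = np.polyfit(intensity[linear_part], quantum_requirement[linear_part], 1)

print(f"Extrapolated minimum quantum requirement at zero intensity: {intercept:.2f}")
```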
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, J.
A case is reported of a 39-yr-old dentist who realized that his dental x-ray machine had been on for about 90 min. During this time he was near the machine constantly, with his back usually toward the source of radiation. The estimated dose to the back of his head and upper torso was 180 r. The dentist suffered some anxiety, but no acute symptoms of radiation sickness. Physical examination gave negative results. There was no evidence of acute radiation damage. Study of temporal scalp hair revealed an estimated maximum dose received by the hair follicles of approximately 50 to 75 r. The direct technical cause of the accident was a loose washer in the timer mechanism, making contact and completing the switching circuit, thereby causing the unit to go on. It is suggested that dentists and their assistants should wear radiation exposure badges at all times. In addition, equipment should be systematically and regularly checked so that maximum operating efficiency can be combined with minimum exposure. (P.C.H.)
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the difference in variance between maximum likelihood and expected a posteriori estimation methods, viewed from the number of test items of an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
A new event detector designed for the Seismic Research Observatories
Murdock, James N.; Hutt, Charles R.
1983-01-01
A new short-period event detector has been implemented on the Seismic Research Observatories. For each signal detected, a printed output gives estimates of the time of onset of the signal, direction of the first break, quality of onset, period and maximum amplitude of the signal, and an estimate of the variability of the background noise. On the SRO system, the new algorithm runs ~2.5x faster than the former (power level) detector. This increase in speed is due to the design of the algorithm: all operations can be performed by simple shifts, additions, and comparisons (floating point operations are not required). Even though a narrow-band recursive filter is not used, the algorithm appears to detect events competitively with those algorithms that employ such filters. Tests at Albuquerque Seismological Laboratory on data supplied by Blandford suggest performance commensurate with the on-line detector of the Seismic Data Analysis Center, Alexandria, Virginia.
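The exact Murdock-Hutt algorithm is not reproduced in this summary; the sketch below only illustrates the style of detector described above, using fixed-point short- and long-term averages maintained purely with shifts, additions and comparisons. The shift amounts, warm-up length and trigger ratio are arbitrary assumptions.

```python
def detect_events(samples, sta_shift=3, lta_shift=6, ratio_shift=2):
    """Generic integer-only onset detector (not the Murdock-Hutt algorithm itself)."""
    sta_acc = lta_acc = 0                        # fixed-point accumulators (average << shift)
    onsets = []
    for i, x in enumerate(samples):
        a = abs(int(x))
        sta_acc += a - (sta_acc >> sta_shift)    # short-term average via shift/add
        lta_acc += a - (lta_acc >> lta_shift)    # long-term average via shift/add
        sta = sta_acc >> sta_shift
        lta = lta_acc >> lta_shift
        if i > (1 << lta_shift) and sta > (lta << ratio_shift):
            onsets.append(i)                     # comparison only: STA exceeds 4x LTA
    return onsets

# Synthetic trace: quiet background with a short burst of larger amplitudes.
signal = [2, 3, 2, 1, 2, 3, 2, 2] * 20 + [40, 80, 120, 90, 60, 30, 10] + [2, 3, 2] * 10
print(detect_events(signal))
```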
Minimum maximum temperature gradient coil design.
While, Peter T; Poole, Michael S; Forbes, Larry K; Crozier, Stuart
2013-08-01
Ohmic heating is a serious problem in gradient coil operation. A method is presented for redesigning cylindrical gradient coils to operate at minimum peak temperature, while maintaining field homogeneity and coil performance. To generate these minimaxT coil windings, an existing analytic method for simulating the spatial temperature distribution of single layer gradient coils is combined with a minimax optimization routine based on sequential quadratic programming. Simulations are provided for symmetric and asymmetric gradient coils that show considerable improvements in reducing maximum temperature over existing methods. The winding patterns of the minimaxT coils were found to be heavily dependent on the assumed thermal material properties and generally display an interesting "fish-eye" spreading of windings in the dense regions of the coil. Small prototype coils were constructed and tested for experimental validation and these demonstrate that with a reasonable estimate of material properties, thermal performance can be improved considerably with negligible change to the field error or standard figures of merit. © 2012 Wiley Periodicals, Inc.
Estimating Tropical Cyclone Surface Wind Field Parameters with the CYGNSS Constellation
NASA Astrophysics Data System (ADS)
Morris, M.; Ruf, C. S.
2016-12-01
A variety of parameters can be used to describe the wind field of a tropical cyclone (TC). Of particular interest to the TC forecasting and research community are the maximum sustained wind speed (VMAX), radius of maximum wind (RMW), 34-, 50-, and 64-kt wind radii, and integrated kinetic energy (IKE). The RMW is the distance separating the storm center and the VMAX position. IKE integrates the square of surface wind speed over the entire storm. These wind field parameters can be estimated from observations made by the Cyclone Global Navigation Satellite System (CYGNSS) constellation. The CYGNSS constellation consists of eight small satellites in a 35-degree inclination circular orbit. These satellites will be operating in standard science mode by the 2017 Atlantic TC season. CYGNSS will provide estimates of ocean surface wind speed under all precipitating conditions with high temporal and spatial sampling in the tropics. TC wind field data products can be derived from the level-2 CYGNSS wind speed product. CYGNSS-based TC wind field science data products are developed and tested in this paper. Performance of these products is validated using a mission simulator prelaunch.
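A minimal sketch of the IKE calculation mentioned above (integrating the square of the surface wind speed over the storm) is given below for an idealized axisymmetric wind field; the grid spacing, wind profile, density convention and threshold are assumptions for illustration, not CYGNSS products.

```python
import numpy as np

RHO = 1.0          # air density in kg/m^3 (a common simplification for IKE)
CELL_AREA = 4.0e6  # grid-cell area in m^2 (2 km x 2 km cells, assumed here)

# Idealized axisymmetric wind field: speed rises to VMAX at RMW, then decays outward.
x = np.arange(-200e3, 200e3, 2e3)                 # metres
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
vmax, rmw = 45.0, 30e3                            # m/s and metres (assumed values)
wind = np.where(r < rmw, vmax * r / rmw, vmax * (rmw / np.maximum(r, 1.0)) ** 0.5)

# Integrated kinetic energy over a 1-m-deep surface layer, winds >= 18 m/s (~34 kt).
mask = wind >= 18.0
ike_joules = np.sum(0.5 * RHO * wind[mask] ** 2 * CELL_AREA * 1.0)
print(f"IKE ≈ {ike_joules / 1e12:.1f} TJ")
```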
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first-ranked flood event (the 1997 case), whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, the effect of climate change on precipitation is discussed, with emphasis on temperature increase, in order to determine the physically possible upper limits of precipitation under climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of the 42 historical extreme precipitation events demonstrate that the 72-hr basin-averaged probable maximum precipitation is 21.72 inches for an exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimate for the American River Watershed is 28.57 inches, as published in Hydrometeorological Report No. 59, and a previous PMP value was 31.48 inches, as published in Hydrometeorological Report No. 36. According to the exceedance probability analyses of this proposed method, these two estimates correspond to exceedance probabilities of 0.036 percent and 0.011 percent, respectively.
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Development of GP and GEP models to estimate an environmental issue induced by blasting operation.
Faradonbeh, Roohollah Shirani; Hasanipanah, Mahdi; Amnieh, Hassan Bakhshandeh; Armaghani, Danial Jahed; Monjezi, Masoud
2018-05-21
Air overpressure (AOp) is one of the most adverse effects induced by blasting in surface mines and civil projects. Proper evaluation and estimation of AOp is therefore important for minimizing the environmental problems resulting from blasting. The main aim of this study is to estimate the AOp produced by blasting operations in the Miduk copper mine, Iran, by developing two artificial intelligence models, i.e., genetic programming (GP) and gene expression programming (GEP). The accuracy of the GP and GEP models is then compared to multiple linear regression (MLR) and three empirical models. For this purpose, 92 blasting events were investigated, and the AOp values were carefully measured. Moreover, in each operation, the values of maximum charge per delay and distance from the blast point, two parameters with a strong effect on AOp, were measured. After prediction, the performance of the models was checked in terms of variance account for (VAF), coefficient of determination (CoD), and root mean square error (RMSE). Finally, it was found that the GEP model, with a VAF of 94.12%, CoD of 0.941, and RMSE of 0.06, is more precise than the other predictive models for AOp prediction in the Miduk copper mine, and it can be introduced as a new powerful tool for estimating blast-induced AOp.
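The three performance indices named above can be computed with the usual definitions, as in the sketch below; the measured and predicted AOp values shown are synthetic placeholders, not data from the study.

```python
import numpy as np

def vaf(y, yhat):
    """Variance accounted for, in percent."""
    return (1.0 - np.var(y - yhat) / np.var(y)) * 100.0

def cod(y, yhat):
    """Coefficient of determination (R^2)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, yhat):
    """Root mean square error."""
    return np.sqrt(np.mean((y - yhat) ** 2))

# Synthetic measured vs. predicted AOp values (dB), for illustration only.
measured = np.array([120.3, 118.7, 125.1, 122.0, 119.5, 127.8])
predicted = np.array([119.8, 119.2, 124.6, 122.5, 120.1, 126.9])
print(vaf(measured, predicted), cod(measured, predicted), rmse(measured, predicted))
```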
Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.
Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A
2008-06-01
The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation in different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis function artificial neural networks were constructed to assess temperature evolution in an ultrasound-insonated medium. The employed models were radial basis function neural networks with external dynamics induced by their inputs. Both the most suited set of model inputs and the number of neurons in the network were found using a multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones, as used in the construction of the network; and in 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 degrees C +/- 10% (0.5 degrees C is the gold-standard threshold in hyperthermia/diathermia) was attained with computationally low-complexity models. The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection with intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream-gaging program in Alabama identified data uses and funding sources for 72 surface-water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream-gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
NASA Technical Reports Server (NTRS)
Moore, Alan; Evetts, Simon; Feiveson, Alan; Lee, Stuart; McCleary, Frank; Platts, Steven
2009-01-01
NASA's Human Research Program Integrated Research Plan (HRP-47065) serves as a road-map identifying critically needed information for future space flight operations (Lunar, Martian). VO2max (often termed aerobic capacity) reflects the maximum rate at which oxygen can be taken up and utilized by the body during exercise. Lack of in-flight and immediate postflight VO2max measurements was one area identified as a concern. The risk associated with not knowing this information is: Unnecessary Operational Limitations due to Inaccurate Assessment of Cardiovascular Performance (HRP-47065).
Efficient Red-Emitting Platinum Complex with Long Operational Stability.
Fleetham, Tyler; Li, Guijie; Li, Jian
2015-08-05
A tetradentate cyclometalated Pt(II) complex, PtN3N-ptb, was developed as an emissive dopant for stable and efficient red phosphorescent OLEDs. Devices employing PtN3N-ptb in electrochemically stable device architectures achieved long operational lifetimes, with an estimated LT97 of over 600 h at a luminance of 1000 cd/m^2. Such long operational lifetimes were achieved utilizing only literature-reported host, transporting, and blocking materials with known molecular structures. Additionally, a thorough study of the effects of various host and transport materials on the efficiency, turn-on voltage, and stability of the devices was carried out. Ultimately, maximum forward-viewing EQEs as high as 21.5% were achieved, demonstrating that Pt(II) complexes can act as stable and efficient dopants with operational lifetimes comparable or superior to those of the best literature-reported Ir(III) complexes.
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting the initial conditions of the visual communication system as constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
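Because the Canny operator is nonlinear and shift-variant, its effective "PSD" can only be estimated empirically. The sketch below shows one plausible Monte Carlo estimate of this kind, averaging periodograms of the operator's output over random test images; the use of scikit-image's canny, white-noise inputs, and the specific image size, trial count and sigma are all assumptions for illustration, not the paper's simulation settings.

```python
import numpy as np
from skimage.feature import canny

rng = np.random.default_rng(0)
n, trials, sigma = 128, 50, 1.5
psd = np.zeros((n, n))

# Average periodograms of the edge maps produced from random test images to obtain
# an empirical "power spectral density" for this operating condition.
for _ in range(trials):
    image = rng.normal(size=(n, n))           # stand-in scene; a real study would use
    edges = canny(image, sigma=sigma)         # statistically representative scenes
    psd += np.abs(np.fft.fft2(edges.astype(float))) ** 2
psd /= trials
print(psd.shape, psd.mean())
```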
Closed Form Equations for the Preliminary Design of a Heat-Pipe-Cooled Leading Edge
NASA Technical Reports Server (NTRS)
Glass, David E.
1998-01-01
A set of closed form equations for the preliminary evaluation and design of a heat-pipe-cooled leading edge is presented. The set of equations can provide a leading-edge designer with a quick evaluation of the feasibility of using heat-pipe cooling. The heat pipes can be embedded in a metallic or composite structure. The maximum heat flux, total integrated heat load, and thermal properties of the structure and heat-pipe container are required input. The heat-pipe operating temperature, maximum surface temperature, heat-pipe length, and heat-pipe spacing can be estimated. Results using the design equations compared well with those from a 3-D finite element analysis for both a large and a small radius leading edge.
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Jain, Suyog N; Gogate, Parag R
2018-03-15
Biosorbent synthesized from dead leaves of Prunus dulcis with chemical activation during the synthesis was applied for the removal of Acid Green 25 dye from wastewater. The obtained biosorbent was characterized using Brunauer-Emmett-Teller analysis, Fourier transform infrared spectroscopy and scanning electron microscopy measurements. It was demonstrated that alkali treatment during the synthesis significantly increased the surface area of the biosorbent from 67.205 to 426.346 m^2/g. The effect of various operating parameters on dye removal was investigated in batch operation, and optimum values of the parameters were established as pH of 2, 14 g/L as the dose of natural biosorbent and 6 g/L as the dose of alkali-treated biosorbent. Relative error values were determined to check the fitting of the obtained data to the different kinetic and isotherm models. It was established that the pseudo-second order kinetic model and the Langmuir isotherm fitted the obtained batch experimental data suitably. Maximum biosorption capacity values were estimated as 22.68 and 50.79 mg/g for the natural biosorbent and for the alkali-activated Prunus dulcis, respectively. Adsorption was observed to be endothermic, and an activation energy of 6.22 kJ/mol confirmed a physical type of adsorption. Column experiments were also conducted to probe the effectiveness of the biosorbent for practical applications in continuous operation. Breakthrough parameters were established by studying the effect of biosorbent height, flow rate of dye solution and initial dye concentration on the extent of dye removal. The maximum biosorption capacity under optimized conditions in the column operation was estimated as 28.57 mg/g. The Thomas and Yoon-Nelson models were found to fit the obtained column data suitably. A reusability study carried out in batch and continuous column operations confirmed that the synthesized biosorbent can be used repeatedly for dye removal from wastewater. Copyright © 2018 Elsevier Ltd. All rights reserved.
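As an illustration of the isotherm fitting mentioned above, the minimal sketch below fits the Langmuir model q = q_max*K_L*C/(1 + K_L*C) by nonlinear least squares. The equilibrium data points are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: equilibrium uptake as a function of equilibrium concentration."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Synthetic equilibrium data (mg/L, mg/g) for illustration only.
c_eq = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
q_eq = np.array([8.1, 14.0, 21.9, 30.5, 38.2, 44.0])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_eq, p0=(50.0, 0.01))
print(f"q_max ≈ {q_max:.1f} mg/g, K_L ≈ {k_l:.3f} L/mg")
```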
49 CFR 174.86 - Maximum allowable operating speed.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15 mph...
49 CFR 174.86 - Maximum allowable operating speed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15 mph...
Arjunan, Sridhar P; Kumar, Dinesh K; Jung, Tzyy-Ping
2010-01-01
Changes in alertness levels can have dire consequences for people operating and controlling motorized equipment. Past research studies have shown the relationship of Electroencephalogram (EEG) with alertness of the person. This research reports the fractal analysis of EEG and estimation of the alertness levels of the individual based on the changes in the maximum fractal length (MFL) of EEG. The results indicate that MFL of only 2 channels of EEG can be used to identify the loss of alertness of the individual with mean (inverse) correlation coefficient = 0.82. This study has also reported that using the changes in MFL of EEG, the changes in alertness level of a person was estimated with a mean correlation coefficient = 0.69.
NASA Technical Reports Server (NTRS)
Shapiro, Jeffrey H.
1992-01-01
Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal ⊗ apparatus state space is derived. It is analogous to the signal-band × image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.
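For reference, the standard textbook definitions of the Susskind-Glogower exponential-phase operator, its (non-normalizable) phase kets, and the associated POM are written out below; the abstract itself does not reproduce them, so this is included only as a conventional statement of the objects being discussed.

```latex
% Susskind--Glogower exponential-phase operator, phase kets, and the associated POM
% (standard definitions, stated here for reference).
\widehat{e^{i\phi}} \;=\; \sum_{n=0}^{\infty} \lvert n\rangle\langle n+1\rvert ,
\qquad
\lvert \phi \rangle \;=\; \sum_{n=0}^{\infty} e^{\,i n \phi}\,\lvert n\rangle ,
\qquad
d\hat{\Pi}(\phi) \;=\; \lvert \phi\rangle\langle \phi \rvert \,\frac{d\phi}{2\pi},
\quad \phi \in [-\pi,\pi).
```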
Evaluation of the PV energy production after 12-years of operating
NASA Astrophysics Data System (ADS)
Bouchakour, Salim; Arab, Amar Hadj; Abdeladim, Kamel; Boulahchiche, Saliha; Amrouche, Said Ould; Razagui, Abdelhak
2018-05-01
This paper presents a simple way to approximately evaluate the photovoltaic (PV) array performance degradation. The studied PV arrays have been connected to the local electric grid at the Centre de Developpement des Energies Renouvelables (CDER) in Algiers, Algeria, since June 2004. The PV module model used takes into consideration the module temperature and the effective solar irradiance, the electrical characteristics provided by the manufacturer data sheet, and the evaluation of the performance coefficient. For the dynamic behavior we use the Linear Reoriented Coordinates Method (LRCM) to estimate the maximum power point (MPP). The performance coefficient is evaluated, on the one hand, under STC conditions to estimate the dc energy according to the manufacturer data and, on the other hand, under real conditions using both the monitored data and the LM optimization algorithm, allowing a good degree of accuracy of the estimated dc energy. The application of the developed modeling procedure to the analysis of the monitored data is expected to improve understanding and assessment of the PV performance degradation of the PV arrays after 12 years of operation.
Milewski, Mikolaj; Stinchcomb, Audra L.
2012-01-01
An ability to estimate the maximum flux of a xenobiotic across skin is desirable both from the perspective of drug delivery and toxicology. While there is an abundance of mathematical models describing the estimation of drug permeability coefficients, there are relatively few that focus on the maximum flux. This article reports and evaluates a simple and easy-to-use predictive model for the estimation of maximum transdermal flux of xenobiotics based on three common molecular descriptors: logarithm of octanol-water partition coefficient, molecular weight and melting point. The use of all three can be justified on the theoretical basis of their influence on the solute aqueous solubility and the partitioning into the stratum corneum lipid domain. The model explains 81% of the variability in the permeation dataset comprised of 208 entries and can be used to obtain a quick estimate of maximum transdermal flux when experimental data is not readily available. PMID:22702370
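The model form implied above, a linear regression of log maximum flux on log P, molecular weight and melting point, can be sketched as follows. The coefficients used here are hypothetical placeholders, not the fitted values from the paper.

```python
def log_jmax(log_p, mol_weight, melting_point, coef=(-1.0, 0.5, -0.01, -0.005)):
    """Generic three-descriptor model: log Jmax = b0 + b1*logP + b2*MW + b3*MP.
    The default coefficients are illustrative placeholders, not the published fit."""
    b0, b1, b2, b3 = coef
    return b0 + b1 * log_p + b2 * mol_weight + b3 * melting_point

# Example call for a hypothetical permeant (logP = 2.1, MW = 250 g/mol, MP = 120 C).
print(10 ** log_jmax(2.1, 250.0, 120.0), "flux units (illustrative)")
```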
NASA Astrophysics Data System (ADS)
Gilani, H., Sr.; Ganguly, S.; Zhang, G.; Koju, U. A.; Murthy, M. S. R.; Nemani, R. R.; Manandhar, U.; Thapa, G. J.
2015-12-01
Nepal is a landlocked country with 39% forest cover of the total land area (147,181 km2). Under the Forest Carbon Partnership Facility (FCPF), implemented by the World Bank (WB), Nepal was chosen as one of four countries best suited for a results-based payment system under the Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+) scheme. At the national level, based on Landsat data, the forest area declined by 2% (1467 km2) from 1990 to 2000, whereas from 2000 to 2010 it declined by only 0.12% (176 km2). A cost-effective monitoring and evaluation system for REDD+ requires a balanced approach of remote sensing and ground measurements. This paper provides, for Nepal, a cost-effective and operational 30 m Above Ground Biomass (AGB) estimation and mapping methodology using freely available satellite data integrated with field inventory. Leaf Area Index (LAI) was generated following the methodology proposed by Ganguly et al. (2012) using cloud-free Landsat-8 OLI images. To generate a tree canopy height map, a density scatter graph between the maximum height estimated by the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud, and Land Elevation Satellite (ICESat) and the Landsat LAI nearest to the center coordinates of the GLAS shots shows a moderate but significant exponential correlation (Hmax = 31.211*LAI^0.4593, R2 = 0.33, RMSE = 13.25 m). In the field, 1124 well-distributed circular plots (750 m2 and 500 m2; about 0.001% of the forest cover) were measured and used to estimate AGB (ton/ha) using the equations proposed by Sharma et al. (1990) for all tree species of Nepal. A satisfactory linear relationship (AGB = 8.7018*Hmax - 101.24, R2 = 0.67, RMSE = 7.2 ton/ha) was achieved between maximum canopy height (Hmax) and AGB (ton/ha). This cost-effective and operational methodology is replicable over 5-10 years with minimal ground sampling through the integration of satellite images. The developed AGB map was used to produce optimum fuel-wood scenarios using population and road-accessibility datasets.
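Chaining the two fitted relationships quoted above (Hmax = 31.211*LAI^0.4593, then AGB = 8.7018*Hmax - 101.24) gives a per-pixel AGB estimate from Landsat LAI; a minimal sketch using those reported coefficients, with example LAI values chosen only for illustration, is:

```python
import numpy as np

def agb_from_lai(lai):
    """Above-ground biomass (ton/ha) from Landsat LAI via the two regressions
    reported in the abstract: canopy height from LAI, then AGB from height."""
    h_max = 31.211 * np.power(lai, 0.4593)      # maximum canopy height (m)
    return 8.7018 * h_max - 101.24              # AGB in ton/ha

lai_pixels = np.array([1.5, 2.5, 3.5, 4.5])     # example LAI values
print(agb_from_lai(lai_pixels))
```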
1990-05-16
Redondo Beach, CA. Civilian subcontractors are ITEK (cameras), Lexington, MA; Contraves Goerz (telescopes), Pittsburgh, PA; and Kentron (operations and...Improvements include a higher maximum takeoff weight, improved air-to-air gun sight algorithms, digital flight controls, and improved pilot interface...ambient propagation loss, significant penetration of sea water, and good performance in a nuclear environment. C. (U) JUSTIFICATION FOR PROJECTS LESS
NASA Astrophysics Data System (ADS)
Salazar, Fernando; San-Mauro, Javier; Celigueta, Miguel Ángel; Oñate, Eugenio
2017-07-01
Dam bottom outlets play a vital role in dam operation and safety, as they allow controlling the water surface elevation below the spillway level. For partial openings, water flows under the gate lip at high velocity and drags air downstream of the gate, which may cause damage due to cavitation and vibration. The convenience of installing air vents in dam bottom outlets is well known by practitioners. The design of this element depends basically on the maximum air flow through the air vent, which in turn is a function of the specific geometry and the boundary conditions. The intrinsic features of this phenomenon make it hard to analyse either on site or in full-scale experimental facilities. As a consequence, empirical formulas are frequently employed, which offer a conservative estimate of the maximum air flow. In this work, the particle finite element method (PFEM) was used to model the air-water interaction in the Susqueda Dam bottom outlet with different gate openings. Specific enhancements of the formulation were developed to consider air-water interaction. The results were analysed in comparison with the conventional design criteria and with information gathered on site during the gate operation tests. This analysis suggests that numerical modelling with the PFEM can be helpful for the design of this kind of hydraulic works.
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
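For reference, the cost function typically minimized in output-error maximum likelihood estimation of this kind, assuming additive Gaussian measurement noise with known covariance R, has the form below; the abstract does not write it out, so this is a conventional statement rather than the report's exact expression.

```latex
% Output-error maximum-likelihood cost for the unknown parameter vector \xi,
% assuming additive Gaussian measurement noise with covariance R.
J(\xi) \;=\; \tfrac{1}{2} \sum_{i=1}^{N}
\left[ \mathbf{z}(t_i) - \hat{\mathbf{y}}(t_i;\xi) \right]^{\mathsf T}
R^{-1}
\left[ \mathbf{z}(t_i) - \hat{\mathbf{y}}(t_i;\xi) \right].
```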
NASA Astrophysics Data System (ADS)
Albano, R.; Sole, A.; Adamowski, J.; Mancusi, L.
2014-11-01
Efficient decision-making regarding flood risk reduction has become a priority for authorities and stakeholders in many European countries. Risk analysis methods and techniques are a useful tool for evaluating costs and benefits of possible interventions. Within this context, a methodology to estimate flood consequences was developed in this paper that is based on GIS, and integrated with a model that estimates the degree of accessibility and operability of strategic emergency response structures in an urban area. The majority of the currently available approaches do not properly analyse road network connections and dependencies within systems, and as such a loss of roads could cause significant damages and problems to emergency services in cases of flooding. The proposed model is unique in that it provides a maximum-impact estimation of flood consequences on the basis of the operability of the strategic emergency structures in an urban area, their accessibility, and connection within the urban system of a city (i.e. connection between aid centres and buildings at risk), in the emergency phase. The results of a case study in the Puglia region in southern Italy are described to illustrate the practical applications of this newly proposed approach. The main advantage of the proposed approach is that it allows for defining a hierarchy between different infrastructure in the urban area through the identification of particular components whose operation and efficiency are critical for emergency management. This information can be used by decision-makers to prioritize risk reduction interventions in flood emergencies in urban areas, given limited financial resources.
Jung, Kwang-Wook; Yoon, Choon-G; Jang, Jae-Ho; Kong, Dong-Soo
2008-01-01
Effective watershed management often demands qualitative and quantitative predictions of the effect of future management activities as arguments for policy makers and administration. The BASINS geographic information system was developed to compute total maximum daily loads, which are helpful to establish hydrological process and water quality modeling systems. In this paper, the BASINS toolkit HSPF model is applied to the 20,271 km2 watershed of the Han River Basin to assess the applicability of HSPF and BMP scenarios. For proper evaluation of watershed and stream water quality, comprehensive estimation methods are necessary to assess large amounts of point-source and nonpoint-source (NPS) pollution based on the total watershed area. In this study, the Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate watershed pollutant loads, accounting for dam operation, and BMP scenarios were applied to control NPS pollution. The 8-day monitoring data (about three years) were used in the calibration and verification processes. Model performance was in the range of "very good" and "good" based on percent difference. The water-quality simulation results were encouraging for this sizable watershed with dam operation practice and mixed land uses; HSPF proved adequate, and its application is recommended to simulate watershed processes and for BMP evaluation. IWA Publishing 2008.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
The likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
"SPURS" in the North Atlantic Salinity Maximum
NASA Astrophysics Data System (ADS)
Schmitt, Raymond
2014-05-01
The North Atlantic Salinity Maximum is the world's saltiest open ocean salinity maximum and was the focus of the recent Salinity Processes Upper-ocean Regional Study (SPURS) program. SPURS was a joint venture between US, French, Irish, and Spanish investigators. Three US and two EU cruises were involved from August 2012 to October 2013, as well as surface moorings, glider, drifter and float deployments. Shipboard operations included underway meteorological and oceanic data, hydrographic surveys and turbulence profiling. The goal is to improve our understanding of how the salinity maximum is maintained and how it may be changing. It is formed by an excess of evaporation over precipitation and the wind-driven convergence of the subtropical gyre. Such salty areas are getting saltier with global warming (a record high SSS was observed in SPURS) and it is imperative to determine the relative roles of surface water fluxes and oceanic processes in such trends. The combination of accurate surface flux estimates with new assessments of vertical and horizontal mixing in the ocean will help elucidate the utility of ocean salinity in quantifying the changing global water cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. L. Abbott; K. N. Keck; R. E. Schindler
This screening level risk assessment evaluates potential adverse human health and ecological impacts resulting from continued operations of the calciner at the New Waste Calcining Facility (NWCF) at the Idaho Nuclear Technology and Engineering Center (INTEC), Idaho National Engineering and Environmental Laboratory (INEEL). The assessment was conducted in accordance with the Environmental Protection Agency (EPA) report, Guidance for Performing Screening Level Risk Analyses at Combustion Facilities Burning Hazardous Waste. This screening guidance is intended to give a conservative estimate of the potential risks to determine whether a more refined assessment is warranted. The NWCF uses a fluidized-bed combustor to solidify (calcine) liquid radioactive mixed waste from the INTEC Tank Farm facility. The calciner off-gas contains volatilized metal species, trace organic compounds, and low levels of radionuclides. Conservative stack emission rates were calculated based on maximum waste solution feed samples, conservative assumptions for off-gas partitioning of metals and organics, stack gas sampling for mercury, and conservative measurements of contaminant removal (decontamination factors) in the off-gas treatment system. Stack emissions were modeled using the ISC3 air dispersion model to predict maximum particulate and vapor air concentrations and ground deposition rates. Results demonstrate that NWCF emissions calculated from best-available process knowledge would result in maximum onsite and offsite health and ecological impacts that are less than EPA-established criteria for operation of a combustion facility.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
An evaluation of percentile and maximum likelihood estimators of Weibull parameters
Stanley J. Zarnoch; Tommy R. Dell
1985-01-01
Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had superior smaller bias and...
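A present-day analogue of the comparison described above can be sketched with SciPy: draw a Weibull sample, recover the parameters by maximum likelihood, and compute a crude percentile-based estimate for contrast. This is a generic illustration, not the FITTER or Zanakis procedures; the sample size, true parameters and chosen quantiles are assumptions.

```python
import numpy as np
from scipy import stats

true_shape, true_scale = 2.0, 10.0
sample = stats.weibull_min.rvs(true_shape, loc=0.0, scale=true_scale,
                               size=500, random_state=1)

# Maximum-likelihood fit (location fixed at zero for a two-parameter comparison).
shape_mle, _, scale_mle = stats.weibull_min.fit(sample, floc=0.0)

# Crude percentile-based estimate from two empirical quantiles (31% and 63.2%).
q1, q2 = np.percentile(sample, [31.0, 63.2])
shape_pct = np.log(np.log(1 / (1 - 0.632)) / np.log(1 / (1 - 0.31))) / np.log(q2 / q1)
scale_pct = q2 / np.log(1 / (1 - 0.632)) ** (1 / shape_pct)

print("MLE:", shape_mle, scale_mle, " Percentile:", shape_pct, scale_pct)
```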
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Method and apparatus for in-situ detection and isolation of aircraft engine faults
NASA Technical Reports Server (NTRS)
Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)
2007-01-01
A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters, extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals, calculating a magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value, extracting an average, or mean direction and a fault level mapping for each of a plurality of fault types, based on the operating regime, calculating a projection of the measurement vector onto the average direction of each of the plurality of fault types, determining a fault type based on which projection is maximum, and mapping the projection to a continuous-valued fault level using a lookup table.
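A schematic version of the decision logic described above might look like the sketch below; the noise levels, fault directions and threshold are made-up placeholders (the summary gives no numerical tables), and only a single operating regime is shown.

```python
import numpy as np

# Hypothetical regime-dependent tables (placeholders, not values from the method).
NOISE_STD = {"cruise": np.array([0.8, 1.2, 0.5])}          # per-residual noise std devs
FAULT_DIRS = {"fan": np.array([0.9, 0.3, 0.3]),            # mean residual directions
              "compressor": np.array([0.2, 0.9, 0.4])}
DECISION_THRESHOLD = 3.0

def isolate_fault(residuals, regime="cruise"):
    scaled = residuals / NOISE_STD[regime]                  # scale by expected noise
    magnitude = np.linalg.norm(scaled)
    if magnitude < DECISION_THRESHOLD:                      # below threshold: no fault
        return None, 0.0
    best, best_proj = None, -np.inf
    for name, direction in FAULT_DIRS.items():
        proj = scaled @ (direction / np.linalg.norm(direction))  # projection on mean direction
        if proj > best_proj:
            best, best_proj = name, proj
    # best_proj would then be mapped to a continuous fault level via a lookup table
    return best, best_proj

print(isolate_fault(np.array([4.0, 1.5, 1.0])))
```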
Construction Theory and Noise Analysis Method of Global CGCS2000 Coordinate Frame
NASA Astrophysics Data System (ADS)
Jiang, Z.; Wang, F.; Bai, J.; Li, Z.
2018-04-01
The definition, renewal and maintenance of a geodetic datum has been a hot international issue. In recent years, many countries have been studying and implementing the modernization and renewal of their local geodetic reference coordinate frames. Based on the precise results of continuous observation over the past 15 years from the state CORS (continuously operating reference station) network and the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of a mathematical model of the Global CGCS2000 frame and mainly analyzes the theory and algorithm of the two-step method for Global CGCS2000 Coordinate Frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively with the criterion of maximum likelihood estimation.
Browns Ferry Nuclear Plant radiological impact assessment report, January-June 1988
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B.E.
1988-01-01
Potential doses to maximally exposed individuals and to the population around Browns Ferry are calculated for each quarter. Measured plant releases for the reporting period are used to estimate these doses. Dispersion of radioactive effluents in the environment is estimated in accordance with the guidance provided and the measurements made during the period. Using dose calculation methodologies which are described in detail in the Browns Ferry Offsite Dose Calculation Manual, the doses are calculated and used to determine compliance with the dose limits contained in Browns Ferry's Operating License. In this report, the doses resulting from releases are described and compared to the quarterly and annual limits established for Browns Ferry.
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating signal to noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
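A simplified version of this idea, for matched-filter outputs of a biphase (BPSK) signal in white Gaussian noise, estimates the signal amplitude from the mean absolute value of the samples and the noise power from the residual variance. This is only a sketch of the general approach under those assumptions, not the exact algorithm of the report.

```python
import numpy as np

def estimate_snr(samples):
    """Crude joint estimate of signal amplitude and noise power for +/-A + noise data.
    At moderate-to-high SNR the mean absolute value approximates the amplitude."""
    amp = np.mean(np.abs(samples))                  # signal-amplitude estimate
    noise_var = np.mean(samples ** 2) - amp ** 2    # noise-power estimate
    return amp ** 2 / noise_var                     # linear SNR (combiner weight ~ amp / noise_var)

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=10000)
observed = 2.0 * bits + rng.normal(scale=1.0, size=bits.size)   # true SNR = 4 (6 dB)
print(10 * np.log10(estimate_snr(observed)), "dB (estimated)")
```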
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine, using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
Analysis of operational comfort in manual tasks using human force manipulability measure.
Tanaka, Yoshiyuki; Nishikawa, Kazuo; Yamada, Naoki; Tsuji, Toshio
2015-01-01
This paper proposes a scheme for human force manipulability (HFM) based on the use of isometric joint torque properties to simulate the spatial characteristics of human operation forces at an end-point of a limb with feasible magnitudes for a specified limb posture. This is also applied to the evaluation/prediction of operational comfort (OC) when manually operating a human-machine interface. The effectiveness of HFM is investigated through two experiments and computer simulations of humans generating forces by using their upper extremities. Operation force generation with maximum isometric effort can be roughly estimated with an HFM measure computed from information on the arm posture during a maintained posture. The layout of a human-machine interface is then discussed based on the results of operational experiments using an electric gear-shifting system originally developed for robotic devices. The results indicate a strong relationship between the spatial characteristics of the HFM and OC levels when shifting, and the OC is predicted by using a multiple regression model with HFM measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-04-01
The equivalent dose rate to populations potentially exposed to wastes shipped to Rollins Environmental Services, Baton Rouge, LA from Oak Ridge and Savannah River Operations of the Department of Energy was estimated. Where definitive information necessary to the estimation of a dose rate was unavailable, bounding assumptions were employed to ensure an overestimate of the actual dose rate experienced by the potentially exposed population. On this basis, it was estimated that a total of about 3.85 million pounds of waste was shipped from these DOE operations to Rollins with a maximum combined total activity of about 0.048 Curies. Populations near the Rollins site could potentially be exposed to the radionuclides in the DOE wastes via the air pathway after incineration of the DOE wastes or by migration from the soil after landfill disposal. AIRDOS was used to estimate the dose rate after incineration. RESRAD was used to estimate the dose rate after landfill disposal. Calculations were conducted with the estimated radioactive species distribution in the wastes and, as a test of the sensitivity of the results to the estimated distribution, with the entire activity associated with individual radioactive species such as Cs-137, Ba-137, Sr-90, Co-60, U-234, U-235 and U-238. With a given total activity, the dose rates to nearby individuals were dominated by the uranium species.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Estimating traffic volumes for signalized intersections using connected vehicle data
Zheng, Jianfeng; Liu, Henry X.
2017-04-17
Recently, connected vehicle (CV) technology has received significant attention thanks to active pilot deployments supported by the US Department of Transportation (USDOT). At signalized intersections, CVs may serve as mobile sensors, providing opportunities to reduce dependence on conventional vehicle detectors for signal operation. However, most existing studies focus on scenarios in which the penetration rate of CVs reaches a certain level, e.g., 25%, which may not be feasible in the near future. How to utilize data from a small number of CVs to improve traffic signal operation remains an open question. In this work, we develop an approach to estimate traffic volume, a key input to many signal optimization algorithms, using GPS trajectory data from CVs or navigation devices under low market penetration rates. To estimate traffic volumes, we model vehicle arrivals at signalized intersections as a time-dependent Poisson process, which can account for signal coordination. The estimation problem is formulated as a maximum likelihood problem given multiple observed trajectories from CVs approaching the intersection. An expectation maximization (EM) procedure is derived to solve the estimation problem. Two case studies were conducted to validate our estimation algorithm. One uses the CV data from the Safety Pilot Model Deployment (SPMD) project, in which around 2800 CVs were deployed in the City of Ann Arbor, MI. The other uses vehicle trajectory data from users of a commercial navigation service in China. The mean absolute percentage error (MAPE) of the estimation is found to be 9-12%, based on benchmark data collected manually and data from loop detectors. Finally, considering the existing scale of CV deployments, the proposed approach could be of significant help to traffic management agencies for evaluating and operating traffic signals, paving the way for detector-free signal operation using CVs in the future.
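A full EM implementation for a time-dependent Poisson arrival process is beyond an abstract, but the core idea of recovering volume from thinned observations can be sketched. In the toy example below (all numbers hypothetical), connected vehicles are a Bernoulli(p) thinning of Poisson arrivals, so the observed CV counts are Poisson(p·λ) and the ML estimate of the per-cycle volume is simply the mean count divided by the penetration rate.

```python
# Much-simplified stand-in for the paper's EM procedure, with hypothetical numbers.
import numpy as np

rng = np.random.default_rng(1)
lam_true, p, n_cycles = 40.0, 0.05, 500        # 40 veh/cycle, 5% CV penetration
arrivals = rng.poisson(lam_true, n_cycles)     # true (unobserved) arrivals per cycle
cv_counts = rng.binomial(arrivals, p)          # observed connected vehicles per cycle

lam_hat = cv_counts.mean() / p                 # ML estimate of volume per cycle
print(f"Estimated volume per cycle: {lam_hat:.1f} (true {lam_true})")
```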
Demodulation of messages received with low signal to noise ratio
NASA Astrophysics Data System (ADS)
Marguinaud, A.; Quignon, T.; Romann, B.
The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings compared to conventional realizations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.
Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data
2007-04-01
(10) where ⟨B(t), B(s)⟩ = B_x(t)B_x(s) + B_y(t)B_y(s) + B_z(t)B_z(s) (11). The operation in (10) may be viewed as a vector matched filter applied to estimate B_CPA(t). In summary, features chosen to maximize the classification information in Y are described in Section 3.2. 3.2. Maximum mutual information (MMI) features. We begin with a review of several desirable properties of features that maximize a mutual information (MMI) criterion. Then we review a particular algorithm [2
Effects of Special Use Airspace on Economic Benefits of Direct Flights
NASA Technical Reports Server (NTRS)
Datta, Koushik; Barrington, Craig; Foster, John D. (Technical Monitor)
1996-01-01
A methodology for estimating the economic effects of Special Use Airspace (SUA) on direct route flights is presented in this paper. The methodology is based on evaluating operating costs of aircraft and analyzing the different ground-track distances traveled by flights under different air traffic scenarios. Using this methodology, the following are evaluated: the optimistic bias of studies that assume accessible SUAs, the maximum economic benefit of dynamic use of SUAs, and the marginal economic benefit of the dynamic use of individual SUAs.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control (PSC) algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. However, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines; thus, when the PSC maximum thrust mode is applied, there is less temperature margin available to be traded for increased thrust.
Development of magnitude scaling relationship for earthquake early warning system in South Korea
NASA Astrophysics Data System (ADS)
Sheen, D.
2011-12-01
Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, historical records of South Korea reveal that many damaging earthquakes have occurred in the Korean Peninsula. To mitigate the potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. 1606 vertical-component seismograms with epicentral distances within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used to derive the magnitude relationships. The peak displacement of seismograms recorded by broadband seismometers shows less scatter than the peak velocity. The scatters of the peak displacement and the peak velocity of accelerograms are similar to each other. The peak displacement of seismograms differs from that of accelerograms, which means that two different magnitude relationships, one for each type of data, should be developed. The maximum predominant period of the initial P wave is estimated after applying two low-pass filters, 3 Hz and 10 Hz, and the 10 Hz low-pass filter yields better estimates than the 3 Hz filter. It is found that most of the peak amplitudes and maximum predominant periods are estimated within 1 s after triggering.
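The empirical scaling relationships mentioned here are typically of the form M = a·log10(Pd) + b·log10(R) + c. The sketch below fits such a relation by least squares to synthetic data; the coefficients, distances, and noise level are assumptions, not the study's values.

```python
# Hedged sketch: least-squares fit of an EEW-style magnitude scaling relation
# M = a*log10(Pd) + b*log10(R) + c, on synthetic data (not the study's catalog).
import numpy as np

rng = np.random.default_rng(2)
n = 300
M = rng.uniform(2.0, 5.0, n)                          # event magnitudes
R = rng.uniform(10.0, 100.0, n)                       # epicentral distance, km
log_pd = 0.7 * M - 1.4 * np.log10(R) - 3.0 + rng.normal(0, 0.2, n)  # synthetic peak P displacement

A = np.column_stack([log_pd, np.log10(R), np.ones(n)])
coef, *_ = np.linalg.lstsq(A, M, rcond=None)          # fit a, b, c
a, b, c = coef
print(f"M = {a:.2f}*log10(Pd) + {b:.2f}*log10(R) + {c:.2f}")
```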
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
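A two-component normal mixture of the kind described can be fitted by maximum likelihood with the EM algorithm. The following is a minimal sketch on synthetic data (the economic series themselves are not reproduced here).

```python
# Minimal EM sketch for a two-component normal mixture, fitted to synthetic data.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(2.0, 1.0, 600)])

w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])  # initial guesses

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: component responsibilities for each observation
    dens = np.stack([w[k] * normal_pdf(x, mu[k], sd[k]) for k in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: update weights, means, and standard deviations
    nk = resp.sum(axis=1)
    w = nk / len(x)
    mu = (resp * x).sum(axis=1) / nk
    sd = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)

print("weights", w.round(3), "means", mu.round(3), "sds", sd.round(3))
```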
NASA Astrophysics Data System (ADS)
Farmann, Alexander; Waag, Wladislaw; Marongiu, Andrea; Sauer, Dirk Uwe
2015-05-01
This work provides an overview of available methods and algorithms for on-board capacity estimation of lithium-ion batteries. Accurate state estimation for battery management systems in electric vehicles and hybrid electric vehicles is becoming more essential due to the increasing attention paid to safety and lifetime issues. Different approaches for the estimation of State-of-Charge, State-of-Health and State-of-Function have been discussed and analyzed by many authors and researchers in the past. On-board estimation of capacity in large lithium-ion battery packs is one of the most crucial challenges of battery monitoring in the aforementioned vehicles. This is mostly due to highly dynamic operation, conditions far from those used in laboratory environments, and the large variation in aging behavior of each cell in the battery pack. Accurate capacity estimation allows an accurate driving range prediction and an accurate calculation of a battery's maximum energy storage capability in a vehicle. At the same time, it acts as an indicator for battery State-of-Health and Remaining Useful Lifetime estimation.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to the parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kascheev, Vladimir; Poluektov, Pavel; Ustinov, Oleg
The problems of spent reactor graphite are presented, and options for its disposal are considered. Burning is selected as the most efficient and waste-free method. A comparison is made of the amounts of ¹⁴C entering the environment in a natural way during the operation of nuclear power plants (NPPs) and as a result of the proposed burning of spent reactor graphite. The possibility of burning graphite with the release of ¹⁴C into the atmosphere within the maximum allowable emissions is shown. This paper analyzes the different ways of treating spent reactor graphite. The possibility of reprocessing it by burning in an air flow is shown. The effect of this technology on the overall radiation environment is estimated, and its contribution is compared to the general background radiation due to cosmic radiation and NPP emissions. The maximum permissible rates of burning reactor graphite (for example, RBMK graphite) are estimated for areas with different conditions of agricultural activities. (authors)
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
Propane spectral resolution enhancement by the maximum entropy method
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
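A minimal NumPy sketch of the Burg (maximum entropy) autoregressive spectral estimator is given below. The interferogram is replaced by a synthetic two-tone signal, and the model order and record length are assumptions, not values from the study.

```python
# Hedged sketch of Burg (maximum entropy) AR spectral estimation on synthetic data.
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients and prediction-error power via Burg's recursion."""
    x = np.asarray(x, dtype=float)
    ef = x.copy()                       # forward prediction errors
    eb = x.copy()                       # backward prediction errors
    a = np.array([1.0])                 # AR polynomial, a[0] = 1
    power = np.dot(x, x) / len(x)
    for k in range(1, order + 1):
        f = ef[k:].copy()
        b = eb[k - 1:-1].copy()
        rc = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        ef[k:] = f + rc * b             # update forward errors
        eb[k:] = b + rc * f             # update (shifted) backward errors
        a = np.concatenate([a, [0.0]])
        a = a + rc * a[::-1]            # Levinson-style coefficient update
        power *= 1.0 - rc * rc
    return a, power

def ar_psd(a, power, nfreq=512):
    """Evaluate the AR power spectral density on [0, 0.5] cycles/sample."""
    freqs = np.linspace(0.0, 0.5, nfreq)
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return freqs, power / np.abs(z @ a) ** 2

rng = np.random.default_rng(7)
t = np.arange(512)
x = np.sin(2 * np.pi * 0.12 * t) + np.sin(2 * np.pi * 0.13 * t) + rng.normal(0, 0.5, t.size)
a, p = burg_ar(x, order=20)
freqs, psd = ar_psd(a, p)
print("strongest spectral peak near f =", round(float(freqs[np.argmax(psd)]), 3))
```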
Small Rayed Crater Ejecta Retention Age Calculated from Current Crater Production Rates on Mars
NASA Technical Reports Server (NTRS)
Calef, F. J. III; Herrick, R. R.; Sharpton, V. L.
2011-01-01
Ejecta from impact craters, while extant, record erosive and depositional processes acting on their surfaces. The ejecta retention age (Eret), the time span over which ejecta remains recognizable around a crater, can be used to estimate the timescale over which surface processes operate, thereby yielding a history of geologic activity. However, the abundance of sub-kilometer diameter (D) craters identifiable in high resolution Mars imagery has led to questions about the accuracy of absolute crater dating and hence of ejecta retention ages (Eret). This research calculates the maximum Eret for small rayed impact craters (SRC) on Mars using estimates of the Martian impactor flux adjusted for meteorite ablation losses in the atmosphere. In addition, we utilize the diameter-distance relationship of secondary cratering to adjust crater counts in the vicinity of the large primary crater Zunil.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richins, W.D.; Snow, S.D.; Miller, G.K.
1995-12-01
Some motor-operated valves now have higher torque switch settings due to regulatory requirements to ensure valve operability with appropriate margins at design basis conditions. Verifying operability with these settings imposes higher stem loads during periodic inservice testing. These higher test loads increase stresses in the various valve internal parts, which may in turn increase the fatigue usage factors. This increased fatigue is judged to be a concern primarily in the valve disks, seats, yokes, stems, and stem nuts. Although the motor operators may also have significantly increased loading, they are being evaluated by the manufacturers and are beyond the scope of this study. Two gate valves representative of both relatively weak and strong valves commonly used in commercial nuclear applications were selected for fatigue analyses. Detailed dimensional and test data were available for both valves from previous studies at the Idaho National Engineering Laboratory. Finite element models were developed to estimate maximum stresses in the internal parts of the valves and to identify the critical areas within the valves where fatigue may be a concern. Loads were estimated using industry standard equations for calculating torque switch settings prior and subsequent to the testing requirements of USNRC Generic Letter 89-10. Test data were used to determine both (1) the overshoot load between torque switch trip and final seating of the disk during valve closing and (2) the stem thrust required to open the valves. The ranges of peak stresses thus determined were then used to estimate the increase in the fatigue usage factors due to the higher stem thrust loads. The usage that would be accumulated by 100 base cycles plus one or eight test cycles per year over 40 and 60 years of operation was calculated.
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water year. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow record for one stream gage can be determined by alternative, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations, including the crest-stage and stage-only stations, would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Cancer risk estimation caused by radiation exposure during endovascular procedure
NASA Astrophysics Data System (ADS)
Kang, Y. H.; Cho, J. H.; Yun, W. S.; Park, K. H.; Kim, H. G.; Kwon, S. M.
2014-05-01
The objective of this study was to identify the radiation exposure dose of patients, as well as staff, caused by fluoroscopy during C-arm-assisted vascular surgical operations and to estimate the carcinogenic risk due to such exposure. The study was conducted in 71 patients (53 men and 18 women) who had undergone vascular surgical intervention at the division of vascular surgery in the University Hospital from November 2011 to April 2012. A mobile C-arm device was used, and the radiation exposure dose of each patient (dose-area product, DAP) was calculated. The effective dose of staff participating in the surgery was measured by attaching optically stimulated luminescence dosimeters to their radiation protectors during the vascular surgical operations. From the study results, the DAP value of patients was 308.7 Gy cm2 on average, and the maximum value was 3085 Gy cm2. When converted to effective dose, the resulting mean was 6.2 mSv and the maximum effective dose was 61.7 millisievert (mSv). The effective dose of the staff was 3.85 mSv; that of the radiation technician was 1.04 mSv and that of the nurse was 1.31 mSv. The estimated cancer incidence for the operator corresponds to 2355 per 100,000 persons, meaning about 1 in 42 persons is likely to develop cancer. In conclusion, vascular surgeons, as supervisors of fluoroscopy, should keep in mind radiation protection for the patient, staff, and all participants in the intervention, and should understand the effects of radiation themselves in order to prevent invisible harm during the intervention and to minimize it.
Studies in High Current Density Ion Sources for Heavy Ion Fusion Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon-Golcher, Edwin
This dissertation develops diverse research on small (diameter ~ few mm), high current density (J ~ several tens of mA/cm²) heavy ion sources. The research has been developed in the context of a programmatic interest within the Heavy Ion Fusion (HIF) Program to explore alternative architectures in the beam injection systems that use the merging of small, bright beams. An ion gun was designed and built for these experiments. Results of average current density yield (
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and to determine the best method. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values using the R program. The results are then displayed in tables to facilitate the comparisons.
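For the Rayleigh distribution the ML estimator of σ² has the closed form Σx²/(2n), and under Jeffreys' prior the posterior of σ² is inverse-gamma, so a posterior-mean Bayes estimate is also closed form. The sketch below compares the two by simulated MSE; it uses squared-error loss rather than the precautionary and entropy losses examined in the paper.

```python
# Hedged sketch: MLE vs. a simple Bayes estimate (Jeffreys prior, posterior mean)
# of the Rayleigh scale parameter, compared by simulated mean squared error.
import numpy as np

rng = np.random.default_rng(4)
sigma_true, n, reps = 2.0, 30, 2000
mle_err, bayes_err = [], []
for _ in range(reps):
    x = rng.rayleigh(sigma_true, n)
    T = np.sum(x ** 2)
    theta_mle = T / (2 * n)                 # MLE of sigma^2
    theta_bayes = (T / 2) / (n - 1)         # posterior mean of sigma^2 under Jeffreys prior
    mle_err.append((np.sqrt(theta_mle) - sigma_true) ** 2)
    bayes_err.append((np.sqrt(theta_bayes) - sigma_true) ** 2)
print(f"MSE  MLE: {np.mean(mle_err):.4f}   Bayes: {np.mean(bayes_err):.4f}")
```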
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, some tables are constructed for different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the complete mortality data are not used in the moment estimation method. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and are solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation. Some extensions to double decrement are introduced. A simple dataset is used to illustrate the mortality estimation and the mortality table.
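Under the constant-force assumption, the single-decrement estimates mentioned above have simple closed forms: the ML estimate of the force of mortality is deaths divided by exact exposure, while a moment-type (actuarial) estimate uses deaths divided by initial exposure. A hedged sketch on simulated data:

```python
# Hedged sketch of single-decrement mortality estimation under constant force,
# comparing an ML estimate (exact exposure) with a simple moment-type estimate.
import numpy as np

rng = np.random.default_rng(5)
mu_true, n = 0.05, 10000
t = rng.exponential(1.0 / mu_true, n)       # time to death after age x
died = t < 1.0                              # death within the year of age
exposure = np.where(died, t, 1.0)           # exact time exposed during the year

mu_mle = died.sum() / exposure.sum()        # ML estimate of the constant force
q_mle = 1.0 - np.exp(-mu_mle)               # implied one-year mortality rate q_x
q_moment = died.sum() / n                   # moment-type estimate (initial exposure)
print(f"q (MLE): {q_mle:.4f}   q (moment): {q_moment:.4f}   true: {1 - np.exp(-mu_true):.4f}")
```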
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
NASA Astrophysics Data System (ADS)
Ambarita, H.; Siahaan, A. S.; Kawai, H.; Daimaruya, M.
2018-02-01
In the last decade, the demand for delayed coking capacity has been steadily increasing. The trend in the past 15 to 20 years has been for operators to try to maximize the output of their units by reducing cycle times. This mode of operation can result in very large temperature gradients within the drums during the preheating stage and even more so during the quench cycle. This research provides estimates of the fatigue life for the omission of the preheating stage and of the cutting stage. Without the preheating stage, the fatigue life decreases by around 19% and the maximum stress at point 5 of the shell-to-skirt junction increases by around 97 MPa. Omitting the cutting stage was found to be more severe than the normal cycle: the fatigue life is reduced by around 39% and the maximum stress increases by around 154 MPa. It can be concluded that, for cycle optimization, eliminating the preheating stage may be an option given the increasing demand for delayed coking.
Han, Wei; Wang, Bing; Zhou, Yan; Wang, De-Xin; Wang, Yan; Yue, Li-Ran; Li, Yong-Feng; Ren, Nan-Qi
2012-04-01
A novel continuous mixed immobilized sludge reactor (CMISR) containing activated carbon as a support carrier was used for fermentative hydrogen production from molasses wastewater. When the CMISR system operated at an influent COD of 2000-6000 mg/L, a hydraulic retention time (HRT) of 6 h and a temperature of 35 °C, stable ethanol-type fermentation was established after 40 days of operation. The H2 content in the biogas and the chemical oxygen demand (COD) removal were estimated to be 46.6% and 13%, respectively. The effects of organic loading rates (OLRs) on the CMISR hydrogen production system were also investigated. It was found that the maximum hydrogen production rate of 12.51 mmol/(h L) was obtained at an OLR of 32 kg/(m3 d) and the maximum hydrogen yield per substrate consumed of 130.57 mmol/mol occurred at an OLR of 16 kg/(m3 d). Therefore, the continuous mixed immobilized sludge reactor (CMISR) could be a promising immobilized system for fermentative hydrogen production. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Melnikov, A. A.; Kostishin, V. G.; Alenkov, V. V.
2017-05-01
Real operating conditions of a thermoelectric cooling device involve thermal resistances between the thermoelectric material and the heat medium or cooling object. These resistances limit the performance of the device and should be considered when modeling. Here we propose a dimensionless mathematical steady-state model which takes them into account. Analytical equations for dimensionless cooling capacity, voltage, and coefficient of performance (COP) as functions of dimensionless current are given. For improved accuracy, a device can be modeled with numerical or combined analytical-numerical methods. The results of the modeling are in acceptable accordance with experimental results. The case of zero temperature difference between the hot and cold heat mediums, at which the maximum cooling capacity mode appears, is considered in detail. Optimal device parameters for maximal cooling capacity, such as the fraction of thermal conductance on the cold side y and the fraction of current relative to the maximum j', are estimated to be in the ranges 0.38-0.44 and 0.48-0.95, respectively, for dimensionless conductance K' = 5-100. Also, a method for the determination of the thermal resistances of a thermoelectric cooling system is proposed.
New Results from the Solar Maximum Mission/Bent Crystal Spectrometer
NASA Astrophysics Data System (ADS)
Rapley, C. G.; Sylwester, J.; Phillips, K. J. H.
2017-04-01
The Bent Crystal Spectrometer (BCS) onboard the NASA Solar Maximum Mission was part of the X-ray Polychromator, which observed numerous flares and bright active regions from February to November 1980, when operation was suspended as a result of the failure of the spacecraft fine-pointing system. Observations resumed following the Space Shuttle SMM Repair Mission in April 1984 and continued until November 1989. BCS spectra have been widely used in the past to obtain temperatures, emission measures, and turbulent and bulk flows during flares, as well as element abundances. Instrumental details including calibration factors not previously published are given here, and the in-orbit performance of the BCS is evaluated. Some significant changes during the mission are described, and recommendations for future instrumentation are made. Using improved estimates for the instrument parameters and operational limits, it is now possible to obtain de-convolved calibrated spectra that show finer detail than before, providing the means for improved interpretation of the physics of the emitting plasmas. The results indicate how historical archived data can be re-used to obtain enhanced and new, scientifically valuable results.
Optimum quantum receiver for detecting weak signals in PAM communication systems
NASA Astrophysics Data System (ADS)
Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar
2017-09-01
This paper deals with the modeling of an optimum quantum receiver for pulse amplitude modulator (PAM) communication systems. The information bearing sequence {I_k}_{k=0}^{N-1} is estimated using the maximum likelihood (ML) method. The ML method is based on quantum mechanical measurements of an observable X in the Hilbert space of the quantum system at discrete times, when the Hamiltonian of the system is perturbed by an operator obtained by modulating a potential V with a PAM signal derived from the information bearing sequence {I_k}_{k=0}^{N-1}. The measurement process at each time instant causes collapse of the system state to an observable eigenstate. All probabilities of getting different outcomes from an observable are calculated using the perturbed evolution operator combined with the collapse postulate. For given probability densities, calculation of the mean square error evaluates the performance of the receiver. Finally, we present an example involving estimating an information bearing sequence that modulates a quantum electromagnetic field incident on a quantum harmonic oscillator.
Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train
NASA Astrophysics Data System (ADS)
Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi
A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operating status of the power sources on the basis of the input voltage of the traction inverter and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful in preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and only a few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed by using a real-scale hybrid train system.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
Fission yield and criticality excursion code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
2000-06-30
The ANSI/ANS 8.3 standard allows a maximum yield not to exceed 2 x 10 fissions to be used in calculations that determine whether the alarm system is required to be effective. It is common practice to use this allowance or to develop some other yield based on past criticality accident history or excursion experiments. The literature on the subject of yields discusses maximum yields larger and somewhat smaller than the ANS 8.3 permissive value. The ability to model criticality excursions and vary the parameters to determine a credible maximum yield for operation-specific cases has been available for some time but is not in common use by criticality safety specialists. The topic of yields for various solutions, metals, oxide powders, etc. in various geometries and containers has been published by laboratory specialists or university staff and students for many decades but has not been available to practitioners. The need for best-estimate calculations of fission yields with a well-validated criticality excursion code has long been recognized, but no coordinated effort has been made so far to develop a generalized and well-validated excursion code for different types of systems. In this paper, the current practices to estimate fission yields are summarized along with their shortcomings for the 12-Rad zone (at SRS) and Criticality Alarm System (CAS) calculations. Finally, the need for a user-friendly excursion code is reemphasized.
Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups
NASA Astrophysics Data System (ADS)
Kenna, Ralph; Berche, Bertrand
Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes the standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of the daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for the daily maximum temperature than for the daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for the daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the errors in estimating the daily maximum and minimum temperatures are ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the station nearest to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
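The standard-departure reconstruction described here is straightforward to sketch: standardize each neighbour series, average the departures, and rescale with the target station's climatological mean and standard deviation. The data below are synthetic, and in practice the target's mean and standard deviation would be computed from its non-missing days.

```python
# Minimal sketch of the standard-departure method on synthetic daily maximum temperatures.
import numpy as np

rng = np.random.default_rng(6)
days = 365
regional = 30 + 3 * np.sin(2 * np.pi * np.arange(days) / days) + rng.normal(0, 1, days)
neighbours = np.stack([regional + rng.normal(0, 0.5, days) for _ in range(4)])  # e.g. Mannar, etc.
target = regional + rng.normal(0, 0.5, days)       # target-station series (partly "missing")

z = (neighbours - neighbours.mean(axis=1, keepdims=True)) / neighbours.std(axis=1, keepdims=True)
z_mean = z.mean(axis=0)                            # averaged standard departures
estimate = target.mean() + target.std() * z_mean   # rescale to target-station units

rmse = np.sqrt(np.mean((estimate - target) ** 2))
print(f"RMSE of reconstructed daily maxima: {rmse:.2f} °C")
```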
NASA Astrophysics Data System (ADS)
Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario
2018-01-01
Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly estimated irradiance. Three estimators are compared in terms of (i) ability to compute the PV plant maximum power, (ii) bandwidth and (iii) robustness against measurement noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e. when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out with a dedicated experimental setup on a 14.3 kWp rooftop PV installation. The results also show that the analyzed methods can outperform pyranometer-based estimations with a less complex sensing system. Finally, we show how the obtained PV maximum power values can be applied to train time-series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used for training, are not representative of the maximum PV potential.
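As a loose illustration only (the paper's estimators work from DC voltage, current and cell temperature through a PV array model), once an irradiance estimate is available the maximum power can be approximated with a PVWatts-style relation; the function name and coefficients below are assumptions, not the paper's model.

```python
# Hedged, simplified illustration: PVWatts-style maximum-power approximation from
# estimated plane-of-array irradiance and cell temperature. Coefficients are assumed.
def pv_max_power(g_poa, t_cell, p_stc_kw=14.3, gamma=-0.004, g_stc=1000.0, t_stc=25.0):
    """Approximate maximum power (kW) at irradiance g_poa (W/m^2) and cell temperature (degC)."""
    return p_stc_kw * (g_poa / g_stc) * (1.0 + gamma * (t_cell - t_stc))

# Example: estimated irradiance of 650 W/m^2 with cells at 40 degC
print(f"Estimated maximum power: {pv_max_power(650.0, 40.0):.2f} kW")
```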
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines
Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John
2015-01-01
The excavation and production processes in underground mines are complicated and consist of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation leads to a domino effect, delaying the starting time of the next operation and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations, e.g. drilling, bolting and mucking. The approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of a critical operation (one on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign new jobs to mucking machines to keep this amount at a maximum level, using a new mucking algorithm under external constraints.
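The PERT portion of the approach can be sketched directly: each operation's expected working time and variance follow from optimistic, most likely and pessimistic durations, and series operations at a face add in expectation and variance. The operation names and times below are hypothetical.

```python
# Minimal PERT-style sketch for series mining operations at one working face.
# All durations are hypothetical (minutes): (optimistic, most likely, pessimistic).
operations = {
    "drilling": (40, 55, 90),
    "bolting":  (20, 30, 50),
    "mucking":  (60, 80, 130),
}

def pert(o, m, p):
    expected = (o + 4 * m + p) / 6.0       # PERT expected duration
    variance = ((p - o) / 6.0) ** 2        # PERT variance
    return expected, variance

total_expected = total_variance = 0.0
for name, (o, m, p) in operations.items():   # operations performed in series
    e, v = pert(o, m, p)
    total_expected += e
    total_variance += v
    print(f"{name}: expected {e:.1f} min, variance {v:.1f}")
print(f"Face cycle: {total_expected:.1f} min ± {total_variance ** 0.5:.1f} (std)")
```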
Estimating Phenomenological Parameters in Multi-Assets Markets
NASA Astrophysics Data System (ADS)
Raffaelli, Giacomo; Marsili, Matteo
Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, D.A.
1996-06-01
This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix I, design objectives. This supplement is the last report in the NUREG/CR-2850 series.
NASA Technical Reports Server (NTRS)
Thadani, S. G.
1977-01-01
The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by the slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated at the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth
2017-01-01
Introduction: To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. Methods: A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure, were calculated by EMF source, frequency band and dosimetry type. Mean estimates were weighted by our confidence in the pooled measurements. Results: The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator-position geometric mean electric field levels for RF sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For ELF sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (HVTL hotsticks), while magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (TIG welding). Conclusion: The methodology developed allowed the construction of the first EMF SEM and may be used to summarize similar exposure data for other physical or chemical agents.
A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.
ERIC Educational Resources Information Center
Wood, Terry M.; Safrit, Margaret J.
1987-01-01
A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
1992 Environmental monitoring report, Sandia National Laboratories, Albuquerque, New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Culp, T.; Cox, W.; Hwang, H.
1993-09-01
This 1992 report contains monitoring data from routine radiological and nonradiological environmental surveillance activities. Summaries of significant environmental compliance programs in progress, such as National Environmental Policy Act documentation, environmental permits, environmental restoration, and various waste management programs for Sandia National Laboratories in Albuquerque, New Mexico, are included. The maximum offsite dose impact was calculated to be 0.0034 millirem. The total population within a 50-mile radius of Sandia National Laboratories/New Mexico received an estimated collective dose of 0.019 person-rem during 1992 from the laboratories' operations. As in the previous year, the 1992 operations at Sandia National Laboratories/New Mexico had no discernible impact on the general public or on the environment.
Mesospheric radar wind comparisons at high and middle southern latitudes
NASA Astrophysics Data System (ADS)
Reid, Iain M.; McIntosh, Daniel L.; Murphy, Damian J.; Vincent, Robert A.
2018-05-01
We compare hourly averaged neutral winds derived from two meteor radars operating at 33.2 and 55 MHz to estimate the errors in these measurements. We then compare the meteor radar winds with those from a medium-frequency partial reflection radar operating at 1.94 MHz. These three radars are located at Davis Station, Antarctica. We then consider a middle-latitude 55 MHz meteor radar wind comparison with a 1.98 MHz medium-frequency partial reflection radar to determine how representative the Davis results are. At both sites, the medium-frequency radar winds are clearly underestimated, and the underestimation increases from 80 km to the maximum height of 98 km. Correction factors are suggested for these results.
Back-Face Strain for Monitoring Stable Crack Extension in Precracked Flexure Specimens
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Ghosn, Louis J.
2010-01-01
Calibrations relating back-face strain to crack length in precracked flexure specimens were developed for different strain gage sizes. The functions were verified via experimental compliance measurements of notched and precracked ceramic beams. Good agreement between the functions and experiments occurred, and fracture toughness was calculated via several operational methods: maximum test load and optically measured precrack length; load at 2 percent crack extension and optical precrack length; maximum load and back-face strain crack length. All the methods gave very comparable results. The initiation toughness, K(sub Ii), was also estimated from the initial compliance and load. The results demonstrate that stability of precracked ceramics specimens tested in four-point flexure is a common occurrence, and that methods such as remotely-monitored load-point displacement are only adequate for detecting stable extension of relatively deep cracks.
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false What is the maximum noise... SERVICE, DEPARTMENT OF THE INTERIOR BOATING AND WATER USE ACTIVITIES § 3.15 What is the maximum noise level for the operation of a vessel? (a) A person may not operate a vessel at a noise level exceeding...
36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false What is the maximum noise... SERVICE, DEPARTMENT OF THE INTERIOR BOATING AND WATER USE ACTIVITIES § 3.15 What is the maximum noise level for the operation of a vessel? (a) A person may not operate a vessel at a noise level exceeding...
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high level of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of the data in parameter estimation. With these two methods, parameters can be estimated from hard data (certain) and soft data (uncertain) at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model and develop the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothesized MODFLOW groundwater model architecture, using virtual observation wells to observe the simulated groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.
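As a generic illustration of the filtering approach mentioned above (an augmented-state extended Kalman filter that treats an unknown model parameter as an extra state), the sketch below estimates a scalar gain in a toy recursion; it is not the MODFLOW/Bayesian-maximum-entropy implementation, and all names and values are illustrative.

```python
import numpy as np

# Toy system: level h follows h[k+1] = a*h[k] + u, with unknown gain a.
# Augment the state with a and run an extended Kalman filter on noisy observations of h.
rng = np.random.default_rng(1)
a_true, u, n = 0.85, 1.0, 200
h, obs = 0.0, []
for _ in range(n):
    h = a_true * h + u + rng.normal(scale=0.05)
    obs.append(h + rng.normal(scale=0.2))

x = np.array([0.0, 0.5])                 # state estimate [h, a]
P = np.diag([1.0, 1.0])
Q = np.diag([0.05**2, 1e-6])             # small process noise on a keeps it adaptable
R = 0.2**2
for z in obs:
    # predict: h' = a*h + u, a' = a; F is the Jacobian of the transition
    F = np.array([[x[1], x[0]], [0.0, 1.0]])
    x = np.array([x[1] * x[0] + u, x[1]])
    P = F @ P @ F.T + Q
    # update with observation z = h + noise
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * (z - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print(x[1])   # should move toward a_true (about 0.85)
```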
In Vivo potassium-39 NMR spectra by the burg maximum-entropy method
NASA Astrophysics Data System (ADS)
Uchiyama, Takanori; Minamitani, Haruyuki
The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
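For reference, a minimal numpy sketch of the Burg recursion for autoregressive (maximum-entropy) spectral estimation, the general technique named above rather than the authors' 39K NMR processing chain; the a[0] = 1 polynomial convention is assumed.

```python
import numpy as np

def burg_ar(x, order):
    """Burg maximum-entropy (AR) model: returns polynomial a (a[0] = 1)
    and the final prediction-error power."""
    x = np.asarray(x, dtype=float)
    n = x.size
    f = x.copy()                       # forward prediction errors
    b = x.copy()                       # backward prediction errors
    a = np.array([1.0])
    err = np.dot(x, x) / n
    for m in range(1, order + 1):
        ef = f[m:]                     # f_{m-1}(t),   t = m .. n-1
        eb = b[m - 1:-1]               # b_{m-1}(t-1), t = m .. n-1
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        f[m:] = ef + k * eb            # order-m forward errors
        b[m:] = eb + k * ef            # order-m backward errors
        a = np.append(a, 0.0)
        a = a + k * a[::-1]            # Levinson-type coefficient update
        err *= (1.0 - k * k)
    return a, err

def burg_spectrum(a, err, nfreq=256):
    """AR spectrum err / |A(e^{-i w})|^2 on [0, pi)."""
    w = np.linspace(0.0, np.pi, nfreq, endpoint=False)
    k = np.arange(a.size)
    A = np.exp(-1j * np.outer(w, k)) @ a
    return w, err / np.abs(A) ** 2

# two noisy sinusoids: the maximum-entropy spectrum shows two sharp peaks
t = np.arange(1024)
sig = np.sin(0.6 * t) + 0.5 * np.sin(1.9 * t) + 0.2 * np.random.default_rng(2).normal(size=t.size)
a, e = burg_ar(sig, order=12)
w, psd = burg_spectrum(a, e)
print(w[np.argmax(psd)])               # near 0.6 rad/sample
```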
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential network delay models, and gives more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...
Analysis of distortion data from TF30-P-3 mixed compression inlet test
NASA Technical Reports Server (NTRS)
King, R. W.; Schuerman, J. A.; Muller, R. G.
1976-01-01
A program was conducted to reduce and analyze inlet and engine data obtained during testing of a TF30-P-3 engine operating behind a mixed compression inlet. Previously developed distortion analysis techniques were applied to the data to assist in the development of a new distortion methodology. Instantaneous distortion techniques were refined as part of the distortion methodology development. A technique for estimating maximum levels of instantaneous distortion from steady state and average turbulence data was also developed as part of the program.
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
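The parallel-channel idea can be illustrated with a bank of scalar Kalman filters, each assuming a different fixed value of an unknown plant parameter, whose innovation likelihoods identify the best-matching channel; this is a generic multiple-model sketch, not the F-8C flight software, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Scalar plant x[k+1] = a*x[k] + 0.5 + w, observation z = x + v; the true a is unknown.
rng = np.random.default_rng(3)
a_true, q, r, n = 0.9, 0.05, 0.1, 300
x, zs = 0.0, []
for _ in range(n):
    x = a_true * x + 0.5 + rng.normal(scale=np.sqrt(q))
    zs.append(x + rng.normal(scale=np.sqrt(r)))

grid = np.array([0.5, 0.7, 0.9, 1.1])     # fixed hypotheses in parameter space
xs = np.zeros_like(grid)                   # per-channel state estimates
Ps = np.ones_like(grid)                    # per-channel variances
logw = np.zeros_like(grid)                 # per-channel cumulative log-likelihoods
for z in zs:
    for i, a in enumerate(grid):
        xp = a * xs[i] + 0.5               # predict (drive term assumed known)
        Pp = a * Ps[i] * a + q
        S = Pp + r                         # innovation variance
        logw[i] += norm.logpdf(z, loc=xp, scale=np.sqrt(S))
        K = Pp / S
        xs[i] = xp + K * (z - xp)
        Ps[i] = (1.0 - K) * Pp
print(grid[np.argmax(logw)])               # the channel with a = 0.9 should dominate
```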
1982-01-01
second) Dia propeller diameter (expressed in inches) T°F air temperature in degrees Fahrenheit T°C air temperature in degrees Celsius T:dBA total dBA...empirical function to the absolute noise level ordinate. The term 240 log (MH) is the most sensitive and important part of the equation. The constant (240...standard day, zero wind, dry, zero gradient runway, at a sea level airport. 2. All aircraft operate at maximum takeoff gross weight. 3. All aircraft climb
A comparison of the environmental impact of different AOPs: risk indexes.
Giménez, Jaime; Bayarri, Bernardí; González, Óscar; Malato, Sixto; Peral, José; Esplugas, Santiago
2014-12-31
Today, environmental impact associated with pollution treatment is a matter of great concern. A method is proposed for evaluating environmental risk associated with Advanced Oxidation Processes (AOPs) applied to wastewater treatment. The method is based on the type of pollution (wastewater, solids, air or soil) and on materials and energy consumption. An Environmental Risk Index (E), constructed from numerical criteria provided, is presented for environmental comparison of processes and/or operations. The Operation Environmental Risk Index (EOi) for each of the unit operations involved in the process and the Aspects Environmental Risk Index (EAj) for process conditions were also estimated. Relative indexes were calculated to evaluate the risk of each operation (E/NOP) or aspect (E/NAS) involved in the process, and the percentage of the maximum achievable for each operation and aspect was found. A practical application of the method is presented for two AOPs: photo-Fenton and heterogeneous photocatalysis with suspended TiO2 in Solarbox. The results report the environmental risks associated with each process, so that AOPs tested and the operations involved with them can be compared.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human life in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach in which parameters are estimated from the posterior distribution, based on Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced in the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which then gives a better prediction of the maximum river flow in Sabah.
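A minimal random-walk Metropolis-Hastings sketch for GEV parameters using scipy's genextreme density (note that scipy's shape c corresponds to the negative of the usual GEV shape ξ); the flat priors, step sizes and simulated data are illustrative only.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
data = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=40, random_state=4)

def log_post(theta, x):
    c, loc, log_scale = theta
    scale = np.exp(log_scale)                        # sample log-scale so scale stays positive
    ll = genextreme.logpdf(x, c, loc=loc, scale=scale).sum()
    return ll if np.isfinite(ll) else -np.inf        # flat (improper) priors for this sketch

theta = np.array([0.0, np.mean(data), np.log(np.std(data))])
step = np.array([0.05, 2.0, 0.1])
lp = log_post(theta, data)
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_post(prop, data)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis-Hastings accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                       # drop burn-in
print(chain.mean(axis=0))                            # posterior means of (c, loc, log-scale)
```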
A Flight Evaluation and Analysis of the Effect of Icing Conditions on the ZPG-2 Airship
NASA Technical Reports Server (NTRS)
Lewis, William; Perkins, Porter J., Jr.
1958-01-01
A series of test flights was conducted by the U.S. Navy over a 3-year period to evaluate the effects of icing on the operation of the ZPG-2 airship. In supercooled clouds, ice formed only on the forward edges of small protuberances and wires and presented no serious hazard to operation. Ice accretions of the glaze type, which occurred in conditions described as freezing drizzle, adversely affected various components to a somewhat greater extent. The results indicated a need for protection of certain components such as antennas, propellers, and certain parts of the control system. The tests showed that icing of the large surface of the envelope occurred only in freezing rain or drizzle. Because of the infrequent occurrence of these conditions, the potential maximum severity could not be estimated from the test results. The increases in heaviness caused by icing in freezing rain and drizzle were substantial, but well within the operational capabilities of the airship. In order to estimate the potential operational significance of icing in freezing rain, theoretical calculations were used to estimate: (1) the rate of icing as a function of temperature and rainfall intensity, (2) the climatological probability of occurrence of various combinations of these variables, and (3) the significance of the warming influence of the ocean in alleviating freezing-rain conditions. The results of these calculations suggest that, although very heavy icing rates are possible in combinations of low temperature and high rainfall rate, the occurrence of such conditions is very infrequent in coastal areas and virtually impossible 200 or 300 miles offshore.
Duckworth, Robert C.; Kidder, Michelle K.; Aytug, Tolga; ...
2018-02-27
We report that for nuclear power plants (NPPs) considering second license renewal for operation beyond 60 years, knowledge of long-term operation, condition monitoring, and viability for the reactor components including reactor pressure vessel, concrete structures, and cable systems is essential. Such knowledge will provide NPP owners/operators with a basis for predicting performance and estimating the costs associated with monitoring or replacement programs for the affected systems. For cable systems that encompass a wide variety of materials, manufacturers, and in-plant locations, accelerated aging of harvested cable jacket and insulation can provide insight into a remaining useful life and methods for monitoring. Accelerated thermal aging in air at temperatures between 80°C and 120°C was conducted on a multiconductor control rod drive mechanism cable manufactured by Boston Insulated Wire (BIW). The cable, which had been in service for over 30 years, was jacketed with Hypalon and insulated with ethylene propylene rubber. From elongation at break (EAB) measurements and supporting Arrhenius analysis of the jacket material, an activation energy of 97.84 kJ/mol was estimated, and the time to degradation, as represented by 50% EAB at the expected maximum operating temperature of 45°C, was estimated to be 80 years. These values were slightly below previous measurements on similar BIW Hypalon cable jacket and could be attributed to either in-service degradation or variations in material properties from production variations. Lastly, results from indenter modulus measurements and Fourier transform infrared spectroscopy suggest possible markers that could be beneficial in monitoring cable conditions.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
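For reference, the quantifier discussed above is built from the max-relative entropy; the standard definitions, as I understand them from the resource-theory literature, are:

```latex
% Max-relative entropy and the coherence quantifier built from it;
% \mathcal{I} denotes the set of incoherent states (diagonal in the reference basis).
D_{\max}(\rho \,\|\, \sigma) \;=\; \log \min \{\lambda : \rho \le \lambda \sigma\},
\qquad
C_{\max}(\rho) \;=\; \min_{\sigma \in \mathcal{I}} D_{\max}(\rho \,\|\, \sigma).
```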
Code of Federal Regulations, 2014 CFR
2014-07-01
... filter Wet scrubber Dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum charge rate Continuous 1×hour ✔ ✔ ✔ Maximum fabric filter inlet temperature Continuous 1×minute...
Code of Federal Regulations, 2013 CFR
2013-07-01
... filter Wet scrubber Dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum charge rate Continuous 1×hour ✔ ✔ ✔ Maximum fabric filter inlet temperature Continuous 1×minute...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimation) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Performance analysis for minimally nonlinear irreversible refrigerators at finite cooling power
NASA Astrophysics Data System (ADS)
Long, Rui; Liu, Zhichun; Liu, Wei
2018-04-01
The coefficient of performance (COP) for general refrigerators at finite cooling power has been systematically studied through the minimally nonlinear irreversible model, and its lower and upper bounds in different operating regions have been proposed. Under the tight coupling condition, we have calculated the universal COP bounds under the χ figure of merit in different operating regions. When the refrigerator operates in the region with lower external flux, we obtained the general bounds 0 < ε < (√(9 + 8ε_C) - 3)/2 under the χ figure of merit, where ε_C is the Carnot COP. We have also calculated the universal bounds for the maximum gain in COP under different operating regions to give further insight into the COP gain as the cooling power moves away from its maximum. When the refrigerator operates in the region located between maximum cooling power and maximum COP with lower external flux, the upper bound for COP and the lower bound for the relative gain in COP are large compared to the relatively small loss from the maximum cooling power. If the cooling power is the main objective, it is desirable to operate the refrigerator at a slightly lower cooling power than the maximum, where a small loss in cooling power yields a much larger COP enhancement.
Online estimation of internal stack temperatures in solid oxide fuel cell power generating units
NASA Astrophysics Data System (ADS)
Dolenc, B.; Vrečko, D.; Juričić, Ɖ.; Pohjoranta, A.; Pianese, C.
2016-12-01
Thermal stress is one of the main factors affecting the degradation rate of solid oxide fuel cell (SOFC) stacks. In order to mitigate the possibility of fatal thermal stress, stack temperatures and the corresponding thermal gradients need to be continuously controlled during operation. Due to the fact that in future commercial applications the use of temperature sensors embedded within the stack is impractical, the use of estimators appears to be a viable option. In this paper we present an efficient and consistent approach to data-driven design of the estimator for maximum and minimum stack temperatures intended (i) to be of high precision, (ii) to be simple to implement on conventional platforms like programmable logic controllers, and (iii) to maintain reliability in spite of degradation processes. By careful application of subspace identification, supported by physical arguments, we derive a simple estimator structure capable of producing estimates with 3% error irrespective of the evolving stack degradation. The degradation drift is handled without any explicit modelling. The approach is experimentally validated on a 10 kW SOFC system.
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-03-30
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better calibrated forecasts. Theoretically, both scoring rules used as optimization criteria should locate a similar, unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical point, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
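A small sketch contrasting the two estimation criteria for a toy Gaussian non-homogeneous regression, using the closed-form CRPS of a normal forecast; the data-generating process and parameterization are illustrative, not the study's setup.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.normal(size=400)                             # single "ensemble" predictor
y = 1.0 + 2.0 * x + rng.normal(scale=np.exp(0.3), size=400)

def params(theta):
    mu = theta[0] + theta[1] * x                     # non-homogeneous regression: mean from predictor,
    sigma = np.exp(theta[2])                         # spread as a free (here constant) parameter
    return mu, sigma

def neg_loglik(theta):
    mu, sigma = params(theta)
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()

def mean_crps(theta):
    mu, sigma = params(theta)
    z = (y - mu) / sigma                             # closed-form CRPS of a Gaussian forecast
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

start = np.zeros(3)
print(minimize(neg_loglik, start).x)                 # maximum likelihood coefficients
print(minimize(mean_crps, start).x)                  # minimum CRPS coefficients (similar here)
```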
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
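One plausible parameterization of such a defective logistic model (an illustrative reading, not necessarily the paper's exact formulation) adds a parameter θ for the probability that a truly diseased subject is labeled a case, so that P(labeled case | x) = θ·expit(α + βx); the sketch below fits it by maximum likelihood and recovers the probability of true disease among labeled controls.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(6)
x = rng.normal(size=2000)
disease = rng.uniform(size=x.size) < expit(-1.0 + 1.5 * x)   # true status
theta_true = 0.7                                              # only 70% of diseased are labeled as cases
label = disease & (rng.uniform(size=x.size) < theta_true)     # the rest are mislabeled as controls

def neg_loglik(par):
    a, b, logit_theta = par
    p = expit(a + b * x)                    # P(diseased | x)
    theta = expit(logit_theta)              # P(labeled case | diseased)
    pc = theta * p                          # P(labeled case | x): "defective" logistic
    eps = 1e-12
    return -(label * np.log(pc + eps) + (~label) * np.log(1 - pc + eps)).sum()

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
a, b, lt = fit.x
p, theta = expit(a + b * x), expit(lt)
post_diseased = (1 - theta) * p / (1 - theta * p)   # P(diseased | labeled control, x)
print(fit.x, post_diseased[~label].mean())
```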
Szyperski, Piotr D
2018-06-01
The purpose of this research was to evaluate the applicability of fractal dimension (FD) estimators for assessing lateral shearing interferometric (LSI) measurements of tear film surface quality. Retrospective recordings of tear film measured with LSI were used: 69 from healthy subjects and 41 from patients diagnosed with dry eye syndrome. Five surface quality descriptors were considered, four based on FD and a previously reported descriptor operating in the spatial frequency domain (M2), presenting the temporal kinetics of the post-blink tear film. A set of 12 regression parameters was extracted and analyzed for classification purposes. The classifiers are assessed in terms of receiver operating characteristics and the areas under their curves (AUC). The computational loads are also estimated. The maximum AUC of 82.4% was achieved for M2, closely followed by the binary box-counting (BBC) FD estimator with AUC=78.6%. For all descriptors, statistically significant differences between the subject groups were found (p<0.05). The BBC FD estimator was characterized by the highest empirical computational efficiency, about 30% faster than that of M2, while the estimator based on differential box-counting exhibited the lowest efficiency (4.5 times slower than the best one). In conclusion, FD estimators can be utilized for quantitative assessment of tear film kinetics. They provide a viable alternative to the previously used spectral counterparts, and at the same time allow higher computational efficiency.
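A minimal binary box-counting estimator for a 2-D boolean map, the general kind of FD estimator referred to above rather than the authors' LSI pipeline: count occupied boxes at several scales and take the slope of log N(s) against log(1/s).

```python
import numpy as np

def box_counting_fd(mask, scales=(2, 4, 8, 16, 32)):
    """Binary box-counting fractal dimension of a 2-D boolean array."""
    counts = []
    for s in scales:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()      # boxes containing at least one "on" pixel
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales, float)), np.log(counts), 1)
    return slope

# quick check: a filled square should give a dimension near 2
mask = np.zeros((256, 256), dtype=bool)
mask[32:224, 32:224] = True
print(box_counting_fd(mask))
```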
Lead Coolant Test Facility Systems Design, Thermal Hydraulic Analysis and Cost Estimate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soli Khericha; Edwin Harvego; John Svoboda
2012-01-01
The Idaho National Laboratory prepared preliminary technical and functional requirements (T&FR), a thermal hydraulic design and a cost estimate for a lead coolant test facility. The purpose of this small scale facility is to simulate lead coolant fast reactor (LFR) coolant flow in an open lattice geometry core using seven electrical rods and liquid lead or lead-bismuth eutectic coolant. Based on a review of current world lead or lead-bismuth test facilities and research needs listed in the Generation IV Roadmap, five broad areas of requirements were identified: (1) Develop and Demonstrate Feasibility of Submerged Heat Exchanger; (2) Develop and Demonstrate Open-lattice Flow in Electrically Heated Core; (3) Develop and Demonstrate Chemistry Control; (4) Demonstrate Safe Operation; and (5) Provision for Future Testing. This paper discusses the preliminary design of systems, thermal hydraulic analysis, and simplified cost estimate. The facility thermal hydraulic design is based on the maximum simulated core power using seven electrical heater rods of 420 kW, with an average linear heat generation rate of 300 W/cm. The core inlet temperature for liquid lead or Pb/Bi eutectic is 420 °C. The design includes approximately seventy-five data measurements such as pressure, temperature, and flow rates. The preliminary estimated cost of construction of the facility is $3.7M (in 2006 $). It is also estimated that the facility will require two years to be constructed and ready for operation.
CFD Applications in Support of the Space Shuttle Risk Assessment
NASA Technical Reports Server (NTRS)
Baum, Joseph D.; Mestreau, Eric; Luo, Hong; Sharov, Dmitri; Fragola, Joseph; Loehner, Rainald; Cook, Steve (Technical Monitor)
2000-01-01
The paper describes a numerical study of a potential accident scenario of the space shuttle, operating at the same flight conditions as flight 51L, the Challenger accident. The interest in performing this simulation derives from evidence that indicates that the event itself did not exert large enough blast loading on the shuttle to break it apart. Rather, the quasi-steady aerodynamic loading on the damaged, unbalanced vehicle caused the break-up. Despite the enormous explosive potential of the shuttle total fuel load (both liquid and solid), the post-accident explosives working group estimated the maximum energy involvement to be equivalent to about five hundred pounds of TNT. This understanding motivated the simulation described here. To err on the conservative side, we modeled the event as an explosion, and used the maximum energy estimate. We modeled the transient detonation of a 500 lbs spherical charge of TNT, placed at the main engine, and the resulting blast wave propagation about the complete stack. Tracking of peak pressures and impulses at hundreds of locations on the vehicle surface indicates that the blast load was insufficient to break the vehicle, hence demonstrating likely crew survivability through such an event.
Study of dynamics of X-14B VTOL aircraft
NASA Technical Reports Server (NTRS)
Loscutoff, W. V.; Mitchiner, J. L.; Roesener, R. A.; Seevers, J. A.
1973-01-01
Research was initiated to investigate certain facets of modern control theory and their integration with a digital computer to provide a tractable flight control system for a VTOL aircraft. Since the hover mode is the most demanding phase in the operation of a VTOL aircraft, the research efforts were concentrated in this mode of aircraft operation. Research work on three different aspects of the operation of the X-14B VTOL aircraft is discussed. A general theory for optimal, prespecified, closed-loop control is developed. The ultimate goal was optimal decoupling of the modes of the VTOL aircraft to simplify the pilot's task of handling the aircraft. Modern control theory is used to design deterministic state estimators which provide state variables not measured directly, but which are needed for state variable feedback control. The effect of atmospheric turbulence on the X-14B is investigated. A maximum magnitude gust envelope within which the aircraft could operate stably with the available control power is determined.
Determination of the combustion behavior for pure components and mixtures using a 20-liter sphere
NASA Astrophysics Data System (ADS)
Mashuga, Chad Victor
1999-11-01
The safest method to prevent fires and explosions of flammable vapors is to prevent the existence of flammable mixtures in the first place. This methodology requires detailed knowledge of the flammability region as a function of the fuel, oxygen, and nitrogen concentrations. A triangular flammability diagram is the most useful tool to display the flammability region and to determine whether a flammable mixture is present during plant operations. An automated apparatus for assessing the flammability region and for determining the potential effect of confined fuel-air explosions is described. Data derived from the apparatus included the limits of combustion, maximum combustion pressure, and the deflagration index, or KG. Accurate measurement of these parameters can be influenced by numerous experimental conditions, including igniter energy, humidity and gas composition. Gas humidity had a substantial effect on the deflagration index, but had little effect on the maximum combustion pressure. Small changes in gas composition had a greater effect on the deflagration index than on the maximum combustion pressure. Both the deflagration indices and the maximum combustion pressure proved insensitive to the range of igniter energies examined. Estimation of flammability limits using a calculated adiabatic flame temperature (CAFT) method is demonstrated. The CAFT model is compared with the extensive experimental data from this work for methane, ethylene and a 50/50 mixture of methane and ethylene. The CAFT model compares well with the methane and ethylene data throughout the flammability zone when using a 1200 K threshold temperature. Deviations between the method and the experimental data occur in the fuel-rich region. For the 50/50 fuel mixture, the CAFT deviates only in the fuel-rich region; the inclusion of carbonaceous soot as one of the equilibrium products improved the fit. Determination of burning velocities from a spherical flame model utilizing the extensive pressure-time data was also completed. The burning velocities determined compare well with those of other investigators using this method. The data collected for the methane/ethylene mixture were used to evaluate mixing rules for the flammability limits, maximum combustion pressure, deflagration index, and burning velocity. These rules attempt to predict the behavior of fuel mixtures from pure-component data. Le Chatelier's law and averaging both work well for predicting the flammability boundary in the fuel-lean region and for mixtures of inerted fuel and air. Both methods underestimate the flammability boundary in the fuel-rich region. For a mixture of methane and ethylene, we were unable to identify mixing rules for estimating the maximum combustion pressure and the burning velocity from pure-component data. Averaging the deflagration indices for fuel-air mixtures did provide an adequate estimate of the mixture behavior. Le Chatelier's method overestimated the maximum deflagration index in air but provided a satisfactory estimate in the extreme fuel-lean and fuel-rich regions.
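The Le Chatelier mixing rule mentioned above combines pure-component lower flammability limits harmonically over the fuel-blend mole fractions; a small sketch with commonly quoted LFL values (treat the specific numbers as illustrative):

```python
# Le Chatelier's rule: LFL_mix = 1 / sum(y_i / LFL_i),
# where y_i is the mole fraction of fuel i within the fuel blend (the y_i sum to 1).
def le_chatelier(fractions, lfls):
    return 1.0 / sum(y / l for y, l in zip(fractions, lfls))

# 50/50 methane/ethylene, with commonly quoted LFLs of about 5.0 and 2.7 vol% in air
print(le_chatelier([0.5, 0.5], [5.0, 2.7]))   # roughly 3.5 vol%
```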
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy.
Lorenzen, Ebbe L; Brink, Carsten; Taylor, Carolyn W; Darby, Sarah C; Ewertz, Marianne
2016-04-01
We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. For left-sided breast cancer, mean heart dose estimated from individual CT scans varied from <1 Gy to >8 Gy, and maximum dose from 5 to 50 Gy for all three regimens, so that estimates based only on regimen had substantial uncertainty. When maximum heart distance was taken into account, the uncertainty was reduced and was comparable to the uncertainty of estimates based on individual CT scans. For right-sided breast cancer patients, mean heart dose based on individual CT scans was always <1 Gy and maximum dose always <5 Gy for all three regimens. The use of stored individual simulator films provides a method for estimating heart doses in left-tangential radiotherapy for breast cancer that is almost as accurate as estimates based on individual CT scans.
Lim, Tau Meng; Cheng, Shanbao; Chua, Leok Poh
2009-07-01
Axial flow blood pumps are generally smaller than centrifugal pumps. This is very beneficial because they can provide a better anatomical fit in the chest cavity, as well as lower the risk of infection. This article discusses the design, levitated responses, and parameter estimation of the dynamic characteristics of a compact hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet brushless and sensorless motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. A frequency domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. By injecting a multifrequency excitation force signal onto the rotor through the HMBs, it is noticed in the experimental results that the maximum displacement linear operating range is 20% of the static eccentricity with respect to the rotor and stator gap clearance. The actuator gain was also successfully calibrated; the parameter estimation technique developed in this study may potentially be extended to identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-16
... Tests and Inspections for Compliance With Maximum Authorized Train Speeds and Other Speed Restrictions... safety advisory; Operational tests and inspections for compliance with maximum authorized train speeds and other speed restrictions. SUMMARY: FRA is issuing Safety Advisory 2013-08 to stress to railroads...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 3 2012-10-01 2012-10-01 false Additional construction requirements for steel pipe using alternative maximum allowable operating pressure. 192.328 Section 192.328 Transportation... Lines and Mains § 192.328 Additional construction requirements for steel pipe using alternative maximum...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 3 2014-10-01 2014-10-01 false Additional construction requirements for steel pipe using alternative maximum allowable operating pressure. 192.328 Section 192.328 Transportation... Lines and Mains § 192.328 Additional construction requirements for steel pipe using alternative maximum...
Gyro and accelerometer failure detection and identification in redundant sensor systems
NASA Technical Reports Server (NTRS)
Potter, J. E.; Deckert, J. C.
1972-01-01
Algorithms for failure detection and identification for redundant noncolinear arrays of single degree of freedom gyros and accelerometers are described. These algorithms are optimum in the sense that detection occurs as soon as it is no longer possible to account for the instrument outputs as the outputs of good instruments operating within their noise tolerances, and identification occurs as soon as it is true that only a particular instrument failure could account for the actual instrument outputs within the noise tolerance of good instruments. An estimation algorithm is described which minimizes the maximum possible estimation error magnitude for the given set of instrument outputs. Monte Carlo simulation results are presented for the application of the algorithms to an inertial reference unit consisting of six gyros and six accelerometers in two alternate configurations.
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language, based on the high-level user interface of the Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method using a chi-square criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
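A minimal sketch of the kind of fit described above, a biexponential concentration-time model fitted by Levenberg-Marquardt via scipy.optimize.curve_fit; it is not the PharmK program itself, and the crude starting values stand in for the exponential-stripping step.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, alpha, B, beta):
    """Two-compartment-style concentration-time curve."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)
rng = np.random.default_rng(7)
conc = biexp(t, 8.0, 1.2, 2.0, 0.12) * (1 + 0.05 * rng.normal(size=t.size))

# crude starting values playing the role of the exponential-stripping initial estimates
p0 = [conc.max(), 1.0, conc.min(), 0.1]
popt, pcov = curve_fit(biexp, t, conc, p0=p0, method="lm")   # Levenberg-Marquardt
print(popt)                       # fitted A, alpha, B, beta
print(np.sqrt(np.diag(pcov)))     # approximate standard errors
```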
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
ERIC Educational Resources Information Center
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan
2018-01-10
In this paper, a new localization system utilizing afocal optical flow sensor (AOFS)-based sensor fusion for indoor service robots in low-luminance and slippery environments is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization. Instead, the interior space structure was assessed from an image together with the robot orientation. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in a low illumination condition of 0.1 lx and a carpeted environment. The robot moved 20 times along a 1.5 × 2.0 m square trajectory. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found to be 0.8 m and within 1.0°, respectively.
Two-Dimensional Analysis of Conical Pulsed Inductive Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, A. K.; Polzin, K. A.; Emsellem, G. D.
2011-01-01
A model of the maximum achievable exhaust velocity of a conical theta pinch pulsed inductive thruster is presented. A semi-empirical formula relating coil inductance to both axial and radial current sheet location is developed and incorporated into a circuit model coupled to a momentum equation to evaluate the effect of coil geometry on the axially directed kinetic energy of the exhaust. Inductance measurements as a function of the axial and radial displacement of simulated current sheets from four coils of different geometries are fit to a two-dimensional expression to allow the calculation of the Lorentz force at any relevant averaged current sheet location. This relation for two-dimensional inductance, along with an estimate of the maximum possible change in gas-dynamic pressure as the current sheet accelerates into downstream propellant, enables the expansion of a one-dimensional circuit model to two dimensions. The results of this two-dimensional model indicate that radial current sheet motion acts to rapidly decouple the current sheet from the driving coil, leading to losses in axial kinetic energy 10-50 times larger than estimates of the maximum available energy in the compressed propellant. The decreased available energy in the compressed propellant as compared to that of other inductive plasma propulsion concepts suggests that a recovery in the directed axial kinetic energy of the exhaust is unlikely, and that radial compression of the current sheet leads to a loss in exhaust velocity for the operating conditions considered here.
ESTIMATION OF EFFECTIVE SHEAR STRESS WORKING ON FLAT SHEET MEMBRANE USING FLUIDIZED MEDIA IN MBRs
NASA Astrophysics Data System (ADS)
Zaw, Hlwan Moe; Li, Tairi; Nagaoka, Hiroshi; Mishima, Iori
This study aimed to estimate the effective shear stress working on a flat sheet membrane when fluidized media are added in MBRs. In laboratory-scale aeration tanks with and without fluidized media, shear stress variations on the membrane surface and water-phase velocity variations were measured, and MBR operation was conducted. To evaluate the effective shear stress working on the membrane surface to mitigate membrane fouling, a simulation of the trans-membrane pressure increase was conducted. It was shown that the time-averaged absolute value of shear stress was smaller in the reactor with fluidized media than in the reactor without. However, due to the strong turbulence in the reactor with fluidized media, caused by the interaction between the water phase and the media, and also due to the direct interaction between the membrane surface and the fluidized media, the standard deviation of shear stress on the membrane surface was larger in the reactor with fluidized media than without. Histograms of the shear stress variation data fitted well to normal distribution curves, and the mean plus three times the standard deviation was defined as the maximum shear stress value. By applying this maximum shear stress to a membrane fouling model, the trans-membrane pressure curve in the MBR experiment was simulated well, indicating that the maximum shear stress, not the time-averaged shear stress, can be regarded as the effective shear stress for preventing membrane fouling in submerged flat-sheet MBRs.
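The maximum (effective) shear stress definition used above, the mean plus three standard deviations of an approximately normal fluctuation, is straightforward to reproduce; a short illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(8)
tau = rng.normal(loc=0.4, scale=0.25, size=100_000)   # illustrative shear-stress samples, Pa

tau_eff = tau.mean() + 3.0 * tau.std(ddof=1)          # "effective" maximum shear stress
print(tau_eff)                                        # about 1.15 Pa for these illustrative numbers
```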
Exploiting Non-sequence Data in Dynamic Model Learning
2013-10-01
For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM's maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in
ATAC Autocuer Modeling Analysis.
1981-01-01
the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
Badoer, S; Miana, P; Della Sala, S; Marchiori, G; Tandoi, V; Di Pippo, F
2015-12-01
In this study, monthly variations in the biomass of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB) were analysed over a 1-year period by fluorescence in situ hybridization (FISH) at the full-scale Fusina WWTP. The nitrification capacity of the plant was also monitored using periodic respirometric batch tests and an automated on-line titrimetric instrument (TITrimetric Automated ANalyser). The percentage of nitrifying bacteria in the plant was highest in summer and was in the range of 10-15 % of the active biomass. The maximum nitrosation rate varied in the range 2.0-4.0 mg NH4 g(-1) VSS h(-1) (0.048-0.096 kg TKN kg(-1) VSS day(-1)); values obtained by laboratory measurements and the on-line instrument were similar and significantly correlated. The activity measurements provided a valuable tool for estimating the maximum total Kjeldahl nitrogen (TKN) loading possible at the plant and provided an early warning of whether the TKN was approaching its limiting value. The FISH analysis permitted determination of the nitrifying biomass present. The main operational parameter affecting both the population dynamics and the maximum nitrosation activity was the mixed liquor volatile suspended solids (MLVSS) concentration, which was negatively correlated with AOB (p = 0.029) and NOB (p = 0.01) abundances and positively correlated with maximum nitrosation rates (p = 0.035). Increases in MLVSS concentration led to decreases in nitrifying bacteria abundance, but their nitrosation activity was higher. These results demonstrate the importance of MLVSS concentration as a key factor in the development and activity of nitrifying communities in wastewater treatment plants (WWTPs). Operational data on VSS and sludge volume index (SVI) values, based on 11 years of observations, are also presented.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to detect the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend; thus, non-stationary models are also fitted. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. Return level estimates, i.e. the return level (in this study, return amount) that is expected to be exceeded, on average, once every T time periods, start to appear within the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
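The stationary part of this workflow, fitting block maxima to the GEV by maximum likelihood and reading off a T-period return level, can be sketched briefly; the data below are simulated, and scipy's `genextreme` parameterization (shape c = -xi) stands in for whatever software the authors actually used.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical block-maximum returns (e.g., yearly maxima); real data would
# come from the Malaysian share-return series described in the abstract.
rng = np.random.default_rng(1)
yearly_max = genextreme.rvs(c=-0.1, loc=0.05, scale=0.02, size=40, random_state=rng)

# Stationary GEV fit by maximum likelihood.
c, loc, scale = genextreme.fit(yearly_max)

# T-period return level: the value exceeded, on average, once every T periods.
T = 50
return_level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"shape={c:.3f}, loc={loc:.4f}, scale={scale:.4f}, {T}-period return level={return_level:.4f}")
```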
National and State Treatment Need and Capacity for Opioid Agonist Medication-Assisted Treatment
Campopiano, Melinda; Baldwin, Grant; McCance-Katz, Elinore
2015-01-01
Objectives. We estimated national and state trends in opioid agonist medication-assisted treatment (OA-MAT) need and capacity to identify gaps and inform policy decisions. Methods. We generated national and state rates of past-year opioid abuse or dependence, maximum potential buprenorphine treatment capacity, number of patients receiving methadone from opioid treatment programs (OTPs), and the percentage of OTPs operating at 80% capacity or more using Substance Abuse and Mental Health Services Administration data. Results. Nationally, in 2012, the rate of opioid abuse or dependence was 891.8 per 100 000 people aged 12 years or older compared with national rates of maximum potential buprenorphine treatment capacity and patients receiving methadone in OTPs of, respectively, 420.3 and 119.9. Among states and the District of Columbia, 96% had opioid abuse or dependence rates higher than their buprenorphine treatment capacity rates; 37% had a gap of at least 5 per 1000 people. Thirty-eight states (77.6%) reported at least 75% of their OTPs were operating at 80% capacity or more. Conclusions. Significant gaps between treatment need and capacity exist at the state and national levels. Strategies to increase the number of OA-MAT providers are needed. PMID:26066931
Financial Implications of Intravenous Anesthetic Drug Wastage in Operation Room
Kaniyil, Suvarna; Krishnadas, A.; Parathody, Arun Kumar; Ramadas, K. T.
2017-01-01
Background and Objectives: Anesthetic drug and material wastage is common in operation rooms (ORs). In this era of escalating health-care expenditure, cost reduction strategies are highly relevant. The aim of this study was to assess the amount of daily intravenous anesthetic drug wastage from major ORs and to estimate its financial burden. Possible preventive measures to minimize drug wastage were also sought. Methods: It was a prospective study conducted at the major ORs of a tertiary care hospital after obtaining Institutional Research Committee approval. The total amount of all drugs wasted at the end of a surgical day from each major OR was audited for five nonconsecutive weeks. Wasted drugs included drugs left unutilized in syringes and in opened vials/ampoules. The total cost of the wasted drugs and the average daily loss were estimated. Results: The drugs wasted in the largest quantities included propofol, thiopentone sodium, vecuronium, mephentermine, lignocaine, midazolam, atropine, succinylcholine, and atracurium, in that order. The total cost of the wasted drugs during the study period was Rs. 59,631.49, and the average daily loss was Rs. 1987.67. The average daily cost of wasted drug was maximum for vecuronium (Rs. 699.93) followed by propofol (Rs. 662.26). Interpretation and Conclusions: The financial implications of anesthetic drug wastage can be significant. Propofol and vecuronium contributed the most to the financial burden. Suggested preventive measures to minimize wastage include education of staff and residents about the cost of drugs and emphasis on the judicious use of costly drugs. PMID:28663611
Application of Accelerometer Data to Mars Odyssey Aerobraking and Atmospheric Modeling
NASA Technical Reports Server (NTRS)
Tolson, R. H.; Keating, G. M.; George, B. E.; Escalera, P. E.; Werner, M. R.; Dwyer, A. M.; Hanna, J. L.
2002-01-01
Aerobraking was an enabling technology for the Mars Odyssey mission even though it involved risk, due primarily to the variability of the Mars upper atmosphere. Consequently, numerous analyses based on various data types were performed during operations to reduce these risks; among the data used were measurements from spacecraft accelerometers. This paper reports on the use of accelerometer data for determining atmospheric density during Odyssey aerobraking operations. Acceleration was measured along three orthogonal axes, although only data from the component along the axis nominally into the flow were used during operations. For a one second count time, the RMS noise level varied from 0.07 to 0.5 mm/s2, permitting density recovery to between 0.15 and 1.1 kg per cu km, or about 2% of the mean density at periapsis during aerobraking. Accelerometer data were analyzed in near real time to provide estimates of density at periapsis, maximum density, density scale height, latitudinal gradient, longitudinal wave variations, and location of the polar vortex. Summaries are given of the aerobraking phase of the mission, the accelerometer data analysis methods and operational procedures, some applications to determining thermospheric properties, and some remaining issues on interpretation of the data. Pre-flight estimates of natural variability based on Mars Global Surveyor accelerometer measurements proved reliable in the mid-latitudes, but overestimated the variability inside the polar vortex.
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species), scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.
Thermal analyses for initial operations of the soft x-ray spectrometer onboard the Hitomi satellite
NASA Astrophysics Data System (ADS)
Noda, Hirofumi; Mitsuda, Kazuhisa; Okamoto, Atsushi; Ezoe, Yuichiro; Ishikawa, Kumi; Fujimoto, Ryuichi; Yamasaki, Noriko; Takei, Yoh; Ohashi, Takaya; Ishisaki, Yoshitaka; Mitsuishi, Ikuyuki; Yoshida, Seiji; DiPirro, Michel; Shirron, Peter
2018-01-01
The soft x-ray spectrometer (SXS) onboard the Hitomi satellite achieved a high-energy resolution of ~4.9 eV at 6 keV with an x-ray microcalorimeter array cooled to 50 mK. The cooling system utilizes liquid helium, confined in zero gravity by means of a porous plug (PP) phase separator. For the PP to function, the helium temperature must be kept lower than the λ point of 2.17 K in orbit. To determine the maximum allowable helium temperature at launch, taking into account the uncertainties in both the final ground operations and initial operation in orbit, we constructed a thermal mathematical model of the SXS dewar and PP vent and carried out time-series thermal simulations. Based on the results, the maximum allowable helium temperature at launch was set at 1.7 K. We also conducted a transient thermal calculation using the actual temperatures at launch as initial conditions to determine flow and cooling rates in orbit. From this, the equilibrium helium mass flow rate was estimated to be ~34 to 42 μg/s, and the lifetime of the helium mode was predicted to be ~3.9 to 4.7 years. This paper describes the thermal model and presents simulation results and comparisons with temperatures measured in orbit.
NASA Astrophysics Data System (ADS)
Berisford, D. F.; Painter, T. H.; Richardson, M.; Wallach, A.; Deems, J. S.; Bormann, K. J.
2017-12-01
The Airborne Snow Observatory (ASO - http://aso.jpl.nasa.gov) uses an airborne laser scanner to map snow depth, and imaging spectroscopy to map snow albedo in order to estimate snow water equivalent and melt rate over mountainous, hydrologic basin-scale areas. Optimization of planned flight lines requires the balancing of many competing factors, including flying altitude and speed, bank angle limitation, laser pulse rate and power level, flightline orientation relative to terrain, surface optical properties, and data output requirements. These variables generally distill down to cost vs. higher resolution data. The large terrain elevation variation encountered in mountainous terrain introduces the challenge of narrow swath widths over the ridgetops, which drive tight flightline spacing and possible dropouts over the valleys due to maximum laser range. Many of the basins flown by ASO exceed 3,000m of elevation relief, exacerbating this problem. Additionally, sun angle may drive flightline orientations for higher-quality spectrometer data, which may change depending on time of day. Here we present data from several ASO missions, both operational and experimental, showing the lidar performance and accuracy limitations for a variety of operating parameters. We also discuss flightline planning strategies to maximize data density return per dollar, and a brief analysis on the effect of short turn times/steep bank angles on GPS position accuracy.
Real-time estimation system for seismic-intensity exposed-population
NASA Astrophysics Data System (ADS)
Aoi, S.; Nakamura, H.; Kunugi, T.; Suzuki, W.; Fujiwara, H.
2013-12-01
For an appropriate first response to an earthquake, risk (damage) information evaluated in real time is as important as hazard (ground motion) information. To meet this need, we are developing a real-time estimation system (J-RISQ) for exposed population and earthquake damage to buildings. We plan to open the estimated-exposed-population web page to the public in autumn. When an earthquake occurs, seismic intensities are calculated at each observation station and sent to the DMC (Data Management Center) at different times. For rapid estimation, the system does not wait for data from all stations but begins the first estimation when the number of stations observing a seismic intensity of 2.5 or larger exceeds a threshold. Estimations are updated several times using all the data available at that moment. The spatial distribution of seismic intensity in 250 m meshes is estimated from the site amplification factor of the surface layers and the observed data. Using this intensity distribution, the exposed population is estimated from the population data of each mesh. The exposed populations for municipalities and prefectures are estimated by summing the exposures of the meshes included in each area and are appropriately rounded, taking estimation precision into consideration. The estimated intensities for major cities are shown by histograms, which indicate the variation of the estimated values within the city together with the observed maximum intensity. The variation is mainly caused by differences in the site amplification factors; the intensities estimated for meshes with large amplification factors are sometimes larger than the maximum value observed in the city. The estimated results can be seen on the web site just after the earthquake, and results for past earthquakes can be searched by keywords such as date, magnitude, seismic intensity and source area. A summary of the results as a one-page Portable Document Format report is also available. This system has been in experimental operation since 2010 and had performed real-time estimations for more than 670 earthquakes by July 2012. For about 75% of these earthquakes, it took less than one minute to send the first-estimation e-mail after receiving data from the first triggered station; the rapidity of the system is therefore satisfactory. Uploading the PDF report to the web site takes approximately an additional 30 seconds.
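A minimal sketch of the mesh-based exposure calculation described above is given below; the mesh intensities, site-amplification terms (treated here as simple additive corrections), population counts, and intensity bins are all hypothetical stand-ins for the 250 m mesh products used by J-RISQ.

```python
import numpy as np

# Hypothetical per-mesh inputs: estimated bedrock intensity, a site
# amplification correction, and the resident population of each 250 m mesh.
bedrock_intensity = np.array([3.2, 3.8, 4.1, 4.6, 5.0, 5.3])
amplification = np.array([0.2, 0.1, 0.4, 0.3, 0.5, 0.2])
population = np.array([120, 800, 450, 1500, 2300, 60])

surface_intensity = bedrock_intensity + amplification

# Exposed population per intensity class (half-intensity bins from 3.5 upward).
bins = np.arange(3.5, 6.5, 0.5)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (surface_intensity >= lo) & (surface_intensity < hi)
    print(f"intensity {lo:.1f}-{hi:.1f}: {population[mask].sum()} people exposed")
```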
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visits. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a smaller budget would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that the uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records; however, the station was continued because of data use requirements. (Author's abstract)
Design studies of continuously variable transmissions for electric vehicles
NASA Technical Reports Server (NTRS)
Parker, R. J.; Loewenthal, S. H.; Fischer, G. K.
1981-01-01
Preliminary design studies were performed on four continuously variable transmission (CVT) concepts for use with a flywheel-equipped electric vehicle of 1700 kg gross weight. Requirements for the CVTs were a maximum torque of 450 N-m (330 lb-ft), a maximum output power of 75 kW (100 hp), and a flywheel speed range of 28,000 to 14,000 rpm. Efficiency, size, weight, cost, reliability, maintainability, and controls were evaluated for each of the four concepts, which included a steel V-belt type, a flat rubber belt type, a toroidal traction type, and a cone roller traction type. All CVTs exhibited relatively high calculated efficiencies (68 percent to 97 percent) over a broad range of vehicle operating conditions. The estimated weight and size of these transmissions were comparable to or less than those of equivalent automatic transmissions. The design of each concept was carried through the design layout stage.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
Vila, Javier; Bowman, Joseph D; Figuerola, Jordi; Moriña, David; Kincl, Laurel; Richardson, Lesley; Cardis, Elisabeth
2017-07-01
To estimate occupational exposures to electromagnetic fields (EMF) for the INTEROCC study, a database of source-based measurements extracted from published and unpublished literature resources had been previously constructed. The aim of the current work was to summarize these measurements into a source-exposure matrix (SEM), accounting for their quality and relevance. A novel methodology for combining available measurements was developed, based on order statistics and log-normal distribution characteristics. Arithmetic and geometric means, and estimates of variability and maximum exposure were calculated by EMF source, frequency band and dosimetry type. The mean estimates were weighted by our confidence in the pooled measurements. The SEM contains confidence-weighted mean and maximum estimates for 312 EMF exposure sources (from 0 Hz to 300 GHz). Operator position geometric mean electric field levels for radiofrequency (RF) sources ranged between 0.8 V/m (plasma etcher) and 320 V/m (RF sealer), while magnetic fields ranged from 0.02 A/m (speed radar) to 0.6 A/m (microwave heating). For extremely low frequency sources, electric fields ranged between 0.2 V/m (electric forklift) and 11,700 V/m (high-voltage transmission line-hotsticks), whereas magnetic fields ranged between 0.14 μT (visual display terminals) and 17 μT (tungsten inert gas welding). The methodology developed allowed the construction of the first EMF-SEM and may be used to summarize similar exposure data for other physical or chemical agents.
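The pooling step described above, confidence-weighted arithmetic and geometric means plus a log-normal-based "maximum" exposure, can be sketched as follows; the measurement values and weights are hypothetical, and taking the 99th percentile as the maximum is only one possible convention, not necessarily the one used for the INTEROCC SEM.

```python
import numpy as np

# Hypothetical source-based measurements (e.g., electric field in V/m) and
# quality/relevance confidence weights for one EMF source and frequency band.
measurements = np.array([0.8, 1.5, 2.3, 0.6, 4.0])
confidence = np.array([1.0, 0.5, 1.0, 0.25, 0.75])

am = np.average(measurements, weights=confidence)                  # weighted arithmetic mean
gm = np.exp(np.average(np.log(measurements), weights=confidence))  # weighted geometric mean

# Assuming an underlying log-normal distribution, a rough "maximum" exposure
# can be taken as a high quantile, e.g. the 99th percentile GM * GSD^z(0.99).
gsd = np.exp(np.sqrt(np.average((np.log(measurements) - np.log(gm)) ** 2, weights=confidence)))
p99 = gm * gsd ** 2.326
print(f"AM={am:.2f}, GM={gm:.2f}, GSD={gsd:.2f}, approx. max (P99)={p99:.2f}")
```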
NASA Astrophysics Data System (ADS)
Wang, W.; Lee, C.; Cochran, K. K.; Armstrong, R. A.
2016-02-01
Sinking particles play a pivotal role in transferring material from the surface to the deeper ocean via the "biological pump". To quantify the extent to which these particles aggregate and disaggregate, and thus affect particle settling velocity, we constructed a box model to describe organic matter cycling. The box model was fit to chloropigment data sampled in the 2005 MedFlux project using Indented Rotating Sphere sediment traps operating in Settling Velocity (SV) mode. Because of the very different pigment compositions of phytoplankton and fecal pellets, chloropigments are useful as proxies to record particle exchange. The maximum likelihood statistical method was used to estimate particle aggregation, disaggregation, and organic matter remineralization rate constants. Eleven settling velocity categories collected by the SV sediment traps were grouped into two sinking velocity classes (fast- and slow-sinking) to decrease the number of parameters that needed to be estimated. Organic matter degradation rate constants were estimated to be 1.2, 1.6, and 1.1 y^-1, equivalent to degradation half-lives of 0.60, 0.45, and 0.62 y, at 313, 524, and 1918 m, respectively. Rate constants of chlorophyll a degradation to pheopigments (pheophorbide, pheophytin, and pyropheophorbide) were estimated to be 0.88, 0.93, and 1.2 y^-1 at 313, 524, and 1918 m, respectively. Aggregation rate constants varied little with depth, with the highest value being 0.07 y^-1 at 524 m. Disaggregation rate constants were highest at 524 m (14 y^-1) and lowest at 1918 m (9.6 y^-1).
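A stripped-down version of such a box model, with first-order aggregation, disaggregation, and degradation exchanging material between slow- and fast-sinking pools, is sketched below; the rate constants and initial pool sizes are illustrative rather than the fitted MedFlux values, and in the study these constants were obtained by maximum likelihood fitting to the trap data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order rate constants (per year), not the fitted values quoted above.
k_agg, k_disagg, k_deg = 0.07, 12.0, 1.2

def pigment_model(t, y):
    """Two-pool pigment model: slow-sinking and fast-sinking organic matter."""
    slow, fast = y
    d_slow = -k_agg * slow + k_disagg * fast - k_deg * slow
    d_fast = k_agg * slow - k_disagg * fast - k_deg * fast
    return [d_slow, d_fast]

sol = solve_ivp(pigment_model, (0.0, 1.0), y0=[1.0, 0.1], t_eval=np.linspace(0, 1, 11))
print(sol.y[:, -1])  # pool sizes after one year
```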
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
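The MLE (reliability-weighted) fusion rule referred to above has a compact closed form: each cue is weighted by its reliability 1/sigma^2, and the fused estimate has lower variance than either cue alone. The sketch below uses hypothetical single-trial values rather than anything from the study.

```python
import numpy as np

# Hypothetical unisensory noise levels and single-trial location estimates (degrees).
sigma_a, sigma_v = 8.0, 2.0          # auditory vs. visual localization noise
x_a, x_v = 10.0, 4.0                 # unisensory location estimates on one trial

r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2        # reliabilities
w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)      # MLE weights sum to 1

x_av = w_a * x_a + w_v * x_v             # fused (maximum likelihood) location estimate
sigma_av = np.sqrt(1.0 / (r_a + r_v))    # fused estimate is more reliable than either cue
print(f"weights: auditory={w_a:.2f}, visual={w_v:.2f}; fused={x_av:.2f} deg, sigma={sigma_av:.2f} deg")
```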
Hurricane Properties for KSC and Mid-Florida Coastal Sites
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Rawlins, Michael A.; Kross, Dennis A.
2000-01-01
Hurricane information and climatologies are needed at Kennedy Space Center (KSC), Florida, for launch operational planning purposes during the late summer and early fall Atlantic hurricane season. These results are also needed for estimating the potential magnitude of hurricane and tropical storm impacts on coastal Florida sites when storms pass within 50, 100 and 400 nm of a site. Roll-backs of the Space Shuttle and other launch vehicles on the pad are very costly when a tropical storm approaches, and a decision for the vehicle to roll back or ride out must be made. Therefore, historical Atlantic basin hurricane climatological properties were generated for use in operational planning and in the estimation of potential damage to launch vehicles, supporting equipment, buildings, etc. The historical 1885-1998 Atlantic basin hurricane data were compiled and analyzed with respect to the coastal Florida site of KSC. The statistical information generated includes hurricane and tropical storm probabilities for path, maximum wind, and lowest pressure, presented for the areas within 50, 100 and 400 nm of KSC. These statistics are then compared to similar parametric statistics for the entire Atlantic basin.
NASA Technical Reports Server (NTRS)
Fuller, H.; Demler, R.; Poulin, E.; Dantowitz, P.
1979-01-01
An evaluation was made of the potential of a steam Rankine reheat reciprocator engine to operate at high efficiency in a point-focusing distributed receiver solar thermal-electric power system. The scope of the study included the engine system and electric generator; not included were the solar collector/mirror and the steam generator/receiver. A parametric analysis of steam conditions was completed, leading to the selection of 973 K/12.1 MPa as the steam temperature/pressure for a conceptual design. A conceptual design was completed for a two-cylinder opposed engine operating at 1800 rpm directly coupled to a commercially available induction generator. A unique part of the expander design is the use of carbon/graphite piston rings to eliminate the need for oil as an upper cylinder lubricant. The evaluation included a system weight estimate of 230 kg at the mirror focal point, with the condenser mounted separately on the ground. The estimated cost of the overall system is $1932, or $90/kW for the maximum 26 kW output.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
SUMMARY Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
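The core idea of length-biased sampling, that the chance of observing a subject is proportional to its duration so observations can be down-weighted by 1/x, can be illustrated for the simple uncensored case; this sketch is not the paper's EM algorithm, and the simulated gamma data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = rng.gamma(shape=2.0, scale=1.0, size=200_000)          # unbiased population (mean = 2)
# Length-biased sample: each subject is drawn with probability proportional to its duration.
x_lb = rng.choice(x_true, size=5_000, p=x_true / x_true.sum())

w = 1.0 / x_lb                                  # down-weight long (over-sampled) durations
mean_naive = x_lb.mean()                        # biased upward by the sampling scheme
mean_corrected = np.sum(w * x_lb) / np.sum(w)   # weighted mean ~ true population mean
print(f"naive mean={mean_naive:.2f}, corrected mean={mean_corrected:.2f}, true mean=2.00")
```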
SM-1 REACTOR VESSEL COVER AND FLANGE STRESS ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayre, M.F.
1962-02-19
The maximum stress calculated for the SM-1 reactor vessel closure studs occurs during operation at full power. This value is 27,180 psi, of which 19,800 psi is tension and 7380 psi bending. This stress does not include a stress concentration factor for the effect of threads. It was conservatively assumed that the studs were initially tightened to a code allowable stress of 20,000 psi as specified in the ASME Code, rather than the lesser stress obtained by the normal operating procedure. The maximum calculated stress occurs at the outside surface of the cover, where the stress ranges from 318 psi in tension to 90,660 psi in compression. The alternating stress is 50,000 psi. According to the Navy Code, for a stress range of 50,000 psi the cover material can safely undergo a maximum of 1600 cycles. It was estimated that the SM-1 will go through approximately 000 startup and shutdown cycles during a 20-yr life period, so the calculated stress is regarded as safe. For a transient condition of 30 deg F/hr during heat-up, approximate temperature differences between the inside and outside surfaces of the cover were obtained. Temperature differentials between the inside and outside surfaces of the cover are increased by roughly 10% above the steady state condition. More exact calculations of the transient stresses did not appear necessary since they would be not more than 10% greater than the steady state thermal stress. (auth)
Code of Federal Regulations, 2012 CFR
2012-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2012-10-01 2012-10-01 false Maximum and minimum allowable operating pressure...
Code of Federal Regulations, 2011 CFR
2011-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2011-10-01 2011-10-01 false Maximum and minimum allowable operating pressure...
Code of Federal Regulations, 2013 CFR
2013-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2013-10-01 2013-10-01 false Maximum and minimum allowable operating pressure...
Code of Federal Regulations, 2014 CFR
2014-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2014-10-01 2014-10-01 false Maximum and minimum allowable operating pressure...
Code of Federal Regulations, 2010 CFR
2010-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum and minimum allowable operating pressure...
Documentation of a deep percolation model for estimating ground-water recharge
Bauer, H.H.; Vaccaro, J.J.
1987-01-01
A deep percolation model, which operates on a daily basis, was developed to estimate long-term average groundwater recharge from precipitation. It has been designed primarily to simulate recharge in large areas with variable weather, soils, and land uses, but it can also be used at any scale. The physical and mathematical concepts of the deep percolation model, its subroutines and data requirements, and input data sequence and formats are documented. The physical processes simulated are soil moisture accumulation, evaporation from bare soil, plant transpiration, surface water runoff, snow accumulation and melt, and accumulation and evaporation of intercepted precipitation. The minimum data sets for the operation of the model are daily values of precipitation and maximum and minimum air temperature, soil thickness and available water capacity, soil texture, and land use. Long-term average annual precipitation, actual daily stream discharge, monthly estimates of base flow, Soil Conservation Service surface runoff curve numbers, land surface altitude-slope-aspect, and temperature lapse rates are optional. The program is written in the FORTRAN 77 language with no enhancements and should run on most computer systems without modifications. Documentation has been prepared so that program modifications may be made for inclusion of additional physical processes or deletion of ones not considered important. (Author's abstract)
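A single-bucket caricature of the daily water balance such a model performs (ignoring snow, interception, bare-soil evaporation, and runoff, which the full model handles separately) might look like the following; the soil capacity, initial storage, and daily inputs are hypothetical.

```python
def daily_step(precip_mm, pet_mm, storage_mm, capacity_mm):
    """Advance the soil-moisture store one day; return (deep percolation, new storage) in mm."""
    storage_mm += precip_mm
    storage_mm -= min(pet_mm, storage_mm)            # actual ET limited by available soil water
    recharge = max(0.0, storage_mm - capacity_mm)    # excess above capacity drains downward
    return recharge, storage_mm - recharge

storage, capacity = 50.0, 120.0                       # mm of available soil water
for precip, pet in [(12.0, 3.0), (0.0, 4.0), (85.0, 2.0)]:   # three hypothetical days
    recharge, storage = daily_step(precip, pet, storage, capacity)
    print(f"daily recharge = {recharge:.1f} mm, storage = {storage:.1f} mm")
```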
Estimating tree crown widths for the primary Acadian species in Maine
Matthew B. Russell; Aaron R. Weiskittel
2012-01-01
In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...
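A sketch of the quantile-regression step, assuming simulated crown-width/DBH data and the statsmodels `quantreg` interface, is shown below; the coefficients, simulated data, and the choice of the 95th percentile are illustrative and are not the published Maine equations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: crown width (m) increasing with DBH (cm), right-skewed scatter.
rng = np.random.default_rng(3)
dbh = rng.uniform(5, 50, size=300)
crown_width = 0.6 + 0.18 * dbh + rng.gamma(2.0, 0.4, size=300)
df = pd.DataFrame({"dbh": dbh, "cw": crown_width})

ols_fit = smf.ols("cw ~ dbh", df).fit()             # mean ("largest typical") crown width vs. DBH
q95_fit = smf.quantreg("cw ~ dbh", df).fit(q=0.95)  # upper-quantile ("maximum") crown width vs. DBH
print(ols_fit.params, q95_fit.params, sep="\n")
```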
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-01-01
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087
Reproducibility of isopach data and estimates of dispersal and eruption volumes
NASA Astrophysics Data System (ADS)
Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.
2012-12-01
Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square-root-area plots, fitting empirical laws to the thinning relationships, and integrating over the square root of area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates, and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as using the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences arising from the hand contours of the well-constrained deposit, and to eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume is approximately 1.76 × 10^6 m^3 (geometric mean for all methods), with maximum and minimum estimates of 2.5 × 10^6 m^3 and 1.1 × 10^6 m^3. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates with methods (A), (C) and (D) show strong correlation (r = 0.8 to r = 0.9), while the correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and the correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlation between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the different methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals, etc., and more sensitive to the selection of the technique used to integrate these data.
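Method (B) with a single exponential segment has a convenient closed-form volume, V = 2*T0/k^2, obtained by fitting ln(thickness) against the square root of isopach area; the sketch below uses hypothetical isopach data and shows only one of the four integration methods compared in the study.

```python
import numpy as np

# Hypothetical isopach data: thickness of each contour and the area it encloses.
thickness_m = np.array([3.0, 2.0, 1.0, 0.5, 0.2])
area_km2 = np.array([0.05, 0.15, 0.60, 1.6, 4.2])

sqrt_a = np.sqrt(area_km2)                                   # km
slope, intercept = np.polyfit(sqrt_a, np.log(thickness_m), 1)
k, t0 = -slope, np.exp(intercept)                            # decay rate (1/km), extrapolated T0 (m)

volume_km3 = 2.0 * (t0 / 1000.0) / k**2                      # T0 converted to km before integrating
print(f"T0 = {t0:.2f} m, k = {k:.2f} /km, volume = {volume_km3:.4f} km^3")
```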
Design of a Collapse-Mode CMUT With an Embossed Membrane for Improving Output Pressure.
Yu, Yuanyu; Pun, Sio Hang; Mak, Peng Un; Cheng, Ching-Hsiang; Wang, Jiujiang; Mak, Pui-In; Vai, Mang I
2016-06-01
Capacitive micromachined ultrasonic transducers (CMUTs) have emerged as a competitive alternative to piezoelectric ultrasonic transducers, especially in medical ultrasound imaging and therapeutic ultrasound applications, which require high output pressure. However, as compared with piezoelectric ultrasonic transducers, the output pressure capability of CMUTs remains to be improved. In this paper, a novel structure is proposed by forming an embossed vibrating membrane on a CMUT cell operating in the collapse mode to increase the maximum output pressure. Using a beam model under undamped conditions and finite-element analysis simulations, the proposed embossed structure showed improvement in the maximum output pressure of the CMUT cell when the embossed pattern was placed at the estimated location of the peak deflection. As compared with a uniform-membrane CMUT cell operated in the collapse mode, the proposed CMUT cell can enhance the maximum output pressure by 51.1% and 88.1% with a single embossed pattern made of Si3N4 and nickel, respectively. The maximum output pressures were improved by 34.9% (a single Si3N4 embossed pattern) and 46.7% (a single nickel embossed pattern) over the uniform membrane when the center frequencies of both the original and embossed CMUT designs were similar.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
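The NDMMF equations themselves are not reproduced here, but the general Newton-Raphson approach to solving likelihood equations can be illustrated on a small, self-contained problem, the maximum-likelihood estimate of a gamma shape parameter; the simulated data, starting value, and tolerance below are illustrative.

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(4)
x = rng.gamma(shape=3.0, scale=2.0, size=1000)        # simulated data with known true shape

# For the gamma distribution the profile likelihood equation reduces to ln(k) - digamma(k) = s.
s = np.log(x.mean()) - np.mean(np.log(x))
k = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)   # standard starting approximation

for _ in range(20):                                    # Newton-Raphson on the score equation
    f = np.log(k) - digamma(k) - s
    fprime = 1.0 / k - polygamma(1, k)
    step = f / fprime
    k -= step
    if abs(step) < 1e-10:
        break

theta = x.mean() / k                                   # scale follows from the profiled likelihood
print(f"shape MLE = {k:.3f}, scale MLE = {theta:.3f}")
```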
NASA Astrophysics Data System (ADS)
Archip, Neculai; Fedorov, Andriy; Lloyd, Bryn; Chrisochoides, Nikos; Golby, Alexandra; Black, Peter M.; Warfield, Simon K.
2006-03-01
A major challenge in neurosurgical oncology is to achieve maximal tumor removal while avoiding postoperative neurological deficits. Therefore, estimation of brain deformation during the image-guided tumor resection process is necessary. While anatomic MRI is highly sensitive for intracranial pathology, its specificity is limited, and different pathologies may have a very similar appearance on anatomic MRI. Moreover, since fMRI and diffusion tensor imaging are not currently available during the surgery, non-rigid registration of preoperative MR with intra-operative MR is necessary. This article presents a translational research effort that aims to integrate a number of state-of-the-art technologies for MRI-guided neurosurgery at the Brigham and Women's Hospital (BWH). Our ultimate goal is to routinely provide the neurosurgeons with accurate information about brain deformation during the surgery. The current system is tested during the weekly neurosurgeries in the open magnet at the BWH. The preoperative data are processed prior to the surgery, while both rigid and non-rigid registration algorithms are run in the vicinity of the operating room. The system was tested on 9 image datasets from 3 neurosurgery cases. A method based on edge detection is used to quantitatively validate the results, with the 95% Hausdorff distance between points of the edges used to estimate the accuracy of the registration. Overall, the minimum error is 1.4 mm, the mean error 2.23 mm, and the maximum error 3.1 mm. The mean ratio between brain deformation estimation and rigid alignment is 2.07, demonstrating that our results can be 2.07 times more precise than the current technology. The major contribution of the presented work is the rigid and non-rigid alignment of the pre-operative fMRI with intra-operative 0.5T MRI achieved during neurosurgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joo, Youngdo, E-mail: Ydjoo77@postech.ac.kr; Yu, Inha; Park, Insoo
After three years of upgrading work, the Pohang Light Source-II (PLS-II) is now operating successfully. The final quantitative goal of PLS-II is top-up user-service operation with a beam current of 400 mA, to be completed by the end of 2014. During the beam store test up to 400 mA in the storage ring (SR), it was observed that the vacuum pressure around the radio frequency (RF) window of the superconducting cavity rapidly increases above the interlock level, limiting the maximum beam current that can be stored. Although the available beam current is enhanced by setting a higher RF accelerating voltage, it is better to keep the RF accelerating voltage as low as possible during long top-up operation. We investigated the cause of the window vacuum pressure increase by studying the changes in the electric field distribution at the superconducting cavity and waveguide as a function of beam current. In our simulation, an equivalent physical model was developed using a finite-difference time-domain code. The simulation revealed that the electric field amplitude at the RF window increases exponentially as the beam current increases; this high electric field amplitude causes an RF breakdown at the RF window, which is accompanied by the rapid increase of window vacuum pressure. The RF accelerating voltage of the PLS-II RF system was set to 4.95 MV, estimated from the maximum available beam current as a function of RF voltage, and the top-up operation test with a beam current of 400 mA was successfully carried out.
NASA Astrophysics Data System (ADS)
Baklanov, A.; Mahura, A.; Sørensen, J. H.
2003-06-01
There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second - during the operation stage. The suggested methodology is applied on an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed identification of the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release (e.g. 1 Bq) are performed. The analysis showed that possible deposition fractions of 10^-11 (Bq/m^2) over the Kola Peninsula, and 10^-12 to 10^-13 (Bq/m^2) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.
NASA Astrophysics Data System (ADS)
Baklanov, A.; Mahura, A.; Sørensen, J. H.
2003-03-01
There are objects with some periods of higher than normal levels of risk of accidental atmospheric releases (nuclear, chemical, biological, etc.). Such accidents or events may occur due to natural hazards, human errors, terror acts, and during transportation of waste or various operations at high risk. A methodology for risk assessment is suggested and it includes two approaches: 1) probabilistic analysis of possible atmospheric transport patterns using long-term trajectory and dispersion modelling, and 2) forecast and evaluation of possible contamination and consequences for the environment and population using operational dispersion modelling. The first approach could be applied during the preparation stage, and the second - during the operation stage. The suggested methodology is applied on an example of the most important phases (lifting, transportation, and decommissioning) of the "Kursk" nuclear submarine operation. It is found that the temporal variability of several probabilistic indicators (fast transport probability fields, maximum reaching distance, maximum possible impact zone, and average integral concentration of 137Cs) showed that the fall of 2001 was the most appropriate time for the beginning of the operation. These indicators allowed identification of the hypothetically impacted geographical regions and territories. In cases of atmospheric transport toward the most populated areas, the forecasts of possible consequences during phases of the high and medium potential risk levels based on a unit hypothetical release are performed. The analysis showed that possible deposition fractions of 10^-11 (Bq/m^2) over the Kola Peninsula, and 10^-12 to 10^-13 (Bq/m^2) for the remote areas of Scandinavia and Northwest Russia, could be observed. The suggested methodology may be used successfully for any potentially dangerous object involving risk of atmospheric release of hazardous materials of nuclear, chemical or biological nature.
NASA Astrophysics Data System (ADS)
Melis, Nikolaos S.; Konstantinou, Konstantinos; Kalogeras, Ioannis; Sokos, Efthimios; Tselentis, G.-Akis
2017-04-01
It is of great importance to rapidly assess the intensity of a felt event in a highly populated environment. Rapid and reliable information plays a key role in decision-making responses by enabling the correct first steps after felt ground shaking; it is therefore important to respond accurately to urgent societal demand using reliable information. A strong motion array is under deployment and trial operation in the area of Patras, Greece. It combines: (a) standard accelerometric stations operated by the National Observatory of Athens, Institute of Geodynamics (NOA), (b) QCN-type USB MEMS acceleration sensors deployed in schools, and (c) P-alert MEMS acceleration devices deployed in public sector buildings as well as in private dwellings. The array is intended to cover the whole city of Patras and its populated suburbs. All instruments operate in near real time and are linked to a combined Earthworm - SeisComP3 server at NOA, Athens. Rapid intensity estimation can also be performed locally by the P-alert accelerometers, but a near-real-time intensity estimation system is in operation at NOA. The procedure is based on observing the maximum PGA value at each instrument and empirically estimating the corresponding intensity. The values are also fed to a SeisComP3-based ShakeMap procedure that is served at NOA and uses the scwfparam module of SeisComP3. Earthquake activity has been recorded so far from the western Corinth Gulf, the Ionian Islands and the Achaia-Elia area, western Peloponnesus. The first phase involves correlation tests of collocated instruments and assessment of their performance for low-intensity as well as strongly felt events in the Patras city area. Expansion of the array is also under consideration, in order to cover the wider area of northwestern Peloponnesus and the Ionian islands.
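Converting a station's peak ground acceleration to an instrumental intensity typically relies on an empirical regression; the sketch below uses the coefficients of the Wald et al. (1999) California relation purely as a placeholder, since the relation actually used for the Patras system is not specified here.

```python
import numpy as np

def instrumental_intensity(pga_cm_s2):
    """Empirical PGA-to-intensity conversion (coefficients from Wald et al., 1999, used illustratively)."""
    pga = np.asarray(pga_cm_s2, dtype=float)
    mmi = 3.66 * np.log10(pga) - 1.66        # applies roughly for moderate-to-strong shaking (MMI V-VIII)
    low = 2.20 * np.log10(pga) + 1.00        # gentler slope at weak shaking
    return np.where(mmi >= 5.0, mmi, low)

print(instrumental_intensity([10.0, 50.0, 200.0]))   # PGA values in cm/s^2 (gal)
```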
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2014 CFR
2014-10-01
... for certain steel pipelines. 192.620 Section 192.620 Transportation Other Regulations Relating to... STANDARDS Operations § 192.620 Alternative maximum allowable operating pressure for certain steel pipelines..., 2, or 3 location; (2) The pipeline segment is constructed of steel pipe meeting the additional...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2011 CFR
2011-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
Code of Federal Regulations, 2012 CFR
2012-10-01
... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
... profile that is dependent upon the pipelines attributes, its geographical location, design, operating... type of threats posed by the pipeline segment, including consideration of the age, design, pipe... calculation. There are several methods available for establishing MAOP or MOP. A hydrostatic pressure test...
An economic study of an advanced technology supersonic cruise vehicle
NASA Technical Reports Server (NTRS)
Smith, C. L.; Williams, L. J.
1975-01-01
A description is given of the methods used and the results of an economic study of an advanced technology supersonic cruise vehicle. This vehicle was designed for a maximum range of 4000 n.mi. at a cruise speed of Mach 2.7, carrying 292 passengers. The economic study includes the estimation of aircraft unit cost, operating cost, and idealized cash flow and discounted cash flow return on investment. In addition, it includes a sensitivity study of the effects of unit cost, manufacturing cost, production quantity, average trip length, fuel cost, load factor, and fare on the aircraft's economic feasibility.
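The idealized discounted-cash-flow return-on-investment calculation mentioned above amounts to finding the discount rate at which the net present value of the initial purchase and the subsequent annual cash flows is zero; the sketch below uses hypothetical cash flows, not the study's cost estimates.

```python
def npv(rate, cash_flows):
    """Net present value of end-of-year cash flows, with year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    """Discounted-cash-flow ROI: the rate at which NPV = 0 (simple bisection search)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical example: aircraft purchase followed by 16 years of operating surpluses ($M).
flows = [-60.0] + [7.5] * 16
print(f"DCF return on investment = {irr(flows):.1%}")
```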
Effect of Background Pressure on the Plasma Oscillation Characteristics of the HiVHAc Hall Thruster
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Kamhawi, Hani; Lobbia, Robert B.; Brown, Daniel L.
2014-01-01
During a component compatibility test of the NASA HiVHAc Hall thruster, a high-speed camera and a set of high-speed Langmuir probes were implemented to study the effect of varying facility background pressure on thruster operation. The results show a rise in the oscillation frequency of the breathing mode with rising background pressure, which is hypothesized to be due to a shortening acceleration-ionization zone. An attempt is made to apply a simplified ingestion model to the data. The combined results are used to estimate the maximum acceptable background pressure for performance and wear testing.
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Malek, H.
1978-01-01
A clustering method, CLASSY, was developed, which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model which is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performances of CLASSY and ISOCLS on simulated and actual LACIE data are presented.
NASA Astrophysics Data System (ADS)
Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.
2009-12-01
We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides a constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship Xice = Xover × (ρsed/ρice), where Xover is the overburden thickness, ρsed is the sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm³), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the two lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. These lithologies both produce best estimates of approximately 1,000 meters of ice during the Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during the Last Glacial Maximum, due to the overprinting effect of the Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
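A minimal sketch of the overburden-to-ice conversion quoted in the abstract, Xice = Xover × (ρsed/ρice), using the stated ice density of 0.85 g/cm³. The sediment density and overburden value in the example are illustrative placeholders, not AND-1B results.

```python
# Hedged sketch: convert a compaction-derived overburden thickness into an
# equivalent ice thickness via X_ice = X_over * (rho_sed / rho_ice).
RHO_ICE = 0.85  # g/cm^3, density of glacial ice quoted in the abstract

def ice_thickness(overburden_m: float, rho_sed: float, rho_ice: float = RHO_ICE) -> float:
    """Equivalent ice thickness (m) exerting the same load as the sediment overburden."""
    return overburden_m * (rho_sed / rho_ice)

# Example: ~530 m of overburden with an assumed bulk sediment density of 1.6 g/cm^3
print(ice_thickness(530.0, rho_sed=1.6))  # roughly 1,000 m of ice
```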
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
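A hedged sketch (not the authors' code) of the relaxed fixed-point iteration analyzed here, written for a two-component univariate normal mixture: one standard likelihood-equation update is computed and then scaled by a step-size omega, with 0 < omega < 2 the range discussed for local convergence. The data and starting values are illustrative assumptions.

```python
import numpy as np

def responsibilities(x, p, mu, sigma):
    """Posterior probability that each observation came from component 1."""
    def phi(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    num = p * phi(x, mu[0], sigma[0])
    return num / (num + (1.0 - p) * phi(x, mu[1], sigma[1]))

def relaxed_update(x, p, mu, sigma, omega=1.0):
    """theta_new = theta + omega * (G(theta) - theta), G being the omega = 1 update."""
    r = responsibilities(x, p, mu, sigma)
    w = np.vstack([r, 1.0 - r])
    p_em = r.mean()
    mu_em = (w @ x) / w.sum(axis=1)
    var_em = np.array([np.sum(w[k] * (x - mu_em[k]) ** 2) / w[k].sum() for k in range(2)])
    return (p + omega * (p_em - p),
            mu + omega * (mu_em - mu),
            sigma + omega * (np.sqrt(var_em) - sigma))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])
p, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    p, mu, sigma = relaxed_update(x, p, mu, sigma, omega=1.2)  # over-relaxed step
print(round(p, 3), mu.round(3), sigma.round(3))
```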
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Simulation of extreme reservoir level distribution with the SCHADEX method (EXTRAFLO project)
NASA Astrophysics Data System (ADS)
Paquet, Emmanuel; Penot, David; Garavaglia, Federico
2013-04-01
The standard practice for the design of dam spillway structures and gates is to consider the maximum reservoir level reached for a given hydrologic scenario. This scenario has several components: peak discharge, flood volumes over different durations, discharge gradients, etc. Within a probabilistic analysis framework, several scenarios can be associated with different return times, although a reference return level (e.g. 1000 years) is often prescribed by local regulation rules or usual practice. Using a continuous simulation method for extreme flood estimation is a convenient way to provide a great variety of hydrological scenarios to feed a hydraulic model of dam operation: flood hydrographs are explicitly simulated by a rainfall-runoff model fed by a stochastic rainfall generator. The maximum reservoir level reached will be conditioned by the scale and the dynamics of the generated hydrograph, by the filling of the reservoir prior to the flood, and by the dam gates and spillway operation during the event. The simulation of a great number of floods allows building a probabilistic distribution of maximum reservoir levels. A design value can be chosen at a definite return level. An alternative approach is presented here, based on the SCHADEX method for extreme flood estimation proposed by Paquet et al. (2006, 2013). SCHADEX is a so-called "semi-continuous" stochastic simulation method in that flood events are simulated on an event basis and are superimposed on a continuous simulation of the catchment saturation hazard using rainfall-runoff modelling. The SCHADEX process works at the study time-step (e.g. daily), and the peak flow distribution is deduced from the simulated daily flow distribution by a peak-to-volume ratio. A reference hydrograph relevant for extreme floods is proposed. In the standard version of the method, both the peak-to-volume ratio and the reference hydrograph are constant. An enhancement of this method is presented, with variable peak-to-volume ratios and hydrographs applied to each simulated event. This allows accounting for different flood dynamics, depending on the season, the generating precipitation event, the soil saturation state, etc. In both cases, a hydraulic simulation of dam operation is performed, in order to compute the distribution of maximum reservoir levels. Results are detailed for an extreme return level, showing that a 1000-year return level of the reservoir can be reached during flood events whose components (peaks, volumes) are not necessarily associated with such a return level. The presentation will be illustrated by the example of a fictitious dam on the Tech River at Reynes (South of France, 477 km²). This study has been carried out within the EXTRAFLO project, Task 8 (https://extraflo.cemagref.fr/). References: Paquet, E., Gailhard, J. and Garçon, R. (2006), Evolution of the GRADEX method: improvement by atmospheric circulation classification and hydrological modeling, La Houille Blanche, 5, 80-90. doi:10.1051/lhb:2006091. Paquet, E., Garavaglia, F., Garçon, R. and Gailhard, J. (2012), The SCHADEX method: a semi-continuous rainfall-runoff simulation for extreme flood estimation, Journal of Hydrology, under revision
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
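A hedged sketch of the style of rating-curve regression LOADEST calibrates, not its AMLE/MLE/LAD implementations: the natural log of load is regressed on functions of streamflow and decimal time, and the fitted model is then used to estimate loads. The variable names, the simple least-squares fit (equivalent to MLE under normal, uncensored residuals), and the synthetic data are illustrative assumptions.

```python
import numpy as np

def design_matrix(lnQ, dectime):
    """Explanatory variables: ln Q, (ln Q)^2, seasonal harmonics, linear trend."""
    return np.column_stack([
        np.ones_like(lnQ), lnQ, lnQ**2,
        np.sin(2.0 * np.pi * dectime), np.cos(2.0 * np.pi * dectime),
        dectime,
    ])

def calibrate(lnQ, dectime, ln_load):
    beta, *_ = np.linalg.lstsq(design_matrix(lnQ, dectime), ln_load, rcond=None)
    return beta

def estimate_load(beta, lnQ, dectime):
    # Naive back-transform; LOADEST additionally applies a retransformation bias correction.
    return np.exp(design_matrix(lnQ, dectime) @ beta)

# Synthetic calibration series (illustrative only)
rng = np.random.default_rng(1)
dectime = np.linspace(0.0, 3.0, 120)
lnQ = 1.0 + 0.5 * np.sin(2.0 * np.pi * dectime) + rng.normal(0.0, 0.2, dectime.size)
ln_load = 0.3 + 1.2 * lnQ + 0.4 * np.sin(2.0 * np.pi * dectime) + rng.normal(0.0, 0.1, dectime.size)

beta = calibrate(lnQ, dectime, ln_load)
print(estimate_load(beta, lnQ[:5], dectime[:5]))
```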
49 CFR 174.86 - Maximum allowable operating speed.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Handling of Placarded Rail Cars, Transport Vehicles and Freight Containers § 174.86 Maximum allowable operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in...
49 CFR 174.86 - Maximum allowable operating speed.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Handling of Placarded Rail Cars, Transport Vehicles and Freight Containers § 174.86 Maximum allowable operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in...
49 CFR 174.86 - Maximum allowable operating speed.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Handling of Placarded Rail Cars, Transport Vehicles and Freight Containers § 174.86 Maximum allowable operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in...
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
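A hedged sketch of the general idea rather than the authors' exact estimator: a Chao1 richness estimate computed from singleton and doubleton counts, with the usual log-normal confidence interval truncated at a known maximum number of classes so that the interval respects both bounds. The variance and interval formulas are the standard Chao1 ones; the example counts are invented.

```python
import math

def chao1_doubly_bounded(abundances, max_classes, z=1.96):
    """abundances: observed count for every class seen at least once."""
    s_obs = len(abundances)
    f1 = sum(1 for n in abundances if n == 1)   # singletons
    f2 = sum(1 for n in abundances if n == 2)   # doubletons
    if f2 > 0:
        r = f1 / f2
        s_hat = s_obs + f1 * f1 / (2.0 * f2)
        var = f2 * (0.5 * r**2 + r**3 + 0.25 * r**4)
    else:                                       # bias-corrected form when f2 = 0
        s_hat = s_obs + f1 * (f1 - 1) / 2.0
        var = f1 * (f1 - 1) / 2.0 + f1 * (2 * f1 - 1) ** 2 / 4.0
    d = max(s_hat - s_obs, 1e-9)
    k = math.exp(z * math.sqrt(math.log(1.0 + var / d**2)))
    lower, upper = s_obs + d / k, s_obs + d * k
    return s_hat, max(lower, s_obs), min(upper, max_classes)   # enforce both bounds

# Example: per-class counts for observed tool classes, with at most 12 possible classes
print(chao1_doubly_bounded([5, 1, 1, 2, 3, 1, 2], max_classes=12))
```

In this toy example the unconstrained upper confidence bound exceeds the known maximum of 12 classes, so the doubly-bounded interval caps it at 12, mirroring the behavior the paper motivates.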
NASA Technical Reports Server (NTRS)
1948-01-01
An altitude-test-chamber investigation was conducted to determine the operational characteristics and altitude blow-out limits of a Solar afterburner in a 24C engine. At rated engine speed and maximum permissible turbine-discharge temperature, the altitude limit as determined by combustion blow-out occurred as a band of unstable operation of about 8000 feet altitude in width with maximum altitude limits from 32,000 feet at a Mach number of 0.3 to about 42,000 feet at a Mach number of 1.0. The maximum fuel-air ratio of the afterburner, as limited by maximum permissible turbine-discharge gas temperatures at rated engine speed, varied between 0.0295 and 0.0380 over a range of flight Mach numbers from 0.25 to 1.0 and at altitudes of 20,000 and 30,000 feet. Over this range of operating conditions, the fuel-air ratio at which lean blow-out occurred was from 10 to 19 percent below these maximum fuel-air ratios. Combustion was very smooth and uniform during operation; however, ignition of the burner was very difficult throughout the investigation. A failure of the flame holder after 12 hours and 15 minutes of afterburner operation resulted in termination of the investigation.
Maximum angular accuracy of pulsed laser radar in photocounting limit.
Elbaum, M; Diament, P; King, M; Edelson, W
1977-07-01
To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
Extracting volatility signal using maximum a posteriori estimation
NASA Astrophysics Data System (ADS)
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose, maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and hence log-return marginal distributions with heavy tails. We consider two routes for choosing the regularization and compare our MAP estimate to a realized volatility measure for three exchange rates.
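A hedged sketch of the flavor of this approach, not the paper's exact HMM: a MAP estimate of a latent log-volatility path given log-return proxies, with a Gaussian fit term and a double-exponential (Laplace) prior on increments, which permits sharp jumps. The smoothing parameter and the smoothed absolute value are assumptions made to keep the example simple and runnable.

```python
import numpy as np
from scipy.optimize import minimize

def map_log_volatility(y, lam=5.0, eps=1e-6):
    def neg_log_posterior(h):
        fit = 0.5 * np.sum((y - h) ** 2)          # Gaussian likelihood proxy
        jumps = np.sqrt(np.diff(h) ** 2 + eps)    # smoothed |h_t - h_{t-1}|
        return fit + lam * np.sum(jumps)          # Laplace prior on increments
    res = minimize(neg_log_posterior, x0=y.copy(), method="L-BFGS-B")
    return res.x

# Synthetic returns with one volatility regime shift (illustrative)
rng = np.random.default_rng(2)
sigma = np.concatenate([np.full(200, 0.5), np.full(200, 2.0)])
r = rng.normal(0.0, sigma)
y = np.log(r ** 2 + 1e-12)          # log-return proxy for log-volatility
h_map = map_log_volatility(y)
print(h_map[:5], h_map[-5:])
```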
Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung
2014-01-01
The decay exponent of the landslide frequency-area distribution is widely used for assessing the consequences of landslides, with some studies arguing that this exponent is universal and independent of mechanisms and environmental settings. However, the documented exponents are diverse, and data processing is hypothesized to be the cause of this inconsistency. An elaborate statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from a predefined inverse-gamma distribution and then analyzed by three data processing procedures (logarithmic binning, LB; normalized logarithmic binning, NLB; and cumulative distribution function, CDF). Five different bin widths were also considered when applying LB and NLB. Maximum likelihood estimation was then used to estimate the exponents. The results showed that the exponents estimated by CDF were unbiased, while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with landslide number and bin width. The standard deviations of the estimated exponents were dependent not just on the landslide number but also on the binning method and bin width. Both extremely few and plentiful landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse exponents documented in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes.
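A hedged sketch of the contrast the study draws: estimating a power-law decay exponent from unbinned (CDF-style) data via maximum likelihood versus fitting a line to logarithmically binned counts. The Pareto-tail MLE below (alpha = 1 + n / sum(log(x/x_min))) is a standard estimator used purely for illustration; the study itself used an inverse-gamma frequency-area model, and the data here are synthetic.

```python
import numpy as np

def mle_exponent(areas, x_min):
    """Unbinned maximum likelihood estimate of the tail exponent for areas >= x_min."""
    x = np.asarray(areas)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

def binned_slope(areas, x_min, n_bins=15):
    """Log-binned counts plus a least-squares slope (prone to the bias discussed)."""
    x = np.asarray(areas)
    x = x[x >= x_min]
    edges = np.logspace(np.log10(x_min), np.log10(x.max()), n_bins + 1)
    counts, _ = np.histogram(x, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = counts > 0
    dens = counts[keep] / widths[keep]
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(dens), 1)
    return -slope

rng = np.random.default_rng(3)
samples = 1e3 * (1.0 - rng.random(5000)) ** (-1.0 / 1.4)   # Pareto tail, true exponent 2.4
print(mle_exponent(samples, 1e3), binned_slope(samples, 1e3))
```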
NASA Astrophysics Data System (ADS)
Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.
2018-03-01
Rain height is one of the significant parameters for predicting rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations, with the parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness-of-fit measures. The three-parameter Dagum distribution provides an appropriate model of rain height over Durban, with a root mean square error of 0.26. Also, the shape and scale parameters of the distribution show a wide variation. The exceedance value for 0.01% of the time indicates a high probability of rain attenuation at higher frequencies.
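A hedged sketch of fitting a three-parameter Dagum distribution by maximum likelihood. The density f(x; a, b, p) = (a·p/x)·(x/b)^(a·p) / ((x/b)^a + 1)^(p+1) is the standard Dagum form; the optimizer, starting values, and synthetic "rain height" data are assumptions for illustration only and are not the study's data or fitted parameters.

```python
import numpy as np
from scipy.optimize import minimize

def dagum_logpdf(x, a, b, p):
    logz = np.log(x / b)
    # log f written with logaddexp for numerical stability
    return (np.log(a) + np.log(p) - np.log(x)
            + a * p * logz - (p + 1.0) * np.logaddexp(a * logz, 0.0))

def fit_dagum(x):
    def nll(log_params):
        a, b, p = np.exp(log_params)      # keep all three parameters positive
        return -np.sum(dagum_logpdf(x, a, b, p))
    res = minimize(nll, x0=np.log([2.0, np.median(x), 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(7)
rain_height_km = rng.gamma(shape=20.0, scale=0.22, size=1800)  # synthetic, ~4.4 km mean
print(fit_dagum(rain_height_km))   # estimated (shape a, scale b, shape p)
```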
3D tomographic reconstruction using geometrical models
NASA Astrophysics Data System (ADS)
Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.
1997-04-01
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
Quantum demultiplexer of quantum parameter-estimation information in quantum networks
NASA Astrophysics Data System (ADS)
Xie, Yanqing; Huang, Yumeng; Wu, Yinzhong; Hao, Xiang
2018-05-01
The quantum demultiplexer is constructed from a series of unitary operators and multipartite entangled states. It is used to realize information broadcasting from an input node to multiple output nodes in quantum networks. The scheme of quantum network communication with respect to phase estimation is put forward through the demultiplexer subject to amplitude damping noise. Generalized partial measurements can be applied to protect the transfer efficiency from environmental noise in the protocol. It is found that there are optimal coherent states which can be prepared to enhance the transmission of phase estimation. The dynamics of the state fidelity and quantum Fisher information are investigated to evaluate the feasibility of the network communication. While the state fidelity deteriorates rapidly, the quantum Fisher information can be enhanced to a maximum value and then decreases slowly. The memory effect of the environment induces oscillations of the fidelity and quantum Fisher information. Adjusting the strength of the partial measurements helps to increase the quantum Fisher information.
Using Atmosphere-Forest Measurements To Examine The Potential For Reduced Downwind Dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viner, B.
2015-10-13
A 2-D dispersion model was developed to address how airborne plumes interact with the forest at Savannah River Site. Parameters describing turbulence and mixing of the atmosphere within and just above the forest were estimated using measurements of water vapor or carbon dioxide concentration made at the Aiken AmeriFlux tower for a range of stability and seasonal conditions. The greatest mixing of an airborne plume into the forest was found for 1) very unstable environments, when atmospheric turbulence is usually at a maximum, and 2) very stable environments, when the plume concentration at the forest top is at a maximum and small amounts of turbulent mixing can move a substantial portion of the plume into the forest. Plume interactions with the forest during stable periods are of particular importance because these conditions are usually considered the worst-case scenario for downwind effects from a plume. The pattern of plume mixing into the forest was similar during the year except during summer, when the amount of plume mixed into the forest was nearly negligible for all but stable periods. If the model results indicating increased deposition into the forest during stable conditions can be confirmed, it would allow for a reduction in the limitations that restrict facility operations while maintaining conservative estimates for downwind effects. Continuing work is planned to confirm these results as well as estimate specific deposition velocity values for use in toolbox models used in regulatory roles.
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems
NASA Astrophysics Data System (ADS)
Mazmuder, R. K.; Haidar, S.
1992-12-01
An efficient maximum power point tracker (MPPT) has been developed and can be used with a photovoltaic (PV) array and a load that requires a lower voltage than the PV array voltage. The MPPT makes the PV array operate at the maximum power point (MPP) under all insolation and temperature conditions, ensuring that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.
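A hedged sketch of a perturb-and-observe tracking loop of the kind such a microprocessor-controlled MPPT might run; the abstract does not state the algorithm, so this is a generic illustration. read_voltage, read_current, and set_duty_cycle stand in for hypothetical hardware-interface routines, and the toy PV model at the bottom exists only to make the example runnable.

```python
def perturb_and_observe(read_voltage, read_current, set_duty_cycle,
                        duty=0.5, step=0.01, iterations=500):
    prev_power = read_voltage() * read_current()
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty_cycle(duty)
        power = read_voltage() * read_current()
        if power < prev_power:        # last perturbation moved away from the MPP,
            direction = -direction    # so reverse the search direction
        prev_power = power
    return duty

# Toy converter/array model whose output power peaks near duty = 0.5 (illustrative)
state = {"duty": 0.5}
def set_duty(d): state["duty"] = d
def volts(): return 40.0 * (1.0 - state["duty"])
def amps(): return 8.0 * state["duty"]

print(perturb_and_observe(volts, amps, set_duty))   # converges near 0.5
```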
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Code of Federal Regulations, 2010 CFR
2010-07-01
... rural HMIWI HMIWI a with dry scrubber followed by fabric filter HMIWI a with wet scrubber HMIWI a with dry scrubber followed by fabric filter and wet scrubber Maximum operating parameters: Maximum charge rate Once per charge Once per charge ✔ ✔ ✔ ✔ Maximum fabric filter inlet temperature Continuous Once...
NASA Astrophysics Data System (ADS)
Bargaoui, Zoubeida Kebaili; Bardossy, Andràs
2015-10-01
The paper aims to develop research on the spatial variability of heavy rainfall event estimation using spatial copula analysis. To demonstrate the methodology, short time resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations at a continuous 30 min time scale recorded over a network of 17 rain gauges for the period July 1989-July 2004. The analysis is performed aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme copula models, the Husler-Reiss model and the Gumbel model, are investigated. Both involve a single parameter to be estimated. Thus, model fitting is performed for every pair of stations at a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation is adopted with censoring, by analogy with methods of univariate estimation combining historical and paleoflood information with systematic data. Only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated and helps evaluate model adequacy, which is assessed using the Kolmogorov-Smirnov distance test. In order to assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted. Hence, the extremal coefficient is reported as a function of the distance between stations. If it is less than 1.7, stations are interpreted as dependent in the extremes. The analysis of the fitted extremal coefficients with respect to inter-station distance highlights two regimes with different dependence structures: a short spatial extent regime linked to short duration intervals (from 30 min to 6 h) with an extent of about 8 km, and a large spatial extent regime related to longer rainfall intervals (from 12 h to 24 h) with an extent of 34 to 38 km.
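A small hedged sketch of how a fitted Gumbel copula parameter maps to the extremal coefficient used in the abstract's dependence classification. For the Gumbel (logistic) extreme-value copula with parameter theta >= 1, the bivariate extremal coefficient is commonly written as 2^(1/theta); values near 1 indicate asymptotic dependence and 2 indicates independence. The 1.7 cut-off is the one quoted in the abstract; the fitted theta values below are illustrative placeholders.

```python
def gumbel_extremal_coefficient(theta: float) -> float:
    """Extremal coefficient of the Gumbel extreme-value copula with parameter theta >= 1."""
    return 2.0 ** (1.0 / theta)

for distance_km, theta in [(5, 2.4), (15, 1.6), (35, 1.1)]:
    ec = gumbel_extremal_coefficient(theta)
    tag = "dependent" if ec < 1.7 else "independent"
    print(f"{distance_km:>3} km: extremal coefficient {ec:.2f} -> {tag} in the extremes")
```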
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and the L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
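A hedged sketch of the two fitting routes described (annual-maximum series to GEV, partial-duration series to GPD) using scipy's maximum-likelihood fitters. The L-moment route, threshold-selection diagnostics, and de-clustering used in the study are not reproduced here, and the data are synthetic placeholders.

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(4)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=(31, 365))   # 31 synthetic "years"

# Annual-maximum series -> GEV by maximum likelihood
annual_max = daily_rain.max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(annual_max)

# Partial-duration series -> GPD on excesses over a high threshold
threshold = np.quantile(daily_rain, 0.995)
excesses = daily_rain[daily_rain > threshold] - threshold
c_gpd, _, scale_gpd = genpareto.fit(excesses, floc=0.0)

# 50-year return levels from each model (lambda = mean exceedances per year for the PDS route)
lam = excesses.size / daily_rain.shape[0]
rl_gev = genextreme.ppf(1.0 - 1.0 / 50.0, c_gev, loc_gev, scale_gev)
rl_gpd = threshold + genpareto.ppf(1.0 - 1.0 / (50.0 * lam), c_gpd, 0.0, scale_gpd)
print(rl_gev, rl_gpd)
```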
ERIC Educational Resources Information Center
Beauducel, Andre; Herzberg, Philipp Yorck
2006-01-01
This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…
ERIC Educational Resources Information Center
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
40 CFR 420.134 - New source performance standards (NSPS).
Code of Federal Regulations, 2010 CFR
2010-07-01
... Source Performance Standards (NSPS) Pollutant Maximum daily 1 Maximum monthly avg. 1 TSS 0.00998 0.00465... operations. Subpart M—New Source Performance Standards (NSPS) Pollutant Maximum daily 1 Maximum monthly avg...
40 CFR 420.134 - New source performance standards (NSPS).
Code of Federal Regulations, 2011 CFR
2011-07-01
... Source Performance Standards (NSPS) Pollutant Maximum daily 1 Maximum monthly avg. 1 TSS 0.00998 0.00465... operations. Subpart M—New Source Performance Standards (NSPS) Pollutant Maximum daily 1 Maximum monthly avg...
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
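A hedged sketch contrasting the two estimator families the report discusses: a Gaussian kernel density estimate with an automatically chosen scaling factor, and a crude discrete penalized-likelihood fit with a roughness penalty on the log-density. Neither is the report's exact algorithm; the data, grid, and penalty weight are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(5)
sample = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 200)])

# Kernel estimator; Scott's rule chooses the kernel scaling factor from the data
kde = gaussian_kde(sample, bw_method="scott")
grid = np.linspace(-4.0, 6.0, 200)
kde_density = kde(grid)

# Discrete maximum penalized likelihood on a histogram grid
edges = np.linspace(-4.0, 6.0, 41)
counts, _ = np.histogram(sample, bins=edges)
width = edges[1] - edges[0]

def neg_penalized_loglik(log_p, lam=50.0):
    p = np.exp(log_p)
    p = p / (p.sum() * width)                       # normalize to a density
    loglik = np.sum(counts * np.log(p * width))     # multinomial cell log-likelihood
    roughness = np.sum(np.diff(log_p, n=2) ** 2)    # second-difference penalty
    return -(loglik - lam * roughness)

res = minimize(neg_penalized_loglik, np.zeros(counts.size), method="L-BFGS-B")
mple_density = np.exp(res.x)
mple_density /= mple_density.sum() * width
print(kde_density[:3], mple_density[:3])
```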
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
Thermoelectric Energy Conversion Technology for High-Altitude Airships
NASA Technical Reports Server (NTRS)
Choi, Sang H.; Elliott, James R.; King, Glen C.; Park, Yeonjoon; Kim, Jae-Woo; Chu, Sang-Hyon
2011-01-01
The High Altitude Airship (HAA) has various potential applications and mission scenarios that require onboard energy harvesting and power distribution systems. The power technology for HAA maneuverability and mission-oriented applications must come from its surroundings, e.g., solar power. The energy harvesting system considered for HAA is based on the advanced thermoelectric (ATE) materials being developed at NASA Langley Research Center. The materials selected for ATE are silicon germanium (SiGe) and bismuth telluride (Bi2Te3), in multiple layers. The layered structure of the advanced TE materials is specifically engineered to provide maximum efficiency for the corresponding range of operational temperatures. For three layers of the advanced TE materials that operate at high, medium, and low temperatures, correspondingly, in a tandem mode, the cascaded efficiency is estimated to be greater than 60 percent.
Tougaard, Jakob; Henriksen, Oluf Damsgaard; Miller, Lee A
2009-06-01
Underwater noise was recorded from three different types of wind turbines in Denmark and Sweden (Middelgrunden, Vindeby, and Bockstigen-Valar) during normal operation. Wind turbine noise was only measurable above ambient noise at frequencies below 500 Hz. Total sound pressure level was in the range 109-127 dB re 1 microPa rms, measured at distances between 14 and 20 m from the foundations. The 1/3-octave noise levels were compared with audiograms of harbor seals and harbor porpoises. Maximum 1/3-octave levels were in the range 106-126 dB re 1 microPa rms. Maximum range of audibility was estimated under two extreme assumptions on transmission loss (3 and 9 dB per doubling of distance, respectively). Audibility was low for harbor porpoises extending 20-70 m from the foundation, whereas audibility for harbor seals ranged from less than 100 m to several kilometers. Behavioral reactions of porpoises to the noise appear unlikely except if they are very close to the foundations. However, behavioral reactions from seals cannot be excluded up to distances of a few hundred meters. It is unlikely that the noise reaches dangerous levels at any distance from the turbines and the noise is considered incapable of masking acoustic communication by seals and porpoises.
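A hedged back-of-envelope sketch of the audibility-range logic described: the distance at which a measured 1/3-octave level (taken at a reference range) falls to a species' hearing threshold, under a transmission loss of either 3 or 9 dB per doubling of distance. The levels and thresholds below are illustrative, not the paper's measured values.

```python
def audible_range_m(level_db, ref_range_m, threshold_db, tl_per_doubling_db):
    """Distance at which the received level drops to the hearing threshold."""
    doublings = (level_db - threshold_db) / tl_per_doubling_db
    return ref_range_m * 2.0 ** doublings

for tl in (3.0, 9.0):
    print(tl, "dB/doubling ->", round(audible_range_m(120.0, 15.0, 95.0, tl)), "m")
```

With a 25 dB excess above threshold, the two transmission-loss assumptions give audible ranges of roughly 100 m versus several kilometers, which is the kind of spread the abstract reports between its two extreme assumptions.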
Statistical field estimators for multiscale simulations.
Eapen, Jacob; Li, Ju; Yip, Sidney
2005-11-01
We present a systematic approach for generating smooth and accurate fields from particle simulation data using the notions of statistical inference. As an extension to a parametric representation based on the maximum likelihood technique previously developed for velocity and temperature fields, a nonparametric estimator based on the principle of maximum entropy is proposed for particle density and stress fields. Both estimators are applied to represent molecular dynamics data on shear-driven flow in an enclosure which exhibits a high degree of nonlinear characteristics. We show that the present density estimator is a significant improvement over ad hoc bin averaging and is also free of systematic boundary artifacts that appear in the method of smoothing kernel estimates. Similarly, the velocity fields generated by the maximum likelihood estimator do not show any edge effects that can be erroneously interpreted as slip at the wall. For low Reynolds numbers, the velocity fields and streamlines generated by the present estimator are benchmarked against Newtonian continuum calculations. For shear velocities that are a significant fraction of the thermal speed, we observe a form of shear localization that is induced by the confining boundary.
Ollson, Christopher A; Whitfield Aslund, Melissa L; Knopper, Loren D; Dan, Tereza
2014-01-01
The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste (EFW) thermal treatment facility as part of a long term strategy for the management of their municipal solid waste. In this paper we present the results of a comprehensive ecological risk assessment (ERA) for this planned facility, based on baseline sampling and site specific modeling to predict facility-related emissions, which was subsequently accepted by regulatory authorities. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and the maximum design capacity (400,000 tonnes per year). In general, calculated ecological hazard quotients (EHQs) and screening ratios (SRs) for receptors did not exceed the benchmark value (1.0). The only exceedances noted were generally due to existing baseline media concentrations, which did not differ from those expected for similar unimpacted sites in Ontario. This suggests that these exceedances reflect conservative assumptions applied in the risk assessment rather than actual potential risk. However, under predicted upset conditions at 400,000 tonnes per year (i.e., facility start-up, shutdown, and loss of air pollution control), a potential unacceptable risk was estimated for freshwater receptors with respect to benzo(g,h,i)perylene (SR=1.1), which could not be attributed to baseline conditions. Although this slight exceedance reflects a conservative worst-case scenario (upset conditions coinciding with worst-case meteorological conditions), further investigation of potential ecological risk should be performed if this facility is expanded to the maximum operating capacity in the future. © 2013.
Early disaster response in Haiti: the Israeli field hospital experience.
Kreiss, Yitshak; Merin, Ofer; Peleg, Kobi; Levy, Gad; Vinker, Shlomo; Sagi, Ram; Abargel, Avi; Bartal, Carmi; Lin, Guy; Bar, Ariel; Bar-On, Elhanan; Schwaber, Mitchell J; Ash, Nachman
2010-07-06
The earthquake that struck Haiti in January 2010 caused an estimated 230,000 deaths and injured approximately 250,000 people. The Israel Defense Forces Medical Corps Field Hospital was fully operational on site only 89 hours after the earthquake struck and was capable of providing sophisticated medical care. During the 10 days the hospital was operational, its staff treated 1111 patients, hospitalized 737 patients, and performed 244 operations on 203 patients. The field hospital also served as a referral center for medical teams from other countries that were deployed in the surrounding areas. The key factor that enabled rapid response during the early phase of the disaster from a distance of 6000 miles was a well-prepared and trained medical unit maintained on continuous alert. The prompt deployment of advanced-capability field hospitals is essential in disaster relief, especially in countries with minimal medical infrastructure. The changing medical requirements of people in an earthquake zone dictate that field hospitals be designed to operate with maximum flexibility and versatility regarding triage, staff positioning, treatment priorities, and hospitalization policies. Early coordination with local administrative bodies is indispensable.
Preliminary safety analysis of the Baita Bihor radioactive waste repository, Romania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, Richard; Bond, Alex; Watson, Sarah
2007-07-01
A project funded under the European Commission's Phare Programme 2002 has undertaken an in-depth analysis of the operational and post-closure safety of the Baita Bihor repository. The repository has accepted low- and some intermediate-level radioactive waste from industry, medical establishments and research activities since 1985, and the current estimate is that disposals might continue for around another 20 to 35 years. The analysis of the operational and post-closure safety of the Baita Bihor repository was carried out in two iterations, with the second iteration resulting in reduced uncertainties, largely as a result of taking into account new information on the hydrology and hydrogeology of the area, collected as part of the project. Impacts were evaluated for the maximum potential inventory that might be available for disposal at Baita Bihor for a number of operational and post-closure scenarios and associated conceptual models. The results showed that calculated impacts were below the relevant regulatory criteria. In light of the assessment, a number of recommendations relating to repository operation, optimisation of repository engineering and waste disposals, and environmental monitoring were made. (authors)
Flint, L.E.; Flint, A.L.
2008-01-01
Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
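A hedged sketch of the style of regression described, not the study's fitted model: daily maximum stream temperature regressed on seasonal harmonics, daily maximum air temperature, simulated solar radiation, and vapor saturation deficit. All data and coefficients below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
doy = np.arange(1, 1462)                      # roughly four years of days
season_s = np.sin(2.0 * np.pi * doy / 365.25)
season_c = np.cos(2.0 * np.pi * doy / 365.25)
air_max = 18.0 + 10.0 * season_s + rng.normal(0.0, 2.0, doy.size)          # deg C
solar = 250.0 + 150.0 * season_s + rng.normal(0.0, 20.0, doy.size)         # W/m^2
vpd = np.clip(0.8 + 0.6 * season_s + rng.normal(0.0, 0.2, doy.size), 0.05, None)  # kPa
stream_max = 12.0 + 0.45 * air_max + 0.01 * solar + 2.0 * vpd + rng.normal(0.0, 0.9, doy.size)

X = np.column_stack([np.ones_like(doy, dtype=float), season_s, season_c, air_max, solar, vpd])
beta, *_ = np.linalg.lstsq(X, stream_max, rcond=None)
resid = stream_max - X @ beta
print("coefficients:", np.round(beta, 3), " SE of estimate (deg C):", resid.std().round(2))
```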
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
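A hedged sketch of the classical maximum entropy method of moments that the introduction reviews, not the Bayesian extension the paper develops: find p(x) proportional to exp(-sum_k lambda_k x^k) whose low-order moments match given values, by minimizing the convex dual on a grid. The grid, the moment order, and the target moments are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
features = np.vstack([x, x**2])        # moment constraints on E[x] and E[x^2]
targets = np.array([0.5, 1.5])         # illustrative target moments

def integrate(vals):
    return vals.sum() * dx             # simple Riemann sum on the uniform grid

def dual(lam):
    # log normalizer of exp(-lam . x^k) plus lam . targets; its minimizer matches the moments
    log_q = np.clip(-(lam @ features), -700.0, 700.0)   # clip to avoid overflow
    return np.log(integrate(np.exp(log_q))) + lam @ targets

lam = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
p = np.exp(np.clip(-(lam @ features), -700.0, 700.0))
p /= integrate(p)
print("fitted moments:", integrate(x * p), integrate(x**2 * p))   # close to the targets
```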
Ryo, Masahiro; Iwasaki, Yuichi; Yoshimura, Chihiro; Saavedra V., Oliver C.
2015-01-01
Alteration of the spatial variability of natural flow regimes has been less studied than that of the temporal variability, despite its ecological importance for river ecosystems. Here, we aimed to quantify the spatial patterns of flow regime alterations along a river network in the Sagami River, Japan, by estimating river discharge under natural and altered flow conditions. We used a distributed hydrological model, which simulates hydrological processes spatiotemporally, to estimate 20 years of daily river discharge along the river network. Then, 33 hydrologic indices (i.e., Indicators of Hydrologic Alteration) were calculated from the simulated discharge to estimate the spatial patterns of their alterations. Some hydrologic indices were relatively well estimated, such as the magnitude and timing of maximum flows, monthly median flows, and the frequency of low and high flow pulses. The accuracy was evaluated with correlation analysis (r > 0.4) and the Kolmogorov–Smirnov test (α = 0.05) by comparing these indices calculated from both observed and simulated discharge. The spatial patterns of the flow regime alterations varied depending on the hydrologic indices. For example, both the median flow in August and the frequency of high flow pulses were reduced by a maximum of approximately 70%, but these strongest alterations were detected at different locations (i.e., on the mainstream and the tributary, respectively). These results are likely caused by the different operational purposes of multiple water control facilities. The results imply that evaluation only at discharge gauges is insufficient to capture the alteration of the flow regime. Our findings clearly emphasize the importance of evaluating the spatial pattern of flow regime alteration on a river network where discharge is affected by multiple water control facilities. PMID:26207997
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by 40%. (USGS)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... project. The normal daily operation cycle involves pumping water from the lower reservoir to the upper... acre upper reservoir with 1,087 acre-feet of usable storage between the maximum operating elevation of... lower reservoir with 1,221 acre-feet of usable storage between the maximum operating elevation of 10,002...
Estimation of eye lens doses received by pediatric interventional cardiologists.
Alejo, L; Koren, C; Ferrer, C; Corredoira, E; Serrada, A
2015-09-01
The maximum Hp(0.07) dose to the eye lens received in a year by pediatric interventional cardiologists has been estimated. Optically stimulated luminescence dosimeters were placed on the eyes of an anthropomorphic phantom, whose position in the room simulates the most common irradiation conditions. The maximum workload was considered, with data collected from procedures performed in the hospital. None of the maximum values obtained exceeds the dose limit of 20 mSv recommended by the ICRP. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Csank, Jeffrey
2015-01-01
Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95 percent response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76 percent. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76 percent. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Csank, Jeffrey T.
2015-01-01
Designing a closed-loop controller for an engine requires balancing trade-offs between performance and operability of the system. One such trade-off is the relationship between the 95% response time and minimum high-pressure compressor (HPC) surge margin (SM) attained during acceleration from idle to takeoff power. Assuming a controller has been designed to meet some specification on response time and minimum HPC SM for a mid-life (nominal) engine, there is no guarantee that these limits will not be violated as the engine ages, particularly as it reaches the end of its life. A characterization for the uncertainty in this closed-loop system due to aging is proposed that defines elliptical boundaries to estimate worst-case performance levels for a given control design point. The results of this characterization can be used to identify limiting design points that bound the possible controller designs yielding transient results that do not exceed specified limits in response time or minimum HPC SM. This characterization involves performing Monte Carlo simulation of the closed-loop system with controller constructed for a set of trial design points and developing curve fits to describe the size and orientation of each ellipse; a binary search procedure is then employed that uses these fits to identify the limiting design point. The method is demonstrated through application to a generic turbofan engine model in closed-loop with a simplified controller; it is found that the limit for which each controller was designed was exceeded by less than 4.76%. Extension of the characterization to another trade-off, that between the maximum high-pressure turbine (HPT) entrance temperature and minimum HPC SM, showed even better results: the maximum HPT temperature was estimated within 0.76%. Because of the accuracy in this estimation, this suggests another limit that may be taken into consideration during design and analysis. It also demonstrates the extension of the characterization to other attributes that contribute to the performance or operability of the engine. Metrics are proposed that, together, provide information on the shape of the trade-off between response time and minimum HPC SM, and how much each varies throughout the life cycle, at the limiting design points. These metrics also facilitate comparison of the expected transient behavior for multiple engine models.
Si, Yuan; Li, Xiang; Yin, Dongqin; Liu, Ronghua; Wei, Jiahua; Huang, Yuefei; Li, Tiejian; Liu, Jiahong; Gu, Shenglong; Wang, Guangqian
2018-01-01
The hydropower system in the Upper Yellow River (UYR), one of the largest hydropower bases in China, plays a vital role in the energy structure of the Qinghai Power Grid. Due to management difficulties, there is still considerable room for improvement in the joint operation of this system. This paper presents a general LINGO-based integrated framework to study the operation of the UYR hydropower system. The framework is easy to use for operators with little experience in mathematical modeling, takes full advantage of LINGO's capabilities (such as its solving capacity and multi-threading ability), and packs its three layers (the user layer, the coordination layer, and the base layer) together into an integrated solution that is robust and efficient and represents an effective tool for data/scenario management and analysis. The framework is general and can be easily transferred to other hydropower systems with minimal effort, and it can be extended as the base layer is enriched. The multi-objective model that represents the trade-off between power quantity (i.e., maximum energy production) and power reliability (i.e., firm output) of hydropower operation has been formulated. With equivalent transformations, the optimization problem can be solved by the nonlinear programming (NLP) solvers embedded in the LINGO software, such as the General Solver, the Multi-start Solver, and the Global Solver. Both simulation and optimization are performed to verify the model's accuracy and to evaluate the operation of the UYR hydropower system. A total of 13 hydropower plants currently in operation are involved, including two pivotal storage reservoirs on the Yellow River, which are the Longyangxia Reservoir and the Liujiaxia Reservoir. Historical hydrological data from multiple years (2000-2010) are provided as input to the model for analysis. The results are as follows. 1) Assuming that the reservoirs are all in operation (in fact, some reservoirs were not operational or did not collect all of the relevant data during the study period), the energy production is estimated as 267.7, 357.5, and 358.3×10⁸ kWh for the Qinghai Power Grid during dry, normal, and wet years, respectively. 2) Assuming that the hydropower system is operated jointly, the firm output can reach 3110 MW (reliability of 100%) and 3510 MW (reliability of 90%). Moreover, a decrease in energy production from the Longyangxia Reservoir can bring about a very large increase in firm output from the hydropower system. 3) The maximum energy production can reach 297.7, 363.9, and 411.4×10⁸ kWh during dry, normal, and wet years, respectively. The trade-off curve between maximum energy production and firm output is also provided for reference.
The Sonic Altimeter for Aircraft
NASA Technical Reports Server (NTRS)
Draper, C S
1937-01-01
Discussed here are results already achieved with sonic altimeters in light of the theoretical possibilities of such instruments. From the information gained in this investigation, a procedure is outlined to determine whether or not a further development program is justified by the value of the sonic altimeter as an aircraft instrument. The information available in the literature is reviewed and condensed into a summary of sonic altimeter developments. Various methods of receiving the echo and timing the interval between the signal and the echo are considered. A theoretical discussion is given of sonic altimeter errors due to uncertainties in timing, variations in sound velocity, aircraft speed, location of the sending and receiving units, and inclinations of the flight path with respect to the ground surface. Plots are included which summarize the results in each case. An analysis is given of the effect of an inclined flight path on the frequency of the echo. A brief study of the acoustical phases of the sonic altimeter problem is carried through. The results of this analysis are used to predict approximately the maximum operating altitudes of a reasonably designed sonic altimeter under very good and very bad conditions. A final comparison is made between the estimated and experimental maximum operating altitudes which shows good agreement where quantitative information is available.
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine, which is considered an advantage of the proposed method over other implementations reported in the literature.
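A minimal sketch of this kind of approach is given below, assuming a two-material basis, a four-point energy grid, and Poisson counting statistics; the spectra and attenuation coefficients are arbitrary placeholders rather than calibrated detector values.

    # Illustrative sketch of a polychromatic Beer-Lambert forward model with a Poisson
    # maximum-likelihood fit for two basis-material line integrals. Spectra and
    # attenuation coefficients below are arbitrary placeholders, not calibrated values.
    import numpy as np
    from scipy.optimize import minimize

    E = np.array([40.0, 60.0, 80.0, 100.0])              # keV grid (assumed)
    S = np.array([[4e4, 3e4, 1e4, 2e3],                  # bin 1 effective spectrum
                  [1e3, 1e4, 3e4, 2e4]])                 # bin 2 effective spectrum
    mu = np.array([[0.7, 0.4, 0.3, 0.25],                # basis 1 (water-like), 1/cm
                   [2.0, 0.9, 0.5, 0.35]])               # basis 2 (bone-like), 1/cm

    def expected_counts(A):                              # A = basis line integrals in cm
        return S @ np.exp(-(mu.T @ A))

    def neg_loglik(A, y):                                # Poisson log-likelihood (up to const.)
        lam = expected_counts(A)
        return np.sum(lam - y * np.log(lam))

    rng = np.random.default_rng(0)
    A_true = np.array([8.0, 1.5])
    y = rng.poisson(expected_counts(A_true))
    fit = minimize(neg_loglik, x0=np.array([5.0, 1.0]), args=(y,), method="Nelder-Mead")
    print("estimated line integrals:", fit.x)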
What controls the maximum magnitude of injection-induced earthquakes?
NASA Astrophysics Data System (ADS)
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
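The deterministic cap of McGarr (2014) lends itself to a short worked example: the cumulative seismic moment is bounded by the product of shear modulus and net injected volume, and the corresponding moment magnitude follows from the standard Mw-M0 relation. The numbers below are illustrative assumptions, not values from any specific injection operation.

    # Worked example of the deterministic cap M0_max = G * dV (McGarr 2014) and its
    # conversion to moment magnitude; shear modulus and injected volume are assumed values.
    import math

    G = 3.0e10        # shear modulus, Pa (typical crustal value)
    dV = 1.0e5        # net injected fluid volume, m^3 (assumed)

    M0_max = G * dV                                    # seismic moment cap, N*m
    Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)  # Kanamori moment-magnitude relation

    print(f"M0_max = {M0_max:.2e} N*m  ->  Mw_max = {Mw_max:.2f}")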
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Loper, Connie A.; Crawford, J. Kent; Otto, Kim L.; Manning, Rhonda L.; Meyer, Michael T.; Furlong, Edward T.
2007-01-01
This report presents environmental and quality-control data from analyses of 15 pharmaceutical and 31 antibiotic compounds in water samples from streams and wells in south-central Pennsylvania. The analyses are part of a study by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Environmental Protection (PADEP) to define concentrations of selected emerging contaminants in streams and well water in Pennsylvania. Sampling was conducted at 11 stream sites and at 6 wells in 9 counties of south-central Pennsylvania. Five of the streams received municipal wastewater and 6 of the streams received runoff from agricultural areas dominated by animal-feeding operations. For all 11 streams, samples were collected at locations upstream and downstream of the municipal effluents or animal-feeding operations. All six wells were in agricultural settings. A total of 120 environmental samples and 21 quality-control samples were analyzed for the study. Samples were collected at each site in March/April, May, July, and September 2006 to obtain information on changes in concentration that could be related to seasonal use of compounds. For streams, 13 pharmaceuticals and 11 antibiotics were detected at least 1 time. Detections included analytical results that were estimated or above the minimum reporting limits. Seventy-eight percent of all detections were analyzed in samples collected downstream from municipal-wastewater effluents. For streams receiving wastewater effluents, the pharmaceuticals caffeine and para-xanthine (a degradation product of caffeine) had the greatest concentrations, 4.75 μg/L (micrograms per liter) and 0.853 μg/L, respectively. Other pharmaceuticals and their respective maximum concentrations were carbamazepine (0.516 μg/L) and ibuprofen (0.277 μg/L). For streams receiving wastewater effluents, the antibiotic azithromycin had the greatest concentration (1.65 μg/L), followed by sulfamethoxazole (1.34 μg/L), ofloxacin (0.329 μg/L), and trimethoprim (0.256 μg/L). For streams receiving runoff from animal-feeding operations, the only pharmaceuticals detected were acetaminophen, caffeine, cotinine, diphenhydramine, and carbamazepine. The maximum concentration for pharmaceuticals was 0.053 μg/L. Three streams receiving runoff from animal-feeding operations had detections of one or more antibiotic compounds: oxytetracycline, sulfadimethoxine, sulfamethoxazole, and tylosin. The maximum concentration for antibiotics was 0.157 μg/L. The average number of compounds (pharmaceuticals and antibiotics) detected in sites downstream from animal-feeding operations was three. The average number of compounds detected downstream from municipal-wastewater effluents was 13. For wells used to supply livestock, four compounds were detected: two pharmaceuticals (cotinine and diphenhydramine) and two antibiotics (tylosin and sulfamethoxazole). There were five detections in all the well samples. The maximum concentration detected in well water was for cotinine, estimated to be 0.024 μg/L. Seasonal occurrence of pharmaceutical and antibiotic compounds in stream water varied by compound and site type. At four stream sites, the same compounds were detected in all four seasonal samples. At other sites, pharmaceutical or antibiotic compounds were detected only one time in seasonal samples. Winter samples collected in streams receiving municipal-wastewater effluent had the greatest number of compounds detected (21).
Research analytical methods were used to determine concentrations for pharmaceuticals and antibiotics. To assist in evaluating the quality of the analyses, detailed information is presented on laboratory methodology and results from quality-control samples. Quality-control data include results for nine blanks, nine duplicate environmental sample pairs, and three laboratory-spiked environmental samples as well as the recoveries of compounds in laboratory surrogates and laboratory reagent spikes.
A spatial method to calculate small-scale fisheries effort in data poor scenarios.
Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio
2017-01-01
To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
Al-Baldawi, Israa Abdul Wahab; Sheikh Abdullah, Siti Rozaimah; Abu Hasan, Hassimi; Suja, Fatihah; Anuar, Nurina; Mushrifah, Idris
2014-07-01
This study investigated the optimum conditions for total petroleum hydrocarbon (TPH) removal from diesel-contaminated water using phytoremediation treatment with Scirpus grossus. In addition, TPH removal from sand was adopted as a second response. The optimum conditions for maximum TPH removal were determined through a Box-Behnken Design. Three operational variables, i.e. diesel concentration (0.1, 0.175, 0.25% V_diesel/V_water), aeration rate (0, 1 and 2 L/min) and retention time (14, 43 and 72 days), were investigated by setting TPH removal and diesel concentration as the maximum, retention time within the given range, and aeration rate as the minimum. The optimum conditions were found to be a diesel concentration of 0.25% (V_diesel/V_water), a retention time of 63 days and no aeration with an estimated maximum TPH removal from water and sand of 76.3 and 56.5%, respectively. From a validation test of the optimum conditions, it was found that the maximum TPH removal from contaminated water and sand was 72.5 and 59%, respectively, which was a 5 and 4.4% deviation from the values given by the Box-Behnken Design, providing evidence that S. grossus is a Malaysian native plant that can be used to remediate wastewater containing hydrocarbons. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Kai; Liu, Ruo-Yu; Dai, Zi-Gao; Asano, Katsuaki
2018-04-01
The high-energy (>100 MeV) emission observed by the Fermi Large Area Telescope during the prompt phase of some luminous gamma-ray bursts (GRBs) could arise from the cascade induced by interactions between accelerated protons and the radiation field of GRBs. The photomeson process, which is usually suggested to operate in such a hadronic explanation, requires a rather high proton energy (>10¹⁷ eV) for an efficient interaction. However, whether GRBs can accelerate protons to such a high energy is far from guaranteed, although they have been suggested as the candidate source for ultrahigh-energy cosmic rays. In this work, we revisit the hadronic model for the prompt high-energy emission of GRBs with a smaller maximum proton energy than the usually adopted value estimated from the Bohm condition. In this case, the Bethe–Heitler pair production process becomes comparably important or even dominates over the photomeson process. We show that with a relatively low maximum proton energy with a Lorentz factor of 10⁵ in the comoving frame, the cascade emission can still reproduce various types of high-energy spectra of GRBs. For most GRBs without high-energy emission detected, the maximum proton energy could be even lower and relax the constraints on the parameters of the GRB jet resulting from the nondetection of GRB neutrinos by IceCube.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data cases.
NASA Astrophysics Data System (ADS)
Zhang, Enren; Wang, Feng; Yu, Qingling; Scott, Keith; Wang, Xu; Diao, Guowang
2017-08-01
The performance of activated carbon catalyst in air-cathodes in microbial fuel cells was investigated over one year. A maximum power of 1722 mW m⁻² was produced within the initial one-month microbial fuel cell operation. The air-cathodes produced a maximum power >1200 mW m⁻² within six months, but gradually became a limiting factor for the power output in prolonged microbial fuel cell operation. The maximum power decreased by 55% when microbial fuel cells were operated over one year due to deterioration in activated carbon air-cathodes. While salt/biofilm removal from cathodes after one year of operation provided only a limited performance enhancement, a washing-drying-pressing procedure could restore the cathode performance to its original levels, although the performance restoration was temporary. Durable cathodes could be regenerated by re-pressing activated carbon catalyst, recovered from one-year deteriorated air-cathodes, with a new gas diffusion layer, resulting in ∼1800 mW m⁻² of maximum power production. The present study indicated that activated carbon was an effective catalyst in microbial fuel cell cathodes, and could be recovered for reuse in long-term operated microbial fuel cells by simple methods.
Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model
NASA Astrophysics Data System (ADS)
Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel
2011-03-01
This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.
Application of the quantum spin glass theory to image restoration.
Inoue, J I
2001-04-01
Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974
Adaptive Offset Correction for Intracortical Brain Computer Interfaces
Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.
2014-01-01
Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called the multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868
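A simplified sketch of the underlying idea (not the published MOCA algorithm itself): re-estimate a measurement offset from a short window of decoder residuals, with a quadratic penalty that pulls the estimate toward its previous (calibration) value so that the correction cannot jump arbitrarily. All names and values below are illustrative.

    # Sketch of penalized offset re-estimation from recent residuals; a simplified
    # stand-in for adaptive offset correction, not the published MOCA algorithm.
    import numpy as np

    def update_offset(residuals, c_prev, lam=5.0):
        """Penalized least squares: argmin_c sum||r_t - c||^2 + lam*||c - c_prev||^2."""
        r = np.asarray(residuals, dtype=float)
        n = r.shape[0]
        return (r.sum(axis=0) + lam * c_prev) / (n + lam)

    rng = np.random.default_rng(1)
    true_shift = np.array([0.8, -0.3])                         # unmodeled offset in 2 channels
    resids = true_shift + 0.2 * rng.standard_normal((50, 2))   # residuals over a short window
    print(update_offset(resids, c_prev=np.zeros(2)))           # pulled toward true_shift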
Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector
NASA Astrophysics Data System (ADS)
Isoli, Niccolo; Chaczykowski, Maciej
2017-11-01
The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector with regard to a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location, CO2 pipeline network design and sizing for deployment at a national scale, along with a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate pipeline infrastructure costs to transport 95 MT of CO2 annually, which amount to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot be based on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are often insufficient, so direct estimation of poverty indicators produces high standard errors and the analysis based on it is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data and other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). There are many methods used in SAE; one of them is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method based on maximum likelihood (ML) procedures does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
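An area-level EBLUP can be sketched in a few lines, assuming (unlike the paper, which estimates them by REML) that the variance components are already known: each direct estimate is shrunk toward a regression-synthetic estimate with weight gamma_i = sigma_v^2 / (sigma_v^2 + D_i). The data below are synthetic.

    # Minimal area-level EBLUP sketch with given variance components; the paper
    # estimates sigma_v^2 by REML instead of assuming it as done here.
    import numpy as np

    def eblup(y_direct, X, D, sigma_v2):
        """y_direct: direct estimates; X: auxiliary covariates; D: known sampling variances."""
        V_inv = 1.0 / (sigma_v2 + D)                                   # diagonal of V^{-1}
        beta = np.linalg.solve(X.T @ (V_inv[:, None] * X), X.T @ (V_inv * y_direct))
        gamma = sigma_v2 / (sigma_v2 + D)                              # shrinkage weights
        return gamma * y_direct + (1.0 - gamma) * (X @ beta)

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(8), rng.uniform(0, 1, 8)])            # intercept + one covariate
    D = rng.uniform(0.05, 0.4, 8)                                      # sampling variances by area
    y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, 8) + rng.normal(0, np.sqrt(D))
    print(eblup(y, X, D, sigma_v2=0.09))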
NASA Technical Reports Server (NTRS)
Hertel, Heinrich
1930-01-01
This report is intended to furnish bases for load assumptions in the designing of airplane controls. The maximum control forces and quickness of operation are determined. The maximum forces for a strong pilot with normal arrangement of the controls are taken as 1.25 times the mean value obtained from tests with twelve persons. Tests with a number of persons were expected to show the maximum forces that a man of average strength can exert on the control stick in operating the elevator and ailerons and also on the rudder bar. The effects of fatigue, of duration, and of the nature (static or dynamic) of the force, as well as the condition of the test subject (with or without belt), were also considered.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on the maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that the received target image has each of its pixels disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum likelihood based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
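A simplified stand-in for the transform-domain correlation step is FFT-based cross-correlation for a pure integer translation; the paper's closed-loop maximum-likelihood tracker with weighted correlation and reference-image updating is considerably richer. All shapes and noise levels below are assumptions.

    # Estimate an integer shift from the peak of the FFT-based cross-correlation;
    # a simplified stand-in for the weighted transform-domain correlation described above.
    import numpy as np

    def estimate_shift(reference, frame):
        """Return the (dy, dx) that, applied to `frame` with np.roll, re-aligns it with `reference`."""
        xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        dy = dy - reference.shape[0] if dy > reference.shape[0] // 2 else dy
        dx = dx - reference.shape[1] if dx > reference.shape[1] // 2 else dx
        return int(dy), int(dx)

    rng = np.random.default_rng(3)
    ref = np.zeros((64, 64)); ref[28:36, 28:36] = 1.0            # known target profile
    frame = np.roll(np.roll(ref, 5, axis=0), -7, axis=1)         # target moved by (5, -7)
    frame = frame + 0.1 * rng.standard_normal(frame.shape)       # additive white Gaussian noise
    print(estimate_shift(ref, frame))                            # expect (-5, 7): the re-aligning shift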
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung
2015-04-01
In environmental and other scientific applications, we must have a certain understanding of the geological lithological composition. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied Categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from the cores and the soft data generated from geological dating data and virtual wells are used to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting
NASA Astrophysics Data System (ADS)
Contreras Ruiz Esparza, M. G., Sr.; Jimenez Velazquez, J. C., Sr.; Valdes Gonzalez, C. M., Sr.; Reyes Pimentel, T. A.; Galaviz Alonso, S. A.
2014-12-01
Popocatepetl, the smoking mountain, is a stratovolcano located in central Mexico with an elevation of 5450 masl. The active volcano, close to some of the largest urban centers in Mexico - 60 km and 30 km from Mexico City and Puebla, respectively - poses a high hazard to an estimated population of 500 thousand people living in the vicinity of the edifice. Accordingly, in July 1994 the Popocatepetl Volcanological Observatory (POVO) was established. The observatory is operated and supported by the National Center for Disaster Prevention of Mexico (CENAPRED), and is equipped to fully monitor different aspects of the volcanic activity. Among the instruments deployed, we use in this investigation two tiltmeters and broad-band seismometers at two sites (Chipiquixtle and Encinos), which send the information gathered continuously to Mexico City. In this research, we study the characteristics of the tiltmeter signals minutes after the occurrence of certain earthquakes. The Popocatepetl volcano starts inflation-deflation cycles due to the ground motion generated by events located at certain regions. We present the analysis of the tiltmeter and seismic signals of all the earthquakes (Mw>5) that occurred from January 2013 to June 2014, recorded at the Chipiquixtle and Encinos stations. First, we measured the maximum tilt variation after each earthquake. Next, we applied a band-pass filter for different frequency ranges to the seismic signals of the two seismic stations, and estimated the total energy of the strong motion phase of the seismic record. Finally, we compared both measurements and observed that the maximum tilt variations occurred when the maximum total energy of the seismic signals was in a specific frequency range. We also observed that the earthquake records that have the maximum total energy in that frequency range were the ones with an epicentral location southeast of the volcano. We conclude that our observations can be used to set the ground for an early warning system of the Popocatepetl volcano.
Gabriel, Erin E; Gilbert, Peter B
2014-04-01
Principal surrogate (PS) endpoints are relatively inexpensive and easy to measure study outcomes that can be used to reliably predict treatment effects on clinical endpoints of interest. Few statistical methods for assessing the validity of potential PSs utilize time-to-event clinical endpoint information and to our knowledge none allow for the characterization of time-varying treatment effects. We introduce the time-dependent and surrogate-dependent treatment efficacy curve, $\mathrm{TE}(t|s)$, and a new augmented trial design for assessing the quality of a biomarker as a PS. We propose a novel Weibull model and an estimated maximum likelihood method for estimation of the $\mathrm{TE}(t|s)$ curve. We describe the operating characteristics of our methods via simulations. We analyze data from the Diabetes Control and Complications Trial, in which we find evidence of a biomarker with value as a PS.
NASA Astrophysics Data System (ADS)
Ariffin, Syaiba Balqish; Midi, Habshah
2014-06-01
This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points is then investigated on the performance of the logistic ridge regression estimator through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
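A sketch of the estimator under comparison, assuming synthetic data with two deliberately collinear predictors: ridge-penalized iteratively reweighted least squares (Newton) updates, where setting the penalty to zero recovers the ordinary maximum likelihood fit.

    # Logistic ridge regression via penalized Newton/IRLS updates; lam = 0 gives the
    # plain maximum likelihood fit. Data are synthetic with near-collinear predictors.
    import numpy as np

    def logistic_ridge(X, y, lam=1.0, n_iter=50):
        n, p = X.shape
        beta = np.zeros(p)
        penalty = lam * np.eye(p)
        penalty[0, 0] = 0.0                        # do not penalize the intercept
        for _ in range(n_iter):
            eta = X @ beta
            pi = 1.0 / (1.0 + np.exp(-eta))
            W = pi * (1.0 - pi)
            H = X.T @ (W[:, None] * X) + penalty   # penalized Fisher information
            g = X.T @ (y - pi) - penalty @ beta    # penalized score
            beta = beta + np.linalg.solve(H, g)
        return beta

    rng = np.random.default_rng(4)
    x1 = rng.standard_normal(200)
    x2 = x1 + 0.05 * rng.standard_normal(200)      # nearly collinear with x1
    X = np.column_stack([np.ones(200), x1, x2])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 1.0 * x1 + 1.0 * x2))))
    print("ridge:", logistic_ridge(X, y, lam=5.0))
    print("ML:   ", logistic_ridge(X, y, lam=0.0))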
Accelerated life tests of specimen heat pipe from Communication Technology Satellite (CTS) project
NASA Technical Reports Server (NTRS)
Tower, L. K.; Kaufman, W. B.
1977-01-01
A gas-loaded variable conductance heat pipe of stainless steel with methanol working fluid identical to one now on the CTS satellite was life tested in the laboratory at accelerated conditions for 14 200 hours, equivalent to about 70 000 hours at flight conditions. The noncondensible gas inventory increased about 20 percent over the original charge. The observed gas increase is estimated to increase operating temperature by about 2.2 °C, insufficient to harm the electronic gear cooled by the heat pipes in the satellite. Tests of maximum heat input against evaporator elevation agree well with the manufacturer's predictions.
Preliminary meteorological results on Mars from the viking 1 lander.
Hess, S L; Henry, R M; Leovy, C B; Ryan, J A; Tillman, J E; Chamberlain, T E; Cole, H L; Dutton, R G; Greene, G C; Simon, W E; Mitchell, J L
1976-08-27
The results from the meteorology instruments on the Viking 1 lander are presented for the first 4 sols of operation. The instruments are working satisfactorily. Temperatures fluctuated from a low of 188 degrees K to an estimated maximum of 244 degrees K. The mean pressure is 7.65 millibars with a diurnal variation of amplitude 0.1 millibar. Wind speeds averaged over several minutes have ranged from essentially calm to 9 meters per second. Wind directions have exhibited a remarkable regularity which may be associated with nocturnal downslope winds and gravitational oscillations, or to tidal effects of the diurnal pressure wave, or to both.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
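For reference, the classical (single-valued) ML-EM update that NIBEM generalizes to interval projections is the multiplicative scheme below; the 2x3 system matrix is a toy example, not a realistic PET projector.

    # Classical ML-EM update: x_new = (x / (A^T 1)) * A^T (y / (A x)); toy system only.
    import numpy as np

    def mlem(A, y, n_iter=100):
        x = np.ones(A.shape[1])                    # nonnegative starting image
        sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / (A @ x + 1e-12)            # measured / forward-projected counts
            x = x / sens * (A.T @ ratio)           # multiplicative EM update
        return x

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])                # toy projector (2 bins, 3 voxels)
    x_true = np.array([2.0, 1.0, 3.0])
    y = A @ x_true                                 # noise-free projections
    print(mlem(A, y))                              # converges toward a solution of A x = y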
Measuring neutron-star properties via gravitational waves from neutron-star mergers.
Bauswein, A; Janka, H-T
2012-01-06
We demonstrate by a large set of merger simulations for symmetric binary neutron stars (NSs) that there is a tight correlation between the frequency peak of the postmerger gravitational-wave (GW) emission and the physical properties of the nuclear equation of state (EoS), e.g., expressed by the radius of the maximum-mass Tolman-Oppenheimer-Volkoff configuration. Therefore, a single measurement of the peak frequency of the postmerger GW signal will constrain the NS EoS significantly. For optimistic merger-rate estimates, a corresponding detection with Advanced LIGO is expected to happen within an operation time of roughly a year.
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates, by Monte Carlo simulation, the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
An entropy-based method for determining the flow depth distribution in natural channels
NASA Astrophysics Data System (ADS)
Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.
2013-08-01
A methodology for determining the bathymetry of river cross-sections during floods by sampling surface flow velocity and using existing low-flow hydraulic data is developed. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to a velocity data set from five river gage sites, the method reproduced the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution and the entropic relation between mean velocity and maximum velocity. The methodology unfolds a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which the methodology is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard an aircraft or satellite.
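The entropic mean/maximum velocity relation invoked above is commonly written as u_mean/u_max = e^M/(e^M - 1) - 1/M, with M the entropy parameter. The short sketch below couples it to an assumed flow area to obtain a discharge estimate; M, u_max, and the area are illustrative values only.

    # Chiu's entropic relation between cross-sectional mean and maximum velocity,
    # coupled with an assumed flow area to estimate discharge; all inputs are assumed.
    import math

    def mean_over_max_velocity(M):
        return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

    M = 2.1        # site-specific entropy parameter (assumed)
    u_max = 2.4    # sampled surface/maximum velocity, m/s (assumed)
    area = 35.0    # flow area from the entropy-based depth distribution, m^2 (assumed)

    u_mean = mean_over_max_velocity(M) * u_max
    print(f"u_mean = {u_mean:.2f} m/s, Q = {u_mean * area:.1f} m^3/s")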
2018-01-01
The heat exchange properties of aircrew clothing including a Constant Wear Immersion Suit (CWIS), and the environmental conditions in which heat strain would impair operational performance, were investigated. The maximum evaporative potential (im/clo) of six clothing ensembles (three with a flight suit (FLY) and three with a CWIS) of varying undergarment layers were measured with a heated sweating manikin. Biophysical modelling estimated the environmental conditions in which body core temperature would elevate above 38.0°C during routine flight. The im/clo was reduced with additional undergarment layers, and was more restricted in CWIS compared to FLY ensembles. A significant linear relationship (r² = 0.98, P<0.001) was observed between im/clo and the highest wet-bulb globe temperature in which the flight scenario could be completed without body core temperature exceeding 38.0°C. These findings provide a valuable tool for clothing manufacturers and mission planners for the development and selection of CWISs for aircrew. PMID:29723267
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
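The Firth-type correction maximizes the log-likelihood plus half the log-determinant of the Fisher information. The sketch below illustrates this penalty for a small ordinary Poisson log-linear model on synthetic data; the SCCS conditional Poisson likelihood used in the paper is analogous but not identical.

    # Firth/Jeffreys-penalized maximum likelihood for a Poisson log-linear model;
    # an illustration of the penalty, not the SCCS conditional Poisson likelihood itself.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    X = np.column_stack([np.ones(30), rng.binomial(1, 0.3, 30)])   # intercept + exposure flag
    y = rng.poisson(np.exp(X @ np.array([0.2, 0.7])))

    def neg_penalized_loglik(beta, X, y):
        eta = X @ beta
        mu = np.exp(eta)
        loglik = np.sum(y * eta - mu)                  # Poisson log-likelihood (up to const.)
        info = X.T @ (mu[:, None] * X)                 # Fisher information X' diag(mu) X
        penalty = 0.5 * np.linalg.slogdet(info)[1]     # Firth/Jeffreys penalty
        return -(loglik + penalty)

    fit = minimize(neg_penalized_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
    print("Firth-penalized estimates:", fit.x)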
Defense Science Board Summer Study on Autonomy
2016-06-01
hours, at a maximum velocity of 40 mph, with a maximum payload of 9 kg (20 lbs); a maximum range of 160 km (100 miles); and can operate in wind/gust...existing mine disposal platform, such as Seafox, with contact reacquisition and neutralization capability. Seafox is a wire-guided mine neutralizer...functions, will retain operator control of neutralization and will remove the need for personnel to enter the minefield to execute fly-by-wire
Huang, Jr-Chuan; Lee, Tsung-Yu; Teng, Tse-Yang; Chen, Yi-Chin; Huang, Cho-Ying; Lee, Cheing-Tung
2014-01-01
The exponent decay in landslide frequency-area distribution is widely used for assessing the consequences of landslides, with some studies arguing that the slope of the exponent decay is universal and independent of mechanisms and environmental settings. However, the documented exponent slopes are diverse, and hence data processing is hypothesized to be the source of this inconsistency. An elaborate statistical experiment and two actual landslide inventories were used here to demonstrate the influence of data processing on the determination of the exponent. Seven categories with different landslide numbers were generated from the predefined inverse-gamma distribution and then analyzed by three data processing procedures (logarithmic binning, LB, normalized logarithmic binning, NLB and cumulative distribution function, CDF). Five different bin widths were also considered while applying LB and NLB. Following that, maximum likelihood estimation was used to estimate the exponent slopes. The results showed that the exponents estimated by CDF were unbiased while LB and NLB performed poorly. The two binning-based methods led to considerable biases that increased with the increase of landslide number and bin width. The standard deviations of the estimated exponents were dependent not just on the landslide number but also on the binning method and bin width. Both extremely few and plentiful landslide numbers reduced the confidence of the estimated exponents, which could be attributed to limited landslide numbers and considerable operational bias, respectively. The diverse documented exponents in the literature should therefore be adjusted accordingly. Our study strongly suggests that the considerable bias due to data processing and the data quality should be constrained in order to advance the understanding of landslide processes. PMID:24852019
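The binning-free alternative is the continuous maximum-likelihood estimator of the tail exponent, beta_hat = 1 + n / sum(ln(a_i / a_min)), applied above a cutoff a_min. The sketch below checks it on synthetic power-law-tailed areas; note that the inverse-gamma model used in the study also has a rollover below the cutoff, which this simple tail estimator ignores.

    # Continuous MLE of a power-law tail exponent, verified on synthetic areas;
    # a_min and beta_true are assumed values for illustration only.
    import numpy as np

    def tail_exponent_mle(areas, a_min):
        a = np.asarray(areas, dtype=float)
        a = a[a >= a_min]
        return 1.0 + a.size / np.sum(np.log(a / a_min))

    rng = np.random.default_rng(6)
    beta_true, a_min = 2.4, 1e3                                   # exponent and cutoff area, m^2
    u = rng.uniform(size=5000)
    areas = a_min * (1.0 - u) ** (-1.0 / (beta_true - 1.0))       # inverse-CDF sampling
    print(f"beta_hat = {tail_exponent_mle(areas, a_min):.3f}")    # close to 2.4, unlike coarse binning fits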
Cerri, M O; Badino, A C
2012-08-01
In biochemical processes involving filamentous microorganisms, the high shear rate may damage suspended cells leading to viability loss and cell disruption. In this work, the influence of the shear conditions on clavulanic acid (CA) production by Streptomyces clavuligerus was evaluated in a 4-dm³ conventional stirred tank (STB) and in 6-dm³ concentric-tube airlift (ALB) bioreactors. Batch cultivations were performed in the STB at 600 and 800 rpm and 0.5 vvm (cultivations B1 and B2) and in the ALB at 3.0 and 4.1 vvm (cultivations A1 and A2) to define two initial oxygen transfer conditions in both bioreactors. The average shear rate (γ̇_av) of the cultivations was estimated using correlations from recent literature based on experimental data of rheological properties of the broth (consistency index, K, and flow index, n) and operating conditions, impeller speed (N) for the STB and superficial gas velocity in the riser (UGR) for the ALB. For the same oxygen transfer condition, the γ̇_av values for the ALB were higher than those obtained in the STB. The maximum γ̇_av presented a strong correlation with the maximum consistency index (K_max) of the broth. Close values of maximum CA production were obtained in cultivations A1 and A2 (454 and 442 mg L⁻¹) with similar maximum γ̇_av values of 4,247 and 4,225 s⁻¹. In cultivations B1 and B2, maximum CA productions of 269 and 402 mg L⁻¹ were reached with maximum γ̇_av values of 904 and 1,786 s⁻¹. The results show that high values of average shear rate increase CA production regardless of the oxygen transfer condition and bioreactor model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
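Under a multivariate normal model with a common mean, the maximum likelihood combination of correlated estimates reduces to a generalized-least-squares weighted mean. The sketch below shows that combination rule; the covariance values are hypothetical placeholders, not results from the SAM-CE or VIM calculations.

```python
# Sketch: maximum likelihood combination of correlated estimates of a common
# quantity (here, an eigenvalue) under a multivariate normal model. With
# covariance S, the ML estimate is the GLS-weighted mean. The numbers below
# are hypothetical, not SAM-CE/VIM results.
import numpy as np

def ml_combine(estimates, cov):
    x = np.asarray(estimates, float)
    S_inv = np.linalg.inv(np.asarray(cov, float))
    ones = np.ones_like(x)
    var = 1.0 / (ones @ S_inv @ ones)      # variance of the combined estimate
    mean = var * (ones @ S_inv @ x)        # GLS / ML weighted mean
    return mean, var

k_estimates = [1.0012, 0.9991, 1.0005]     # correlated Monte Carlo results (made up)
cov = [[4e-6, 2e-6, 1e-6],
       [2e-6, 5e-6, 2e-6],
       [1e-6, 2e-6, 6e-6]]
k_hat, k_var = ml_combine(k_estimates, cov)
print(k_hat, k_var ** 0.5)
```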
Retention Severity in the Navy: A Composite Index.
1983-06-01
unfortunately, their estimates of optimum SRB award levels are applicable only to recruits with four year obligations (4YO) and six year obligations (6YO). A...of a maximum bonus award level of 6. Their estimates would put the maximum bonus level as high as 20 for 4YOs and 19 for 6YOs. However, the implica
78 FR 20109 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
...Meeting (i.e., webinar) training session conducted by CDC staff. We estimate the burden of this training to be a maximum of 2 hours. Respondents will only have to take this training one time. Assuming a maximum number of outbreaks of 1,400, the estimated burden for this training is 2,800 hours. The total...
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Modeling an exhumed basin: A method for estimating eroded overburden
Poelchau, H.S.
2001-01-01
The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and estimation of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum depth of burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
F-8C adaptive flight control extensions. [for maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Stein, G.; Hartmann, G. L.
1977-01-01
An adaptive concept which combines gain-scheduled control laws with explicit maximum likelihood estimation (MLE) identification to provide the scheduling values is described. The MLE algorithm was improved by incorporating attitude data, estimating gust statistics for setting filter gains, and improving parameter tracking during changing flight conditions. A lateral MLE algorithm was designed to improve true air speed and angle of attack estimates during lateral maneuvers. Relationships between the pitch axis sensors inherent in the MLE design were examined and used for sensor failure detection. Design details and simulation performance are presented for each of the three areas investigated.
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
14 CFR 23.1583 - Operating limitations.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) The maximum zero wing fuel weight, where relevant, as established in accordance with § 23.343. (d... passenger seating configuration. The maximum passenger seating configuration. (k) Allowable lateral fuel loading. The maximum allowable lateral fuel loading differential, if less than the maximum possible. (l...
14 CFR 23.1524 - Maximum passenger seating configuration.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23.1524 Section 23.1524 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum...
An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.
1981-06-29
University of Tennessee, Knoxville, Department of Psychology. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns. Fumiko Samejima, Department of Psychology, University of Tennessee, Knoxville, Tenn. 37916, June 1981. Prepared under contract number N00014-77-C-360 with the Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research.
Maximum a posteriori decoder for digital communications
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Seeking maximum linearity of transfer functions
NASA Astrophysics Data System (ADS)
Silva, Filipi N.; Comin, Cesar H.; Costa, Luciano da F.
2016-12-01
Linearity is an important and frequently sought property in electronics and instrumentation. Here, we report a method capable of, given a transfer function (theoretical or derived from some real system), identifying the respective most linear region of operation with a fixed width. This methodology, which is based on least squares regression and systematic consideration of all possible regions, has been illustrated with respect to both an analytical (sigmoid transfer function) and a simple situation involving experimental data of a low-power, one-stage class A transistor current amplifier. Such an approach, which has been addressed in terms of transfer functions derived from experimentally obtained characteristic surface, also yielded contributions such as the estimation of local constants of the device, as opposed to typically considered average values. The reported method and results pave the way to several further applications in other types of devices and systems, intelligent control operation, and other areas such as identifying regions of power law behavior.
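The core of the method above is a sliding fixed-width window with a least-squares line fit in every position; the window with the smallest residual is the most linear operating region. The sketch below implements that search with RMS residual as the linearity score, which is an assumption of this sketch rather than the paper's stated criterion.

```python
# Sketch: find the most linear fixed-width region of a transfer function by
# sliding a window over the curve and scoring a least-squares line fit in each
# window. Using RMS residual as the linearity score is an assumption here.
import numpy as np

def most_linear_region(x, y, width):
    """Return (start_index, slope, intercept) of the `width`-sample window
    whose least-squares line has the smallest RMS residual."""
    best_i, best_rms, best_fit = None, np.inf, (None, None)
    for i in range(len(x) - width + 1):
        xs, ys = x[i:i + width], y[i:i + width]
        slope, intercept = np.polyfit(xs, ys, 1)
        rms = np.sqrt(np.mean((ys - (slope * xs + intercept)) ** 2))
        if rms < best_rms:
            best_i, best_rms, best_fit = i, rms, (slope, intercept)
    return best_i, best_fit[0], best_fit[1]

# Sigmoid transfer function example: the most linear region sits near the middle
x = np.linspace(-6, 6, 601)
y = 1.0 / (1.0 + np.exp(-x))
i0, slope, intercept = most_linear_region(x, y, width=100)
print(x[i0], x[i0 + 99], slope)   # window end-points and local gain estimate
```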
Report of the committee on a commercially developed space facility
NASA Technical Reports Server (NTRS)
Shea, Joseph F.; Stever, H. Guyford; Cutter, W. Bowman, III; Demisch, Wolfgang H.; Fink, Daniel J.; Flax, Alexander H.; Gatos, Harry C.; Glicksman, Martin E.; Lanzerotti, Louis J.; Logsdon, John M., III
1989-01-01
Major facilities that could support significant microgravity research and applications activity are discussed. The ground-based facilities include drop towers, aircraft flying parabolic trajectories, and sounding rockets. Facilities that are intrinsically tied to the Space Shuttle range from Get-Away-Special canisters to Spacelab long modules. There are also orbital facilities which include recoverable capsules launched on expendable launch vehicles, free-flying spacecraft, and space stations. Some of these existing, planned, and proposed facilities are non-U.S. in origin, but potentially available to U.S. investigators. In addition, some are governmentally developed and operated whereas others are planned to be privately developed and/or operated. Tables are provided to show the facility, developer, duration, estimated gravity level, crew interaction, flight frequency, year available, power to payload, payload volume, and maximum payload mass. The potential of direct and indirect benefits of manufacturing in space are presented.
Grey-box modelling of aeration tank settling.
Bechman, Henrik; Nielsen, Marinus K; Poulsen, Niels Kjølstad; Madsen, Henrik
2002-04-01
A model of the concentrations of suspended solids (SS) in the aeration tanks and in the effluent from these during Aeration tank settling (ATS) operation is established. The model is based on simple SS mass balances, a model of the sludge settling and a simple model of how the SS concentration in the effluent from the aeration tanks depends on the actual concentrations in the tanks and the sludge blanket depth. The model is formulated in continuous time by means of stochastic differential equations with discrete-time observations. The parameters of the model are estimated using a maximum likelihood method from data from an alternating BioDenipho waste water treatment plant (WWTP). The model is an important tool for analyzing ATS operation and for selecting the appropriate control actions during ATS, as the model can be used to predict the SS amounts in the aeration tanks as well as in the effluent from the aeration tanks.
Single and Multi-Pulse Low-Energy Conical Theta Pinch Inductive Pulsed Plasma Thruster Performance
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Martin, Adam; Polzin, Kurt; Kimberlin, Adam; Eskridge, Richard
2013-01-01
Fabricated and tested CTP IPPTs at cone angles of 20°, 38°, and 60°, and performed direct single-pulse impulse bit measurements with continuous gas flow. Single-pulse performance was highest for the 38° angle, with an impulse bit of approximately 1 mN-s for both argon and xenon. Estimated efficiencies were low, but not unexpectedly so based on historical data trends and the direction of the force vector in the CTP. A capacitor charging system was assembled to provide rapid recharging of the capacitor bank, permitting repetition-rate operation. The IPPT operated at a repetition rate of 5 Hz at a maximum average power of 2.5 kW, representing to our knowledge the highest average power for a repetitively pulsed thruster. Average thrust in repetition-rate mode (at 5 kV, 75 sccm argon) was greater than the product of the single-pulse impulse bit and the repetition rate.
Hospital disaster response using business impact analysis.
Suginaka, Hiroshima; Okamoto, Ken; Hirano, Yohei; Fukumot, Yuichi; Morikawa, Miki; Oode, Yasumasa; Sumi, Yuka; Inoue, Yoshiaki; Matsuda, Shigeru; Tanaka, Hiroshi
2014-12-01
The catastrophic Great East Japan Earthquake in 2011 created a crisis in a university-affiliated hospital by disrupting the water supply for 10 days. In response, this study was conducted to analyze water use and prioritize water consumption in each department of the hospital by applying a business impact analysis (BIA). Identifying the minimum amount of water necessary for continuing operations during a disaster was an additional goal. Water is essential for many hospital operations and disaster-ready policies must be in place for the safety and continued care of patients. A team of doctors, nurses, and office workers in the hospital devised a BIA questionnaire to examine all operations using water. The questionnaire included department name, operation name, suggested substitutes for water, and the estimated daily amount of water consumption. Operations were placed in one of three ranks (S, A, or B) depending on the impact on patients and the need for operational continuity. Recovery time objective (RTO), which is equivalent to the maximum tolerable period of disruption, was determined. Furthermore, the actual use of water and the efficiency of substitute methods, practiced during the water-disrupted periods, were verified in each operation. There were 24 activities using water in eight departments, and the estimated water consumption in the hospital was 326 (SD = 17) m³ per day: 64 (SD = 3) m³ for S (20%), 167 (SD = 8) m³ for A (51%), and 95 (SD = 5) m³ for B operations (29%). During the disruption, the hospital had about 520 m³ of available water. When the RTO was set to four days, the amount of water available would have been 130 m³ per day. During the crisis, 81% of the substitute methods were used for the S and A operations. This is the first study to identify and prioritize hospital operations necessary for the efficient continuation of medical treatment during suspension of the water supply by applying a BIA. Understanding the priority of operations and the minimum daily water requirement for each operation is important for a hospital in the event of an unexpected adverse situation, such as a major disaster.
The Genetic Interpretation of Area under the ROC Curve in Genomic Profiling
Wray, Naomi R.; Yang, Jian; Goddard, Michael E.; Visscher, Peter M.
2010-01-01
Genome-wide association studies in human populations have facilitated the creation of genomic profiles which combine the effects of many associated genetic variants to predict risk of disease. The area under the receiver operator characteristic (ROC) curve is a well established measure for determining the efficacy of tests in correctly classifying diseased and non-diseased individuals. We use quantitative genetics theory to provide insight into the genetic interpretation of the area under the ROC curve (AUC) when the test classifier is a predictor of genetic risk. Even when the proportion of genetic variance explained by the test is 100%, there is a maximum value for AUC that depends on the genetic epidemiology of the disease, i.e. either the sibling recurrence risk or heritability and disease prevalence. We derive an equation relating maximum AUC to heritability and disease prevalence. The expression can be reversed to calculate the proportion of genetic variance explained given AUC, disease prevalence, and heritability. We use published estimates of disease prevalence and sibling recurrence risk for 17 complex genetic diseases to calculate the proportion of genetic variance that a test must explain to achieve AUC = 0.75; this varied from 0.10 to 0.74. We provide a genetic interpretation of AUC for use with predictors of genetic risk based on genomic profiles. We provide a strategy to estimate proportion of genetic variance explained on the liability scale from estimates of AUC, disease prevalence, and heritability (or sibling recurrence risk) available as an online calculator. PMID:20195508
Mesospheric temperatures estimated from the meteor radar observations at Mohe, China
NASA Astrophysics Data System (ADS)
Liu, Libo; Liu, Huixin; Le, Huijun; Chen, Yiding; Sun, Yang-Yi; Ning, Baiqi; Hu, Lianhuan; Wan, Weixing; Li, Na; Xiong, Jiangang
2017-02-01
In this work, we report the estimation of mesospheric temperatures at 90 km height from the observations of the VHF all-sky meteor radar operated at Mohe (53.5°N, 122.3°E), China, since August 2011. The kinetic temperature profiles retrieved from the observations of Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) on board the Thermosphere, Ionosphere, Mesosphere, Energetics, and Dynamics satellite are processed to provide the temperature (TSABER) and temperature gradient (dT/dh) at 90 km height. Based on the SABER temperature profile data an empirical dT/dh model is developed for the Mohe latitude. First, we derive the temperatures from the meteor decay times (Tmeteor) and the Mohe dT/dh model gives prior information of temperature gradients. Second, the full width at half maximum (FWHM) of the meteor height profiles is calculated and further used to deduce the temperatures (TFWHM) based on the strong linear relationship between FWHM and TSABER. The temperatures at 90 km deduced from the decay times (Tmeteor) and from the meteor height distributions (TFWHM) at Mohe are validated/calibrated with TSABER. The temperatures present a considerable annual variation, being maximum in winter and minimum in summer. Harmonic analyses reveal that the temperatures have an annual variation consistent with TSABER. Our work suggests that FWHM has a good performance in routine estimation of the temperatures. It should be pointed out that the slope of FWHM as a function of TSABER is 10.1 at Mohe, which is different from that of 15.71 at King Sejong (62.2°S, 58.8°E) station.
Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane
2016-05-01
Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm². We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source on the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising method to evaluate which MEG sources could be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.
Thermodynamic analysis of steam-injected advanced gas turbine cycles
NASA Astrophysics Data System (ADS)
Pandey, Devendra; Bade, Mukund H.
2017-12-01
This paper deals with the thermodynamic analysis of the steam-injected gas turbine (STIGT) cycle. To analyse the thermodynamic performance of STIGT cycles, a methodology based on pinch analysis is proposed. This graphical methodology is a systematic approach to selecting a gas turbine with steam injection. The developed graphs are useful for selecting a STIGT for optimal operation and help the designer make an appropriate decision. The STIGT cycle can be selected either at the minimum steam ratio (ratio of the mass flow rate of steam to air) with maximum efficiency or at the maximum steam ratio with maximum net work, depending on the plant designer's objective. Operating the steam-injection-based advanced gas turbine plant at the minimum steam ratio improves efficiency, reducing the pollution caused by the emission of flue gases. On the other hand, operating the plant at the maximum steam ratio can yield maximum work output and hence higher available power.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.
Besio, W; Aakula, R; Dai, W
2004-01-01
Potentials on the body surface produced by the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides a useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes. The FPM is generalized to develop a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
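The five-point and nine-point methods referenced above are the standard finite-difference approximations of the Laplacian on a regular grid. The sketch below evaluates both stencils on a test function; the actual concentric-ring electrode geometry and weighting are not reproduced, and the mesh parameters are only illustrative.

```python
# Sketch: standard five-point (FPM) and nine-point (NPM) finite-difference
# approximations of the Laplacian that underlie the bipolar and tripolar
# concentric-ring electrode estimates discussed above. The real electrode
# geometry and weighting are not reproduced here.
import numpy as np

def laplacian_5pt(u, h):
    c = u[1:-1, 1:-1]
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4 * c) / h**2

def laplacian_9pt(u, h):
    c = u[1:-1, 1:-1]
    edges = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
    corners = u[:-2, :-2] + u[:-2, 2:] + u[2:, :-2] + u[2:, 2:]
    return (4 * edges + corners - 20 * c) / (6 * h**2)

# Test on u = x^2 + y^2, whose analytical Laplacian is exactly 4 everywhere
h = 1.0 / 400
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
u = x**2 + y**2
print(np.max(np.abs(laplacian_5pt(u, h) - 4)), np.max(np.abs(laplacian_9pt(u, h) - 4)))
```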
40 CFR 464.25 - Pretreatment standards for existing sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) EFFLUENT GUIDELINES AND STANDARDS METAL MOLDING AND CASTING POINT SOURCE CATEGORY Copper Casting... existing sources. (a) Casting Quench Operations. PSES Pollutant or pollutant property Maximum for any 1 day... monitoring) 1.2 0.399 (b) Direct Chill Casting Operations. PSES Pollutant or pollutant property Maximum for...
29 CFR 1918.85 - Containerized cargo operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Containerized cargo operations. (a) Container markings. Every intermodal container shall be legibly and permanently marked with: (1) The weight of the container when empty, in pounds; (2) The maximum cargo weight... maximum cargo weight, in pounds. (b) Container weight. No container shall be hoisted by any lifting...
29 CFR 1918.85 - Containerized cargo operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Containerized cargo operations. (a) Container markings. Every intermodal container shall be legibly and permanently marked with: (1) The weight of the container when empty, in pounds; (2) The maximum cargo weight... maximum cargo weight, in pounds. (b) Container weight. No container shall be hoisted by any lifting...
29 CFR 1918.85 - Containerized cargo operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Containerized cargo operations. (a) Container markings. Every intermodal container shall be legibly and permanently marked with: (1) The weight of the container when empty, in pounds; (2) The maximum cargo weight... maximum cargo weight, in pounds. (b) Container weight. No container shall be hoisted by any lifting...
29 CFR 1918.85 - Containerized cargo operations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Containerized cargo operations. (a) Container markings. Every intermodal container shall be legibly and permanently marked with: (1) The weight of the container when empty, in pounds; (2) The maximum cargo weight... maximum cargo weight, in pounds. (b) Container weight. No container shall be hoisted by any lifting...
29 CFR 1918.85 - Containerized cargo operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Containerized cargo operations. (a) Container markings. Every intermodal container shall be legibly and permanently marked with: (1) The weight of the container when empty, in pounds; (2) The maximum cargo weight... maximum cargo weight, in pounds. (b) Container weight. No container shall be hoisted by any lifting...
30 CFR 56.19062 - Maximum acceleration and deceleration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum acceleration and deceleration. 56.19062 Section 56.19062 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19062 Maximum acceleration and deceleration. Maximum normal operating...
30 CFR 57.19062 - Maximum acceleration and deceleration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum acceleration and deceleration. 57.19062 Section 57.19062 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19062 Maximum acceleration and deceleration. Maximum normal operating...
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
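In the spirit of the simple simulated example described above, the sketch below performs output-error maximum likelihood estimation for a toy first-order linear model: with known Gaussian measurement noise, the ML cost reduces to a sum of squared residuals between measured and model-predicted responses. The model, input, and parameter values are assumptions of this sketch, not the paper's aircraft equations.

```python
# Sketch: output-error ML estimation for a toy first-order model x' = a*x + b*u,
# standing in for stability/control-derivative extraction from response data.
# With known Gaussian noise, the ML cost is a sum of squared residuals.
import numpy as np
from scipy.optimize import minimize

dt, n = 0.02, 500
u = np.sign(np.sin(0.5 * np.arange(n) * dt))            # doublet-like control input

def simulate(a, b):
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])     # Euler integration
    return x

rng = np.random.default_rng(2)
z = simulate(-1.5, 3.0) + 0.05 * rng.standard_normal(n)  # synthetic "flight data"

def cost(theta):                                         # ML cost (known noise variance)
    return np.sum((z - simulate(*theta)) ** 2)

print(minimize(cost, x0=[-0.5, 1.0], method="Nelder-Mead").x)   # approaches [-1.5, 3.0]
```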
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they require landing to refuel. Development effort has been directed to allow UAVs to automatically refuel in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its accuracy limit, and disturbance factors on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is too large to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout a widely varying operating range and conditions.
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
NASA Astrophysics Data System (ADS)
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
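As a drastically simplified illustration of the estimator-risk comparison described above, the sketch below checks by Monte Carlo that the variance of the maximum likelihood estimator of a single Poisson rate approaches the Cramér-Rao bound λ/n. The full NV model (a biased coin obstructed by three Poisson rates) is not reproduced, and all numbers are hypothetical.

```python
# Sketch: Monte Carlo check that the risk (variance) of the ML estimator of a
# single Poisson rate matches the Cramér-Rao bound lam/n. This is a drastic
# simplification of the NV photon-count model described above.
import numpy as np

rng = np.random.default_rng(3)
lam, n_shots, n_trials = 4.2, 200, 20000
counts = rng.poisson(lam, size=(n_trials, n_shots))
lam_mle = counts.mean(axis=1)                 # MLE of a Poisson rate = sample mean

print("empirical MLE variance:", lam_mle.var())
print("Cramer-Rao bound lam/n :", lam / n_shots)
```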
NASA Astrophysics Data System (ADS)
Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai
2018-02-01
Accurate battery pack state of health (SOH) estimation is important for characterizing the dynamic responses of a battery pack and ensuring that the battery operates safely and reliably. However, differences in discharge/charge characteristics and working conditions among the cells of a pack make battery pack SOH estimation difficult. In this paper, the battery pack SOH is defined as the change of the battery pack's maximum energy storage. It contains all the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and cell inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm is applied to battery pack model parameter identification. Based on the results, a particle filter is employed for battery SOC and OCV estimation to mitigate the influence of noise in battery terminal voltage measurements and current drift. Moreover, a recursive least squares method is used to update the cells' capacities. Finally, the proposed method is verified with New European Driving Cycle and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy under actual operation. In addition, the factors affecting the change of SOH are analyzed.
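The capacity-updating step mentioned above can be illustrated with a scalar recursive least squares (RLS) update, assuming the regression model "accumulated charge over a window = capacity × SOC change over that window". This is only a sketch of that one step; the PSO-GA identification and particle-filter SOC/OCV estimation are not reproduced, and all values are hypothetical.

```python
# Sketch: scalar forgetting-factor RLS update of a cell capacity estimate from
# (SOC change, accumulated charge) pairs. The paper's PSO-GA and particle-filter
# machinery are not reproduced; the data below are synthetic.
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One RLS step for the scalar model y = theta * x."""
    K = P * x / (lam + x * P * x)
    theta = theta + K * (y - theta * x)
    P = (P - K * x * P) / lam
    return theta, P

rng = np.random.default_rng(4)
true_capacity = 2.6                           # Ah, hypothetical cell
theta, P = 3.0, 1e3                           # initial guess and covariance
for _ in range(200):
    d_soc = rng.uniform(0.05, 0.3)            # SOC change over a window
    charge = true_capacity * d_soc + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, d_soc, charge)
print(theta)                                  # converges toward ~2.6 Ah
```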
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J. V.
Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replicant of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a "chirp-like pulse" into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
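The basic matched-filter operation described above, cross-correlating a chirp replica with received data and reading the peak lag as the echo delay, is sketched below. The model-based processing against broadcast interference is not reproduced; the sampling rate, chirp parameters, and delay are made-up values.

```python
# Sketch: matched-filter (replica correlation) detection of a chirp and
# estimation of its delay, the basic radar/sonar operation described above.
import numpy as np
from scipy.signal import chirp, correlate

fs = 10_000.0
t = np.arange(0, 0.1, 1 / fs)
replica = chirp(t, f0=500, f1=2500, t1=t[-1], method="linear")

true_delay = 0.237                                     # seconds (hypothetical)
rx = np.zeros(int(0.5 * fs))
i0 = int(true_delay * fs)
rx[i0:i0 + replica.size] += replica                    # embed the echo
rx += 0.5 * np.random.default_rng(5).standard_normal(rx.size)   # measurement noise

xc = correlate(rx, replica, mode="full")
lag = np.argmax(np.abs(xc)) - (replica.size - 1)       # delay in samples
print(lag / fs)                                        # ~0.237 s
```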
Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.
Ries, Kernell G.
1999-01-01
A network of 148 low-flow partial-record stations was operated on streams in Massachusetts during the summers of 1989 through 1996. Streamflow measurements (including historical measurements), measured basin characteristics, and estimated streamflow statistics are provided in the report for each low-flow partial-record station. Also included for each station are location information, streamflow-gaging stations for which flows were correlated to those at the low-flow partial-record station, years of operation, and remarks indicating human influences on streamflows at the station. Three or four streamflow measurements were made each year for three years during times of low flow to obtain nine or ten measurements for each station. Measured flows at the low-flow partial-record stations were correlated with same-day mean flows at a nearby gaging station to estimate streamflow statistics for the low-flow partial-record stations. The estimated streamflow statistics include the 99-, 98-, 97-, 95-, 93-, 90-, 85-, 80-, 75-, 70-, 65-, 60-, 55-, and 50-percent duration flows; the 7-day, 10-year and 7-day, 2-year low flows; and the August median flow. Characteristics of the drainage basins for the stations that theoretically relate to the response of the station to climatic variations were measured from digital map data by use of an automated geographic information system procedure. Basin characteristics measured include drainage area; total stream length; mean basin slope; area of surficial stratified drift; area of wetlands; area of water bodies; and mean, maximum, and minimum basin elevation. Station descriptions and calculated streamflow statistics are also included in the report for the 50 continuous gaging stations used in correlations with the low-flow partial-record stations.
Psychometric Properties of IRT Proficiency Estimates
ERIC Educational Resources Information Center
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.
2011-01-01
Surface air temperature (T_a) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T_a from satellite remotely sensed land surface temperature (T_s) by using MODIS-Terra data over two Eurasia regions: northern China and fUSSR. High correlations are observed in both regions between station-measured T_a and MODIS T_s. The relationships between the maximum T_a and daytime T_s depend significantly on land cover types, but the minimum T_a and nighttime T_s have little dependence on the land cover types. The largest difference between maximum T_a and daytime T_s appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T_a were estimated from 1 km resolution MODIS T_s under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T_a were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T_a varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum T_a is about 3.0 °C.
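The approach described above amounts to fitting land-cover-specific linear regressions of maximum T_a on daytime T_s and reporting MAE as the skill measure. The sketch below illustrates that workflow on synthetic numbers; the land-cover classes, offsets, and noise level are assumptions, not MODIS or station values.

```python
# Sketch: land-cover-specific linear regression of daily maximum air temperature
# (Ta) on daytime land surface temperature (Ts), with MAE as the skill measure.
# All data below are synthetic, not MODIS/station values.
import numpy as np

rng = np.random.default_rng(6)
land_cover = rng.choice(["grassland", "barren", "shrubland"], size=300)
ts = rng.uniform(5, 45, size=300)                        # daytime LST, deg C
offset = {"grassland": -3.0, "barren": -8.0, "shrubland": -2.0}
ta = ts + np.array([offset[c] for c in land_cover]) + rng.normal(0, 2.5, 300)

for cover in np.unique(land_cover):
    m = land_cover == cover
    slope, intercept = np.polyfit(ts[m], ta[m], 1)       # per-class coefficients
    mae = np.mean(np.abs(ta[m] - (slope * ts[m] + intercept)))
    print(f"{cover:10s} slope={slope:.2f} intercept={intercept:.2f} MAE={mae:.2f} C")
```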
NASA Technical Reports Server (NTRS)
Soulas, George C.
2013-01-01
A study was conducted to quantify the impact of back-sputtered carbon on the downstream accelerator grid erosion rates of the NEXT (NASA's Evolutionary Xenon Thruster) Long Duration Test (LDT1). A similar analysis that was conducted for the NSTAR (NASA's Solar Electric Propulsion Technology Applications Readiness Program) Life Demonstration Test (LDT2) was used as a foundation for the analysis developed herein. A new carbon surface coverage model was developed that accounted for multiple carbon adlayers before complete surface coverage is achieved. The resulting model requires knowledge of more model inputs, so they were conservatively estimated using the results of past thin film sputtering studies and particle reflection predictions. In addition, accelerator current densities across the grid were rigorously determined using an ion optics code to determine accelerator current distributions and an algorithm to determine beam current densities along a grid using downstream measurements. The improved analysis was applied to the NSTAR test results for evaluation. The improved analysis demonstrated that the impact of back-sputtered carbon on pit and groove wear rate for the NSTAR LDT2 was negligible throughout most of eroded grid radius. The improved analysis also predicted the accelerator current density for transition from net erosion to net deposition considerably more accurately than the original analysis. The improved analysis was used to estimate the impact of back-sputtered carbon on the accelerator grid pit and groove wear rate of the NEXT Long Duration Test (LDT1). Unlike the NSTAR analysis, the NEXT analysis was more challenging because the thruster was operated for extended durations at various operating conditions and was unavailable for measurements because the test is ongoing. As a result, the NEXT LDT1 estimates presented herein are considered preliminary until the results of future posttest analyses are incorporated. The worst-case impact of carbon back-sputtering was determined to be the full power operating condition, but the maximum impact of back-sputtered carbon was only a four percent reduction in wear rate. As a result, back-sputtered carbon is estimated to have an insignificant impact on the first failure mode of the NEXT LDT at all operating conditions.
Venus Surface Power and Cooling System Design
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Mellott, Kenneth D.
2004-01-01
A radioisotope power and cooling system is designed to provide electrical power for a probe operating on the surface of Venus. Most foreseeable electronic devices and sensors simply cannot operate at the 450 C ambient surface temperature of Venus. Because the mission duration is substantially long and the use of thermal mass to maintain an operable temperature range is likely impractical, some type of active refrigeration may be required to keep certain components at a temperature below ambient. The fundamental cooling requirements consist of the cold sink temperature, the hot sink temperature, and the amount of heat to be removed. In this instance, it is anticipated that electronics would have a nominal operating temperature of 300 C. Due to the highly convective nature of the high-density atmosphere, the hot sink was assumed to be 50 C above ambient, which put the cooler's heat rejecter at 500 C relative to the ambient atmosphere. The majority of the heat load on the cooler is from the high temperature ambient surface environment on Venus. Assuming 5 cm radial thickness of ceramic blanket insulation, the ambient heat load was estimated at approximately 77 watts. With an estimated quantity of 10 watts of heat generation from electronics and sensors, and to accommodate some level of uncertainty, the total heat load requirement was rounded up to an even 100 watts. For the radioisotope Stirling power converter configuration designed, the Sage model predicts a thermodynamic power output capacity of 478.1 watts, which slightly exceeds the required 469.1 watts. The hot sink temperature is 1200 C, and the cold sink temperature is 500 C. The required heat input is 1740 watts. This gives a thermodynamic efficiency of 27.48 %. The maximum theoretically obtainable efficiency is 47.52 %. It is estimated that the mechanical efficiency of the power converter design is on the order of 85 %, based on experimental measurements taken from 500 watt power class, laboratory-tested Stirling engines at GRC. The overall efficiency is calculated to be 23.36 %. The mass of the power converter is estimated at approximately 21.6 kg.
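The quoted efficiencies are mutually consistent, as the check below shows (temperatures converted to kelvin; the 85 % mechanical efficiency is the paper's estimate, taken here as given).

```latex
% Consistency check of the quoted efficiencies
\eta_{\mathrm{thermo}} = \frac{P_{\mathrm{out}}}{\dot{Q}_{\mathrm{in}}}
  = \frac{478.1\,\mathrm{W}}{1740\,\mathrm{W}} \approx 27.5\,\%, \qquad
\eta_{\mathrm{Carnot}} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}}
  = 1 - \frac{773\,\mathrm{K}}{1473\,\mathrm{K}} \approx 47.5\,\%, \qquad
\eta_{\mathrm{overall}} = \eta_{\mathrm{thermo}} \times \eta_{\mathrm{mech}}
  \approx 0.275 \times 0.85 \approx 23.4\,\%.
```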
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
40 CFR 57.203 - Contents of the application.
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission of sulfur dioxide; the characteristics of all gas streams emitted from the smelter's process...'s maximum daily production capacity (as defined in § 57.103(r)), the operational rate (in pounds of... smelter is operating at that capacity; and the smelter's average and maximum daily production rate for...
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximizing the likelihood function subject to a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives estimation that is more accurate or at least comparable, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. PMID:26059498
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Y. M.; College of Physics and Technology, Guangxi Normal University, Guilin, GuangXi; Chen, L. Y.
2014-05-07
A remarkable magnetostriction λ_111 as large as 6700 ppm was found at 70 K in the PrFe_1.9 alloy. This value is even larger than the theoretical maximum of 5600 ppm estimated by the Stevens equivalent operator method. The temperature dependence of λ_111 for the PrFe_1.9 and TbFe_2 alloys follows the single-ion theory rule well, which yields giant estimated λ_111 values of about 8000 and 4200 ppm for the PrFe_1.9 and TbFe_2 alloys, respectively, at 0 K. The easy magnetization direction of PrFe_1.9 changes from [111] to [100] as temperature decreases, which leads to the anomalous decrease of the magnetostriction λ. The rare earth sublattice moment increases sharply in the PrFe_1.9 alloy with decreasing temperature, resulting in the largest estimated value of λ_111 at 0 K according to the single-ion theory.
Estimating potency for the Emax-model without attaining maximal effects.
Schoemaker, R C; van Gerven, J M; Cohen, A F
1998-10-01
The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum, because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters of the Emax model can only be estimated with reasonable precision if the curve shows signs of reaching a maximum; otherwise both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0), equal to Emax/EC50, that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter, and published examples are used as illustration.
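The sketch below illustrates the point: when the sampled concentrations stay well below EC50, the individual Emax and EC50 estimates are poorly determined, but their ratio S0 = Emax/EC50, which equals the initial slope dE/dC at C = 0, remains stable. The concentrations, true parameters, and noise level are made-up values.

```python
# Sketch: fitting the Emax model E = Emax*C/(EC50 + C) to data that never
# approach a maximum, and reporting S0 = Emax/EC50 (the initial slope), which
# stays well determined even when Emax and EC50 individually do not.
import numpy as np
from scipy.optimize import curve_fit

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

rng = np.random.default_rng(7)
conc = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])        # well below a hypothetical EC50 of 20
effect = emax_model(conc, 100.0, 20.0) + rng.normal(0, 0.5, conc.size)

(emax_hat, ec50_hat), _ = curve_fit(emax_model, conc, effect, p0=[50.0, 5.0], maxfev=10000)
print("Emax:", emax_hat, "EC50:", ec50_hat)             # individually imprecise
print("S0 = Emax/EC50:", emax_hat / ec50_hat)           # close to 100/20 = 5
```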
[Estimation of Maximum Entrance Skin Dose during Cerebral Angiography].
Kawauchi, Satoru; Moritake, Takashi; Hayakawa, Mikito; Hamada, Yusuke; Sakuma, Hideyuki; Yoda, Shogo; Satoh, Masayuki; Sun, Lue; Koguchi, Yasuhiro; Akahane, Keiichi; Chida, Koichi; Matsumaru, Yuji
2015-09-01
Using radio-photoluminescence glass dosimeter, we measured the entrance skin dose (ESD) in 46 cases and analyzed the correlations between maximum ESD and angiographic parameters [total fluoroscopic time (TFT); number of digital subtraction angiography (DSA) frames, air kerma at the interventional reference point (AK), and dose-area product (DAP)] to estimate the maximum ESD in real time. Mean (± standard deviation) maximum ESD, dose of the right lens, and dose of the left lens were 431.2 ± 135.8 mGy, 33.6 ± 15.5 mGy, and 58.5 ± 35.0 mGy, respectively. Correlation coefficients (r) between maximum ESD and TFT, number of DSA frames, AK, and DAP were r=0.379 (P<0.01), r=0.702 (P<0.001), r=0.825 (P<0.001), and r=0.709 (P<0.001), respectively. AK was identified as the most useful parameter for real-time prediction of maximum ESD. This study should contribute to the development of new diagnostic reference levels in our country.
The maximum economic depth of groundwater abstraction for irrigation
NASA Astrophysics Data System (ADS)
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of maximum economic depth will be combined with estimates of groundwater depth and storage coefficients to estimate economically attainable groundwater volumes worldwide.
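The revenue-versus-cost comparison that defines the maximum economic depth can be illustrated with a toy model: annual pumping energy plus amortized drilling cost is compared with the annual revenue attributable to irrigation, and the deepest break-even depth is reported. Every parameter value below (crop revenue, energy price, drilling cost, pump efficiency, irrigation volume) is a hypothetical placeholder, not one of the study's calibrated inputs.

```python
# Sketch: toy revenue-vs-pumping-cost comparison behind a "maximum economic
# abstraction depth". All parameter values are hypothetical placeholders.
import numpy as np

RHO_G = 1000.0 * 9.81            # water density * gravity, J per m3 per m of lift
EFFICIENCY = 0.5                 # pump + well efficiency
ENERGY_PRICE = 0.10 / 3.6e6      # $/J (i.e. 0.10 $/kWh)
DRILL_COST_PER_M = 100.0         # $ per metre of well
AMORTIZATION = 20.0              # years over which drilling cost is spread
IRRIGATION = 5000.0              # m3 of groundwater per hectare per year
REVENUE = 1500.0                 # $ per hectare per year attributable to irrigation

def annual_cost(depth_m):
    energy = RHO_G * depth_m * IRRIGATION / EFFICIENCY        # J/yr to lift the water
    return energy * ENERGY_PRICE + DRILL_COST_PER_M * depth_m / AMORTIZATION

def max_economic_depth():
    depths = np.arange(1.0, 2000.0, 1.0)
    affordable = annual_cost(depths) <= REVENUE
    return depths[affordable][-1] if affordable.any() else 0.0

print(max_economic_depth(), "m")   # on the order of a few hundred metres here
```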
NASA Technical Reports Server (NTRS)
Cook, Harvey A; Heinicke, Orville H; Haynie, William H
1947-01-01
An investigation was conducted on a full-scale air-cooled cylinder in order to establish an effective means of maintaining maximum-economy spark timing with varying engine operating conditions. Variable fuel-air-ratio runs were conducted in which relations were determined between the spark travel, and cylinder-pressure rise. An instrument for controlling spark timing was developed that automatically maintained maximum-economy spark timing with varying engine operating conditions. The instrument also indicated the occurrence of preignition.
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
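A minimal sketch of maximum pseudolikelihood estimation (MPLE) for a fully visible Boltzmann machine over spins in {-1, +1}, the estimator whose asymptotic normality this entry proves. The parameterization (symmetric coupling matrix with zero diagonal, bias vector) and the toy data are assumptions made for illustration only.

```python
# MPLE for a fully visible Boltzmann machine: maximize the sum over units of
# log p(x_i | x_-i) = log sigma(2 * x_i * (b_i + (W x)_i)).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(500, 4))   # toy data: 500 samples, 4 visible units
d = X.shape[1]

def unpack(theta):
    b = theta[:d]
    W = np.zeros((d, d))
    iu = np.triu_indices(d, k=1)
    W[iu] = theta[d:]
    return b, W + W.T                          # symmetric couplings, zero diagonal

def neg_log_pseudolikelihood(theta):
    b, W = unpack(theta)
    field = b + X @ W                          # local field at each unit
    # -log sigma(2 * x_i * field_i), written stably via logaddexp
    return np.sum(np.logaddexp(0.0, -2.0 * X * field))

theta0 = np.zeros(d + d * (d - 1) // 2)
res = minimize(neg_log_pseudolikelihood, theta0, method="L-BFGS-B")
b_hat, W_hat = unpack(res.x)
print(W_hat)
```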
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
Effects of time-shifted data on flight determined stability and control derivatives
NASA Technical Reports Server (NTRS)
Steers, S. T.; Iliff, K. W.
1975-01-01
Flight data were shifted in time by various increments to assess the effects of time shifts on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there was a considerable time shift in the data. Time shifts degraded the estimates of the derivatives, but the degradation was in a consistent rather than a random pattern. Time shifts in the control variables caused the most degradation, and the lateral-directional rotary derivatives were affected the most by time shifts in any variable.
Weng, H Y; Yadav, S; Olynk Widmar, N J; Croney, C; Ash, M; Cooper, M
2017-03-01
A stochastic risk model was developed to estimate the time elapsed before overcrowding (TOC) or feed interruption (TFI) emerged on the swine premises under movement restrictions during a classical swine fever (CSF) outbreak in Indiana, USA. Nursery (19 to 65 days of age) and grow-to-finish (40 to 165 days of age) pork production operations were modelled separately. Overcrowding was defined as the total weight of pigs on premises exceeding 100% to 115% of the maximum capacity of the premises, which was computed as the total weight of the pigs at harvest/transition age. Algorithms were developed to estimate age-specific weight of the pigs on premises and to compare the daily total weight of the pigs with the threshold weight defining overcrowding to flag the time when the total weight exceeded the threshold (i.e. when overcrowding occurred). To estimate TFI, an algorithm was constructed to model a swine producer's decision to discontinue feed supply by incorporating the assumptions that a longer estimated epidemic duration, a longer time interval between the age of pigs at the onset of the outbreak and the harvest/transition age, or a longer progression of an ongoing outbreak would increase the probability of a producer's decision to discontinue the feed supply. Adverse animal welfare conditions were modelled to emerge shortly after an interruption of feed supply. Simulations were run with 100 000 iterations each for a 365-day period. Overcrowding occurred in all simulated iterations, and feed interruption occurred in 30% of the iterations. The median (5th and 95th percentiles) TOC was 24 days (10, 43) in nursery operations and 78 days (26, 134) in grow-to-finish operations. Most feed interruptions, if they emerged, occurred within 15 days of an outbreak. The median (5th and 95th percentiles) time at which either overcrowding or feed interruption emerged was 19 days (4, 42) in nursery and 57 days (4, 130) in grow-to-finish operations. The study findings suggest that overcrowding and feed interruption could emerge early during a CSF outbreak among swine premises under movement restrictions. The outputs derived from the risk model could be used to estimate and evaluate associated mitigation strategies for alleviating adverse animal welfare conditions resulting from movement restrictions.
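A toy Monte Carlo sketch of the time-to-overcrowding (TOC) idea described above: grow the total pig weight on a premises each day and record the first day the total exceeds a capacity threshold. Growth rates, starting weights, herd size, and the capacity multiplier are illustrative assumptions, not the study's inputs.

```python
# Simulate days until total on-premises pig weight exceeds the capacity threshold.
import numpy as np

rng = np.random.default_rng(1)

def simulate_toc(n_iter=10_000, n_pigs=1000, start_kg=30.0, finish_kg=120.0,
                 daily_gain_kg=(0.7, 0.9), capacity_mult=(1.00, 1.15)):
    toc_days = np.empty(n_iter)
    capacity = n_pigs * finish_kg * rng.uniform(*capacity_mult, size=n_iter)
    gain = rng.uniform(*daily_gain_kg, size=n_iter)
    for k in range(n_iter):
        day, total = 0, n_pigs * start_kg
        while total < capacity[k]:
            day += 1
            total += n_pigs * gain[k]          # all pigs gain weight each day
        toc_days[k] = day
    return toc_days

toc = simulate_toc()
print(np.percentile(toc, [5, 50, 95]))         # 5th, median, 95th percentile TOC [days]
```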
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel S.; Linte, Cristian; Chen, Elvis C. S.
Purpose: Although robot-assisted coronary artery bypass grafting (RA-CABG) has gained more acceptance worldwide, its success still depends on the surgeon's experience and expertise, and the conversion rate to full sternotomy is on the order of 15%-25%. One of the reasons for conversion is poor pre-operative planning, which is based solely on pre-operative computed tomography (CT) images. In this paper, the authors propose a technique to estimate the global peri-operative displacement of the heart and to predict the intra-operative target vessel location, validated via both an in vitro and a clinical study. Methods: As peri-operative heart migration during RA-CABG has never been reported in the literature, a simple in vitro validation study was conducted using a heart phantom. To mimic the clinical workflow, a pre-operative CT as well as peri-operative ultrasound images at three different stages in the procedure (Stage 0, following intubation; Stage 1, following lung deflation; and Stage 2, following thoracic insufflation) were acquired during the experiment. Following image acquisition, a rigid-body registration using the iterative closest point algorithm with a robust estimator was employed to map the pre-operative stage to each of the peri-operative ones, to estimate the heart migration and predict the peri-operative target vessel location. Moreover, a clinical validation of this technique was conducted using offline patient data, where a Monte Carlo simulation was used to overcome the limitations arising due to the invisibility of the target vessel in the peri-operative ultrasound images. Results: For the in vitro study, the computed target registration error (TRE) at Stage 0, Stage 1, and Stage 2 was 2.1, 3.3, and 2.6 mm, respectively. According to the offline clinical validation study, the maximum TRE at the left anterior descending (LAD) coronary artery was 4.1 mm at Stage 0, 5.1 mm at Stage 1, and 3.4 mm at Stage 2. Conclusions: The authors proposed a method to measure and validate peri-operative shifts of the heart during RA-CABG. In vitro and clinical validation studies were conducted and yielded a TRE on the order of 5 mm for all cases. As the desired clinical accuracy imposed by this procedure is on the order of one intercostal space (10-15 mm), the technique meets the clinical requirements. The authors therefore believe this technique has the potential to improve pre-operative planning by updating peri-operative migration patterns of the heart and, consequently, will lead to reduced conversion to conventional open thoracic procedures.
[The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].
Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R
1996-02-01
To determine whether the maximum heart rate in exercise testing of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals undergoing cardiac check-up. Sequential sampling of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied with the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, against the values estimated by the 220-age formula versus the Sheffield table. The maximum heart rate was similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
Virtual estimates of fastening strength for pedicle screw implantation procedures
NASA Astrophysics Data System (ADS)
Linte, Cristian A.; Camp, Jon J.; Augustine, Kurt E.; Huddleston, Paul M.; Robb, Richard A.; Holmes, David R.
2014-03-01
Traditional 2D images provide limited use for accurate planning of spine interventions, mainly due to the complex 3D anatomy of the spine and the close proximity of nerve bundles and vascular structures that must be avoided during the procedure. Our previously developed clinician-friendly platform for spine surgery planning takes advantage of 3D pre-operative images to enable oblique reformatting and 3D rendering of individual or multiple vertebrae, interactive templating, and placement of virtual pedicle implants. Here we extend the capabilities of the planning platform and demonstrate how the virtual templating approach not only assists with the selection of the optimal implant size and trajectory, but can also be augmented to provide surrogate estimates of the fastening strength of the implanted pedicle screws based on implant dimension and bone mineral density of the displaced bone substrate. According to the failure theories, each screw withstands a maximum holding power that is directly proportional to the screw diameter (D), the length of the in-bone segment of the screw (L), and the density (i.e., bone mineral density) of the pedicle body. In this application, voxel intensity is used as a surrogate measure of the bone mineral density (BMD) of the pedicle body segment displaced by the screw. We conducted an initial assessment of the developed platform using retrospective pre- and post-operative clinical 3D CT data from four patients who underwent spine surgery, consisting of a total of 26 pedicle screws implanted in the lumbar spine. The Fastening Strength of the planned implants was directly assessed by estimating the intensity-area product across the pedicle volume displaced by the virtually implanted screw. For post-operative assessment, each vertebra was registered to its homologous counterpart in the pre-operative image using an intensity-based rigid registration followed by manual adjustment. Following registration, the Fastening Strength was computed for each displaced bone segment. According to our preliminary clinical study, a comparison between Fastening Strength, displaced bone volume, and mean voxel intensity showed similar results (p < 0.1) between the virtually templated plans and the post-operative outcome following the traditional clinical approach. This study has demonstrated the feasibility of the platform in providing estimates of the pedicle screw fastening strength via virtual implantation, given the intrinsic vertebral geometry and bone mineral density, enabling the selection of the optimal implant dimension and trajectory for improved strength.
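A sketch of the fastening-strength surrogate idea the entry describes: sum voxel intensities (standing in for bone mineral density) inside the cylindrical volume displaced by a virtually implanted screw of diameter D and in-bone length L. The synthetic CT volume, screw pose, and the specific intensity-volume integral are assumptions, not the platform's implementation.

```python
# Surrogate fastening strength: integrated intensity over the displaced cylinder.
import numpy as np

def fastening_strength(ct, entry, axis, diameter_mm, length_mm, voxel_mm=1.0):
    """Sum intensity over a cylinder of radius D/2 and length L along the screw axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    zz, yy, xx = np.indices(ct.shape)
    pts = np.stack([zz, yy, xx], axis=-1) * voxel_mm - np.asarray(entry, float)
    along = pts @ axis                                    # distance along the screw axis
    radial = np.linalg.norm(pts - np.outer(along, axis).reshape(pts.shape), axis=-1)
    inside = (along >= 0) & (along <= length_mm) & (radial <= diameter_mm / 2)
    return ct[inside].sum() * voxel_mm**3                 # intensity-volume product

ct = np.random.default_rng(2).integers(100, 900, size=(60, 60, 60)).astype(float)
print(fastening_strength(ct, entry=(30, 30, 10), axis=(0, 0, 1),
                         diameter_mm=6.5, length_mm=40.0))
```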
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe
1989-01-01
PROFIT-PC is a menu driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2 inches diameter at breast-height class, delivered product prices by species and product...
Minimax estimation of qubit states with Bures risk
NASA Astrophysics Data System (ADS)
Acharya, Anirudh; Guţă, Mădălin
2018-04-01
The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity-based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/√n for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the 'standard' rate n⁻¹.
Estimating landscape carrying capacity through maximum clique analysis
Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.
2012-01-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
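A minimal sketch of the maximum-clique idea behind this entry: pseudo-home-range centers become graph nodes, two nodes are linked when both home ranges can coexist (here, when centers are at least one home-range diameter apart), and the maximum clique size is the carrying-capacity estimate N(k). The study used the program Cliquer; this sketch swaps in networkx instead, and the point coordinates and spacing rule are illustrative assumptions.

```python
# Carrying capacity via the maximum clique of a "compatible territories" graph.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(0, 10, size=(30, 2))      # candidate home-range centers [km]
home_range_diam = 2.0                           # assumed territory diameter [km]

G = nx.Graph()
G.add_nodes_from(range(len(points)))
for i, j in itertools.combinations(range(len(points)), 2):
    if np.linalg.norm(points[i] - points[j]) >= home_range_diam:
        G.add_edge(i, j)                        # the two territories can coexist

clique, size = nx.max_weight_clique(G, weight=None)   # exact maximum-cardinality clique
print("Estimated carrying capacity N(k):", size)
```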
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the nonlinear state estimation problem. However, the UKF usually performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
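A sketch of the maximum correntropy criterion ingredient the filter relies on: residuals are weighted by a Gaussian kernel, so heavy-tailed outliers get little influence. This illustrates only the weighting, not the full MCUKF recursion; the kernel bandwidth and the sample residuals are assumptions.

```python
# Correntropy-induced weights: exp(-e^2 / (2 sigma^2)) downweights impulsive outliers.
import numpy as np

def mcc_weights(residuals, sigma=2.0):
    """Gaussian-kernel weight for each residual under the MCC."""
    e = np.asarray(residuals, float)
    return np.exp(-e**2 / (2.0 * sigma**2))

residuals = np.array([0.3, -0.5, 0.1, 8.0])     # last entry mimics an impulsive outlier
print(mcc_weights(residuals))                    # outlier weight is nearly zero
```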
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, which are published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
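A sketch of how a fitted logistic-regression equation of the kind this report tabulates would be evaluated: the probability that a summer flow falls below a drought threshold, given a winter streamflow predictor. The coefficient values here are placeholders, not the published equations.

```python
# Evaluate a logistic model P(summer flow < threshold | winter flow).
import numpy as np

def drought_probability(winter_flow_cfs, b0=2.5, b1=-0.012):
    """P = 1 / (1 + exp(-(b0 + b1 * winter flow))); b0, b1 are hypothetical."""
    z = b0 + b1 * np.asarray(winter_flow_cfs, float)
    return 1.0 / (1.0 + np.exp(-z))

print(drought_probability([50, 150, 400]))      # probability drops as winter flow rises
```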
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models inevitably include some assumptions about the characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
Estimation of Compaction Parameters Based on Soil Classification
NASA Astrophysics Data System (ADS)
Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.
2018-02-01
Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and availability of funds. These problems raised the idea of estimating soil density with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e., the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of the 30 samples was tested for its index properties and compaction characteristics. All of the laboratory test results were used to estimate the compaction parameter values using linear regression and the Goswami model. The soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the equation for estimating the maximum dry unit weight is γdmax* = 1.862 - 0.005*FINES - 0.003*LL, and the equation for estimating the optimum water content is wopt* = -0.607 + 0.362*FINES + 0.161*LL. By the Goswami model (with equation Y = m*log G + k), the maximum dry unit weight (γdmax*) is estimated with m = -0.376 and k = 2.482, and the optimum water content (wopt*) with m = 21.265 and k = -32.421. For both of these equations a 95% confidence interval was obtained.
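A direct transcription of the two linear-regression equations quoted above into a small helper, so an estimate can be read off from the fines content and liquid limit. The coefficients are the entry's; treating FINES and LL as percentages and γdmax as g/cm³ are assumptions about its units.

```python
# Evaluate the entry's regression equations for compaction parameters.
def compaction_estimates(fines_pct, ll_pct):
    gamma_dmax = 1.862 - 0.005 * fines_pct - 0.003 * ll_pct   # assumed [g/cm^3]
    w_opt = -0.607 + 0.362 * fines_pct + 0.161 * ll_pct       # assumed [%]
    return gamma_dmax, w_opt

print(compaction_estimates(fines_pct=45.0, ll_pct=35.0))
```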
Merged and corrected 915 MHz Radar Wind Profiler moments
Jonathan Helmus, Virendra Ghate, Frederic Tridon
2014-06-25
The radar wind profiler (RWP) at the SGP central facility operates at 915 MHz and was reconfigured in early 2011 to collect key sets of measurements for precipitation and boundary layer studies. The RWP is configured to run in two main operating modes: a precipitation (PR) mode with frequent vertical observations and a boundary layer (BL) mode that is similar to what has been traditionally applied to RWPs. To address issues regarding saturation of the radar signal, range resolution, and maximum range, the RWP PR mode is set to operate with two different pulse lengths, termed short pulse (SP) and long pulse (LP). Please refer to the RWP handbook (Coulter, 2012) for further information. Data from the RWP PR-SP and PR-LP modes have been extensively used to study deep precipitating clouds, especially their dynamical structure, as the RWP data do not suffer from signal attenuation during these conditions (Giangrande et al., 2013). Tridon et al. (2013) used the data collected during the Mid-latitude Continental Convective Cloud Experiment (MC3E) to improve the estimation of the noise floor of the RWP-recorded Doppler spectra.
Incorporation of Half-Cycle Theory Into Ko Aging Theory for Aerostructural Flight-Life Predictions
NASA Technical Reports Server (NTRS)
Ko, William L.; Tran, Van T.; Chen, Tony
2007-01-01
The half-cycle crack growth theory was incorporated into the Ko closed-form aging theory to improve accuracy in the predictions of operational flight life of failure-critical aerostructural components. A new crack growth computer program was written for reading the maximum and minimum loads of each half-cycle from the random loading spectra for crack growth calculations and generation of in-flight crack growth curves. The unified theories were then applied to calculate the number of flights (operational life) permitted for B-52B pylon hooks and Pegasus adapter pylon hooks to carry the Hyper-X launching vehicle that air launches the X-43 Hyper-X research vehicle. A crack growth curve for each hook was generated for visual observation of the crack growth behavior during the entire air-launching or captive flight. It was found that taxiing and the takeoff run induced a major portion of the total crack growth per flight. The operational life theory presented can be applied to estimate the service life of any failure-critical structural components.
REopt: A Platform for Energy System Integration and Optimization: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpkins, T.; Cutler, D.; Anderson, K.
2014-08-01
REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally sized mix of conventional energy, renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
Operating characteristics of a new ion source for KSTAR neutral beam injection system.
Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul
2014-02-01
A new positive ion source for the Korea Superconducting Tokamak Advanced Research neutral beam injection (KSTAR NBI-1) system was designed, fabricated, and assembled in 2011. The characteristics of the arc discharge and beam extraction were investigated using hydrogen and helium gas to find the optimum operating parameters of the arc power, filament voltage, gas pressure, extracting voltage, accelerating voltage, and decelerating voltage at the neutral beam test stand at the Korea Atomic Energy Research Institute in 2012. Based on the optimum operating conditions, the new ion source was then conditioned, and the performance tests were largely completed. The accelerator system with enlarged apertures can extract a maximum 65 A ion beam with a beam energy of 100 keV. The arc efficiency and the optimum beam perveance, at which the beam divergence is at a minimum, are estimated to be 1.0 A/kW and 2.5 μP, respectively. The beam extraction tests show that the design goal of delivering a 2 MW deuterium neutral beam into the KSTAR Tokamak plasma is achievable.
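A sketch of the beam perveance figure quoted above: perveance is P = I / V^(3/2), usually reported in microperveance (μP) for ion sources. The numbers used below are the abstract's nominal ratings, plugged in only to illustrate the formula.

```python
# Beam perveance P = I / V^(3/2), reported in microperveance.
def perveance_uP(current_A, voltage_V):
    return current_A / voltage_V**1.5 * 1e6    # convert A/V^1.5 to microperveance

print(f"{perveance_uP(65.0, 100e3):.2f} uP")   # ~2 uP for a 65 A, 100 kV beam
```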
Fine and ultrafine particle doses in the respiratory tract from digital printing operations.
Voliotis, Aristeidis; Karali, Irene; Kouras, Athanasios; Samara, Constantini
2017-01-01
In this study, we report for the first time particle number doses in different parts of the human respiratory tract and real-time deposition rates for particles in the 10 nm to 10 μm size range emitted by digital printing operations. Particle number concentrations (PNCs) and size distributions were measured in a typical small-sized printing house using a NanoScan scanning mobility particle sizer and an optical particle sizer. Particle doses in the human lung were estimated by applying a multiple-path particle dosimetry model under two different breathing scenarios. PNC was dominated by the ultrafine particle fractions (UFPs, i.e., particles smaller than 100 nm), exhibiting almost nine times higher levels in comparison to the background values. The average deposition rate for each scenario in the whole lung was estimated at 2.0 × 10⁷ and 2.9 × 10⁷ particles min⁻¹, while the respective highest particle dose in the tracheobronchial tree (2.0 × 10⁹ and 2.9 × 10⁹ particles) was found for a diameter of 50 nm. The majority of particles appeared to deposit in the acinar region, and most of them were in the UFP size range. For both scenarios, the maximum deposition density (9.5 × 10⁷ and 1.5 × 10⁸ particles cm⁻²) was observed at the lobar bronchi. Overall, the differences in the estimated particle doses between the two scenarios were 30-40% for both size ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.
2013-07-01
Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. In addition, the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to try to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimate of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties that has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.
Real-time antenna fault diagnosis experiments at DSS 13
NASA Technical Reports Server (NTRS)
Mellstrom, J.; Pierson, C.; Smyth, P.
1992-01-01
Experimental results obtained when a previously described fault diagnosis system was run online in real time at the 34-m beam waveguide antenna at Deep Space Station (DSS) 13 are described. Experimental conditions and the quality of results are described. A neural network model and a maximum-likelihood Gaussian classifier are compared with and without a Markov component to model temporal context. At the rate of a state update every 6.4 seconds, over a period of roughly 1 hour, the neural-Markov system had zero errors (incorrect state estimates) while monitoring both faulty and normal operations. The overall results indicate that the neural-Markov combination is the most accurate model and has significant practical potential.
The use of Landsat data to inventory cotton and soybean acreage in North Alabama
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Faust, N. L.
1980-01-01
This study was performed to determine if Landsat data could be used to improve the accuracy of the estimation of cotton acreage. A linear classification algorithm and a maximum likelihood algorithm were used for computer classification of the area, and the classification was compared with ground truth. The classification accuracy for some fields was greater than 90 percent; however, the overall accuracy was 71 percent for cotton and 56 percent for soybeans. The results of this research indicate that computer analysis of Landsat data has potential for improving upon the methods presently being used to determine cotton acreage; however, additional experiments and refinements are needed before the method can be used operationally.
Flight data processing with the F-8 adaptive algorithm
NASA Technical Reports Server (NTRS)
Hartmann, G.; Stein, G.; Petersen, K.
1977-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
A Reduced Model for Prediction of Thermal and Rotational Effects on Turbine Tip Clearance
NASA Technical Reports Server (NTRS)
Kypuros, Javier A.; Melcher, Kevin J.
2003-01-01
This paper describes a dynamic model that was developed to predict changes in turbine tip clearance, the radial distance between the end of a turbine blade and the abradable tip seal. The clearance is estimated by using a first-principles approach to model the thermal and mechanical effects of engine operating conditions on the turbine sub-components. These effects are summed to determine the resulting clearance. The model is demonstrated via a ground idle to maximum power transient and a lapse-rate takeoff transient. Results show the model demonstrates the expected pinch point behavior. The paper concludes by identifying knowledge gaps and suggesting additional research to improve the model.
NASA Astrophysics Data System (ADS)
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
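A minimal sketch of the moisture-maximization step at the core of the PMSA methodology described above: scale a storm's precipitation by the ratio of the monthly maximum precipitable water to the precipitable water observed during the storm, optionally capping the ratio. The numeric values and the cap are illustrative assumptions, not the study's inputs.

```python
# Moisture maximization: scale storm precipitation by the precipitable-water ratio.
def maximize_storm(storm_precip_mm, storm_pw_mm, max_monthly_pw_mm, ratio_cap=2.0):
    ratio = min(max_monthly_pw_mm / storm_pw_mm, ratio_cap)   # capped maximization ratio
    return storm_precip_mm * ratio

print(maximize_storm(storm_precip_mm=85.0, storm_pw_mm=18.0, max_monthly_pw_mm=27.0))
```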
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
Beyond SaGMRotI: Conversion to SaArb, SaSN, and SaMaxRot
Watson-Lamprey, J. A.; Boore, D.M.
2007-01-01
In the seismic design of structures, estimates of design forces are usually provided to the engineer in the form of elastic response spectra. Predictive equations for elastic response spectra are derived from empirical recordings of ground motion. The geometric mean of the two orthogonal horizontal components of motion is often used as the response value in these predictive equations, although it is not necessarily the most relevant estimate of forces within the structure. For some applications it is desirable to estimate the response value on a randomly chosen single component of ground motion, and in other applications the maximum response in a single direction is required. We give adjustment factors that allow converting the predictions of geometric-mean ground-motion predictions into either of these other two measures of seismic ground-motion intensity. In addition, we investigate the relation of the strike-normal component of ground motion to the maximum response values. We show that the strike-normal component of ground motion seldom corresponds to the maximum horizontal-component response value (in particular, at distances greater than about 3 km from faults), and that focusing on this case in exclusion of others can result in the underestimation of the maximum component. This research provides estimates of the maximum response value of a single component for all cases, not just near-fault strike-normal components. We provide modification factors that can be used to convert predictions of ground motions in terms of the geometric mean to the maximum spectral acceleration (SaMaxRot) and the random component of spectral acceleration (SaArb). Included are modification factors for both the mean and the aleatory standard deviation of the logarithm of the motions.
NASA Astrophysics Data System (ADS)
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimation, through the change in the moisture-holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation by the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two most severe maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise that are assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% in the middle of the 21st century and by 27.3% at the end of the 21st century compared to the historical period.
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross-ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
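A sketch of the two-stage maximum likelihood idea for a copula model: first fit the margins, then fix them and estimate the dependence parameter. A Clayton copula with exponential margins and simulated, uncensored data are assumptions made for illustration; the thesis treats censored failure times and several model families.

```python
# Two-stage ML: stage 1 fits margins, stage 2 fits the copula dependence parameter.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = rng.exponential(2.0, size=300)
y = 0.5 * x + rng.exponential(1.0, size=300)       # crude positive dependence

# Stage 1: marginal MLE for exponential margins (MLE of the mean is the sample mean)
u = 1.0 - np.exp(-x / x.mean())
v = 1.0 - np.exp(-y / y.mean())

# Stage 2: maximize the Clayton copula log-likelihood with the margins held fixed
def neg_loglik(theta):
    if theta <= 0:
        return np.inf
    s = u**(-theta) + v**(-theta) - 1.0
    ll = (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
          - (2.0 + 1.0 / theta) * np.log(s))
    return -ll.sum()

theta_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0), method="bounded").x
print(f"Clayton dependence parameter estimate: {theta_hat:.2f}")
```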
Estimating Rhododendron maximum L. (Ericaceae) Canopy Cover Using GPS/GIS Technology
Tyler J. Tran; Katherine J. Elliott
2012-01-01
In the southern Appalachians, Rhododendron maximum L. (Ericaceae) is a key evergreen understory species, often forming a subcanopy in forest stands. Little is known about the significance of R. maximum cover in relation to other forest structural variables. Only recently have studies used Global Positioning System (GPS) technology...
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Choi, Sangjun; Kang, Dongmug; Park, Donguk; Lee, Hyunhee; Choi, Bongkyoo
2017-03-01
The goal of this study is to develop a general population job-exposure matrix (GPJEM) on asbestos to estimate occupational asbestos exposure levels in the Republic of Korea. Three Korean domestic quantitative exposure datasets collected from 1984 to 2008 were used to build the GPJEM. Exposure groups in the collected data were reclassified based on the current Korean Standard Industrial Classification (9th edition) and the Korean Standard Classification of Occupations code (6th edition), which are in accordance with international standards. All of the exposure levels were expressed as weighted arithmetic mean (WAM) and minimum and maximum concentrations. Based on the established GPJEM, the 112 exposure groups could be reclassified into 86 industries and 74 occupations. In the 1980s, the highest exposure levels were estimated in "knitting and weaving machine operators" with a WAM concentration of 7.48 fibers/mL (f/mL); in the 1990s, in "plastic products production machine operators" with 5.12 f/mL; and in the 2000s, in "detergents production machine operators" handling talc containing asbestos with 2.45 f/mL. Of the 112 exposure groups, 44 groups had higher WAM concentrations than the Korean occupational exposure limit of 0.1 f/mL. The newly constructed GPJEM, which is generated from actual domestic quantitative exposure data, could be useful in evaluating historical exposure levels to asbestos and could contribute to improved prediction of asbestos-related diseases among Koreans.
Priya, S; Srinivasan, P; Gopalakrishnan, R K
2012-01-01
The thoria dissolver, used for separation of (233)U from reactor-irradiated thorium metal and thorium oxide rods, is no longer operational. It was decided to carry out an assessment of the radiological status of the dissolver cell for planning of future decommissioning/dismantling operations. The dissolver interiors are expected to be contaminated with the dissolution remains of irradiated thorium oxide rods in addition to some of the partially dissolved thoria pellets. Hence, (220)Rn, a daughter product of (228)Th, is of major radiological concern. Airborne activity of the thoron daughters (212)Pb (Th-B) and (212)Bi (Th-C) was estimated by air sampling followed by high-resolution gamma spectrometry of the filter papers. By measuring the full-energy peak counts in the energy windows of (212)Pb, (212)Bi and (208)Tl, concentrations of thoron progeny in the sampled air were estimated by applying the respective intrinsic peak efficiency factors and suitable correction factors for the equilibration effects of (212)Pb and (212)Bi in the filter paper during the delay between sampling and counting. Then the thoron working level (TWL) was evaluated using the International Commission on Radiological Protection (ICRP) methodology. Finally, the potential effective dose to the workers due to inhalation of thoron and its progeny during dismantling operations was assessed by using dose conversion factors recommended by ICRP. Analysis of the filter papers showed a maximum airborne thoron progeny concentration of 30 TWLs inside the dissolver.
The Significance of the Record Length in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Senarath, S. U.
2013-12-01
Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
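A sketch of the flood-frequency step this entry discusses: fit a log-Pearson III distribution to annual maximum flows and read off the flow for a chosen return period. The synthetic 30-year record stands in for a gauged series; short records make high-return-period estimates correspondingly uncertain, which is the entry's point.

```python
# Log-Pearson III flood frequency: fit Pearson III to log10 of annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
annual_max_cms = rng.lognormal(mean=5.0, sigma=0.5, size=30)   # toy 30-year record [m^3/s]

log_q = np.log10(annual_max_cms)
skew, loc, scale = stats.pearson3.fit(log_q)                   # Pearson III on log-flows

def return_period_flow(T):
    p_nonexceed = 1.0 - 1.0 / T
    return 10 ** stats.pearson3.ppf(p_nonexceed, skew, loc=loc, scale=scale)

print(f"Q100 ~ {return_period_flow(100):.0f} m^3/s")
```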
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences, especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A Gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter then provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the Gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-Gaussian mixture model, allows the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
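A sketch of the double-Gaussian separation this entry describes: fit a two-component Gaussian mixture to velocity samples from one time bin, take the component with the larger mean as the signal, and report a high percentile of that component as the envelope value. The simulated samples, component parameters, and units are assumptions for illustration.

```python
# Two-component Gaussian mixture: signal component percentiles as the velocity envelope.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
noise = rng.normal(20.0, 8.0, size=800)            # low-velocity noise cloud
signal = rng.normal(90.0, 12.0, size=400)          # blood-flow signal
samples = np.concatenate([noise, signal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
k = int(np.argmax(gmm.means_.ravel()))             # signal = higher-mean component
mu, sd = gmm.means_.ravel()[k], np.sqrt(gmm.covariances_.ravel()[k])

for q in (0.90, 0.95, 0.99):
    print(f"{int(q*100)}th percentile envelope: {stats.norm.ppf(q, mu, sd):.1f} cm/s")
```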
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact identification or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closer approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
Sodankylä ionospheric tomography data set 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Roininen, Lassi; Kero, Antti; Raita, Tero; Ulich, Thomas; Markkanen, Markku; Juusola, Liisa; Kauristie, Kirsti
2016-07-01
Sodankylä Geophysical Observatory has been operating a receiver network for ionospheric tomography and collecting the produced data since 2003. The collected data set consists of phase difference curves measured from COSMOS navigation satellites of the Russian Parus network (Wood and Perry, 1980) and tomographic electron density reconstructions obtained from these measurements. In this study, vertical total electron content (VTEC) values are integrated from the reconstructed electron densities to make a qualitative and quantitative analysis to validate the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fennoscandian sector. Altogether the analysis consists of around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to cases with descending (north to south) overflights and maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this data set, one solar cycle of ionospheric VTEC estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI)-2012 model, the F10.7 solar flux index and sunspot number data. Qualitatively the tomographic VTEC estimate corresponds to the reference data very well, but the IRI-2012 model results are on average 40 % higher than the tomographic results.
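The VTEC integration step described above amounts to integrating the reconstructed electron density over altitude. The sketch below shows one way this could be done on an assumed altitude grid; the unit conventions are standard but the snippet does not reproduce the observatory's processing chain.

```python
# Minimal VTEC integration sketch; altitude grid and TECU conversion are
# assumed conventions, not the observatory's actual processing chain.
import numpy as np

def vertical_tec(electron_density_m3, altitude_m):
    """Integrate electron density (el/m^3) over altitude (m); return TECU."""
    tec = np.trapz(electron_density_m3, altitude_m)   # electrons per m^2
    return tec / 1e16                                  # 1 TECU = 1e16 el/m^2
```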
Sodankylä ionospheric tomography dataset 2003-2014
NASA Astrophysics Data System (ADS)
Norberg, J.; Roininen, L.; Kero, A.; Raita, T.; Ulich, T.; Markkanen, M.; Juusola, L.; Kauristie, K.
2015-12-01
Sodankylä Geophysical Observatory has been operating a tomographic receiver network and collecting the produced data since 2003. The collected dataset consists of phase difference curves measured from Russian COSMOS dual-frequency (150/400 MHz) low-Earth-orbit satellite signals, and tomographic electron density reconstructions obtained from these measurements. In this study, vertical total electron content (VTEC) values are integrated from the reconstructed electron densities to make a qualitative and quantitative analysis to validate the long-term performance of the tomographic system. During the observation period, 2003-2014, there were three to five operational stations in the Fennoscandian sector. Altogether the analysis consists of around 66 000 overflights, but to ensure the quality of the reconstructions, the examination is limited to cases with descending (north to south) overflights and maximum elevation over 60°. These constraints limit the number of overflights to around 10 000. Based on this dataset, one solar cycle of ionospheric vertical total electron content estimates is constructed. The measurements are compared against the International Reference Ionosphere (IRI)-2012 model, the F10.7 solar flux index and sunspot number data. Qualitatively the tomographic VTEC estimate corresponds to the reference data very well, but the IRI-2012 model results are on average 40 % higher than the tomographic results.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Steinmetz, G. G.
1972-01-01
A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian, three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Kármán transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
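As a hedged sketch of estimating the integral scale and intensity mentioned above: the closed-form longitudinal spectrum below is the commonly cited textbook von Kármán form (an assumption here, not reproduced from the report), and a simple least-squares fit stands in for the report's maximum likelihood estimator.

```python
# Hedged sketch: textbook von Karman longitudinal spectrum, fitted by least
# squares as a stand-in for the report's maximum likelihood approach.
import numpy as np
from scipy.optimize import curve_fit

def von_karman_longitudinal(omega, sigma2, L):
    """One-sided von Karman longitudinal PSD vs spatial frequency omega (rad/m)."""
    return sigma2 * (2.0 * L / np.pi) / (1.0 + (1.339 * L * omega) ** 2) ** (5.0 / 6.0)

def fit_turbulence_parameters(omega, psd_estimate):
    """Least-squares estimates of (intensity sigma^2, integral scale L)."""
    popt, _ = curve_fit(von_karman_longitudinal, omega, psd_estimate,
                        p0=(1.0, 500.0), bounds=(0.0, np.inf))
    return popt
```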
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
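The outer step of the two-step scheme described above can be illustrated schematically: given an inner routine that maximises the likelihood over convex hazards with a fixed antimode, quasi-concavity in the antimode allows a simple bracketing search. The sketch below shows only that outer search, with a hypothetical profile_loglik callable, and a ternary search standing in for the bisection step; it is not the authors' support-reduction implementation.

```python
# Outer search only; profile_loglik(a) is a hypothetical inner maximiser over
# convex hazards with antimode a. Ternary search stands in for bisection.
def maximise_profile(profile_loglik, a_lo, a_hi, tol=1e-6):
    """Locate the maximiser of a quasi-concave profile likelihood by bracketing."""
    while a_hi - a_lo > tol:
        m1 = a_lo + (a_hi - a_lo) / 3.0
        m2 = a_hi - (a_hi - a_lo) / 3.0
        if profile_loglik(m1) < profile_loglik(m2):
            a_lo = m1          # maximum cannot lie left of m1
        else:
            a_hi = m2          # maximum cannot lie right of m2
    return 0.5 * (a_lo + a_hi)
```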
Photovoltaic array: Power conditioner interface characteristics
NASA Technical Reports Server (NTRS)
Gonzalez, C. C.; Hill, G. M.; Ross, R. G., Jr.
1982-01-01
The electrical output (power, current, and voltage) of flat-plate solar arrays changes constantly, due primarily to changes in cell temperature and irradiance level. As a result, array loads such as dc-to-ac power conditioners must be capable of accommodating widely varying input levels while maintaining operation at or near the maximum power point of the array. The array operating characteristics and extreme output limits necessary for the systematic design of array-load interfaces under a wide variety of climatic conditions are studied. A number of interface parameters are examined, including optimum operating voltage, voltage energy, maximum power and current limits, and maximum open-circuit voltage. The effect of array degradation and I-V curve fill factor on the array-power conditioner interface is also discussed. Results are presented as normalized ratios of power conditioner parameters to array parameters, making the results universally applicable to a wide variety of system sizes, sites, and operating modes.
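As a small illustration of the maximum-power-point behaviour discussed above, the sketch below locates the maximum-power operating point on a sampled I-V curve; the toy single-diode-shaped curve is an assumption for demonstration, not data from the report.

```python
# Maximum-power-point lookup on sampled I-V data; the toy curve below is an
# illustrative assumption, not measurements from the report.
import numpy as np

def maximum_power_point(voltage, current):
    """Return (V_mp, I_mp, P_mp) for a sampled I-V curve."""
    power = np.asarray(voltage) * np.asarray(current)
    k = int(np.argmax(power))
    return voltage[k], current[k], power[k]

# Illustrative single-diode-shaped curve (current clipped at zero)
v = np.linspace(0.0, 21.0, 500)                           # volts
i = np.clip(3.5 - 1e-6 * np.expm1(v / 1.2), 0.0, None)    # amps
print(maximum_power_point(v, i))
```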
NASA Astrophysics Data System (ADS)
Cenci, Luca; Boni, Giorgio; Pulvirenti, Luca; Gabellani, Simone; Gardella, Fabio; Squicciarino, Giuseppe; Pierdicca, Nazzareno; Benedetto, Catia
2016-04-01
In a reservoir, water level monitoring is important for emergency management purposes. This information can be used to estimate the degree of filling of the water body, thus helping decision makers in flood control operations. Furthermore, if assimilated in hydrological models and coupled with rainfall forecasts, this information can be used for flood forecasting and early warning. In many cases, the water level is not known (e.g. in data-scarce environments) or is not shared by operators. Remote sensing may allow overcoming these limitations, enabling its estimation. The objective of this work is to present the Shoreline to Height (S2H) algorithm, developed to retrieve the height of the water stored in reservoirs from satellite images. To this aim, some auxiliary data are needed: a DEM and the maximum/minimum height that can be reached by the water. In data-scarce environments, this information can easily be obtained on the Internet (e.g. free, worldwide DEMs and design data for artificial reservoirs). S2H was tested with different satellite data, both optical and SAR (Landsat and Cosmo SkyMed®-CSK®), in order to assess the impact of different sensors on the final estimates. The study area was the Place-Moulin Lake (Valle d'Aosta-VdA, Italy), where a monitoring network is present that can provide reliable ground truths for validating the algorithm and assessing its accuracy. When the algorithm was developed, it was assumed that no "official" auxiliary data were available. Therefore, two DEMs (SRTM 1 arc-second and ASTER GDEM) were used to evaluate their performance. The maximum/minimum water height values were found on the website of the VdA Region. S2H is based on three steps: i) satellite data preprocessing (Landsat: atmospheric correction; CSK®: geocoding and speckle filtering); ii) water mask generation (using a thresholding and region-growing algorithm) and shoreline extraction; iii) retrieval of the shoreline height according to the reference DEMs (adopting a statistical approach). The algorithm was tested for different water heights and results were compared against ground truths. Findings showed that the combination CSK®-SRTM provided more reliable results. It was also found that the overall quality of the estimates increases as the water height increases, reaching an accuracy of up to a few centimetres. This result is particularly interesting for flood control applications, where it is important to be accurate when the reservoir's degree of filling is high. The potential of S2H for operational hydrology purposes was tested in a real-case simulation, in which prediction of the river discharge downstream of the dam was needed for flood risk management purposes. The water height value retrieved with S2H was assimilated within a semi-distributed, event-based hydrological model (DRiFt) by using a simple direct insertion algorithm. DRiFt is usually run operationally on the reservoir by using ground truths as input data. The result of the data assimilation experiment was compared with the "real", operational run of the model. Findings showed a high agreement between the two simulations, proving the utility and quality of the S2H algorithm. "Project carried out using CSK® Products, © of the Italian Space Agency (ASI), delivered under a license to use by ASI."
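A highly simplified sketch of the three S2H steps (water masking, shoreline extraction, height retrieval from a DEM) is given below; the brightness threshold, the input arrays, and the use of a median as the "statistical approach" are assumptions made for illustration, not the published processing chain.

```python
# Simplified S2H-style sketch; threshold rule, arrays, and the median choice
# are illustrative assumptions, not the published processing chain.
import numpy as np
from scipy import ndimage

def shoreline_to_height(image, dem, water_threshold):
    """Crude water mask -> shoreline pixels -> robust DEM height estimate."""
    water = image < water_threshold                        # toy water-mask rule
    shoreline = water & ~ndimage.binary_erosion(water)     # boundary pixels of the mask
    return float(np.median(dem[shoreline]))                # statistical height retrieval
```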
Development and application of the maximum entropy method and other spectral estimation techniques
NASA Astrophysics Data System (ADS)
King, W. R.
1980-09-01
This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique. It also contains a description of two new, stable, high-resolution spectral estimation techniques, provided in the final report section. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final report section are named the Wiener-King and the Fourier spectral estimation techniques. The two new techniques have a similar derivation based upon the Wiener prediction filter, but they are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
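MEM/MESA spectra are commonly computed with Burg's recursion; the sketch below is a textbook-style Burg autoregressive estimator and a schematic spectrum evaluation (normalisation constants omitted). It illustrates the general technique only and is not code from the report.

```python
# Textbook-style Burg (maximum entropy) AR estimator; spectrum normalisation
# is schematic. Not taken from the report.
import numpy as np

def burg_ar(x, order):
    """Return AR coefficients a (a[0] = 1) and residual power via Burg's method."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()       # forward / backward prediction errors
    a = np.array([1.0])
    power = np.dot(x, x) / len(x)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))   # reflection coefficient
        a = np.concatenate((a, [0.0]))
        a = a + k * a[::-1]                   # Levinson-Durbin order update
        power *= (1.0 - k * k)
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, power

def mem_spectrum(a, power, nfft=1024):
    """Schematic MEM power spectrum on [0, 0.5] cycles/sample."""
    freqs = np.fft.rfftfreq(nfft)
    denom = np.abs(np.fft.rfft(a, n=nfft)) ** 2
    return freqs, power / denom
```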
Modeling operators' emergency response time for chemical processing operations.
Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam
2014-01-01
Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect that the time required to take action in emergency situations will differ from that at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (a predetermined time standard system) with adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency-pace and the normal-working-pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third faster than a normal response time.
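The modelling approach above can be illustrated with a toy calculation: a normal-pace predetermined time standard is decomposed into basic motions and each motion time is scaled by an emergency-pace coefficient. The task, motion times, and coefficients below are illustrative placeholders, not values from the article.

```python
# Toy predetermined-time-standard calculation with emergency-pace coefficients;
# all motion times and coefficients are illustrative placeholders.
TMU_TO_SECONDS = 0.036          # 1 TMU = 0.00001 h in MTM-style systems

# (motion, normal-pace time in TMU, assumed emergency-pace coefficient)
task = [
    ("walk to valve",       150.0, 0.70),
    ("reach for handwheel",  12.0, 0.85),
    ("turn handwheel",      250.0, 0.80),
    ("bend and arise",       60.0, 0.75),
]

normal_s = sum(tmu for _, tmu, _ in task) * TMU_TO_SECONDS
emergency_s = sum(tmu * c for _, tmu, c in task) * TMU_TO_SECONDS
print(f"normal pace: {normal_s:.1f} s, emergency pace: {emergency_s:.1f} s")
```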
A modified ATI technique for nowcasting convective rain volumes over areas. [area-time integrals
NASA Technical Reports Server (NTRS)
Makarau, Amos; Johnson, L. Ronald; Doneaud, Andre A.
1988-01-01
This paper explores the applicability of the area-time-integral (ATI) technique for estimating the growth portion only of a convective storm (while the rain volume is computed using the entire life history of the event) and for nowcasting the total rain volume of a convective system at the stage of its maximum development. For these purposes, the ATIs were computed from digital radar data (for 1981-1982) from the North Dakota Cloud Modification Project, using the maximum echo area (ATIA) at no less than 25 dBZ, the maximum reflectivity, and the maximum echo height as the end of the growth portion of the convective event. Linear regression analysis demonstrated that the correlations of total rain volume and maximum rain volume with ATIA were the strongest. The uncertainties obtained were comparable to the uncertainties which typically occur in rain volume estimates obtained from radar data employing Z-R conversion followed by space and time integration. This demonstrates that the total rain volume of a storm can be nowcast at its maximum stage of development.
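The ATI computation itself is simple to illustrate: echo areas exceeding a reflectivity threshold are summed over the scans of the growth phase and mapped to rain volume with a regression fit. The sketch below uses hypothetical regression coefficients purely for illustration; they are not values from the paper.

```python
# Illustrative area-time-integral sketch; regression coefficients are
# hypothetical placeholders, not values from the paper.
import numpy as np

def area_time_integral(echo_areas_km2, scan_interval_min):
    """Sum of threshold-exceeding echo areas times the scan interval (km^2 h)."""
    return float(np.sum(echo_areas_km2)) * scan_interval_min / 60.0

def rain_volume_from_ati(ati_km2_h, coef=3.0, exponent=1.0):
    """Map ATI to rain volume with a hypothetical power-law regression."""
    return coef * ati_km2_h ** exponent
```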