NASA Technical Reports Server (NTRS)
Sforzini, R. H.
1972-01-01
An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN IV and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.
On a Solar Origin for the Cosmogenic Nuclide Event of 775 A.D.
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Tylka, A. J.; Dietrich, W. F.; Ling, A. G.
2014-01-01
We explore requirements for a solar particle event (SPE) and flare capable of producing the cosmogenic nuclide event of 775 A.D., and review solar circumstances at that time. A solar source for 775 would require a greater than 1 GV spectrum approximately 45 times stronger than that of the intense high-energy SPE of 1956 February 23. This implies a greater than 30 MeV proton fluence (F_30) of approximately 8 × 10^10 protons cm^-2, approximately 10 times larger than that of the strongest 3 month interval of SPE activity in the modern era. This inferred F_30 value for the 775 SPE is inconsistent with the occurrence probability distribution for greater than 30 MeV solar proton events. The best guess value for the soft X-ray classification (total energy) of an associated flare is approximately X230 (approximately 9 × 10^33 erg). For comparison, the flares on 2003 November 4 and 1859 September 1 had observed/inferred values of approximately X35 (approximately 10^33 erg) and approximately X45 (approximately 2 × 10^33 erg), respectively. The estimated size of the source active region for an approximately 10^34 erg flare is approximately 2.5 times that of the largest region yet recorded. The 775 event occurred during a period of relatively low solar activity, with a peak smoothed amplitude about half that of the second half of the 20th century. The approximately 1945-1995 interval, the most active of the last approximately 2000 yr, failed to witness an SPE comparable to that required for the proposed solar event in 775. These considerations challenge a recent suggestion that the 775 event is likely of solar origin.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
1998-01-01
Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high-speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.
2004-01-01
The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
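To make the surrogate idea in the two preceding abstracts concrete, the following sketch (not the FLOPS/CometBoards code; the analyzer, variable count, and training settings are invented for illustration) fits a quadratic regression model and a small neural network to data generated from a stand-in analysis function, then reuses the cheap approximators for reanalysis.

```python
# Hedged sketch: approximate an expensive "analyzer" with (a) quadratic
# regression and (b) a tiny neural network, then reuse the cheap surrogates.
# The analyzer below is a made-up stand-in for an aircraft analysis code.
import numpy as np

rng = np.random.default_rng(0)

def expensive_analyzer(x):
    """Stand-in for a costly analysis (e.g., gross weight vs. design variables)."""
    return 1.0 + 2.0 * x[..., 0] ** 2 + np.sin(3.0 * x[..., 1]) + 0.5 * x[..., 0] * x[..., 1]

# 1) Generate training pairs (the costly step noted in the abstracts).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = expensive_analyzer(X)

# 2a) Quadratic regression surrogate: least-squares fit of a full quadratic basis.
def quad_basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)
regression_surrogate = lambda Xq: quad_basis(Xq) @ coef

# 2b) One-hidden-layer neural network trained by plain gradient descent.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    hidden = np.tanh(X @ W1 + b1)
    pred = (hidden @ W2 + b2).ravel()
    err = pred - y                                   # gradient of 0.5*MSE w.r.t. pred
    gW2 = hidden.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    ghid = err[:, None] @ W2.T * (1 - hidden ** 2)   # back-propagate through tanh
    gW1 = X.T @ ghid / len(y); gb1 = ghid.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

nn_surrogate = lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()

# 3) Reanalysis with either surrogate is now essentially free.
Xtest = rng.uniform(-1.0, 1.0, size=(1000, 2))
truth = expensive_analyzer(Xtest)
print("regression RMS error:", np.sqrt(np.mean((regression_surrogate(Xtest) - truth) ** 2)))
print("neural-net RMS error:", np.sqrt(np.mean((nn_surrogate(Xtest) - truth) ** 2)))
```

As the abstracts note, the cost shifts to the one-time generation of training pairs and the training itself; each subsequent reanalysis query is nearly instantaneous.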
An E-Portfolio to Enhance Sustainable Vocabulary Learning in English
ERIC Educational Resources Information Center
Tanaka, Hiroya; Yonesaka, Suzanne M.; Ueno, Yukie
2015-01-01
Vocabulary is an area that requires foreign language learners to work independently and continuously both in and out of class. In the Japanese EFL setting, for example, more than 97% of the population experiences approximately six years of English education at secondary school during which time they are required to learn approximately 3,000 words…
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
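A minimal sketch of the deliberation-scheduling idea described above, assuming illustrative performance profiles and weights (none of these names or numbers come from the paper): each anytime algorithm is modeled by an expected-quality curve, and a greedy scheduler hands the next time slice to the algorithm with the largest expected marginal value.

```python
# Hedged sketch of scheduling computation time across anytime algorithms.
# Each algorithm has an expected performance profile q(t): expected answer
# quality as a function of computation time (diminishing returns assumed).
import math

profiles = {
    "path_planner":   lambda t: 1.0 - math.exp(-0.8 * t),
    "diagnosis":      lambda t: 1.0 - math.exp(-0.3 * t),
    "plan_evaluator": lambda t: 1.0 - math.exp(-1.5 * t),
}
weights = {"path_planner": 5.0, "diagnosis": 3.0, "plan_evaluator": 1.0}  # value of quality

def schedule(total_time, slice_len=0.1):
    alloc = {name: 0.0 for name in profiles}
    t = 0.0
    while t < total_time:
        # Expected marginal value of one more slice for each algorithm.
        gains = {
            name: weights[name] * (profiles[name](alloc[name] + slice_len)
                                   - profiles[name](alloc[name]))
            for name in profiles
        }
        best = max(gains, key=gains.get)   # greedy choice; optimal for concave profiles
        alloc[best] += slice_len
        t += slice_len
    return alloc

print(schedule(total_time=2.0))
```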
Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.
Choi, Yun Ho; Yoo, Sung Jin
2016-12-01
A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points as follows: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximate method reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which offers advantages in both real-time performance and accuracy, can meet the requirement.
Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations
NASA Astrophysics Data System (ADS)
Mansfield, Christopher M.; Shoemaker, Christine A.
1999-05-01
This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
Yamanouchi, Satoshi; Ishii, Tadashi; Morino, Kazuma; Furukawa, Hajime; Hozawa, Atsushi; Ochi, Sae; Kushimoto, Shigeki
2014-12-01
When disasters that affect a wide area occur, external medical relief teams play a critical role in the affected areas by helping to alleviate the burden caused by surging numbers of individuals requiring health care. Despite this, no system has been established for managing deployed medical relief teams during the subacute phase following a disaster. After the Great East Japan Earthquake and tsunami, the Ishinomaki Medical Zone was the most severely-affected area. Approximately 6,000 people died or were missing, and the immediate evacuation of approximately 120,000 people to roughly 320 shelters was required. As many as 59 medical teams came to participate in relief activities. Daily coordination of activities and deployment locations became a significant burden to headquarters. The Area-based/Line-linking Support System (Area-Line System) was thus devised to resolve these issues for medical relief and coordinating activities. A retrospective analysis was performed to examine the effectiveness of the medical relief provided to evacuees using the Area-Line System with regards to the activities of the medical relief teams and the coordinating headquarters. The following were compared before and after establishment of the Area-Line System: (1) time required at the coordinating headquarters to collect and tabulate medical records from shelters visited; (2) time required at headquarters to determine deployment locations and activities of all medical relief teams; and (3) inter-area variation in number of patients per team. The time required to collect and tabulate medical records was reduced from approximately 300 to 70 minutes/day. The number of teams at headquarters required to sort through data was reduced from 60 to 14. The time required to determine deployment locations and activities of the medical relief teams was reduced from approximately 150 hours/month to approximately 40 hours/month. Immediately prior to establishment of the Area-Line System, the variation of the number of patients per team was highest. Variation among regions did not increase after establishment of the system. This descriptive analysis indicated that implementation of the Area-Line System, a systematic approach for long-term disaster medical relief across a wide area, can increase the efficiency of relief provision to disaster-stricken areas.
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, a Newton's method is applied to solve these systems. Each iteration of the Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
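The core trick discussed here, replacing the Jacobian-vector product needed by a Krylov solver with a finite difference of the nonlinear residual, can be shown in a few lines. This is a generic illustration with a toy residual, not the CAM spectral element core; the perturbation-size heuristic is one common choice, not necessarily the one used in the paper.

```python
# Hedged illustration of the finite-difference Jacobian-vector product used in
# Jacobian-free Newton-Krylov methods: J(u) v ~= (F(u + eps*v) - F(u)) / eps.
import numpy as np

def F(u):
    """Toy nonlinear residual (stand-in for a dynamical-core residual)."""
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + np.sin(u[1]) - 1.0])

def jacobian_vector_fd(F, u, v, eps=None):
    """First-order finite-difference approximation of J(u) @ v."""
    if eps is None:
        # Common heuristic: scale the perturbation with the sizes of u and v.
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 0.5])
v = np.array([0.3, -0.2])

# Exact Jacobian of the toy residual, for comparison only.
J_exact = np.array([[2 * u[0], 1.0],
                    [1.0, np.cos(u[1])]])

print("finite-difference Jv:", jacobian_vector_fd(F, u, v))
print("exact             Jv:", J_exact @ v)
```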
Adaptation of the Carter-Tracy water influx calculation to groundwater flow simulation
Kipp, Kenneth L.
1986-01-01
The Carter-Tracy calculation for water influx is adapted to groundwater flow simulation with additional clarifying explanation not present in the original papers. The Van Everdingen and Hurst aquifer-influence functions for radial flow from an outer aquifer region are employed. This technique, based on convolution of unit-step response functions, offers a simple but approximate method for embedding an inner region of groundwater flow simulation within a much larger aquifer region where flow can be treated in an approximate fashion. The use of aquifer-influence functions in groundwater flow modeling reduces the size of the computational grid with a corresponding reduction in computer storage and execution time. The Carter-Tracy approximation to the convolution integral enables the aquifer influence function calculation to be made with an additional storage requirement of only two times the number of boundary nodes more than that required for the inner region simulation. It is a good approximation for constant flow rates but is poor for time-varying flow rates where the variation is large relative to the mean. A variety of outer aquifer region geometries, exterior boundary conditions, and flow rate versus potentiometric head relations can be used. The radial, transient-flow case presented is representative. An analytical approximation to the functions of Van Everdingen and Hurst for the dimensionless potentiometric head versus dimensionless time is given.
Airfoil Shape Optimization based on Surrogate Model
NASA Astrophysics Data System (ADS)
Mukesh, R.; Lingadurai, K.; Selvakumar, U.
2018-02-01
Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure that the design objectives are met subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases, like sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, the computational burden becomes prohibitive for designers. Nowadays approximation models, otherwise called surrogate models (SM), are widely employed in order to reduce the requirement of computational resources and time in analysing various engineering systems. Various approaches such as Kriging, neural networks, polynomials, Gaussian processes etc. are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
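A hedged sketch of the workflow described above: build an ordinary Kriging surrogate of a stand-in "solver" and use k-fold cross validation to compare two correlation (variogram-type) models. The test function, range parameter, nugget, and sample sizes are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: ordinary Kriging surrogate + k-fold cross validation to
# compare correlation models. The "solver" replaces a panel/viscous code.
import numpy as np

rng = np.random.default_rng(1)

def solver(x):
    """Stand-in for an expensive aerodynamic solution (e.g., lift vs. shape vars)."""
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

models = {
    "gaussian":    lambda h, r: np.exp(-(h / r) ** 2),
    "exponential": lambda h, r: np.exp(-h / r),
}

def krige(Xtr, ytr, Xq, corr, r=0.5, nugget=1e-6):
    """Ordinary Kriging predictor with a Lagrange multiplier for the unknown mean."""
    n = len(Xtr)
    H = np.linalg.norm(Xtr[:, None, :] - Xtr[None, :, :], axis=-1)
    K = corr(H, r) + nugget * np.eye(n)
    A = np.block([[K, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    Hq = np.linalg.norm(Xq[:, None, :] - Xtr[None, :, :], axis=-1)
    B = np.vstack([corr(Hq, r).T, np.ones((1, len(Xq)))])
    W = np.linalg.solve(A, B)          # weights (plus multiplier), one column per query
    return W[:n].T @ ytr

def kfold_rmse(X, y, corr, k=5):
    idx = rng.permutation(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        mask = np.ones(len(X), bool); mask[fold] = False
        pred = krige(X[mask], y[mask], X[fold], corr)
        errs.append((pred - y[fold]) ** 2)
    return np.sqrt(np.mean(np.concatenate(errs)))

X = rng.uniform(0, 1, size=(60, 2))    # design of experiments (random here; LHS in practice)
y = solver(X)
for name, corr in models.items():
    print(name, "5-fold RMSE:", kfold_rmse(X, y, corr))
```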
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
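The following sketch illustrates the basic trade-off studied: a quadratic focusing phase is replaced by a segmented linear approximation, and the peak phase error is tracked as the number of segments grows. The sample count and phase-rate coefficient are arbitrary, not SEASAT-A values.

```python
# Hedged illustration: piecewise-linear approximation of a quadratic SAR
# azimuth focusing phase, with peak phase error vs. number of segments.
import numpy as np

N = 1024                          # azimuth samples in the synthetic aperture (arbitrary)
k = 2.0e-4                        # quadratic phase-rate coefficient (arbitrary)
n = np.arange(N) - N // 2
phi_quadratic = k * n ** 2        # ideal focusing phase (radians)

def segmented_linear(phi, n, segments):
    """Least-squares straight-line fit of the phase over equal-length segments."""
    approx = np.empty_like(phi)
    for seg in np.array_split(np.arange(len(n)), segments):
        slope, intercept = np.polyfit(n[seg], phi[seg], 1)
        approx[seg] = slope * n[seg] + intercept
    return approx

for segments in (4, 8, 16, 32):
    err = phi_quadratic - segmented_linear(phi_quadratic, n, segments)
    print(f"{segments:3d} segments: peak phase error = {np.max(np.abs(err)):.3f} rad")
```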
Multi-Model Validation of Currents in the Chesapeake Bay Region in June 2010
2012-01-01
host “DaVinci” at the Naval Oceanographic Office (NAVOCEANO). The same model configuration also took approximately 1 hr of wall clock time for a 72-hr... comparable to the performance Navy DSRC host DaVinci. Products of water level and horizontal current maps as well as station time series, identical to... DSRC host DaVinci and required approximately 5 hrs of wall-clock time for 72-hr forecasts, including data... Figure 10. The Chesapeake Bay Delft3D
An analytical technique for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth order numerator and second order denominator polynomials.
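A scalar, hedged illustration of the fitting step described above: with the denominator (lag) roots held fixed, the numerator coefficients of the rational approximation follow from a linear least-squares fit to tabular frequency-domain data. The constrained gradient search over denominator roots mentioned in the abstract is not reproduced, and the "tabular data" here are synthetic.

```python
# Hedged sketch: least-squares numerator fit of a rational approximation to
# synthetic unsteady-aerodynamics data, with the lag roots assumed fixed.
import numpy as np

k = np.linspace(0.05, 1.5, 30)          # reduced frequencies of the tabular data
s = 1j * k
Q_data = 1.0 + 0.8 * s + 0.3 * s / (s + 0.4) + 0.1 * s / (s + 1.2)   # synthetic data

def fit_numerator(Q, s, lags):
    """Least-squares numerator coefficients for fixed denominator roots `lags`."""
    cols = [np.ones_like(s), s, s ** 2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    # Stack real and imaginary parts so the unknown coefficients stay real.
    A = np.vstack([M.real, M.imag])
    rhs = np.concatenate([Q.real, Q.imag])
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef

def evaluate(coef, s, lags):
    cols = [np.ones_like(s), s, s ** 2] + [s / (s + b) for b in lags]
    return np.column_stack(cols) @ coef

lags = [0.4, 1.2]                       # assumed stable, positive lag roots
coef = fit_numerator(Q_data, s, lags)
print("max fit error:", np.max(np.abs(evaluate(coef, s, lags) - Q_data)))
```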
77 FR 65016 - NASA Federal Advisory Committees
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-24
... are not full-time positions. Successful nominees will be required to attend meetings of the subcommittee approximately two to four times a year, either in person (NASA covers travel-related expenses for.... NASA's science advisory subcommittees have member vacancies from time to time throughout the year, and...
Greenbaum, Gili
2015-09-07
Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
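As a numerical companion to the comparison above, the sketch below contrasts the classical diffusion-approximation result for a new neutral mutation (conditional mean fixation time of roughly 4N generations, after Kimura and Ohta) with direct Wright-Fisher simulation for a small population. It is an independent illustration, not the paper's coalescent derivation or Markov chain analysis.

```python
# Hedged check: simulated conditional fixation time of a single new neutral
# mutant in a Wright-Fisher population vs. the 4N diffusion-limit value.
import numpy as np

rng = np.random.default_rng(2)

def wright_fisher_fixation_times(N, replicates=100_000):
    """Generations to fixation for a single new mutant among 2N gene copies."""
    times = []
    for _ in range(replicates):
        count, t = 1, 0
        while 0 < count < 2 * N:
            count = rng.binomial(2 * N, count / (2 * N))   # binomial resampling
            t += 1
        if count == 2 * N:            # keep only runs that fixed (prob ~ 1/(2N))
            times.append(t)
    return np.array(times)

N = 20
times = wright_fisher_fixation_times(N)
print(f"runs that fixed: {len(times)}")
print(f"simulated conditional mean fixation time: {times.mean():.1f} generations")
print(f"diffusion-limit value 4N:                 {4 * N} generations")
```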
NASA Astrophysics Data System (ADS)
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximated algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.
Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean
NASA Astrophysics Data System (ADS)
Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.
2018-02-01
The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
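A toy version of the mapping idea, with an invented cost model (load imbalance plus communication across processor boundaries for a 1-D element chain) and an arbitrary cooling schedule, purely to show the simulated-annealing mechanics; it is not the NASA algorithm or its cost function.

```python
# Hedged sketch: simulated annealing to assign finite elements to processors,
# minimizing an approximate cost = load imbalance + boundary communication.
import math, random

random.seed(0)
n_elem, n_proc = 60, 4
assign = [random.randrange(n_proc) for _ in range(n_elem)]

def cost(assign):
    loads = [assign.count(p) for p in range(n_proc)]
    imbalance = max(loads) - min(loads)
    # Elements adjacent in the mesh but on different processors must communicate.
    comm = sum(1 for i in range(n_elem - 1) if assign[i] != assign[i + 1])
    return 1.0 * imbalance + 0.5 * comm

T, cooling = 5.0, 0.999
current = cost(assign)
for step in range(20000):
    i = random.randrange(n_elem)
    old = assign[i]
    assign[i] = random.randrange(n_proc)          # propose moving one element
    new = cost(assign)
    if new <= current or random.random() < math.exp((current - new) / T):
        current = new                             # accept (always, or with Boltzmann prob.)
    else:
        assign[i] = old                           # reject
    T *= cooling

print("final cost:", current)
print("loads per processor:", [assign.count(p) for p in range(n_proc)])
```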
van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna
2012-03-01
Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface segmentation results were shown to closely approximate manual segmentations.
Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L
2013-12-01
In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. Successive approximation approach for updating control and disturbance inputs for DT nonlinear affine systems are proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller via offline learning. A numerical example is provided illustrating the effectiveness of the approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broda, Jill Terese
The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of a simplified version of this equation when automated is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order flux shape approximation in each energy group even though this method may not be the most efficient. The one-dimensional, two-energy group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set of approximations used. The second set used combines a second-order approximation with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets show that the use of a different order spatial flux shape approximation results in considerable loss in accuracy for the pressurized water reactor modeled. However, the loss in accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of the use of the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.
Simulation of Simple Controlled Processes with Dead-Time.
ERIC Educational Resources Information Center
Watson, Keith R.; And Others
1985-01-01
The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…
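To illustrate why a Padé approximation enters at all, the sketch below compares the exact dead-time element e^{-θs} with its first- and second-order Padé approximants on the frequency axis; the dead time and frequency range are arbitrary choices for illustration, not values from the article.

```python
# Hedged sketch: the exact delay e^{-theta*s} is irrational, so it is replaced
# by a rational Pade approximant before Laplace-domain closed-loop analysis.
import numpy as np

theta = 1.0                      # dead time (illustrative)
w = np.logspace(-2, 1, 200)      # frequency grid (rad/time)
s = 1j * w

exact = np.exp(-theta * s)
pade1 = (1 - theta * s / 2) / (1 + theta * s / 2)
pade2 = (1 - theta * s / 2 + (theta * s) ** 2 / 12) / (1 + theta * s / 2 + (theta * s) ** 2 / 12)

for name, approx in (("1st-order Pade", pade1), ("2nd-order Pade", pade2)):
    err = np.max(np.abs(approx - exact)[w * theta <= 2.0])   # error up to w*theta = 2
    print(f"{name}: max error for w*theta <= 2 is {err:.3f}")
```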
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
Times for interplanetary trips
NASA Technical Reports Server (NTRS)
Jones, R. T.
1976-01-01
The times required to travel to the various planets at an acceleration of one g are calculated. Surrounding gravitational fields are neglected except for a relatively short distance near take-off or landing. The orbit consists of an essentially straight line with the thrust directed toward the destination up to the halfway point, but in the opposite direction for the remainder so that the velocity is zero on arrival. A table lists the approximate times required, and also the maximum velocities acquired in light units v/c for the various planets.
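The kind of numbers tabulated in the report can be approximated with the elementary constant-acceleration relations: accelerate at 1 g to the midpoint, then decelerate, so t = 2*sqrt(d/a) and v_max = sqrt(a*d). The sketch below uses rough illustrative distances (not the report's values) and neglects surrounding gravitational fields, as the abstract describes; relativistic corrections are small at these speeds.

```python
# Hedged re-creation of 1-g brachistochrone trip times: thrust toward the
# target to the halfway point, then decelerate. Distances are rough one-way
# values chosen for illustration only.
import math

G_ACCEL = 9.81          # m/s^2
C = 2.998e8             # m/s
AU = 1.496e11           # m

approx_distance_au = {  # illustrative distances, not ephemeris values
    "Mars": 0.5, "Venus": 0.3, "Jupiter": 4.2, "Saturn": 8.5, "Neptune": 29.0, "Pluto": 38.0,
}

for planet, d_au in approx_distance_au.items():
    d = d_au * AU
    t_half = math.sqrt(d / G_ACCEL)       # time to midpoint: d/2 = a*t_half^2/2
    t_days = 2 * t_half / 86400.0
    v_over_c = G_ACCEL * t_half / C       # peak velocity in light units
    print(f"{planet:8s}: {t_days:5.1f} days, v_max/c = {v_over_c:.4f}")
```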
NASA Astrophysics Data System (ADS)
Kilcrease, D. P.; Brookes, S.
2013-12-01
The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves in an attempt to produce a cross-section similar to that from using the more time consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions that more accurately approximate convergent close coupling calculations.
Goff, M L; Win, B H
1997-11-01
The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.
14 CFR 121.711 - Communication records: Domestic and flag operations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... time of the contact; (2) The flight number; (3) Aircraft registration number; (4) Approximate position... purposes of this section the term en route means from the time the aircraft pushes back from the departing gate until the time the aircraft reaches the arrival gate at its destination. (c) The record required...
Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback
NASA Technical Reports Server (NTRS)
Scott, Michael A.
1996-01-01
Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum-phase, nonstrictly positive real, time-variant systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematic nonlinear system. There are several advantages to using the SVO controller, in which the spline function approximates the system model, observer, and controller gain: the spline function approximation is simply connected, thus the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time-varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four experiments; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high-fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) damping level was improved by a factor of 3; and (2) peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.
Fusion Propulsion System Requirements for an Interstellar Probe
NASA Technical Reports Server (NTRS)
Spencer, D. F.
1963-01-01
An examination of the engine constraints for a fusion-propelled vehicle indicates that minimum flight times for a probe to a 5 light-year star will be approximately 50 years. The principal restraint on the vehicle is the radiator weight and size necessary to dissipate the heat which enters the chamber walls from the fusion plasma. However, it is interesting, at least theoretically, that the confining magnetic field strength is of reasonable magnitude, 2 to 3 × 10^5 gauss, and the confinement time is approximately 0.1 sec.
Propagating Qualitative Values Through Quantitative Equations
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
1992-01-01
In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values represented as interval values through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem. These may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
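A generic illustration of the problem being solved: interval-valued (qualitative) quantities propagated through a quantitative algebraic relation using interval arithmetic. This is not the paper's linear-time algorithm or the Shuttle RCS model; the relation and ranges below are invented.

```python
# Hedged sketch: propagating interval values through a quantitative equation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Made-up algebraic relation: P_out = P_in - R*Q, with every quantity known
# only as a range (a qualitative value).
P_in = Interval(240.0, 260.0)      # upstream pressure
R = Interval(1.8, 2.2)             # resistance coefficient
Q = Interval(10.0, 12.0)           # flow rate

P_out = P_in - R * Q
print(P_out)                       # Interval(lo=213.6, hi=242.0)
```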
NASA Technical Reports Server (NTRS)
Cooper, D. B.; Yalabik, N.
1975-01-01
Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
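A minimal sketch of the quadratic-curve fitting step: the conic is written as a quadratic in x and y, as in the abstract, and its coefficients are recovered by linear least squares from synthetic noisy ellipse data. The recursive formulation and the CPU-time comparisons of the study are not reproduced.

```python
# Hedged sketch: fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to noisy
# ellipse points by linear least squares.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic noisy points on an ellipse centered at (1, 2).
t = np.linspace(0, 2 * np.pi, 120)
x = 1.0 + 3.0 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = 2.0 + 1.5 * np.sin(t) + rng.normal(0, 0.05, t.size)

# Linear least-squares fit of the conic coefficients.
A = np.column_stack([x ** 2, x * y, y ** 2, x, y])
coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)

# Residual of the implicit equation at the data points (algebraic error).
residual = A @ coef - 1.0
print("conic coefficients:", np.round(coef, 4))
print("rms algebraic residual:", np.sqrt(np.mean(residual ** 2)))
```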
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. Approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, method of feasible directions, sequence of quadratic programming, and sequence of linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
Help Wanted...College Required. ETS Leadership 2000 Series.
ERIC Educational Resources Information Center
Carnevale, Anthony P.
By the time today's eighth graders reach age 28-29, approximately 66% will have had some kind of postsecondary education or training. There has been a dramatic upward shift in the education and skill requirements for all occupations. Access to higher education has become the threshold for career success. Elite managerial and professional jobs,…
Modular thermal analyzer routine, volume 1
NASA Technical Reports Server (NTRS)
Oren, J. A.; Phillips, M. A.; Williams, D. R.
1972-01-01
The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those containing complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent of that required for the currently existing widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.
Thermal Inactivation of Aerosolized Bacillus subtilis var. niger Spores
Mullican, Charles L.; Buchanan, Lee M.; Hoffman, Robert K.
1971-01-01
A hot-air sterilizer capable of exposing airborne microorganisms to elevated temperatures with an almost instantaneous heating time was developed and evaluated. With this apparatus, aerosolized Bacillus subtilis var. niger spores were killed in about 0.02 sec when exposed to temperatures above 260 C. This is about 500 times faster than killing times reported by others. Extrapolation and comparison of data on the time and temperature required to kill B. subtilis var. niger spores on surfaces show that approximately the same killing time is required as is necessary for spores in air, if corrections are made for the heating time of the surface. PMID:5002138
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Karpel, Mordechay
1989-01-01
Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to put them in this framework. For the minimum-state method, the number of augmenting aerodynamic equations equals the number of denominator roots in the rational approximation. Results are shown of applying various approximation enhancements (including optimization, frequency dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... of five times per year. To conduct the census, the researchers would travel by foot approximately 1... order to implement the mitigation measures that require real-time monitoring, and to satisfy the... marine mammals by harassment. Section 101(a)(5)(D) of the Act establishes a 45-day time limit for our...
Genetic transformation of tobacco NT1 cells with Agrobacterium tumefaciens.
Mayo, Kristin J; Gonzales, Barbara J; Mason, Hugh S
2006-01-01
This protocol is used to produce stably transformed tobacco (Nicotiana tabacum) NT1 cell lines, using Agrobacterium tumefaciens-mediated DNA delivery of a binary vector containing a gene encoding hepatitis B surface antigen and a gene encoding the kanamycin selection marker. The NT1 cultures, at the appropriate stage of growth, are inoculated with A. tumefaciens containing the binary vector. A 3-day cocultivation period follows, after which the cultures are rinsed and placed on solid selective medium. Transformed colonies ('calli') appear in approximately 4 weeks; they are subcultured until adequate material is obtained for analysis of antigen production. 'Elite' lines are selected based on antigen expression and growth characteristics. The time required for the procedure from preparation of the plant cell materials to callus development is approximately 5 weeks. Growth of selected calli to sufficient quantities for antigen screening may require 4-6 weeks beyond the initial selection. Creation of the plasmid constructs, transformation of the A. tumefaciens line, and ELISA and Bradford assays to assess protein production require additional time.
Building Industries Occupations: Syllabus.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Secondary Curriculum Development.
The Building Industries Occupations course is a two-year program of approximately 160 three-period teaching days per year. The required course content is designed to be effectively taught in 80 percent of the total course time, thus allowing 20 percent of the time for instruction adapted to such local conditions as employment prospects, student…
Chance-constrained economic dispatch with renewable energy and storage
Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.; ...
2018-04-19
Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contributes at least a prespecified ratio of the total demand and that the scheduled wind energy is dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. The improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
Approximate matching of regular expressions.
Myers, E W; Miller, W
1989-01-01
Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for sub-strings of A that strongly align with a sequence in R, as required for typical data base searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N2log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
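The flavor of the O(MN) dynamic program can be seen in the special case where R is a plain pattern string; the full method of the paper generalizes this column-by-column recurrence to the states of an automaton for R. The sketch below, with unit costs, finds the substring of A that best aligns with a fixed pattern P.

```python
# Hedged special-case illustration: best approximate match of a fixed pattern P
# against any substring of A, using the standard O(MN) recurrence (unit costs).
def best_approximate_match(A, P):
    m = len(P)
    # dp[j] = min edit distance between P[:j] and some substring of A ending here.
    dp = list(range(m + 1))            # matching P[:j] against the empty substring
    best = (dp[m], 0)                  # (distance, end position in A)
    for i, a in enumerate(A, start=1):
        new = [0] * (m + 1)            # new[0] = 0: a match may start at any position
        for j in range(1, m + 1):
            sub = dp[j - 1] + (a != P[j - 1])             # substitute / match
            new[j] = min(sub, dp[j] + 1, new[j - 1] + 1)  # ... or delete / insert
        dp = new
        best = min(best, (dp[m], i))
    return best                        # (min distance, end index of best substring)

dist, end = best_approximate_match("the quick brown foxx jumps", "browm fox")
print(dist, end)                       # expect distance 1 (one substitution), end index 19
```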
Production Program - Operational - SNAP 10A Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1961-08-07
This planning report is provided to describe the lead time, approximate costs, and major decisions and approvals required to enter a production program for the 500 watt SNAP 10A nuclear space power system.
Kilcrease, D. P.; Brookes, S.
2013-08-19
The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. Additionally, a simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert–Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves in an attempt to produce a cross-section similar to that from using the more time consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. Furthermore, we look at additional modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions that more accurately approximates convergent close coupling calculations.
Finite-element time evolution operator for the anharmonic oscillator
NASA Technical Reports Server (NTRS)
Milton, Kimball A.
1995-01-01
The finite-element approach to lattice field theory is both highly accurate (relative errors approximately 1/N^2, where N is the number of lattice points) and exactly unitary (in the sense that canonical commutation relations are exactly preserved at the lattice sites). In this talk I construct matrix elements for dynamical variables and for the time evolution operator for the anharmonic oscillator, for which the continuum Hamiltonian is H = p^2/2 + lambda q^4/4. Construction of such matrix elements does not require solving the implicit equations of motion. Low order approximations turn out to be extremely accurate. For example, the matrix element of the time evolution operator in the harmonic oscillator ground state gives a result for the anharmonic oscillator ground state energy accurate to better than 1 percent, while a two-state approximation reduces the error to less than 0.1 percent.
An approximate dynamic programming approach to resource management in multi-cloud scenarios
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo
2017-03-01
The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand of cloud services, mainly devoted to the storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find online an approximate solution.
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
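A minimal sketch of the surrogate-building step described above: the expensive routine is run k times at perturbed inputs, an explicit polynomial is fit to the k responses, and a small exceedance probability is then estimated from the cheap surrogate. Plain Monte Carlo is used here purely for illustration; fast probability integration would operate on the same explicit form. The response function and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x1, x2):
    """Stand-in for an expensive computer routine returning a response Y (assumed)."""
    return x1**2 + 0.5 * x1 * x2 + np.sin(x2)

# Run the routine k times at perturbed values of the random variables.
k = 30
X = rng.normal(size=(k, 2))
Y = response(X[:, 0], X[:, 1])

# Fit an explicit quadratic polynomial surrogate Y ~ c . basis(x1, x2).
def basis(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), Y, rcond=None)

# Probability estimation on the cheap surrogate (plain Monte Carlo here just to
# illustrate; FPI would replace this step and handle much smaller probabilities).
S = rng.normal(size=(200_000, 2))
Y_hat = basis(S[:, 0], S[:, 1]) @ coef
print("P(Y > 6) ~", np.mean(Y_hat > 6.0))
```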
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
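The partial-fraction idea behind the Pade-based scheme can be sketched as follows: a diagonal [4/4] Pade approximant R(z) of exp(z) is split into simple poles, so applying R(dt*A) to a vector reduces to independent shifted linear solves that could be distributed across processors. The test problem (a 1D heat equation) and the approximant order are illustrative choices, not the authors' implementation.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade
from scipy.linalg import expm

m = 4
taylor = [1.0 / factorial(k) for k in range(2 * m + 1)]   # Taylor coefficients of exp(z)
p, q = pade(taylor, m)                                    # numerator / denominator polynomials

poles = q.roots                                           # simple, complex poles of R(z)
residues = p(poles) / q.deriv()(poles)                    # r_j = p(theta_j) / q'(theta_j)
d = p.coeffs[0] / q.coeffs[0]                             # value of R(z) as z -> infinity

def pade_expm_apply(A, u, dt):
    """Approximate exp(dt*A) @ u via the partial-fraction form of R(z)."""
    n = A.shape[0]
    Z = dt * A.astype(complex)
    out = d * u.astype(complex)
    for theta, r in zip(poles, residues):       # each solve is independent -> parallel
        out += r * np.linalg.solve(Z - theta * np.eye(n), u)
    return out.real

# Small parabolic test problem: 1D heat equation u_t = A u.
n, h = 50, 1.0 / 51
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
u0 = np.sin(np.pi * h * np.arange(1, n + 1))
dt = 1e-4
print(np.max(np.abs(pade_expm_apply(A, u0, dt) - expm(dt * A) @ u0)))
```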
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul
1993-01-01
We present a systematic method for constructing boundary conditions (numerical and physical) of the required accuracy, for compact (Pade-like) high-order finite-difference schemes for hyperbolic systems. First, a proper summation-by-parts formula is found for the approximate derivative. A 'simultaneous approximation term' (SAT) is then introduced to treat the boundary conditions. This procedure leads to time-stable schemes even in the system case. An explicit construction of the fourth-order compact case is given. Numerical studies are presented to verify the efficacy of the approach.
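A minimal sketch of the SBP-SAT boundary treatment for the scalar advection equation, using a classical second-order summation-by-parts operator rather than the compact high-order (Pade-like) operators constructed in the paper; the penalty parameter, grid, and test problem are illustrative.

```python
import numpy as np

# SBP-SAT sketch for u_t + a u_x = 0 on [0, 1] with inflow data at x = 0.
a, N = 1.0, 100
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Second-order SBP first-derivative operator D = P^{-1} Q.
P = h * np.eye(N + 1)
P[0, 0] = P[-1, -1] = h / 2.0
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
Pinv = np.linalg.inv(P)
D = Pinv @ Q

def rhs(u, t):
    """Semi-discrete right-hand side with a SAT penalty enforcing u(0, t) = g(t)."""
    g = np.sin(-2.0 * np.pi * a * t)          # inflow data consistent with the exact solution
    e0 = np.zeros(N + 1); e0[0] = 1.0
    tau = a                                   # penalty strength (>= a/2 for stability)
    return -a * (D @ u) - tau * (Pinv @ e0) * (u[0] - g)

# Classical RK4 time stepping.
u = np.sin(2.0 * np.pi * x)
t, dt = 0.0, 0.2 * h / a
while t < 1.0:
    k1 = rhs(u, t)
    k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(u + dt * k3, t + dt)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print("max error vs exact:", np.max(np.abs(u - np.sin(2.0 * np.pi * (x - a * t)))))
```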
Considering Time-Scale Requirements for the Future
2013-05-01
geocentric reference frame with the SI second realized on the rotating geoid as the scale unit. It is a continuous atomic time scale that was...the Barycentric and Geocentric Celestial Reference Systems, two time scales, Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG)...defined in 2006 as a linear scaling of TCB having the approximate rate of TT. TCG is the time coordinate for the four dimensional geocentric coordinate
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-22
... to qualify to participate as a skilled nursing facility (SNF) in the Medicare program, or as a nursing facility (NF) in the Medicaid program. We are proposing these requirements to ensure that long... According to CMS data, at any point in time, approximately 1.4 million elderly and disabled nursing home...
Zeng, Cheng; Liang, Shan; Xiang, Shuwen
2017-05-01
Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. However, to use these models in practice, or to utilize, analyze or transmit data from such systems, they must first be discretized. More importantly, for digital control of a continuous-time nonlinear system, a good sampled-data model is required. This paper investigates a new consistency condition which is weaker than the similar results presented previously. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the obtained results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends the existing methods which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
Miller, M R; Atkins, P R; Pedersen, O F
2003-05-01
Recent evidence suggests that the frequency response requirements for peak expiratory flow (PEF) meters are higher than was first thought and that the American Thoracic Society (ATS) waveforms to test PEF meters may not be adequate for the purpose. The dynamic response of mini-Wright (MW), Vitalograph (V), TruZone (TZ), MultiSpiro (MS) and pneumotachograph (PT) flow meters was tested by delivering two differently shaped flow-time profiles from a computer controlled explosive decompression device fitted with a fast response solenoid valve. These profiles matched population 5th and 95th centiles for rise time from 10% to 90% of PEF and dwell time of flow above 90% PEF. Profiles were delivered five times with identical chamber pressure and solenoid aperture at PEF. Any difference in recorded PEF for the two profiles indicates a poor dynamic response. The absolute (% of mean) flow differences in l/min for the V, MW, and PT PEF meters were 25 (4.7), 20 (3.9), and 2 (0.3), respectively, at PEF approximately 500 l/min, and 25 (10.5), 20 (8.7) and 6 (3.0) at approximately 200 l/min. For TZ and MS meters at approximately 500 l/min the differences were 228 (36.1) and 257 (39.2), respectively, and at approximately 200 l/min they were 51 (23.9) and 1 (0.5). All the meters met ATS accuracy requirements when tested with their waveforms. An improved method for testing the dynamic response of flow meters detects marked overshoot (underdamping) of TZ and MS responses not identified by the 26 ATS waveforms. This error could cause patient misclassification when using such meters with asthma guidelines.
Magnetospheric convection and the high-latitude F2 ionosphere
NASA Technical Reports Server (NTRS)
Knudsen, W. C.
1974-01-01
Behavior of the polar ionospheric F layer as it is convected through the cleft, over the polar cap, and through the nightside F layer trough zone is investigated. Passage through the cleft adds approximately 200,000 ions per cu cm in the vicinity of the F2 peak and redistributes the ionization above approximately 400-km altitude to conform with an increased electron temperature. The redistribution of ionization above 400-km altitude forms the 'averaged' plasma ring seen at 1000-km altitude. The F layer is also raised by approximately 20 km in altitude by the convection electric field. The time required for passage across the polar cap (25 deg) is about the same as that required for the F layer peak concentration to decay by a factor of e. The F layer response to passage through the nightside soft electron precipitation zone should be similar to but less than its response to passage through the cleft.
Short report: duration of tick attachment required for transmission of powassan virus by deer ticks.
Ebel, Gregory D; Kramer, Laura D
2004-09-01
Infected deer ticks (Ixodes scapularis) were allowed to attach to naive mice for variable lengths of time to determine the duration of tick attachment required for Powassan (POW) virus transmission to occur. Viral load in engorged larvae detaching from viremic mice and in resulting nymphs was also monitored. Ninety percent of larval ticks acquired POW virus from mice that had been intraperitoneally inoculated with 10(5) plaque-forming units (PFU). Engorged larvae contained approximately 10 PFU. Transstadial transmission efficiency was 22%, resulting in approximately 20% infection in nymphs that had fed as larvae on viremic mice. Titer increased approximately 100-fold during molting. Nymphal deer ticks efficiently transmitted POW virus to naive mice after as few as 15 minutes of attachment, suggesting that unlike Borrelia burgdorferi, Babesia microti, and Anaplasma phagocytophilum, no grace period exists between tick attachment and POW virus transmission.
The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory
NASA Technical Reports Server (NTRS)
Quang, Nguyen Vinh
1996-01-01
For the first time, the general necessary condition for the validity of Dirac's method is explicitly established from the natural requirements of the successive approximation. It is proved that the conception of 'the transition probability per unit time' is not valid. The 'super-platinum rules' for calculating the transition probability are derived for the case of an arbitrarily strong time-independent perturbation.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory lies in its ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; yet, up to that time the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
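A small sketch of exponential-matrix time stepping for free vibration of a hypothetical two-degree-of-freedom system (the mass and stiffness values are assumptions, not the report's models): the state transition matrix exp(A*dt) is formed once and reused at every finite time increment.

```python
import numpy as np
from scipy.linalg import expm

# State-space form z' = A z with z = [x, v]; advancing by a finite time increment
# dt uses Phi = exp(A*dt), which retains high-frequency content that step-by-step
# integrators would need very small steps to resolve.

M = np.diag([1.0, 1.0])                       # mass matrix (assumed)
K = np.array([[40000.0, -20000.0],            # stiffness matrix (assumed, stiff -> high frequency)
              [-20000.0, 20000.0]])

n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), np.zeros((n, n))]])

dt = 1.0e-3
Phi = expm(A * dt)                            # computed once, reused every step

z = np.concatenate([np.array([0.01, 0.0]),    # initial displacement
                    np.zeros(n)])             # initial velocity
history = [z.copy()]
for _ in range(2000):                         # 2 s of free-vibration response
    z = Phi @ z
    history.append(z.copy())

print("displacement of DOF 1 at t = 2 s:", history[-1][0])
```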
Gravitational radiation quadrupole formula is valid for gravitationally interacting systems
NASA Technical Reports Server (NTRS)
Walker, M.; Will, C. M.
1980-01-01
An argument is presented for the validity of the quadrupole formula for gravitational radiation energy loss in the far field of nearly Newtonian (e.g., binary stellar) systems. This argument differs from earlier ones in that it determines beforehand the formal accuracy of approximation required to describe gravitationally self-interacting systems, uses the corresponding approximate equation of motion explicitly, and evaluates the appropriate asymptotic quantities by matching along the correct space-time light cones.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on a finite difference technique at each local-support domain Ωi. At each Ωi, a small linear system of algebraic equations must be solved with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this, an algorithm established by Sarra (2012) is applied. This algorithm computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on the fourth-order Runge-Kutta formula is applied to approximate the time variable. This also decreases the computational cost at each time step, since no nonlinear system has to be solved. On the other hand, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
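A simplified sketch, in the spirit of the Sarra (2012) algorithm mentioned above, of selecting the RBF shape parameter by bisection until the SVD-computed condition number of the local interpolation matrix falls inside a target window; the multiquadric kernel, stencil size, and window bounds are assumptions for illustration.

```python
import numpy as np

def mq_matrix(nodes, c):
    """Multiquadric interpolation matrix on a local-support stencil."""
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    return np.sqrt(d**2 + c**2)

def choose_shape(nodes, kappa_lo=1e11, kappa_hi=1e12, c_lo=1e-4, c_hi=10.0):
    """Bisection on c so that cond(A) (largest/smallest singular value) is in-window."""
    for _ in range(100):
        c = 0.5 * (c_lo + c_hi)
        s = np.linalg.svd(mq_matrix(nodes, c), compute_uv=False)
        kappa = s[0] / s[-1]
        if kappa < kappa_lo:        # too well conditioned -> increase c (flatter basis)
            c_lo = c
        elif kappa > kappa_hi:      # too ill conditioned -> decrease c
            c_hi = c
        else:
            return c, kappa
    return c, kappa

rng = np.random.default_rng(1)
stencil = rng.uniform(size=(13, 2))             # a 13-node local support domain
c, kappa = choose_shape(stencil)
print(f"shape parameter c = {c:.4f}, cond(A) = {kappa:.2e}")
```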
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
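A sketch of the dictionary-compression step with randomized SVD; the synthetic "fingerprints" and the chosen rank below stand in for a real Bloch-simulated dictionary.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
n_entries, n_timepoints, rank = 5000, 1000, 25

# Fake dictionary: damped oscillating signals parameterized by (T1, T2)-like values.
t = np.linspace(0, 10, n_timepoints)
T1 = rng.uniform(0.5, 3.0, n_entries)[:, None]
T2 = rng.uniform(0.05, 0.5, n_entries)[:, None]
D = np.exp(-t / T1) * np.cos(2 * np.pi * t / (10 * T2))

# Randomized SVD touches D only through a few matrix products, so the full
# SVD of the large dictionary never has to be formed or stored.
U, S, Vt = randomized_svd(D, n_components=rank, random_state=0)

D_compressed = D @ Vt.T          # each fingerprint now lives in a rank-25 subspace
rel_err = np.linalg.norm(D - D_compressed @ Vt) / np.linalg.norm(D)
print("compression:", D.nbytes / D_compressed.nbytes, "x,  relative error:", rel_err)
```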
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
Subsonic Aircraft With Regression and Neural-Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2004-01-01
At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is, the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. On an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training was between 1 and 9 min, as given. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min. It was reduced to 2 sec with the regression method and to 4 min by the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not for a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
New realisation of Preisach model using adaptive polynomial approximation
NASA Astrophysics Data System (ADS)
Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young
2012-09-01
Modelling systems with hysteresis has received considerable attention recently due to the increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model to demonstrate hysteresis, which can be represented by infinite but countable first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to the samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-square approximation or an adaptive identification algorithm, which also opens the possibility of accurately tracking the hysteresis model parameters.
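A sketch of the basic idea: one tabulated first-order reversal curve is replaced by a low-order least-squares polynomial so that only the coefficients need to be stored; the synthetic FORC data below are illustrative.

```python
import numpy as np

# Replace one tabulated first-order reversal curve (FORC) with a low-order
# least-squares polynomial; real data would come from the measured look-up table.
H = np.linspace(-1.0, 1.0, 200)                     # input (e.g., field) samples
forc = np.tanh(3.0 * H) + 0.05 * np.random.default_rng(0).normal(size=H.size)

degree = 7
coeffs = np.polyfit(H, forc, degree)                # least-squares fit
forc_hat = np.polyval(coeffs, H)

print("stored values: table =", forc.size, " polynomial =", coeffs.size)
print("max fit error:", np.max(np.abs(forc_hat - forc)))

# The coefficients could instead be tracked online with a recursive
# least-squares / adaptive identification update as the element ages.
```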
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
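A minimal sketch of a Newton-Raphson iteration for a single-step system of the form A(x)x = c, with a toy temperature-dependent coefficient matrix and a finite-difference Jacobian; this is not the report's p-version time-discontinuous formulation, only an illustration of the solution strategy.

```python
import numpy as np

def A(x):
    """Nonlinear, state-dependent 'conductivity' matrix (toy tridiagonal model)."""
    k = 1.0 + 0.1 * x**2
    return np.diag(2.0 * k) - np.diag(k[1:], -1) - np.diag(k[:-1], 1)

def residual(x, c):
    return A(x) @ x - c

def newton(c, x0, tol=1e-10, max_iter=25):
    x = x0.copy()
    for it in range(max_iter):
        r = residual(x, c)
        if np.linalg.norm(r) < tol:
            return x, it
        # Finite-difference Jacobian J[:, j] = d r / d x_j.
        n, eps = x.size, 1e-7
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy(); xp[j] += eps
            J[:, j] = (residual(xp, c) - r) / eps
        x -= np.linalg.solve(J, r)
    return x, max_iter

n = 20
c = np.ones(n)
x, iters = newton(c, x0=np.zeros(n))
print("converged in", iters, "iterations; residual norm:",
      np.linalg.norm(residual(x, c)))
```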
Wetherill, G W
1971-07-30
Considerable information concerning lunar chronology has been obtained by the study of rocks and soil returned by the Apollo 11 and Apollo 12 missions. It has been shown that at the time the moon, earth, and solar system were formed, approximately 4.6 x 10(9) years ago, a severe chemical fractionation took place, resulting in depletion of relatively volatile elements such as Rb and Pb from the sources of the lunar rocks studied. It is very likely that much of this material was lost to interplanetary space, although some of the loss may be associated with internal chemical differentiation of the moon. It has also been shown that igneous processes have enriched some regions of the moon in lithophile elements such as Rb, U, and Ba, very early in lunar history, within 100 million years of its formation. Subsequent igneous and metamorphic activity occurred over a long period of time; mare volcanism of the Apollo 11 and Apollo 12 sites occurred at distinctly different times, 3.6 x 10(9) and 3.3 x 10(9) years ago, respectively. Consequently, lunar magmatism and remanent magnetism cannot be explained in terms of a unique event, such as a close approach to the earth at a time of lunar capture. It is likely that these phenomena will require explanation in terms of internal lunar processes, operative to a considerable depth in the moon, over a long period of time. These data, together with the low present internal temperatures of the moon, inferred from measurements of lunar electrical conductivity, impose severe constraints on acceptable thermal histories of the moon. Progress is being made toward understanding lunar surface properties by use of the effects of particle bombardment of the lunar surface (solar wind, solar flare particles, galactic cosmic rays). It has been shown that the rate of micrometeorite erosion is very low (angstroms per year) and that lunar rocks and soil have been within approximately a meter of the lunar surface for hundreds of millions of years. Future work will require sampling distinctly different regions of the moon in order to provide data concerning other important lunar events, such as the time of formation of the highland regions and of the mare basins, and of the extent to which lunar volcanism has persisted subsequent to the first third of lunar history. This work will require a sufficient number of Apollo landings, and any further cancellation of Apollo missions will jeopardize this unique opportunity to study the development of a planetary body from its beginning. Such a study is fundamental to our understanding of the earth and other planets.
Progressive Classification Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Kocurek, Michael
2009-01-01
An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
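A sketch of the two-SVM progressive scheme under stated assumptions (scikit-learn classifiers on synthetic data): a fast linear SVM supplies a baseline labelling and a confidence index, and the least-confident items are then re-classified by a slower, more accurate RBF SVM until the computation budget runs out.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC, SVC

X_train, y_train = make_classification(n_samples=2000, n_features=20, random_state=0)
X_new, _ = make_classification(n_samples=500, n_features=20, random_state=1)

fast = LinearSVC(dual=False).fit(X_train, y_train)      # coarse, cheap model
slow = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)  # accurate, expensive model

labels = fast.predict(X_new)                        # coarse first pass
confidence = np.abs(fast.decision_function(X_new))  # distance to the hyperplane

budget = 100                                        # slow evaluations we can afford
order = np.argsort(confidence)                      # least confident first
for i in order[:budget]:
    labels[i] = slow.predict(X_new[i:i + 1])[0]     # progressive refinement

print("items refined by the slow SVM:", budget, "of", len(X_new))
```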
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite...can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal ...such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2016-10-15
The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
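A sketch of the regression idea that underlies these estimators for the two-option case: the sampled incremental net benefit is regressed on the parameter of interest, and the positive part of the fitted values is averaged. A plain scikit-learn Gaussian process stands in for the INLA-plus-projection machinery of the paper, and the probabilistic model is a toy example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 500

phi = rng.normal(0.0, 1.0, n)            # parameter of interest
psi = rng.normal(0.0, 1.0, n)            # remaining (nuisance) uncertainty
inb = 500.0 * phi + 300.0 * psi + 200.0  # sampled incremental net benefit (toy model)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(phi.reshape(-1, 1), inb)
g_hat = gp.predict(phi.reshape(-1, 1))   # estimate of E[INB | phi]

# Two-option EVPPI: E_phi[ max(0, E[INB|phi]) ] - max(0, E[INB]).
evppi = np.mean(np.maximum(g_hat, 0.0)) - max(np.mean(g_hat), 0.0)
print("estimated EVPPI:", evppi)
```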
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, T. S.; Babb, T.; Martinsson, P. G.
2015-06-16
Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogenous medium are presented, and the method is compared to the 4th order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, and with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
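For reference, the quadratic-time baseline that the approximate algorithms improve on is simply a histogram of all pairwise particle distances with a fixed bucket width; the particle coordinates below are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 50.0, size=(4000, 3))     # N particles in a 50x50x50 box

bucket_width = 2.5
d = pdist(atoms)                                   # all N(N-1)/2 distances -> O(N^2)
edges = np.arange(0.0, d.max() + bucket_width, bucket_width)
sdh, _ = np.histogram(d, bins=edges)               # the spatial distance histogram

print("buckets:", len(sdh), " pair count:", sdh.sum())
```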
Multi-Model Validation in the Chesapeake Bay Region During Frontier Sentinel 2010
2012-09-28
which a 72-hr forecast took approximately 1 hr. Identical runs were performed on the DoD Supercomputing Resources Center (DSRC) host “DaVinci” at the...performance Navy DSRC host DaVinci. Products of water level and horizontal current maps as well as station time series, identical to those produced by the...forecast meteorological fields. The NCOM simulations were run daily on 128 CPUs at the Navy DSRC host DaVinci and required approximately 5 hrs of wall
Time domain convergence properties of Lyapunov stable penalty methods
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Sunkel, John
1991-01-01
Linear hyperbolic partial differential equations are analyzed using standard techniques to show that a sequence of solutions generated by the Liapunov stable penalty equations approaches the solution of the differential-algebraic equations governing the dynamics of multibody problems arising in linear vibrations. The analysis does not require that the system be conservative and does not impose any specific integration scheme. Variational statements are derived which bound the error in approximation by the norm of the constraint violation obtained in the approximate solutions.
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.
1979-01-01
The inhomogeneous Cauchy-Riemann equations in a rectangle are discretized by a finite difference approximation. Several different boundary conditions are treated explicitly, leading to algorithms which have overall second-order accuracy. All boundary conditions with either u or v prescribed along a side of the rectangle can be treated by similar methods. The algorithms presented here have nearly minimal time and storage requirements and seem suitable for development into a general-purpose direct Cauchy-Riemann solver for arbitrary boundary conditions.
A baseline maritime satellite communication system
NASA Technical Reports Server (NTRS)
Durrani, S. H.; Mcgregor, D. N.
1974-01-01
This paper describes a baseline system for maritime communications via satellite during the 1980s. The system model employs three geostationary satellites with global coverage antennas. Access to the system is controlled by a master station; user access is based on time-ordered polling or random access. Each Thor-Delta launched satellite has an RF power of 100 W (spinner) or 250 W (three-axis stabilized), and provides 10 equivalent duplex voice channels for up to 1500 ships with average waiting times of approximately 2.5 minutes. The satellite capacity is bounded by the available bandwidth to 50 such channels, which can serve up to 10,000 ships with an average waiting time of 5 minutes. The ships must have peak antenna gains of approximately 15.5 dB or 22.5 dB for the two cases (10 or 50 voice channels) when a spinner satellite is used; the required gains are 4 dB lower if a three-axis stabilized satellite is used. The ship antenna requirements can be reduced by 8 to 10 dB by employing a high-gain multi-beam phased array antenna on the satellite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hou-Dao; Yan, YiJing, E-mail: yyan@ust.hk; iChEM and Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026
2015-12-07
The issue of efficient hierarchy truncation is related to many approximate theories. In this paper, we revisit this issue from both the numerical efficiency and quantum mechanics prescription invariance aspects. The latter requires that the truncation approximation made in the Schrödinger picture, such as the quantum master equations and their self-consistent-Born-approximation improvements, should be transferable to their Heisenberg-picture correspondences, without further approximations. We address this issue with the dissipaton equation of motion (DEOM), which is a unique theory for the dynamics of not only reduced systems but also hybrid bath environments. We also highlight that the DEOM theory is not only about how its dynamical variables evolve in time, but also about the underlying dissipaton algebra. We demonstrate this unique feature of DEOM with model systems and report some intriguing nonlinear Fano interference characteristics that are experimentally measurable.
Start-On-The-Part Transient Model for In-Situ Automated Tape Placement of Thermoplastic Composites
NASA Technical Reports Server (NTRS)
Costen, Robert C.; Marchello, Joseph M.
1997-01-01
Fabrication of a complex part by automated tape placement (ATP) can require starting up a new tape-end in the part interior, termed start-on-the-part. Careful thermal management of the starting transient is needed to achieve uniform crystallinity and inter-laminar weld strength - which is the objective of this modeling effort. The transient is modeled by a Fourier-Laplace transform solution of the time-dependent thermal transport equation in two spatial dimensions. The solution is subject to a quasi-steady approximation for the speed and length of the consolidation head. Sample calculations are done for the Langley ATP robot applying PEEK/carbon fiber composite and for two upgrades in robot performance. The head starts out almost at rest which meets an engineering requirement for accurate placement of the new tape-end. The head then rapidly accelerates until it reaches its steady state speed. This rapid acceleration, however, violates the quasi-steady approximation, so uniform weld strength and crystallinity during the starting transient are not actually achieved. The solution does give the elapsed time and distance from start-up to validity of the quasi-steady approximation - which quantifies the length of the non-uniform region. The elapsed time was always less than 0.1 s and the elapsed distance less than 1 cm. This quantification would allow the non-uniform region to be either trimmed away or compensated for in the design of a part. Such compensation would require experiments to measure the degree of non-uniformity, because the solution does not provide this information. The rapid acceleration suggests that the consolidation roller or belt be actively synchronized to avoid abrading the tape.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
NASA Astrophysics Data System (ADS)
Taoka, Hidekazu; Higuchi, Kenichi; Sawahashi, Mamoru
This paper presents experimental results in real propagation channel environments of real-time 1-Gbps packet transmission using antenna-dependent adaptive modulation and channel coding (AMC) with 4-by-4 MIMO multiplexing in the downlink Orthogonal Frequency Division Multiplexing (OFDM) radio access. In the experiment, Maximum Likelihood Detection employing QR decomposition and the M-algorithm (QRM-MLD) with adaptive selection of the surviving symbol replica candidates (ASESS) is employed to achieve such a high data rate at a lower received signal-to-interference plus background noise power ratio (SINR). The field experiments, which are conducted at the average moving speed of 30 km/h, show that real-time packet transmission of greater than 1 Gbps in a 100-MHz channel bandwidth (i.e., 10 bits/second/Hz) is achieved at the average received SINR of approximately 13.5 dB using 16QAM modulation and turbo coding with the coding rate of 8/9. Furthermore, we show that the measured throughput of greater than 1 Gbps is achieved at the probability of approximately 98% in a measurement course, where the maximum distance from the cell site was approximately 300 m with the respective transmitter and receiver antenna separation of 1.5 m and 40 cm and the total transmission power of 10 W. The results also clarify that the minimum required receiver antenna spacing is approximately 10 cm (1.5 carrier wavelengths) to suppress the loss in the required received SINR at 1-Gbps throughput to within 1 dB compared to that assuming a fading correlation between antennas of zero, both under non-line-of-sight (NLOS) and line-of-sight (LOS) conditions.
Control Aspects of Highly Constrained Guidance Techniques
1978-02-01
cycle. The advantages of this approach are (1) it requires only one time-consuming computation of the platform-to-body transformation matrix from...of steering gain corresponding to the three autopilot configurations, Kchange is KFCS change 2 0.0006 5 0.00156 8 0.00256 2.7 Terminal Steering As...a time-consuming process that it is desirable to consider ways of reducing the computation time by approximating the elements of B and/or updating
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2011 CFR
2011-01-01
... different horsepowers without duplication; (C) The basic models should be of different frame number series... be produced over a reasonable period of time (approximately 180 days), then each unit shall be tested... design may be substituted without requiring additional testing if the represented measures of energy...
Computing aerodynamic sound using advanced statistical turbulence theories
NASA Technical Reports Server (NTRS)
Hecht, A. M.; Teske, M. E.; Bilanin, A. J.
1981-01-01
It is noted that the calculation of turbulence-generated aerodynamic sound requires knowledge of the spatial and temporal variation of Q sub ij (xi sub k, tau), the two-point, two-time turbulent velocity correlations. A technique is presented to obtain an approximate form of these correlations based on closure of the Reynolds stress equations by modeling of higher order terms. The governing equations for Q sub ij are first developed for a general flow. The case of homogeneous, stationary turbulence in a unidirectional constant shear mean flow is then assumed. The required closure form for Q sub ij is selected which is capable of qualitatively reproducing experimentally observed behavior. This form contains separation time dependent scale factors as parameters and depends explicitly on spatial separation. The approximate forms of Q sub ij are used in the differential equations and integral moments are taken over the spatial domain. The velocity correlations are used in the Lighthill theory of aerodynamic sound by assuming normal joint probability.
Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle
NASA Astrophysics Data System (ADS)
Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.
2018-05-01
Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical, approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.
Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model
NASA Astrophysics Data System (ADS)
Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott
2017-08-01
One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
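A sketch of the barycentric-coordinates-with-linear-programming idea: the current state is expressed as a convex combination of stored library states (nonnegative weights summing to one), the approximation error is handled explicitly through L1 slack variables, and the same weights applied to the library successors give the prediction. The synthetic time series and library layout are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, d = 400, 3
traj = np.cumsum(rng.normal(size=(T, d)), axis=0) * 0.1   # synthetic multivariate series

library, successors = traj[:300], traj[1:301]             # stored states and their successors
y = traj[350]                                             # state to be modelled
k = library.shape[0]

# Decision vector z = [w_1..w_k, e_plus_1..e_plus_d, e_minus_1..e_minus_d].
c = np.concatenate([np.zeros(k), np.ones(2 * d)])         # minimise total L1 error
A_eq = np.zeros((d + 1, k + 2 * d))
A_eq[:d, :k] = library.T                                  # sum_i w_i x_i + e+ - e- = y
A_eq[:d, k:k + d] = np.eye(d)
A_eq[:d, k + d:] = -np.eye(d)
A_eq[d, :k] = 1.0                                         # weights sum to one
b_eq = np.concatenate([y, [1.0]])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")    # all variables >= 0 by default
w = res.x[:k]

prediction = w @ successors                               # one-step prediction
print("L1 reconstruction error:", round(res.fun, 4),
      "predicted next state:", prediction, "actual:", traj[351])
```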
Exoplanet Direct Imaging: Coronagraph Probe Mission Study EXO-C
NASA Technical Reports Server (NTRS)
Stapelfeldt, Karl R.
2013-01-01
Flagship mission for spectroscopy of ExoEarths is a long-term priority for space astrophysics (Astro2010). Requires 10(exp 10) contrast at 3 lambda/D separation (greater than 10,000 times beyond HST performance) and a large telescope of greater than 4 m aperture. Big step. Mission for spectroscopy of giant planets and imaging of disks requires 10(exp 9) contrast at 3 lambda/D (already demonstrated in lab) and an approximately 1.5 m telescope. Should be much more affordable, a good intermediate step. Various PIs have proposed many versions of the latter mission 17 times since 1999; no unified approach.
Quiet Quincy Quarter. Teacher's Guide [and] Student Materials.
ERIC Educational Resources Information Center
Zishka, Phyllis
This document suggests learning activities, teaching methods, objectives, and evaluation measures for a second grade consumer education unit on quarters. The unit, which requires approximately six hours of class time, reinforces basic social studies and mathematics skills including following sequences of numbers, distinguishing left from right,…
Basic Skills Applications in Occupational Investigation.
ERIC Educational Resources Information Center
Hendrix, Mary
This guide contains 50 lesson plans for learning activities that incorporate basic skills into content areas of career education, mathematics, science, social studies, communications, and productive work habits. Each lesson consists of a purpose, basic skills applications, approximate time required, materials needed, things for the teacher to do…
DEVELOPMENT OF REAL-TIME FLARE COMBUSTION EFFICIENCY MONITOR - PHASE I
There are approximately 7,000 flares in operation at industrial facilities across the United States. Flares are one of the largest Volatile Organic Compounds (VOCs) and air toxics emissions sources. Based on a special emission inventory required by the Texas Commission on E...
78 FR 54722 - Reports, Forms and Record Keeping Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-05
... submission requesting confidential treatment. This estimate will vary based on the size of the submission, with smaller and voluntary submissions taking considerably less time to prepare. The agency based this... approximately 460 requests for confidential treatment annually. This figure is based on the average number of...
Abid, Abdulbasit
2013-03-01
This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed using the C language. The C code is then compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation. The time required to process the fringe pattern image is approximately 1 s. In order to further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) library version 7.1. The execution time was reduced to approximately 650 ms. This confirms that at least a sixfold speedup was gained using the FPGA implementation over a state-of-the-art workstation that executes a heavily optimized implementation of the 1D-CWT algorithm.
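A sketch of 1D-CWT fringe demodulation on a single synthetic fringe row, using PyWavelets rather than the FPGA pipeline: the wrapped phase at each pixel is taken as the argument of the complex Morlet coefficient on the ridge of maximum modulus. The scales, wavelet parameters, and carrier frequency are illustrative choices, not those of the FPGA design.

```python
import numpy as np
import pywt

width = 512
x = np.arange(width)
phase_true = 0.02 * (x - width / 2) ** 2 / width                # slowly varying test phase
row = 128 + 100 * np.cos(2 * np.pi * 0.05 * x + phase_true)     # one fringe-pattern row

scales = np.arange(4, 64)
coeffs, _ = pywt.cwt(row - row.mean(), scales, "cmor1.5-1.0")   # complex Morlet CWT

ridge = np.argmax(np.abs(coeffs), axis=0)                       # best scale per pixel
wrapped = np.angle(coeffs[ridge, np.arange(width)])             # wrapped phase (one row)

# Subtracting the carrier and unwrapping recovers the test phase up to a
# constant, edge effects aside.
unwrapped = np.unwrap(wrapped) - 2 * np.pi * 0.05 * x
print("phase error std:", np.std((unwrapped - unwrapped.mean())
                                 - (phase_true - phase_true.mean())))
```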
Real-time automatic registration in optical surgical navigation
NASA Astrophysics Data System (ADS)
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming
2016-05-01
An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
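The patient-to-image step described above amounts to a rigid point-set registration once the fiducial centers are known in both spaces; a minimal sketch of the standard least-squares (Kabsch/SVD) solution is shown below. This is a generic illustration under the assumption of already-matched fiducial points, not the authors' localization pipeline, and the function names are mine.

```python
import numpy as np

def rigid_register(patient_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping patient-space fiducial
    centers onto their matched image-space counterparts (Kabsch/SVD)."""
    p_mean = patient_pts.mean(axis=0)
    q_mean = image_pts.mean(axis=0)
    P = patient_pts - p_mean
    Q = image_pts - q_mean
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

def fiducial_registration_error(patient_pts, image_pts, R, t):
    # RMS residual of the fiducials after applying the transform.
    resid = image_pts - (patient_pts @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())
```

In practice the markers searched in image space must first be matched to the tracked markers; the closed-form solve itself is what makes sub-second re-registration feasible.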
NASA Astrophysics Data System (ADS)
Kim, H. S.; Cho, J. H.; Shin, S. G.; Dong, K. R.; Chung, W. K.; Chung, J. E.
2013-01-01
This study evaluated possible actions that can help protect against and reduce radiation exposure by measuring the exposure dose for each type of isotope that is used frequently in nuclear medicine before performing numerical analysis of the effective half-life based on the measurement results. From July to August in 2010, the study targeted 10, 6 and 5 people who underwent an 18F-FDG (fludeoxyglucose) positron emission tomography (PET) scan, 99mTc-HDP bone scan, and 201Tl myocardial single-photon emission computed tomography (SPECT) scan, respectively, in the nuclear medicine department. After injecting the required medicine into the subjects, a survey meter was used to measure the dose depending on the distance from the heart and time elapsed. For the 18F-FDG PET scan, the dose decreased by approximately 66% at 90 min compared to that immediately after the injection and by 78% at a distance of 1 m compared to that at 0.3 m. In the 99mTc-HDP bone scan, the dose decreased by approximately 71% in 200 min compared to that immediately after the injection and by approximately 78% at a distance of 1 m compared to that at 0.3 m. In the 201Tl myocardial SPECT scan, the dose decreased by approximately 30% in 250 min compared to that immediately after the injection and by approximately 55% at a distance of 1 m compared to that at 0.3 m. In conclusion, this study measured the exposure doses by isotope, distance from the heart, and elapsed time, and found that the doses decrease by a large margin with increasing distance and time.
Jia, Yali; An, Lin; Wang, Ruikang K
2010-01-01
We demonstrate for the first time that the detailed blood flow distribution within intracranial dura mater and cortex can be visualized by an ultrahigh sensitive optical microangiography (UHS-OMAG). The study uses a UHS-OMAG system operating at 1310 nm with an imaging speed of 150 frames per second that requires approximately 10 s to complete one 3-D scan of approximately 2.5 x 2.5 mm(2). The system is sensitive to blood flow with a velocity ranging from approximately 4 μm/s to approximately 23 mm/s. We show superior performance of UHS-OMAG in providing functional images of capillary level microcirculation within meninges in mice with the cranium left intact, the results of which correlate well with the standard dural histopathology.
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
Accelerated step-temperature aging of Al/x/Ga/1-x/As heterojunction laser diodes
NASA Technical Reports Server (NTRS)
Kressel, H.; Ettenberg, M.; Ladany, I.
1978-01-01
Double-heterojunction Al(0.3)Ga(0.7)As/Al(0.08)Ga(0.92)As lasers (oxide-striped and Al2O3 facet coated) were subjected to step-temperature aging from 60 to 100 C. The change in threshold current and spontaneous output was monitored at 22 C. The average time required for a 20% pulsed threshold current increase ranges from about 500 h, when operating at 100 C, to about 5000 h at a 70 C ambience. At 22 C, the extrapolated time is about 1 million h. The time needed for a 50% spontaneous emission reduction is of the same order of magnitude. The resulting activation energies are approximately 0.95 eV for laser degradation and approximately 1.1 eV for the spontaneous output decrease.
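The extrapolation from elevated temperature to room temperature follows the usual Arrhenius form; as a rough consistency check (my arithmetic, not taken from the paper), the quoted activation energy reproduces the quoted lifetimes:

\[
t(T) \approx t(T_{\mathrm{ref}})\,\exp\!\left[\frac{E_a}{k}\left(\frac{1}{T}-\frac{1}{T_{\mathrm{ref}}}\right)\right],
\qquad
t(295\,\mathrm{K}) \approx 500\,\mathrm{h}\times
\exp\!\left[\frac{0.95\,\mathrm{eV}}{8.62\times10^{-5}\,\mathrm{eV/K}}
\left(\frac{1}{295}-\frac{1}{373}\right)\right]
\approx 1\times10^{6}\,\mathrm{h}.
\]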
Apparatus for and method of monitoring for breached fuel elements
Gross, Kenny C.; Strain, Robert V.
1983-01-01
This invention teaches improved apparatus for the method of detecting a breach in cladded fuel used in a nuclear reactor. The detector apparatus uses a separate bypass loop for conveying part of the reactor coolant away from the core, and at least three separate delayed-neutron detectors mounted proximate this detector loop. The detectors are spaced apart so that the coolant flow time from the core to each detector is different, and these differences are known. The delayed-neutron activity at the detectors is a function of the delay time after the reaction in the fuel until the coolant carrying the delayed-neutron emitter passes the respective detector. This time delay is broken down into separate components including an isotopic holdup time required for the emitter to move through the fuel from the reaction to the coolant at the breach, and two transit times required for the emitter now in the coolant to flow from the breach to the detector loop and then via the loop to the detector. At least two of these time components are determined during calibrated operation of the reactor. Thereafter during normal reactor operation, repeated comparisons are made by the method of regression approximation of the third time component for the best-fit line correlating measured delayed-neutron activity against activity that is approximated according to specific equations. The equations use these time-delay components and known parameter values of the fuel and of the parent and emitting daughter isotopes.
Nuclear reactor with internal thimble-type delayed neutron detection system
Gross, Kenny C.; Poloncsik, John; Lambert, John D. B.
1990-01-01
This invention teaches improved apparatus for the method of detecting a breach in cladded fuel used in a nuclear reactor. The detector apparatus is located in the primary heat exchanger which conveys part of the reactor coolant past at least three separate delayed-neutron detectors mounted in this heat exchanger. The detectors are spaced apart such that the coolant flow time from the core to each detector is different, and these differences are known. The delayed-neutron activity at the detectors is a function of the delay time after the reaction in the fuel until the coolant carrying the delayed-neutron emitter passes the respective detector. This time delay is broken down into separate components including an isotopic holdup time required for the emitter to move through the fuel from the reaction to the coolant at the breach, and two transit times required for the emitter now in the coolant to flow from the breach to the detector loop and then via the loop to the detector. At least two of these time components are determined during calibrated operation of the reactor. Thereafter during normal reactor operation, repeated comparisons are made by the method of regression approximation of the third time component for the best-fit line correlating measured delayed-neutron activity against activity that is approximated according to specific equations. The equations use these time-delay components and known parameter values of the fuel and of the parent and emitting daughter isotopes.
Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.
Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J
2017-09-01
The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
THE USE OF THE LIGASURE™ DEVICE FOR SCROTAL ABLATION IN MARSUPIALS.
Cusack, Lara; Cutler, Daniel; Mayer, Joerg
2017-03-01
Five sugar gliders (Petaurus breviceps), ranging in age from 3 mo to 3.5 yr, and one opossum (Didelphis virginianus), age 4.5 mo, presented for elective orchiectomy and scrotal ablation. The LigaSure™ device was safely used for orchiectomy and scrotal ablation in both species. Surgical time with the LigaSure was approximately 4 sec. No grooming of the incision site or self-mutilation was seen in the first 72 hr postoperatively. One sugar glider required postoperative wound care approximately 10 days postoperatively following incision-site grooming by a conspecific. The LigaSure provides a rapid, technologically simple and safe surgical technique for scrotal ablation and orchiectomy in the marsupial patient that minimizes surgical, anesthetic, and recovery times.
USDA-ARS?s Scientific Manuscript database
The 2015 Dietary Guidelines Advisory Committee indicated magnesium was a shortfall nutrient that was underconsumed relative to the Estimated Average Requirement (EAR) for many Americans. Approximately 50% of Americans consume less than the EAR for magnesium, and some age groups consume substantially...
Path Planning For A Class Of Cutting Operations
NASA Astrophysics Data System (ADS)
Tavora, Jose
1989-03-01
Optimizing processing time in some contour-cutting operations requires solving the so-called no-load path problem. This problem is formulated and an approximate resolution method (based on heuristic search techniques) is described. Results for real-life instances (clothing layouts in the apparel industry) are presented and evaluated.
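As a concrete (if simplistic) illustration of the kind of heuristic search involved, the sketch below orders hypothetical cut entry points with a greedy nearest-neighbour rule to shorten the no-load (rapid-traverse) path; it is a generic stand-in, not Tavora's method, and the point set is invented.

```python
import numpy as np

def nearest_neighbour_order(points, start=0):
    """Greedy ordering of cut entry points: always jump to the closest
    unvisited point, shortening the total no-load travel."""
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def no_load_length(points, order):
    # Total rapid-traverse distance between successive entry points.
    return sum(np.linalg.norm(points[a] - points[b])
               for a, b in zip(order, order[1:]))

pts = np.random.default_rng(0).random((20, 2))   # hypothetical contour entry points
order = nearest_neighbour_order(pts)
print(no_load_length(pts, order))
```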
Accretion of low-metallicity gas by the Milky Way.
Wakker, B P; Howk, J C; Savage, B D; van Woerden, H; Tufte, S L; Schwarz, U J; Benjamin, R; Reynolds, R J; Peletier, R F; Kalberla, P M
1999-11-25
Models of the chemical evolution of the Milky Way suggest that the observed abundances of elements heavier than helium ('metals') require a continuous infall of gas with metallicity (metal abundance) about 0.1 times the solar value. An infall rate integrated over the entire disk of the Milky Way of approximately 1 solar mass per year can solve the 'G-dwarf problem'--the observational fact that the metallicities of most long-lived stars near the Sun lie in a relatively narrow range. This infall dilutes the enrichment arising from the production of heavy elements in stars, and thereby prevents the metallicity of the interstellar medium from increasing steadily with time. However, in other spiral galaxies, the low-metallicity gas needed to provide this infall has been observed only in associated dwarf galaxies and in the extreme outer disk of the Milky Way. In the distant Universe, low-metallicity hydrogen clouds (known as 'damped Ly alpha absorbers') are sometimes seen near galaxies. Here we report a metallicity of 0.09 times solar for a massive cloud that is falling into the disk of the Milky Way. The mass flow associated with this cloud represents an infall per unit area of about the theoretically expected rate, and approximately 0.1-0.2 times the amount required for the whole Galaxy.
Simulation of water-table aquifers using specified saturated thickness
Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.
2014-01-01
Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
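In equation form, the approximation under discussion replaces the head-dependent transmissivity of the unconfined (Boussinesq-type) flow equation with a fixed value based on an approximate saturated thickness, which is what the "confined" option implements (schematic form, standard notation assumed):

\[
\nabla\cdot\left(K\,h\,\nabla h\right)=S_y\,\frac{\partial h}{\partial t}+W
\quad\longrightarrow\quad
\nabla\cdot\left(K\,\bar{b}\,\nabla h\right)=S\,\frac{\partial h}{\partial t}+W,
\]

where \(h\) is head, \(K\) hydraulic conductivity, \(\bar{b}\) the specified saturated thickness, \(S_y\) specific yield, \(S\) the storage coefficient used in the linearized model, and \(W\) represents sources and sinks.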
77 FR 60114 - Agency Information Collection Activities Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... approximately 100 entities on a daily basis. The recordkeeping requirement of section 22.5 is expected to apply to approximately 100 entities on an approximately annual basis. Based on experience with analogous... required by section 22.2(g) is expected to require about 100 hours annually per entity, for a total burden...
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through checkpoints at venues hosting mass events. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function chosen so that its properties resemble the real arrival rates of visitors coming to a stadium for football matches. A piecewise-constant approximation of this function is used in the statistical modeling of the NQS. The authors calculated how the queue length and the waiting time for service (time in queue) depend on time for different input-rate laws, as well as the time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match. We also found how the macroscopic quantitative characteristics of the NQS depend on the number of averaging sections of the input rate.
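A rough time-stepped sketch of this kind of non-stationary queueing simulation (piecewise-constant arrival rate over steps of length dt, a bank of identical checkpoints with exponential service). The rate profile, parameter values, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_checkpoint_queue(lam_of_t, n_servers, mu, t_end, dt=1.0, seed=0):
    """Time-stepped approximation of a non-stationary M(t)/M/c checkpoint queue.
    lam_of_t: arrival rate (visitors per unit time) as a function of time,
    approximated as piecewise constant on steps of length dt."""
    rng = np.random.default_rng(seed)
    queue = 0            # waiting visitors
    busy = 0             # occupied checkpoints
    q_hist = []
    t = 0.0
    while t < t_end:
        arrivals = rng.poisson(lam_of_t(t) * dt)
        # each busy checkpoint finishes in this step with prob. 1 - exp(-mu*dt)
        departures = rng.binomial(busy, 1.0 - np.exp(-mu * dt))
        busy -= departures
        queue += arrivals
        start = min(queue, n_servers - busy)
        queue -= start
        busy += start
        q_hist.append(queue)
        t += dt
    return np.array(q_hist)

# Hypothetical ramp-up of arrivals before a kickoff around t = 120 min.
lam = lambda t: 400.0 * np.exp(-((t - 90.0) / 30.0) ** 2)   # visitors per minute
q = simulate_checkpoint_queue(lam, n_servers=20, mu=3.0, t_end=150.0)
```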
Robust Algorithms for Max Independent Set on Minor-Free Graphs Based on the Sherali-Adams Hierarchy
NASA Astrophysics Data System (ADS)
Magen, Avner; Moharrami, Mohammad
This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar, or more generally minor free. The algorithm applies a sufficiently large number f(ε) of rounds of the so-called Sherali-Adams Lift-and-Project system, the number needed to obtain a (1 + ε)-approximation, where f is some function that depends only on the graph that should be avoided as a minor. The problems we discuss are the well-studied Vertex Cover and Independent Set problems. A curious fact we expose is that in the world of minor-free graphs, one of the two is, in some sense, harder than the other.
Prospects for detecting oxygen, water, and chlorophyll on an exo-Earth
Brandt, Timothy D.; Spiegel, David S.
2014-01-01
The goal of finding and characterizing nearby Earth-like planets is driving many NASA high-contrast flagship mission concepts, the latest of which is known as the Advanced Technology Large-Aperture Space Telescope (ATLAST). In this article, we calculate the optimal spectral resolution R = λ/δλ and minimum signal-to-noise ratio per spectral bin (SNR), two central design requirements for a high-contrast space mission, to detect signatures of water, oxygen, and chlorophyll on an Earth twin. We first develop a minimally parametric model and demonstrate its ability to fit synthetic and observed Earth spectra; this allows us to measure the statistical evidence for each component’s presence. We find that water is the easiest to detect, requiring a resolution R ≳ 20, while the optimal resolution for oxygen is likely to be closer to R = 150, somewhat higher than the canonical value in the literature. At these resolutions, detecting oxygen will require approximately two times the SNR as water. Chlorophyll requires approximately six times the SNR as oxygen for an Earth twin, only falling to oxygen-like levels of detectability for a low cloud cover and/or a large vegetation covering fraction. This suggests designing a mission for sensitivity to oxygen and adopting a multitiered observing strategy, first targeting water, then oxygen on the more favorable planets, and finally chlorophyll on only the most promising worlds. PMID:25197095
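One way to translate these SNR ratios into observing time (my inference, assuming photon-noise-limited observations of the same target so that SNR grows as the square root of integration time, and setting aside the differing optimal resolutions) is that the required exposure times scale quadratically:

\[
\mathrm{SNR}\propto\sqrt{t_{\mathrm{exp}}}
\quad\Longrightarrow\quad
\frac{t_{\mathrm{O_2}}}{t_{\mathrm{H_2O}}}\approx 2^{2}=4,
\qquad
\frac{t_{\mathrm{chl}}}{t_{\mathrm{O_2}}}\approx 6^{2}=36 .
\]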
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
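For orientation, a minimal sketch of a plain parametric-bootstrap percentile CI, which is the general idea FIESTA accelerates; it omits FIESTA's stochastic-approximation machinery and its exact CI construction, and all names below are illustrative.

```python
import numpy as np

def parametric_bootstrap_ci(theta_hat, sampler, estimator, n_boot=1000,
                            alpha=0.05, seed=0):
    """Generic parametric-bootstrap percentile CI: simulate data under the
    fitted parameter, re-estimate, and take quantiles of the re-estimates."""
    rng = np.random.default_rng(seed)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        y_sim = sampler(theta_hat, rng)      # simulate data under theta_hat
        boots[b] = estimator(y_sim)          # re-estimate on simulated data
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage: CI for a normal-mean estimate (illustrative only).
data = np.random.default_rng(1).normal(2.0, 1.0, size=200)
est = lambda y: y.mean()
samp = lambda m, rng: rng.normal(m, 1.0, size=200)
print(parametric_bootstrap_ci(est(data), samp, est))
```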
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, J.; Nguyen, D.C.; Sheffield, R.L.
1996-10-01
We present the results of theoretical and simulation studies of the design and performance of a new type of FEL oscillator. This device, known by the acronym RAFEL for Regenerative Amplifier Free-Electron Laser, will be constructed in the space presently occupied by the AFEL (Advanced FEL) at Los Alamos, and will be driven by an upgraded (to higher average power) version of the present AFEL linac. In order to achieve a long-time-averaged optical output power of approximately 1 kW using an electron beam with an average power of approximately 20 kW, a rather high extraction efficiency eta of approximately 5% is required. We have designed a 2-m-long undulator to attain this goal: the first meter is untapered and provides high gain while the second meter is linearly-tapered in magnetic field amplitude to provide high extraction efficiency in the standard K-M-R manner. Two-plane focusing and linear polarization of the undulator are assumed. Electron-beam properties from PARMELA simulations of the AFEL accelerator were used in the design. A large saturated gain, approximately 500, requires a very small optical feedback to keep the device operating at steady-state. However, the large gain leads to distorted optical modes which require two- and three-dimensional simulations to adequately treat diffraction effects. This FEL will be driven by 17 MeV electrons and will operate in the 16 μm spectral region.
Towards efficient backward-in-time adjoint computations using data compression techniques
Cyr, E. C.; Shadid, J. N.; Wildey, T.
2014-12-16
In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard-approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error-estimates.
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
Wireless amperometric neurochemical monitoring using an integrated telemetry circuit.
Roham, Masoud; Halpern, Jeffrey M; Martin, Heidi B; Chiel, Hillel J; Mohseni, Pedram
2008-11-01
An integrated circuit for wireless real-time monitoring of neurochemical activity in the nervous system is described. The chip is capable of conducting high-resolution amperometric measurements in four settings of the input current. The chip architecture includes a first-order Delta Sigma modulator (Delta Sigma M) and a frequency-shift-keyed (FSK) voltage-controlled oscillator (VCO) operating near 433 MHz. It is fabricated using the AMI 0.5 microm double-poly triple-metal n-well CMOS process, and requires only one off-chip component for operation. Measured dc current resolutions of approximately 250 fA, approximately 1.5 pA, approximately 4.5 pA, and approximately 17 pA were achieved for input currents in the range of +/-5, +/-37, +/-150, and +/-600 nA, respectively. The chip has been interfaced with a diamond-coated, quartz-insulated, microneedle, tungsten electrode, and successfully recorded dopamine concentration levels as low as 0.5 microM wirelessly over a transmission distance of approximately 0.5 m in flow injection analysis experiments.
Application of Approximate Unsteady Aerodynamics for Flutter Analysis
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2010-01-01
A technique for approximating the modal aerodynamic influence coefficient (AIC) matrices by using basis functions has been developed. A process for using the resulting approximated modal AIC matrix in aeroelastic analysis has also been developed. The method requires the unsteady aerodynamics in frequency domain, and this methodology can be applied to the unsteady subsonic, transonic, and supersonic aerodynamics. The flutter solution can be found by the classic methods, such as rational function approximation, k, p-k, p, root locus et cetera. The unsteady aeroelastic analysis using unsteady subsonic aerodynamic approximation is demonstrated herein. The technique presented is shown to offer consistent flutter speed prediction on an aerostructures test wing (ATW) 2 and a hybrid wing body (HWB) type of vehicle configuration with negligible loss in precision. This method computes AICs that are functions of the changing parameters being studied and are generated within minutes of CPU time instead of hours. These results may have practical application in parametric flutter analyses as well as more efficient multidisciplinary design and optimization studies.
NE VIII lambda 774 and time variable associated absorption in the QSO UM 675
NASA Technical Reports Server (NTRS)
Hamann, Fred; Barlow, Thomas A.; Beaver, E. A.; Burbidge, E. M.; Cohen, Ross D.; Junkkarinen, Vesa; Lyons, R.
1995-01-01
We discuss measurements of Ne VIII lambda 774 absorption and the time variability of other lines in the z(sub a) approximately equal z(sub e) absorption system of the z(sub e) = 2.15 QSO UM 675 (0150-203). The C IV lambda 1549 and N V 1240 doublets at z(sub a) = 2.1340 (shifted approximately 1500 km/s from z(sub e)) strengthened by a factor of approximately 3 between observations by Sargent, Boksenberg and Steidel (1981 November) and our earliest measurements (1990 November and December). We have no information on changes in other z(sub a) approximately equal z(sub e) absorption lines. Continued monitoring since 1990 November shows no clear changes in any of the absorptions between approximately 1100 and 1640 A rest. The short timescale of the variability (less than or approximately equal to 2.9 yr rest) strongly suggests that the clouds are dense, compact, close to the QSO, and photoionized by the QSO continuum. If the line variability is caused by changes in the ionization, the timescale requires densities greater than approximately 4000/cu cm. Photoionization calculations place the absorbing clouds within approximately 200 pc of the continuum source. The full range of line ionizations (from Ne VIII lambda 774 to C III lambda 977) in optically thin gas (no Lyman limit) implies that the absorbing regions span a factor of more than approximately 10 in distance or approximately 100 in density. Across these regions, the total hydrogen (H I + H II) column ranges from a few times 10(exp 18)/sq cm in the low-ionization gas to approximately 10(exp 20)/sq cm where the Ne VIII doublet forms. The metallicity is roughly solar or higher, with nitrogen possibly more enhanced by factors of a few. The clouds might contribute significant line emission if they nearly envelop the QSO. The presence of highly ionized Ne VIII lambda 774 absorption near the QSO supports recent studies that link z(sub a) approximately equal to z(sub e) systems with X-ray 'warm absorbers'. We show that the Ne VIII absorbing gas would itself produce measurable warm absorption -- characterized by bound-free O VII or O VIII edges near 0.8 keV -- if the column densities were N(sub H) greater than or approximately equal to 10(exp 21)/sq cm (for solar abundances).
Kunisawa, Takayuki; Fujimoto, Kazuhiro; Kurosawa, Atsushi; Nagashima, Michio; Matsui, Koji; Hayashi, Dai; Yamamoto, Kunihiko; Goto, Yuya; Akutsu, Hiroaki; Iwasaki, Hiroshi
2014-01-01
Purpose The general dexmedetomidine (DEX) concentration required for sedation of intensive care unit patients is considered to be approximately 0.7 ng/mL. However, higher DEX concentrations are considered to be required for sedation and/or pain management after major surgery using remifentanil. We determined the DEX concentration required after major surgery by using a target-controlled infusion (TCI) system for DEX. Methods Fourteen patients undergoing surgery for abdominal aortic aneurysms (AAA) were randomly, double-blindly assigned to two groups and underwent fentanyl- or remifentanil-based anesthetic management. DEX TCI was started at the time of closing the peritoneum and continued for 12 hours after stopping propofol administration (M0); DEX TCI was adjusted according to the sedation score and complaints of pain. The doses and concentrations of all anesthetics and postoperative conditions were investigated. Results Throughout the observation period, the predicted plasma concentration of DEX in the fentanyl group was stable at approximately 0.7 ng/mL. In contrast, the predicted plasma concentration of DEX in the remifentanil group rapidly increased and stabilized at approximately 2 ng/mL. The actual DEX concentration at 540 minutes after M0 showed a similar trend (0.54±0.14 [fentanyl] versus 1.57±0.39 ng/mL [remifentanil]). In the remifentanil group, the dopamine dose required and the duration of intubation decreased, and urine output increased; however, no other outcomes improved. Conclusion The DEX concentration required after AAA surgery with remifentanil was three-fold higher than that required after AAA surgery with fentanyl or the conventional DEX concentration for sedation. High DEX concentration after remifentanil affords some benefits in anesthetic management. PMID:25328395
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
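The low-rank completion step can be illustrated with a generic singular-value soft-thresholding (SoftImpute-style) iteration: evaluate the value function exactly on a subset of states, then fill in the rest under a low-rank assumption. This is a stand-in sketch, not the thesis's algorithm; the regularization value and the toy grid are arbitrary.

```python
import numpy as np

def soft_impute(M_obs, mask, rank_reg=1.0, n_iter=200):
    """Fill an incompletely evaluated value table by singular-value
    soft-thresholding: shrink singular values, then restore observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - rank_reg, 0.0)          # shrink singular values
        X_low = (U * s) @ Vt                       # current low-rank estimate
        X = np.where(mask, M_obs, X_low)           # keep observed entries fixed
    return X_low

# Toy demo: a rank-2 "value function" on a 50x50 state grid, 20% observed.
rng = np.random.default_rng(0)
V = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
mask = rng.random(V.shape) < 0.2
V_hat = soft_impute(V, mask)
```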
Novel priming and crosslinking systems for use with isocyanatomethacrylate dental adhesives.
Chappelow, C C; Power, M D; Bowles, C Q; Miller, R G; Pinzino, C S; Eick, J D
2000-11-01
(a) to design, formulate and evaluate prototype primers and a crosslinking agent for use with isocyanatomethacrylate-based comonomer adhesives and (b) to establish correlations between bond strength and solubility parameter differences between the adhesives and etched dentin, and the permeability coefficients of the adhesives. Equimolar mixtures of 2-isocyanatoethyl methacrylate (IEM) and a methacrylate comonomer were formulated with tri-n-butyl borane oxide (TBBO) as the free radical initiator to have cure times of 6-10 min. Shear bond strengths to dentin were determined for each adhesive mixture (n = 7) using standard testing protocols. Shear bond strengths for the three systems were also determined after application of "reactive primers" to the dentin surface. The "reactive primers" contained 10-20 parts by weight of the respective comonomer mixture and 3.5 parts by weight TBBO in acetone. Solubility parameters difference values (delta delta) and permeability coefficients (P) were approximated for each adhesive system and correlated with shear bond strength values. Additionally, a crosslinking agent was prepared by bulk reaction of an equimolar mixture containing IEM and a methacrylate comonomer. The effects of crosslinker addition on: (a) the setting time of IEM; and (b) the setting times and initiator requirements of selected IEM/comonomer mixtures were determined. Shear bond strength values (MPa): IEM/HEMA 13.6 +/- 2.0 (no primer), 20.1 +/- 2.0 (with primer); IEM/HETMA 9.3 +/- 3.3 (no primer), 20.8 +/- 8.1 (with primer); IEM/AAEMA 13.6 +/- 1.9 (no primer), 17.3 +/- 3.2 (with primer). Also, approximated permeability coefficients showed a significant correlation (r = +0.867, p < 0.001) with shear bond strength values. Crosslinker addition studies with IEM/4-META: (a) at 5-9 mol% reduced the setting time of IEM polymerization by 79%; and (b) at 6 mol% reduced initiator level requirements 60-70% to achieve a comparable setting time, and decreased setting times by ca. 75% for a given initiator level with selected IEM/methacrylate adhesive systems. The shear bond strengths of isocyanatomethacrylate-based dental adhesives can be enhanced by using reactive primers; their setting times and initiator requirements can be improved using a dimethacrylate crosslinker. Approximated permeability coefficients may be useful as indicators of bonding performance for dentin adhesive systems.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
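A sketch of a walking-ones/zeros test in the spirit of the checks described, i.e., every bit of every word must set and clear, and writing one word must not disturb another. It does not reproduce the paper's exact 384-cycle schedule, and the Python list here merely stands in for a memory block.

```python
def test_memory(mem, word_bits=16):
    """Walking-ones/zeros check: for each background pattern, flip one bit
    at a time in every word and verify the write and its neighbour."""
    n = len(mem)
    mask = (1 << word_bits) - 1
    errors = []
    for background in (0x0000, mask):          # all-zeros, then all-ones background
        for i in range(n):
            mem[i] = background
        for bit in range(word_bits):
            pattern = background ^ (1 << bit)  # flip a single bit
            for i in range(n):
                mem[i] = pattern
                if mem[i] != pattern:                       # bit stuck
                    errors.append((i, bit, "stuck"))
                if i + 1 < n and mem[i + 1] != background:  # neighbour disturbed
                    errors.append((i + 1, bit, "disturbed"))
                mem[i] = background            # restore before moving on
    return errors

ram = [0] * 1024          # a simulated 1K block of 16-bit words
assert test_memory(ram) == []
```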
NASA Technical Reports Server (NTRS)
Warming, Robert F.; Beam, Richard M.
1988-01-01
Spatially discrete difference approximations for hyperbolic initial-boundary-value problems (IBVPs) require numerical boundary conditions in addition to the analytical boundary conditions specified for the differential equations. Improper treatment of a numerical boundary condition can cause instability of the discrete IBVP even though the approximation is stable for the pure initial-value or Cauchy problem. In the discrete IBVP stability literature there exists a small class of discrete approximations called borderline cases. For nondissipative approximations, borderline cases are unstable according to the theory of Gustafsson, Kreiss, and Sundstrom (GKS), but they may be Lax-Richtmyer stable or unstable in the L sub 2 norm on a finite domain. It is shown that a borderline approximation can be characterized by the presence of a stationary mode for the finite-domain problem. A stationary mode has the property that it does not decay with time, and a nontrivial stationary mode leads to algebraic growth of the solution norm with mesh refinement. An analytical condition is given which makes it easy to detect a stationary mode; several examples of numerical boundary conditions are investigated corresponding to borderline cases.
NASA Astrophysics Data System (ADS)
Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John
2001-01-01
For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3-Approximation and Grosjean Theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed in which accurate calculations can be performed.
A forecast-based STDP rule suitable for neuromorphic implementation.
Davies, S; Galluppi, F; Rast, A D; Furber, S B
2012-08-01
Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that the STDP TTS algorithm allows the neural network to adapt and detect the incoming pattern with improvements both in the reliability of, and the time required for, consistent output. Through the approximations we suggest in this paper, we introduce a learning rule that is easy to implement both in event-driven simulators and in dedicated hardware, reducing computational complexity relative to the standard STDP rule. Such a rule offers a promising solution, complementary to standard STDP evaluation algorithms, for real-time learning using spiking neural networks in time-critical applications. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
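For reference, the backward differentiation formulas referred to approximate a time derivative from the current and previous values of a quantity; the second-order member (BDF2) is shown below (the paper may use a different order):

\[
\left.\frac{\partial q}{\partial t}\right|^{\,n}\;\approx\;\frac{3q^{\,n}-4q^{\,n-1}+q^{\,n-2}}{2\,\Delta t},
\]

which is exact for functions that are polynomials of degree two or less in time.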
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
NASA Astrophysics Data System (ADS)
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to the computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using a recursive scheme. By using the discrete Fourier transform (DFT) together with a combination of the Rife algorithm and Fourier coefficient interpolation, the scheme requires no multiplications and only half the number of additions compared with conventional methods such as the DFT and the fast Fourier transform. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements of intermediate frequency, narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and low calculation time.
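A minimal NumPy sketch of the coarse-DFT-plus-Rife interpolation step, in its textbook rectangular-window form; the paper's recursive twiddle-factor generation and Fourier-coefficient refinement are not reproduced here, and the function name and test tone are my own.

```python
import numpy as np

def rife_frequency(x, fs):
    """Coarse DFT peak plus Rife two-bin interpolation for a narrowband tone."""
    N = len(x)
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1          # skip DC and Nyquist bins
    # pick the larger neighbour and interpolate the fractional bin offset
    if X[k + 1] >= X[k - 1]:
        delta = X[k + 1] / (X[k] + X[k + 1])
    else:
        delta = -X[k - 1] / (X[k] + X[k - 1])
    return (k + delta) * fs / N

fs = 10e6
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 1.234567e6 * t)
print(rife_frequency(tone, fs))   # close to 1.234567 MHz for this clean tone
```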
Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence
NASA Technical Reports Server (NTRS)
Zhou, YE; Matthaeus, William H.
1990-01-01
A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart and workable techniques to substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discrimination between healthy and disease-infected strawberry leaves which does not require neural networks or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated by a human brain, a fuzzy decision maker classifies the leaves over the images captured on-site, mimicking the properties of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for the other segmented class, using a typical human instant classification approximation as the benchmark, which is higher accuracy than a human eye identifier.
A hybrid continuous-discrete method for stochastic reaction-diffusion processes.
Lo, Wing-Cheong; Zheng, Likun; Nie, Qing
2016-09-01
Stochastic fluctuations in reaction-diffusion processes often have substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method.
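A much-simplified 1D sketch of the diffusion half of such a hybrid scheme: the net number of molecules jumping across each compartment interface in a time step is drawn from a Gaussian whose mean and variance match the underlying jump counts. The full method also tracks covariances between jumps and couples this to a Gillespie reaction step, both omitted here; parameter values are arbitrary.

```python
import numpy as np

def gaussian_diffusion_step(counts, d, dt, rng):
    """Approximate the net diffusive jumps between adjacent compartments over
    one step by Gaussians with the mean and variance of the jump counts."""
    mean_fwd = d * dt * counts[:-1]          # expected jumps left -> right
    mean_bwd = d * dt * counts[1:]           # expected jumps right -> left
    net = rng.normal(mean_fwd - mean_bwd, np.sqrt(mean_fwd + mean_bwd))
    new = counts.astype(float).copy()
    new[:-1] -= net
    new[1:] += net
    # clip guards against negative counts; mass is only approximately conserved
    return np.clip(new, 0.0, None)

rng = np.random.default_rng(0)
c = np.zeros(50)
c[25] = 1e4                                  # molecules start in one compartment
for _ in range(1000):
    c = gaussian_diffusion_step(c, d=0.1, dt=0.1, rng=rng)
```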
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2(exp n-1) times as much capacitance, and hence, approximately 2(exp n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2(exp n) times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
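The area argument can be checked directly; the comparison below uses unit capacitor areas and the 3n-capacitor count implied by three capacitors per cell and one cell per bit.

    def conventional_area(n_bits):
        # Binary-scaled bank: 1 + 2 + 4 + ... + 2**(n-1) unit capacitors.
        return sum(2 ** k for k in range(n_bits))      # ~= 2**n units

    def capacitor_chain_area(n_bits):
        # Three roughly unit-sized capacitors per cell, one cell per bit.
        return 3 * n_bits

    for n in (8, 12, 16):
        print(n, "bits:", conventional_area(n), "units vs", capacitor_chain_area(n), "units")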
Taylor, K.R.; James, R.W.; Helinsky, B.M.
1986-01-01
Two traveltime and dispersion measurements using rhodamine dye were conducted on a 178-mile reach of the Shenandoah River between Waynesboro, Virginia, and Harpers Ferry, West Virginia. The flows during the two measurements were at approximately the 85% and 45% flow durations. The two sets of data were used to develop a generalized procedure for predicting traveltimes and downstream concentrations resulting from spillage of water-soluble substances at any point along the river reach studied. The procedure can be used to calculate traveltime and concentration data for almost any spillage that occurs during relatively steady flow between a 40% to 95% flow duration. Based on an analogy between the general shape of a time concentration curve and a scalene triangle, the procedures can be used on long river reaches to approximate the conservative time concentration curve for instantaneous spills of contaminants. The triangular approximation technique can be combined with a superposition technique to predict the approximate, conservative time concentration curve for constant rate and variable rate injections of contaminants. The procedure was applied to a hypothetical situation in which 5,000 pounds of contaminants is spilled instantaneously at Island Ford, Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach the water intake at Front Royal, Virginia (85 miles downstream), are 234, 280, and 340 hours, respectively, for a flow at an 80% flow duration. The conservative peak concentration would be approximately 940 micrograms/L at Front Royal. The procedures developed cannot be depended upon when a significant hydraulic wave or other unsteady flow condition exists in the flow system or when the spilled material floats or is immiscible in water. (Author's abstract)
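A minimal sketch of the triangular approximation, reusing the leading-edge, peak, and trailing-edge times and the peak concentration from the hypothetical Island Ford spill as example inputs; a constant-rate injection is approximated by superposing shifted, scaled triangles. The release duration and number of increments below are illustrative choices, not values from the report.

    import numpy as np

    def triangle_curve(t, t_lead, t_peak, t_trail, c_peak):
        """Scalene-triangle approximation of a conservative time-concentration curve."""
        c = np.zeros_like(t, dtype=float)
        rise = (t >= t_lead) & (t <= t_peak)
        fall = (t > t_peak) & (t <= t_trail)
        c[rise] = c_peak * (t[rise] - t_lead) / (t_peak - t_lead)
        c[fall] = c_peak * (t_trail - t[fall]) / (t_trail - t_peak)
        return c

    t = np.linspace(200.0, 400.0, 2001)                 # hours
    instant = triangle_curve(t, 234.0, 280.0, 340.0, 940.0)

    # Constant-rate injection ~ superposition of instantaneous spills released over 10 h.
    releases = np.linspace(0.0, 10.0, 11)
    constant_rate = sum(triangle_curve(t, 234.0 + d, 280.0 + d, 340.0 + d, 940.0 / len(releases))
                        for d in releases)

    print("peak (instantaneous):", instant.max())
    print("peak (10 h constant-rate):", round(constant_rate.max(), 1))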
OPTRAN- OPTIMAL LOW THRUST ORBIT TRANSFERS
NASA Technical Reports Server (NTRS)
Breakwell, J. V.
1994-01-01
OPTRAN is a collection of programs that solve the problem of optimal low thrust orbit transfers between non-coplanar circular orbits for spacecraft with chemical propulsion systems. The programs are set up to find Hohmann-type solutions, with burns near the perigee and apogee of the transfer orbit. They will solve both fairly long burn-arc transfers and "divided-burn" transfers. Program modeling includes a spherical earth gravity model and propulsion system models for either constant thrust or constant acceleration. The solutions obtained are optimal with respect to fuel use: i.e., final mass of the spacecraft is maximized with respect to the controls. The controls are the direction of thrust and the thrust on/off times. Two basic types of programs are provided in OPTRAN. The first type is for "exact solution" which results in complete, exact time-histories. The exact spacecraft position, velocity, and optimal thrust direction are given throughout the maneuver, as are the optimal thrust switch points, the transfer time, and the fuel costs. Exact solution programs are provided in two versions for non-coplanar transfers and in a fast version for coplanar transfers. The second basic type is for "approximate solutions" which results in approximate information on the transfer time and fuel costs. The approximate solution is used to estimate initial conditions for the exact solution. It can be used in divided-burn transfers to find the best number of burns with respect to time. The approximate solution is useful by itself in relatively efficient, short burn-arc transfers. These programs are written in FORTRAN 77 for batch execution and have been implemented on a DEC VAX series computer with the largest program having a central memory requirement of approximately 54K of 8-bit bytes. The OPTRAN programs were developed in 1983.
76 FR 15009 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-18
...-hour work-year multiplied by 2.93 to account for bonuses, firm size, employee benefits, and overhead... (``broker-dealers'') to preserve for prescribed periods of time certain records required to be made by Rule... in the area, the staff estimates that the average broker-dealer spends approximately $5,000 each year...
XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme
NASA Astrophysics Data System (ADS)
Chebotko, Artem; Fu, Bin
Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node i has a size c_i and profit p_i, and the size limitation C. The target is to find a subset of subtrees rooted at nodes i_1, ..., i_k respectively such that c_{i_1} + ... + c_{i_k} ≤ C and p_{i_1} + ... + p_{i_k} is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.
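The structure of the selection problem can be illustrated with a small exact dynamic program over integer size budgets: at each node, either the whole subtree rooted there is taken (excluding its descendants) or the children's solutions are combined knapsack-style. This pseudo-polynomial sketch is not the paper's FPTAS, which instead rounds profits to obtain its approximation guarantee; the toy tree and its sizes and profits are made up.

    def best_profit(children, c, p, C, root=0):
        """Max total profit of non-overlapping subtrees within total size C.
        dp[b] = best profit achievable with size budget b in the current subtree."""
        def solve(v):
            dp = [0] * (C + 1)
            for child in children[v]:
                child_dp = solve(child)
                # knapsack-combine the child's table into v's table
                new = dp[:]
                for b in range(C + 1):
                    for bc in range(b + 1):
                        new[b] = max(new[b], dp[b - bc] + child_dp[bc])
                dp = new
            # alternative: take the entire subtree rooted at v (excludes all descendants)
            for b in range(c[v], C + 1):
                dp[b] = max(dp[b], p[v])
            return dp
        return max(solve(root))

    # toy instance: node sizes/profits are made up for illustration
    children = {0: [1, 2], 1: [3], 2: [], 3: []}
    c = {0: 10, 1: 4, 2: 3, 3: 2}
    p = {0: 9, 1: 6, 2: 5, 3: 4}
    print(best_profit(children, c, p, C=7))    # -> 11 (subtrees rooted at nodes 1 and 2)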
An alternative approach for computing seismic response with accidental eccentricity
NASA Astrophysics Data System (ADS)
Fan, Xuanhua; Yin, Jiacong; Sun, Shuli; Chen, Pu
2014-09-01
Accidental eccentricity is a non-standard assumption for seismic design of tall buildings. Taking it into consideration requires reanalysis of seismic resistance, which requires either time consuming computation of natural vibration of eccentric structures or finding a static displacement solution by applying an approximated equivalent torsional moment for each eccentric case. This study proposes an alternative modal response spectrum analysis (MRSA) approach to calculate seismic responses with accidental eccentricity. The proposed approach, called the Rayleigh Ritz Projection-MRSA (RRP-MRSA), is developed based on MRSA and two strategies: (a) a RRP method to obtain a fast calculation of approximate modes of eccentric structures; and (b) an approach to assemble mass matrices of eccentric structures. The efficiency of RRP-MRSA is tested via engineering examples and compared with the standard MRSA (ST-MRSA) and one approximate method, i.e., the equivalent torsional moment hybrid MRSA (ETM-MRSA). Numerical results show that RRP-MRSA not only achieves almost the same precision as ST-MRSA, and is much better than ETM-MRSA, but is also more economical. Thus, RRP-MRSA can be in place of current accidental eccentricity computations in seismic design.
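A hedged sketch of the underlying Rayleigh Ritz projection idea, not the RRP-MRSA implementation: the matrices of a perturbed structure (here a toy stiffness matrix with a small mass modification standing in for accidental eccentricity) are projected onto a basis of nominal modes, and a small eigenproblem replaces the full one.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    n, m = 200, 10                      # full DOFs, retained nominal modes

    # toy symmetric stiffness/mass matrices standing in for a building model
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)
    M0 = np.eye(n)

    _, Phi = eigh(K, M0)                # nominal modes of the unperturbed structure
    Phi = Phi[:, :m]                    # Ritz basis: lowest m nominal modes

    # "eccentric" case: perturb the mass matrix (stand-in for accidental eccentricity)
    M = M0.copy()
    M[:5, :5] += 0.05 * np.eye(5)

    # Rayleigh-Ritz: project and solve an m x m eigenproblem instead of n x n
    Kr = Phi.T @ K @ Phi
    Mr = Phi.T @ M @ Phi
    w_rr, _ = eigh(Kr, Mr)

    w_exact, _ = eigh(K, M)
    print("first 3 frequencies, exact:", np.sqrt(w_exact[:3]))
    print("first 3 frequencies, Ritz :", np.sqrt(w_rr[:3]))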
Bitwise efficiency in chaotic models
Düben, Peter; Palmer, Tim
2017-01-01
Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz’s prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit ‘double’ floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model. PMID:28989303
Bitwise efficiency in chaotic models
NASA Astrophysics Data System (ADS)
Jeffress, Stephen; Düben, Peter; Palmer, Tim
2017-09-01
Motivated by the increasing energy consumption of supercomputing for weather and climate simulations, we introduce a framework for investigating the bit-level information efficiency of chaotic models. In comparison with previous explorations of inexactness in climate modelling, the proposed and tested information metric has three specific advantages: (i) it requires only a single high-precision time series; (ii) information does not grow indefinitely for decreasing time step; and (iii) information is more sensitive to the dynamics and uncertainties of the model rather than to the implementation details. We demonstrate the notion of bit-level information efficiency in two of Edward Lorenz's prototypical chaotic models: Lorenz 1963 (L63) and Lorenz 1996 (L96). Although L63 is typically integrated in 64-bit `double' floating point precision, we show that only 16 bits have significant information content, given an initial condition uncertainty of approximately 1% of the size of the attractor. This result is sensitive to the size of the uncertainty but not to the time step of the model. We then apply the metric to the L96 model and find that a 16-bit scaled integer model would suffice given the uncertainty of the unresolved sub-grid-scale dynamics. We then show that, by dedicating computational resources to spatial resolution rather than numeric precision in a field programmable gate array (FPGA), we see up to 28.6% improvement in forecast accuracy, an approximately fivefold reduction in the number of logical computing elements required and an approximately 10-fold reduction in energy consumed by the FPGA, for the L96 model.
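A miniature version of the precision experiment, assuming the standard Lorenz 1963 parameters, a simple Euler integrator, and a crude per-step rounding of the state to a chosen number of significand bits; the paper's information metric and FPGA implementation are not reproduced here.

    import numpy as np

    def l63_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def step(s, dt, bits=None):
        """One Euler step, optionally rounding the state to `bits` significand bits."""
        s = s + dt * l63_rhs(s)
        if bits is not None:
            scale = 2.0 ** (bits - np.ceil(np.log2(np.maximum(np.abs(s), 1e-12))))
            s = np.round(s * scale) / scale
        return s

    dt, n_steps = 0.001, 5000
    s_ref = np.array([1.0, 1.0, 1.0])
    s_low = s_ref.copy()
    s_pert = s_ref + 0.4          # ~1% of an attractor extent of order 40 units

    for _ in range(n_steps):
        s_ref = step(s_ref, dt)
        s_low = step(s_low, dt, bits=16)
        s_pert = step(s_pert, dt)

    print("16-bit vs double difference:", np.linalg.norm(s_ref - s_low))
    print("1% perturbation difference :", np.linalg.norm(s_ref - s_pert))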
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
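A small sketch of the sampling idea using SciPy's cubic B-spline fit in place of the paper's two specific formulations: a smooth quantile function is fitted to the order statistics and uniform variates are passed through it.

    import numpy as np
    from scipy.interpolate import make_interp_spline

    rng = np.random.default_rng(2)
    data = rng.gamma(shape=2.0, scale=1.5, size=500)        # skewed example sample

    # empirical quantile function: plotting positions p_i vs the order statistics
    q = np.sort(data)
    p = (np.arange(1, q.size + 1) - 0.5) / q.size
    quantile_spline = make_interp_spline(p, q, k=3)          # cubic spline Q(p)

    # inverse-CDF sampling through the spline
    u = rng.uniform(p[0], p[-1], size=10000)
    samples = quantile_spline(u)

    print("original mean/std:", data.mean(), data.std())
    print("spline   mean/std:", samples.mean(), samples.std())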
Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation
NASA Technical Reports Server (NTRS)
Clifton, C.; Homaifax, A.; Bikdash, M.
1997-01-01
This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.
Nuclear shell model code CRUNCHER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resler, D.A.; Grimes, S.M.
1988-05-01
A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
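A hedged sketch of the Lanczos process itself, applied to a random sparse symmetric matrix standing in for a shell-model Hamiltonian rather than to an uncoupled shell-model basis: a small tridiagonal matrix built from matrix-vector products already approximates the extreme eigenvalues well.

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 1000, 80                       # basis dimension, Lanczos steps

    # toy sparse symmetric matrix standing in for a shell-model Hamiltonian
    A = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.01)
    H = (A + A.T) / 2.0

    def lanczos(H, m):
        v = rng.standard_normal(H.shape[0])
        v /= np.linalg.norm(v)
        V = [v]
        alpha, beta = [], []
        w = H @ v
        for j in range(m):
            a = v @ w
            alpha.append(a)
            w = w - a * v - (beta[-1] * V[-2] if beta else 0.0)
            b = np.linalg.norm(w)
            beta.append(b)
            v = w / b
            V.append(v)
            w = H @ v
        # tridiagonal Lanczos matrix whose eigenvalues approximate extreme eigenvalues of H
        return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

    T = lanczos(H, m)
    print("lowest eigenvalue, Lanczos:", np.linalg.eigvalsh(T)[0])
    print("lowest eigenvalue, exact  :", np.linalg.eigvalsh(H)[0])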
Radio Frequency Identification for Space Habitat Inventory and Stowage Allocation Management
NASA Technical Reports Server (NTRS)
Wagner, Carole Y.
2015-01-01
To date, the most extensive space-based inventory management operation has been the International Space Station (ISS). Approximately 20,000 items are tracked with the Inventory Management System (IMS) software application that requires both flight and ground crews to update the database daily. This audit process is manually intensive and laborious, requiring the crew to open cargo transfer bags (CTBs), then Ziplock bags therein, to retrieve individual items. This inventory process contributes greatly to the time allocated for general crew tasks.
Lovelock, D Michael; Messineo, Alessandra P; Cox, Brett W; Kollmeier, Marisa A; Zelefsky, Michael J
2015-03-01
To compare the potential benefits of continuous monitoring of prostate position and intervention (CMI) using 2-mm displacement thresholds during stereotactic body radiation therapy (SBRT) treatment to those of a conventional image-guided procedure involving single localization prior to treatment. Eighty-nine patients accrued to a prostate SBRT dose escalation protocol were implanted with radiofrequency transponder beacons. The planning target volume (PTV) margin was 5 mm in all directions, except for 3 mm in the posterior direction. The prostate was kept within 2 mm of its planned position by the therapists halting dose delivery and, if necessary, correcting the couch position. We computed the number, type, and time required for interventions and where the prostate would have been during dose delivery had there been, instead, a single image-guided setup procedure prior to each treatment. Distributions of prostate displacements were computed as a function of time. After the initial setup, 1.7 interventions per fraction were required, with a concomitant increase in time for dose delivery of approximately 65 seconds. Small systematic drifts in prostate position in the posterior and inferior directions were observed in the study patients. Without CMI, intrafractional motion would have resulted in approximately 10% of patients having a delivered dose that did not meet our clinical coverage requirement, that is, a PTV D95 of >90%. The posterior PTV margin required for 95% of the dose to be delivered with the target positioned within the PTV was computed as a function of time. The margin necessary was found to increase by 2 mm every 5 minutes, starting from the time of the imaging procedure. CMI using a tight 2-mm displacement threshold was not only feasible but was found to deliver superior PTV coverage compared with the conventional image-guided procedure in the SBRT setting. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovelock, D. Michael, E-mail: lovelocm@mskcc.org; Messineo, Alessandra P.; Cox, Brett W.
2015-03-01
Purpose: To compare the potential benefits of continuous monitoring of prostate position and intervention (CMI) using 2-mm displacement thresholds during stereotactic body radiation therapy (SBRT) treatment to those of a conventional image-guided procedure involving single localization prior to treatment. Methods and Materials: Eighty-nine patients accrued to a prostate SBRT dose escalation protocol were implanted with radiofrequency transponder beacons. The planning target volume (PTV) margin was 5 mm in all directions, except for 3 mm in the posterior direction. The prostate was kept within 2 mm of its planned position by the therapists halting dose delivery and, if necessary, correcting the couch position. We computed the number, type, and time required for interventions and where the prostate would have been during dose delivery had there been, instead, a single image-guided setup procedure prior to each treatment. Distributions of prostate displacements were computed as a function of time. Results: After the initial setup, 1.7 interventions per fraction were required, with a concomitant increase in time for dose delivery of approximately 65 seconds. Small systematic drifts in prostate position in the posterior and inferior directions were observed in the study patients. Without CMI, intrafractional motion would have resulted in approximately 10% of patients having a delivered dose that did not meet our clinical coverage requirement, that is, a PTV D95 of >90%. The posterior PTV margin required for 95% of the dose to be delivered with the target positioned within the PTV was computed as a function of time. The margin necessary was found to increase by 2 mm every 5 minutes, starting from the time of the imaging procedure. Conclusions: CMI using a tight 2-mm displacement threshold was not only feasible but was found to deliver superior PTV coverage compared with the conventional image-guided procedure in the SBRT setting.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Today's modern aircraft designs at transonic speeds are a challenging task due to the computation time required for the unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this will considerably slow down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper will describe the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed for the important columns of an AIC matrix which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and improved reduction system, are used to reduce the size of the problem; transonic flutter can then be found by the classic methods, such as rational function approximation, p-k, p, root-locus, etc. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2 actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.
Easley, Christopher J; Rocheleau, Jonathan V; Head, W Steven; Piston, David W
2009-11-01
We assayed glucose-stimulated insulin secretion (GSIS) from live, murine islets of Langerhans in microfluidic devices by the downstream formation of aqueous droplets. Zinc ions, which are cosecreted with insulin from beta-cells, were quantitatively measured from single islets with high temporal resolution using a fluorescent indicator, FluoZin-3. Real-time storage of secretions into droplets (volume of 0.470 +/- 0.009 nL) effectively preserves the temporal chemical information, allowing reconstruction of the secretory time record. The use of passive flow control within the device removes the need for syringe pumps, requiring only a single hand-held syringe. Under stimulatory glucose levels (11 mM), bursts of zinc as high as approximately 800 fg islet(-1) min(-1) were measured. Treatment with diazoxide effectively blocked zinc secretion, as expected. High temporal resolution reveals two major classes of oscillations in secreted zinc, with predominant periods at approximately 20-40 s and approximately 5-10 min. The more rapid oscillation periods match closely with those of intraislet calcium oscillations, while the slower oscillations are consistent with insulin pulses typically measured in bulk islet experiments or in the bloodstream. This droplet sampling technique should be widely applicable to time-resolved cellular secretion measurements, either in real-time or for postprocessing.
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-28
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
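A minimal sketch of the dwell-time idea on a toy PDMP with one two-state channel and made-up voltage-dependent rates, not the full Hodgkin-Huxley system: V(t) and the cumulative log-survival H(t) are advanced together, H(t) is treated as piecewise linear on the grid, and the next jump time is interpolated against an exponential threshold.

    import numpy as np

    rng = np.random.default_rng(4)

    def rate(v, state):
        """Hypothetical voltage-dependent opening/closing rates of one channel."""
        return np.exp(0.05 * v) if state == 0 else np.exp(-0.05 * v) + 0.5

    def dv_dt(v, state):
        """Toy membrane equation: the open channel (state 1) pulls V upward."""
        return -0.1 * (v + 65.0) + (2.0 if state == 1 else 0.0)

    def simulate(t_end=50.0, dt=0.01):
        t, v, state = 0.0, -65.0, 0
        threshold, H = rng.exponential(), 0.0
        times, volts = [t], [v]
        while t < t_end:
            lam = rate(v, state)
            H_next = H + lam * dt                      # piecewise-linear H(t) on the grid
            if H_next >= threshold:
                # interpolate the jump time inside this step, then switch the channel
                frac = (threshold - H) / (H_next - H)
                t += frac * dt
                v += frac * dt * dv_dt(v, state)
                state = 1 - state
                threshold, H = rng.exponential(), 0.0
            else:
                v += dt * dv_dt(v, state)
                t += dt
                H = H_next
            times.append(t)
            volts.append(v)
        return np.array(times), np.array(volts)

    t, v = simulate()
    print("final time %.1f, final V %.2f mV" % (t[-1], v[-1]))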
The desorptivity model of bulk soil-water evaporation
NASA Technical Reports Server (NTRS)
Clapp, R. B.
1983-01-01
Available models of bulk evaporation from a bare-surfaced soil are difficult to apply to field conditions where evaporation is complicated by two main factors: rate-limiting climatic conditions and redistribution of soil moisture following infiltration. Both factors are included in the "desorptivity model," wherein the evaporation rate during the second stage (the soil-limiting stage) of evaporation is related to the desorptivity parameter, A. Analytical approximations for A are presented. The approximations are independent of the surface soil moisture. However, calculations using the approximations indicate that both soil texture and soil moisture content at depth significantly affect A. Because the moisture content at depth decreases in time during redistribution, it follows that the A parameter also changes with time. Consequently, a method to calculate a representative value of A was developed. When applied to field data, the desorptivity model estimated cumulative evaporation well. The model is easy to calculate, but its usefulness is limited because it requires an independent estimate of the time of transition between the first and second stages of evaporation. The model shows that bulk evaporation after the transition to the second stage is largely independent of climatic conditions.
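For illustration, the second-stage relation is commonly written as cumulative evaporation growing with the square root of elapsed time scaled by A; the sketch below assumes that form, with placeholder values for A, the first-stage rate, and the transition time that the abstract says must be estimated independently.

    import numpy as np

    A = 0.45          # desorptivity, cm/day**0.5 (placeholder value)
    t1 = 2.0          # assumed end of the first (climate-limited) stage, days
    e1 = 0.60         # assumed first-stage (potential) evaporation rate, cm/day

    def cumulative_evaporation(t):
        """Stage one at the climatic rate, stage two following E2 = A*sqrt(t - t1)."""
        t = np.asarray(t, dtype=float)
        stage1 = np.minimum(t, t1) * e1
        stage2 = np.where(t > t1, A * np.sqrt(np.maximum(t - t1, 0.0)), 0.0)
        return stage1 + stage2

    for day in (1, 5, 10, 20):
        print("day %2d: cumulative evaporation %.2f cm" % (day, cumulative_evaporation(day)))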
Morita, Yasuyuki; Yamashita, Takahiro; Toku, Toku; Ju, Yang
2018-01-01
There is a need for efficient stem cell-to-tenocyte differentiation techniques for tendon tissue engineering. More than 1 week is required for tenogenic differentiation with chemical stimuli, including co-culturing. Research has begun to examine the utility of mechanical stimuli, which reduces the differentiation time to several days. However, the precise length of time required to differentiate human bone marrow-derived mesenchymal stem cells (hBMSCs) into tenocytes has not been clarified. Understanding the precise time required is important for future tissue engineering projects. Therefore, in this study, a method was developed to more precisely determine the length of time required to differentiate hBMSCs into tenocytes with cyclic stretching stimulus. First, it had to be determined how stretching stimulation affected the cells. Microgrooved culture membranes were used to suppress cell orientation behavior. Then, only cells oriented parallel to the microgrooves were selected and evaluated for protein synthesis levels for differentiation. The results revealed that growing cells on the microgrooved membrane and selecting optimally-oriented cells for measurement improved the accuracy of the differentiation evaluation, and that hBMSCs differentiated into tenocytes in approximately 10 h. The differentiation time corresponded to the time required for cellular cytoskeleton reorganization and cellular morphology alterations. This suggests that cells, when subjected to mechanical stimulus, secrete mRNAs and proteins for both cytoskeleton reorganization and differentiation.
Designing a Multi-Petabyte Database for LSST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei
2007-01-10
The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S; Gulam, M; Song, K
2014-06-01
Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam DeveloperMode, making possible the use of XML-scripting for automation of linac-related tasks. This study details the use of DeveloperMode to automate commissioning tasks for Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML-scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm^2) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) and compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML-scripts were used to create custom MLC apertures. For isocenter verification, XML-scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS/operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam DeveloperMode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
Modulation and synchronization technique for MF-TDMA system
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Sayegh, Soheil
1994-01-01
This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
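As the abstract notes, the estimator reduces to the Kalman filter when no delays are present; a minimal discrete Kalman filter for a toy constant-velocity tracking model is sketched below (the matrices are illustrative, not the paper's delayed formulation).

    import numpy as np

    rng = np.random.default_rng(5)
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])     # position/velocity transition
    H = np.array([[1.0, 0.0]])                # position-only measurement
    Q = 0.01 * np.eye(2)                      # process noise covariance
    R = np.array([[1.0]])                     # measurement noise covariance

    x_true = np.array([0.0, 1.0])
    x_hat = np.array([0.0, 0.0])
    P = np.eye(2)

    for k in range(50):
        # simulate the true system and a noisy measurement
        x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
        z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
        # predict
        x_hat = F @ x_hat
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(2) - K @ H) @ P

    print("true state     :", x_true)
    print("filter estimate:", x_hat)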
NASA Astrophysics Data System (ADS)
Kim, Jinsol; Shusterman, Alexis A.; Lieschke, Kaitlyn J.; Newman, Catherine; Cohen, Ronald C.
2018-04-01
The newest generation of air quality sensors is small, low cost, and easy to deploy. These sensors are an attractive option for developing dense observation networks in support of regulatory activities and scientific research. They are also of interest for use by individuals to characterize their home environment and for citizen science. However, these sensors are difficult to interpret. Although some have an approximately linear response to the target analyte, that response may vary with time, temperature, and/or humidity, and the cross-sensitivity to non-target analytes can be large enough to be confounding. Standard approaches to calibration that are sufficient to account for these variations require a quantity of equipment and labor that negates the attractiveness of the sensors' low cost. Here we describe a novel calibration strategy for a set of sensors, including CO, NO, NO2, and O3, that makes use of (1) multiple co-located sensors, (2) a priori knowledge about the chemistry of NO, NO2, and O3, (3) an estimate of mean emission factors for CO, and (4) the global background of CO. The strategy requires one or more well calibrated anchor points within the network domain, but it does not require direct calibration of any of the individual low-cost sensors. The procedure nonetheless accounts for temperature and drift, in both the sensitivity and zero offset. We demonstrate this calibration on a subset of the sensors comprising BEACO2N, a distributed network of approximately 50 sensor nodes, each measuring CO2, CO, NO, NO2, O3 and particulate matter at 10 s time resolution and approximately 2 km spacing within the San Francisco Bay Area.
The impact of preventable disruption on the operative time for minimally invasive surgery.
Al-Hakim, Latif
2011-10-01
Current ergonomic studies show that disruption exposes surgical teams to stress and musculoskeletal disorders. This study considers minimally invasive surgery as a sociotechnical process subjected to a variety of disruption events other than those recognized by ergonomic science. The research takes into consideration the impact of preventable disruption on operating time rather than on the physical and emotional status of the surgical team. Events inside operating rooms that disturbed operative time were recorded for 17 minimally invasive surgeries. The disruption events were classified into four main areas: prerequisite requirements, work design, communication during surgery, and other. Each area was further classified according to sources of disruption. Altogether, 11 sources of disruption were identified: patient record, protocol and policy, surgical requirements and surgeon preferences, operating table and patient positioning, arrangement of instruments, lighting, monitor, clothing, surgical teamwork, coordination, and other. Disruption prolonged operative time by more than 32%. Teamwork forms the main source of disruption followed by operating table and patient positioning and arrangement of instruments. These three sources represented approximately 20% of operative time. Failure to follow principles of work design had a significant negative impact, lengthening operative time by approximately 15%. Although lighting and monitors had a relatively small impact on operative time, these factors could create inconvenience and stress within the surgical teams. In addition, the effect of failure to follow surgical protocols and policies or having incomplete patient records may have a limited effect on operative time but could have serious consequences. This report demonstrates that preventable disruption caused an increase in operative time and forced surgeons and patients to endure unnecessary delay of more than 32%. Such additional time could be used to deal with the pressure of emergency cases and to reduce waiting lists for elective surgery.
Validity of the Born approximation for beyond Gaussian weak lensing observables
Petri, Andrea; Haiman, Zoltan; May, Morgan
2017-06-06
Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N-body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O(Φ^3) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O(Φ^4) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for a LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5σ. Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.
Validity of the Born approximation for beyond Gaussian weak lensing observables
NASA Astrophysics Data System (ADS)
Petri, Andrea; Haiman, Zoltán; May, Morgan
2017-06-01
Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N-body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O(Φ^3) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O(Φ^4) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for a LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5σ. Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.
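A hedged sketch of the Born-approximated convergence along a single line of sight: the density contrast is integrated against the lensing efficiency kernel along the unperturbed ray. The toy density samples, source distance, and the crude scale-factor history below are placeholders and do not come from the LensTools pipeline.

    import numpy as np

    rng = np.random.default_rng(6)

    H0_c = 1.0 / 2997.9       # H0/c in 1/Mpc (h = 1 units)
    Om = 0.3
    chi_s = 3000.0            # comoving distance to the sources, Mpc (illustrative)
    n_steps = 300
    chi = np.linspace(1.0, chi_s, n_steps)
    dchi = chi[1] - chi[0]

    # toy density-contrast field delta(chi) along one unperturbed line of sight
    delta = rng.normal(0.0, 0.2, size=n_steps)

    # crude scale-factor history a(chi) used only for the weighting (assumed form)
    a = 1.0 / (1.0 + chi / chi_s)

    # Born approximation: integrate delta against the lensing efficiency kernel
    kernel = 1.5 * Om * H0_c ** 2 * chi * (chi_s - chi) / (chi_s * a)
    kappa_born = np.sum(kernel * delta) * dchi

    print("Born-approximated convergence along this ray: %.4e" % kappa_born)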
NASA Technical Reports Server (NTRS)
Madejski, G.; Zycki, P.; Done, C.; Valinia, A.; Blanco, P.; Rothschild, R.; Turek, B.
2000-01-01
NGC 4945 is one of the brightest Seyfert galaxies on the sky at 100 keV, but is completely absorbed below 10 keV, implying an optical depth of the absorber to electron scattering of a few; its absorption column is probably the largest which still allows a direct view of the nucleus at hard X-ray energies. Our observations of it with the Rossi X-ray Timing Explorer (RXTE) satellite confirm the large absorption, which for a simple phenomenological fit using an absorber with Solar abundances implies a column of 4.5 (+0.4/-0.4) x 10(exp 24) per sq cm. Using a more realistic scenario (requiring Monte Carlo modeling of the scattering), we infer an optical depth to Thomson scattering of approximately 2.4. If such a scattering medium were to subtend a large solid angle from the nucleus, it should smear out any intrinsic hard X-ray variability on time scales shorter than the light travel time through it. The rapid (with a time scale of approximately a day) hard X-ray variability of NGC 4945 we observed with the RXTE implies that the bulk of the extreme absorption in this object does not originate in a parsec-size, geometrically thick molecular torus. Limits on the amount of scattered flux require that the optically thick material on parsec scales must be rather geometrically thin, subtending a half-angle < 10 deg. This is only marginally consistent with the recent determinations of the obscuring column in hard X-rays, where only a quarter of Seyfert 2s have columns which are optically thick, and presents a problem in accounting for the Cosmic X-ray Background primarily with AGN possessing a geometry such as that inferred by us. The small solid angle of the obscuring material, together with the black hole mass (of approximately 1.4 x 10(exp 6) solar masses) from megamaser measurements, allows a robust determination of the source luminosity, which in turn implies that the source radiates at approximately 10% of the Eddington limit.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
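The relative behavior of the three iterations can be seen on any diagonally dominant system; the sketch below counts Jacobi, Gauss-Seidel, and optimally over-relaxed SOR iterations on a small 1-D Poisson-type matrix, which stands in for, but is not, a radiative transfer operator.

    import numpy as np

    n = 30
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
    b = np.ones(n)
    x_exact = np.linalg.solve(A, b)

    def iterate(method, omega=1.0, tol=1e-6, max_iter=20000):
        x = np.zeros(n)
        for it in range(1, max_iter + 1):
            x_new = x.copy()
            for i in range(n):
                if method == "jacobi":
                    s = A[i] @ x - A[i, i] * x[i]            # uses only old values
                else:                                        # Gauss-Seidel / SOR reuse updates
                    s = A[i] @ x_new - A[i, i] * x_new[i]
                xi = (b[i] - s) / A[i, i]
                x_new[i] = x[i] + omega * (xi - x[i])
            x = x_new
            if np.linalg.norm(x - x_exact) < tol:
                return it
        return max_iter

    omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))        # optimal SOR factor for this matrix
    print("Jacobi      :", iterate("jacobi"), "iterations")
    print("Gauss-Seidel:", iterate("gs"), "iterations")
    print("optimal SOR :", iterate("sor", omega=omega_opt), "iterations")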
Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows
NASA Technical Reports Server (NTRS)
Boretti, A. A.
1990-01-01
Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness to the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2,5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations in a computational grid made up of about 2000 grid points during 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region inter-comparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectably constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1Mb of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.
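A hedged sketch of the general idea of fitting tabulated aerodynamic data with rational functions of the reduced frequency, here as a plain Roger-type least-squares fit with fixed lag roots rather than the minimum-state formulation MIST implements; the tabulated values and lag roots are synthetic.

    import numpy as np

    # tabulated generalized aerodynamic coefficient Q(ik) at reduced frequencies k
    k = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
    Q_tab = 1.0 + 0.5j * k - 0.3 * k**2 + 0.8 * (1j * k) / (1j * k + 0.2)   # synthetic data

    lags = np.array([0.15, 0.45])          # chosen denominator (lag) roots

    def design_matrix(k, lags):
        ik = 1j * k
        cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in lags]
        return np.column_stack(cols)

    # least-squares fit of the real coefficients: stack real and imaginary parts
    M = design_matrix(k, lags)
    A_big = np.vstack([M.real, M.imag])
    rhs = np.concatenate([Q_tab.real, Q_tab.imag])
    coeffs, *_ = np.linalg.lstsq(A_big, rhs, rcond=None)

    Q_fit = design_matrix(k, lags) @ coeffs
    print("max fit error:", np.max(np.abs(Q_fit - Q_tab)))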
McWilliams, Scott R.; Karasov, William H.
2014-01-01
Flexible phenotypes enable animals to live in environments that change over space and time, and knowing the limits to and the required time scale for this flexibility provides insights into constraints on energy and nutrient intake, diet diversity and niche width. We quantified the level of immediate and ultimate spare capacity, and thus the extent of phenotypic flexibility, in the digestive system of a migratory bird in response to increased energy demand, and identified the digestive constraints responsible for the limits on sustained energy intake. Immediate spare capacity decreased from approximately 50% for birds acclimated to relatively benign temperatures to less than 20% as birds approached their maximum sustainable energy intake. Ultimate spare capacity enabled an increase in feeding rate of approximately 126% as measured in birds acclimated for weeks at −29°C compared with +21°C. Increased gut size and not tissue-specific differences in nutrient uptake or changes in digestive efficiency or retention time were primarily responsible for this increase in capacity with energy demand, and this change required more than 1–2 days. Thus, the pace of change in digestive organ size may often constrain energy intake and, for birds, retard the pace of their migration. PMID:24718764
McWilliams, Scott R; Karasov, William H
2014-05-22
Flexible phenotypes enable animals to live in environments that change over space and time, and knowing the limits to and the required time scale for this flexibility provides insights into constraints on energy and nutrient intake, diet diversity and niche width. We quantified the level of immediate and ultimate spare capacity, and thus the extent of phenotypic flexibility, in the digestive system of a migratory bird in response to increased energy demand, and identified the digestive constraints responsible for the limits on sustained energy intake. Immediate spare capacity decreased from approximately 50% for birds acclimated to relatively benign temperatures to less than 20% as birds approached their maximum sustainable energy intake. Ultimate spare capacity enabled an increase in feeding rate of approximately 126% as measured in birds acclimated for weeks at -29°C compared with +21°C. Increased gut size and not tissue-specific differences in nutrient uptake or changes in digestive efficiency or retention time were primarily responsible for this increase in capacity with energy demand, and this change required more than 1-2 days. Thus, the pace of change in digestive organ size may often constrain energy intake and, for birds, retard the pace of their migration.
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
29 CFR 1926.251 - Rigging equipment for material handling.
Code of Federal Regulations, 2012 CFR
2012-07-01
... splices, the U-bolt shall be applied so that the “U” section is in contact with the dead end of the rope... laid grommets and endless slings shall have a minimum circumferential length of 96 times their body... body of the rope using at least two additional tucks (which will require a tail length of approximately...
Rate of woody residue incorporation into Northern Rocky Mountain forest soils
A. E. Harvey; M. J. Larsen; M. F. Jurgensen
1981-01-01
The important properties contributed to forest soils by decayed wood in the Northern Rocky Mountains make it desirable to determine the time required to reconstitute such materials in depleted soils. The ratio of fiber production potential (growth) to total quantity of wood in a steady state ecosystem provides estimates varying from approximately 100 to 300 years,...
An electric-analog simulation of elliptic partial differential equations using finite element theory
Franke, O.L.; Pinder, G.F.; Patten, E.P.
1982-01-01
Elliptic partial differential equations can be solved using the Galerkin-finite element method to generate the approximating algebraic equations, and an electrical network to solve the resulting matrices. Some element configurations require the use of networks containing negative resistances which, while physically realizable, are more expensive and time-consuming to construct. © 1982.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
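The predict/update cycle at the heart of the approach can be illustrated with a minimal one-dimensional constant-velocity Kalman filter; the state model, noise covariances, and single position sensor below are illustrative stand-ins rather than the paper's terminal-area sensor suite.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: position, velocity
H = np.array([[1.0, 0.0]])                 # we measure position only
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[4.0]])                      # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])               # state estimate
P = np.eye(2)                              # estimate covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.5
for k in range(20):
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(0, 2.0)]])   # noisy position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("final estimate [pos, vel]:", x.ravel(), " true:", [true_pos, true_vel])
```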
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.
2018-01-09
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N
2018-02-13
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Function approximation using combined unsupervised and supervised learning.
Andras, Peter
2014-03-01
Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
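A compact sketch of the two-step idea, assuming a small self-organizing map written inline (rather than the paper's over-complete SOM, SVM, or Bayesian-SOM variants) and scikit-learn's MLPRegressor as the supervised stage, applied to synthetic data lying near a 2-D manifold embedded in 10-D:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic "high-dimensional" data lying near a 2-D manifold embedded in 10-D.
n, d = 2000, 10
u, v = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
embed = rng.normal(size=(2, d))
X = np.column_stack([u, v]) @ embed + 0.01 * rng.normal(size=(n, d))
y = np.sin(np.pi * u) * v                      # target function defined on the manifold

# Step 1: unsupervised mapping with a small self-organizing map (grid of prototypes).
gx, gy = 12, 12
W = rng.normal(scale=0.1, size=(gx * gy, d))   # prototype vectors
grid = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
for epoch in range(10):
    sigma = 3.0 * (1 - epoch / 10) + 0.5
    lr = 0.5 * (1 - epoch / 10) + 0.05
    for x in X[rng.permutation(n)]:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                       # pull neighbours toward x

def to_grid(Xs):
    """Map each sample to the (i, j) coordinates of its best-matching unit."""
    bmus = np.argmin(((Xs[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    return grid[bmus]

# Step 2: supervised function approximation on the low-dimensional mapped data.
Z = to_grid(X)
net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0).fit(Z, y)
print("training R^2 on SOM coordinates:", round(net.score(Z, y), 3))
```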
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
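The following sketch conveys the robustness idea on synthetic data with sparse outliers. It substitutes a simple alternating, iteratively reweighted least-squares scheme for the paper's alternating rectified gradient method, so it should be read as an illustration of l1-oriented low-rank factorization rather than the authors' algorithm.

```python
import numpy as np

def robust_lowrank(M, rank, iters=50, eps=1e-6):
    """Simplified L1-oriented factorization M ~ U @ V.T via alternating,
    iteratively reweighted least squares (an illustrative stand-in)."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    for _ in range(iters):
        R = M - U @ V.T
        Wgt = 1.0 / np.sqrt(R ** 2 + eps)          # IRLS weights approximating the L1 loss
        for i in range(m):                          # row-wise weighted least squares for U
            w = Wgt[i]
            A = (V * w[:, None]).T @ V + 1e-8 * np.eye(rank)
            U[i] = np.linalg.solve(A, (V * w[:, None]).T @ M[i])
        for j in range(n):                          # column-wise weighted least squares for V
            w = Wgt[:, j]
            A = (U * w[:, None]).T @ U + 1e-8 * np.eye(rank)
            V[j] = np.linalg.solve(A, (U * w[:, None]).T @ M[:, j])
    return U, V

# Low-rank data corrupted by sparse outliers: the L1-style fit largely ignores them.
rng = np.random.default_rng(1)
L = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))
M = L.copy()
mask = rng.random(M.shape) < 0.05
M[mask] += rng.normal(scale=20.0, size=mask.sum())
U, V = robust_lowrank(M, rank=3)
print("median abs error vs clean matrix:", np.median(np.abs(L - U @ V.T)))
```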
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
Application of Weibull analysis to SSME hardware
NASA Technical Reports Server (NTRS)
Gray, L. A. B.
1986-01-01
Generally, it has been documented that the wearing of engine parts forms a failure distribution which can be approximated by a function developed by Weibull. The purpose here is to examine to what extent the Weibull distribution approximates failure data for designated engine parts of the Space Shuttle Main Engine (SSME). The current testing certification requirements will be examined in order to establish confidence levels. An examination of the failure history of SSME parts/assemblies (turbine blades, main combustion chamber, or high pressure fuel pump first stage impellers) which are limited in usage by time or starts will be done by using updated Weibull techniques. Efforts will be made by the investigator to predict failure trends by using Weibull techniques for SSME parts (turbine temperature sensors, chamber pressure transducers, actuators, and controllers) which are not severely limited by time or starts.
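A hedged illustration of the basic workflow, using SciPy's two-parameter Weibull fit on synthetic time-to-failure data; the failure times, B10 life, and certification exposure below are hypothetical numbers, not SSME data.

```python
import numpy as np
from scipy import stats

# Synthetic time-to-failure data (e.g., hot-fire seconds before a blade crack).
rng = np.random.default_rng(0)
failures = stats.weibull_min.rvs(c=2.3, scale=5000.0, size=40, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(failures, floc=0)   # fix location at zero
print(f"shape (beta) = {shape:.2f}, characteristic life (eta) = {scale:.0f} s")

b10 = stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale)  # 10% of units fail by B10
cert_time = 2500.0                                            # hypothetical exposure
reliability = stats.weibull_min.sf(cert_time, shape, loc=0, scale=scale)
print(f"B10 life ~ {b10:.0f} s, reliability at {cert_time:.0f} s ~ {reliability:.3f}")
```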
A function approximation approach to anomaly detection in propulsion system test data
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Hoyt, W. A.
1993-01-01
Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.
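The screening logic can be sketched as follows: fit a nominal model mapping the 14 external-influence measurements to the engine parameter, then flag test samples whose residual leaves a confidence band derived from nominal data only. Plain linear regression stands in for the Gaussian bar basis-function network, and all signals are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
X_nominal = rng.normal(size=(n, 14))                        # 14 external-influence inputs
w_true = rng.normal(size=14)
y_nominal = X_nominal @ w_true + 0.1 * rng.normal(size=n)   # nominal engine parameter

model = LinearRegression().fit(X_nominal, y_nominal)
sigma = np.std(y_nominal - model.predict(X_nominal))        # nominal residual spread

# New test data: mostly nominal, with an injected anomaly on the last 20 samples.
X_test = rng.normal(size=(100, 14))
y_test = X_test @ w_true + 0.1 * rng.normal(size=100)
y_test[-20:] += 1.5                                         # anomalous offset

residual = np.abs(y_test - model.predict(X_test))
flags = residual > 4 * sigma                                # 4-sigma confidence band
print("flagged indices:", np.where(flags)[0])
```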
Perching and takeoff of a robotic insect on overhangs using switchable electrostatic adhesion.
Graule, M A; Chirarattananon, P; Fuller, S B; Jafferis, N T; Ma, K Y; Spenko, M; Kornbluh, R; Wood, R J
2016-05-20
For aerial robots, maintaining a high vantage point for an extended time is crucial in many applications. However, available on-board power and mechanical fatigue constrain their flight time, especially for smaller, battery-powered aircraft. Perching on elevated structures is a biologically inspired approach to overcome these limitations. Previous perching robots have required specific material properties for the landing sites, such as surface asperities for spines, or ferromagnetism. We describe a switchable electroadhesive that enables controlled perching and detachment on nearly any material while requiring approximately three orders of magnitude less power than required to sustain flight. These electroadhesives are designed, characterized, and used to demonstrate a flying robotic insect able to robustly perch on a wide range of materials, including glass, wood, and a natural leaf. Copyright © 2016, American Association for the Advancement of Science.
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
A queueing network model to analyze the impact of parallelization of care on patient cycle time.
Jiang, Lixiang; Giachetti, Ronald E
2008-09-01
The total time a patient spends in an outpatient facility, called the patient cycle time, is a major contributor to overall patient satisfaction. A frequently recommended strategy to reduce the total time is to perform some activities in parallel thereby shortening patient cycle time. To analyze patient cycle time this paper extends and improves upon an existing multi-class open queueing network model (MOQN) so that the patient flow in an urgent care center can be modeled. Results of the model are analyzed using data from an urgent care center contemplating greater parallelization of patient care activities. The results indicate that parallelization can reduce the cycle time for those patient classes which require more than one diagnostic and/or treatment intervention. However, for many patient classes there would be little if any improvement, indicating the importance of tools to analyze business process reengineering rules. The paper makes contributions by implementing an approximation for fork/join queues in the network and by improving the approximation for multiple server queues in both low traffic and high traffic conditions. We demonstrate the accuracy of the MOQN results through comparisons to simulation results.
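One building block of such a model is the classical M/M/c (Erlang C) mean-wait formula for a multiple-server station; the sketch below uses hypothetical urgent-care numbers and omits the paper's fork/join and traffic-condition corrections.

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean wait in queue Wq for an M/M/c station (Erlang C)."""
    a = lam / mu                      # offered load
    rho = a / c
    assert rho < 1, "station must be stable"
    p0_inv = sum(a**k / math.factorial(k) for k in range(c)) \
             + a**c / (math.factorial(c) * (1 - rho))
    p_wait = (a**c / (math.factorial(c) * (1 - rho))) / p0_inv
    return p_wait / (c * mu - lam)    # expected queueing delay

# Hypothetical urgent-care station: 12 patients/hour, 20-minute exams, 5 providers.
lam, mu, c = 12.0, 3.0, 5
wq = erlang_c_wait(lam, mu, c)
print(f"mean wait for a provider ~ {60 * wq:.1f} minutes; "
      f"mean time in station ~ {60 * (wq + 1 / mu):.1f} minutes")
```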
Tissue stimulator enclosure welding fixture
NASA Technical Reports Server (NTRS)
Mcclure, S. R.
1977-01-01
It was demonstrated that the thickness of the stimulator titanium enclosure is directly related to the battery recharge time cycle. Reduction of the titanium enclosure thickness from approximately 0.37 mm (0.015 inch) to 0.05 mm (0.002 inch) significantly reduced the recharge time cycle and thereby patient inconvenience. However, fabrication of titanium enclosures from the thinner material introduced problems in forming, holding, and welding that required improvement in state of the art shop practices. The procedures that were utilized to resolve these fabrication problems are described.
2006-11-30
except in the simplest of circumstances. This belief has driven the computational research community to devise clever kinetic Monte Carlo (KMC) ... KMC routine is very slow; cutting the error in half requires four times the number of simulations. Since a single simulation may contain huge numbers...subintervals [9–14]. Both approximation types, system partitioning and τ leaping, have been very successful in increasing the scope of problems to which KMC
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
A hybrid continuous-discrete method for stochastic reaction–diffusion processes
Zheng, Likun; Nie, Qing
2016-01-01
Stochastic fluctuations in reaction–diffusion processes often have substantial effect on spatial and temporal dynamics of signal transductions in complex biological systems. One popular approach for simulating these processes is to divide the system into small spatial compartments assuming that molecules react only within the same compartment and jump between adjacent compartments driven by the diffusion. While the approach is convenient in terms of its implementation, its computational cost may become prohibitive when diffusive jumps occur significantly more frequently than reactions, as in the case of rapid diffusion. Here, we present a hybrid continuous-discrete method in which diffusion is simulated using continuous approximation while reactions are based on the Gillespie algorithm. Specifically, the diffusive jumps are approximated as continuous Gaussian random vectors with time-dependent means and covariances, allowing use of a large time step, even for rapid diffusion. By considering the correlation among diffusive jumps, the approximation is accurate for the second moment of the diffusion process. In addition, a criterion is obtained for identifying the region in which such diffusion approximation is required to enable adaptive calculations for better accuracy. Applications to a linear diffusion system and two nonlinear systems of morphogens demonstrate the effectiveness and benefits of the new hybrid method. PMID:27703710
Laleian, Artin; Valocchi, Albert J.; Werth, Charles J.
2015-11-24
Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically-derived body force which assumes a local parabolic velocity profile across the micromodel depth. Accuracy of the generated 2D velocity field and simulation permeability have not been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. Additionally, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.
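The body-force idea can be sketched with the classical closure for a locally parabolic profile between plates a distance h apart, F = -12*nu*u_bar/h^2 per unit mass (a Hele-Shaw-type drag). This is offered as the standard form such a derivation usually yields, not necessarily the paper's exact forcing term.

```python
import numpy as np

def depth_averaged_drag(u, v, nu, h):
    """Body force (per unit mass) approximating out-of-plane viscous drag from the
    micromodel top/bottom walls, assuming a locally parabolic (Poiseuille) profile
    across the local depth h; h may vary in space where precipitate or biomass
    narrows the aperture. (Standard Hele-Shaw-type closure, used here as a sketch.)"""
    factor = -12.0 * nu / h**2
    return factor * u, factor * v

# Toy 2-D depth-averaged velocity field with a spatially variable aperture.
ny, nx = 4, 5
u = np.full((ny, nx), 1e-3)            # m/s, depth-averaged x-velocity
v = np.zeros((ny, nx))
h = np.full((ny, nx), 50e-6)           # 50-micron nominal depth
h[:, 2] = 20e-6                        # narrowed column mimics local clogging
fx, fy = depth_averaged_drag(u, v, nu=1e-6, h=h)
print("drag force field (x-component), N/kg:\n", fx)
```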
NASA Technical Reports Server (NTRS)
Adamczyk, J. L.
1974-01-01
An approximate solution is reported for the unsteady aerodynamic response of an infinite swept wing encountering a vertical oblique gust in a compressible stream. The approximate expressions are of closed form and do not require excessive computer storage or computation time, and further, they are in good agreement with the results of exact theory. This analysis is used to predict the unsteady aerodynamic response of a helicopter rotor blade encountering the trailing vortex from a previous blade. Significant effects of three dimensionality and compressibility are evident in the results obtained. In addition, an approximate solution for the unsteady aerodynamic forces associated with the pitching or plunging motion of a two dimensional airfoil in a subsonic stream is presented. The mathematical form of this solution approaches the incompressible solution as the Mach number vanishes, the linear transonic solution as the Mach number approaches one, and the solution predicted by piston theory as the reduced frequency becomes large.
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required. This article is protected by copyright. All rights reserved.
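A toy comparison of the two estimators is sketched below: the maximum-over-time estimate fails when the simulated cell barely moves, while a generic diffusion-based fill of the cell-covered pixels (standing in for the paper's inpainting algorithm) recovers the smooth incident field.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 40, 40, 8
yy, xx = np.mgrid[0:H, 0:W]
I0 = 150 + 0.8 * xx + 0.3 * yy                      # true incident intensity field

frames = np.empty((T, H, W))
for t in range(T):
    frame = I0 + rng.normal(0, 1.0, (H, W))
    cx = 18 + t // 4                                # nearly stationary cell
    cell = (yy - 20) ** 2 + (xx - cx) ** 2 < 36
    frame[cell] *= 0.45                             # absorption by the cell
    frames[t] = frame

cell_now = (yy - 20) ** 2 + (xx - 18) ** 2 < 36     # cell mask in the first frame

# (a) maximum-intensity estimate of I0
I_max = frames.max(axis=0)

# (b) inpainting estimate: iteratively replace masked pixels by their 4-neighbour mean
I_inp = frames[0].copy()
I_inp[cell_now] = frames[0][~cell_now].mean()       # crude initial guess
for _ in range(500):
    nb = 0.25 * (np.roll(I_inp, 1, 0) + np.roll(I_inp, -1, 0)
                 + np.roll(I_inp, 1, 1) + np.roll(I_inp, -1, 1))
    I_inp[cell_now] = nb[cell_now]

err = lambda est: 100 * np.mean(np.abs(est - I0)[cell_now] / I0[cell_now])
print(f"max-over-time error: {err(I_max):.1f}%   inpainting error: {err(I_inp):.1f}%")
```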
Lubricin: A novel means to decrease bacterial adhesion and proliferation
Aninwene, George E.; Abadian, Pegah N.; Ravi, Vishnu; Taylor, Erik N.; Hall, Douglas M.; Mei, Amy; Jay, Gregory D.; Goluch, Edgar D.; Webster, Thomas J.
2015-01-01
This study investigated the ability of lubricin (LUB) to prevent bacterial attachment and proliferation on model tissue culture polystyrene surfaces. The findings from this study indicated that LUB was able to reduce the attachment and growth of Staphylococcus aureus on tissue culture polystyrene over the course of 24 h by approximately 13.9% compared to a phosphate buffered saline (PBS)-soaked control. LUB also increased S. aureus lag time (the period of time between the introduction of bacteria to a new environment and their exponential growth) by approximately 27% compared to a PBS-soaked control. This study also indicated that vitronectin (VTN), a protein homologous to LUB, reduced bacterial S. aureus adhesion and growth on tissue culture polystyrene by approximately 11% compared to a PBS-soaked control. VTN also increased the lag time of S. aureus by approximately 43%, compared to a PBS-soaked control. Bovine submaxillary mucin was studied because there are similarities between it and the center mucin-like domain of LUB. Results showed that the reduction of S. aureus and Staphylococcus epidermidis proliferation on mucin coated surfaces was not as substantial as that seen with LUB. In summary, this study provided the first evidence that LUB reduced the initial adhesion and growth of both S. aureus and S. epidermidis on a model surface to suppress biofilm formation. These reductions in initial bacteria adhesion and proliferation can be beneficial for medical implants and, although requiring more study, can lead to drastically improved patient outcomes. PMID:24737699
On-patient see-through augmented reality based on visual SLAM.
Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M
2017-01-01
An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, (4) requires minimal interaction from the medical staff.
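The anchor-point registration step can be illustrated with a standard least-squares rigid alignment (Kabsch/Procrustes); the five anchor coordinates below are hypothetical, and the paper's full pipeline additionally relies on SLAM-based camera tracking.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Least-squares rigid registration (Kabsch/Procrustes) of preoperative model
    anchor points onto the corresponding anatomical references on the patient."""
    P, Q = np.asarray(model_pts, float), np.asarray(patient_pts, float)
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# 5 hypothetical anchor points (mm); patient frame = rotated/translated model frame.
rng = np.random.default_rng(0)
model = rng.uniform(-50, 50, size=(5, 3))
angle = np.deg2rad(25)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
patient = model @ R_true.T + np.array([10.0, -5.0, 30.0]) + rng.normal(0, 0.5, (5, 3))

R, t = rigid_register(model, patient)
fre = np.sqrt(((model @ R.T + t - patient) ** 2).sum(axis=1)).mean()
print(f"mean fiducial registration error ~ {fre:.2f} mm")
```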
Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.
2013-01-01
NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler to mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation involves several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking, assembly, and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
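A stripped-down Monte Carlo version of the question, with purely illustrative launch reliabilities, slip distributions, and margins (not values from the study), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 100_000
N_LAUNCHES = 5                 # launches needed to assemble the Mars Transfer Vehicle
P_LAUNCH_SUCCESS = 0.98        # per-launch ascent/orbit-insertion reliability (assumed)
MEAN_SLIP_DAYS = 12.0          # mean schedule slip per launch, exponential (assumed)
MARGIN_DAYS = 120.0            # margin between last nominal launch and window close
WINDOW_DAYS = 60.0             # departure window length

successes = 0
for _ in range(N_TRIALS):
    slips = rng.exponential(MEAN_SLIP_DAYS, N_LAUNCHES)
    ok = rng.random(N_LAUNCHES) < P_LAUNCH_SUCCESS
    # Campaign succeeds if every launch works and cumulative slip fits in the window.
    if ok.all() and slips.sum() <= MARGIN_DAYS + WINDOW_DAYS:
        successes += 1

print(f"campaign success probability ~ {successes / N_TRIALS:.3f}")
```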
ERIC Educational Resources Information Center
Ku, James Yu-Fan
2016-01-01
Obtaining a degree from a community college could be the opportunity for students to advance their education or career. Nevertheless, nearly two-thirds of first-time community college students in the U.S. were required to take developmental mathematics courses. The problem was that approximately three-fourths of those students did not successfully…
ERIC Educational Resources Information Center
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Mattias
2017-01-01
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Site index charts for Douglas-fir in the Pacific Northwest.
Grover A. Choate; Floyd A. Johnson
1958-01-01
Charts in this report can be used to estimate site index for Douglas-fir from stand age and from average total height of dominant and codominant trees. Table 1 and figure 2 in USDA Technical Bulletin 201 have been used for this purpose in the past. However, the table requires time-consuming interpolation and the figure gives only rough approximations.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present two iterative algorithms for approximating the absorption spectrum of molecules within linear response of time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
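The "matrix-vector products only" idea can be illustrated with a generic Lanczos (Haydock-style) estimate of a broadened spectral function; this is a didactic sketch, not either of the two algorithms proposed in the paper.

```python
import numpy as np

def spectrum_from_matvec(matvec, v0, n_steps=80, grid=None, eta=0.05):
    """Approximate sum_j |<v0|psi_j>|^2 * delta(w - w_j) using only matrix-vector
    products, via the Lanczos recursion with Lorentzian broadening."""
    v_prev = np.zeros_like(v0)
    v = v0 / np.linalg.norm(v0)
    alphas, betas, basis = [], [], [v]
    beta = 0.0
    for _ in range(n_steps):
        w = matvec(v) - beta * v_prev
        alpha = np.dot(v, w)
        w -= alpha * v
        for b in basis:                      # full reorthogonalization for stability
            w -= np.dot(b, w) * b
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:
            break
        betas.append(beta)
        v_prev, v = v, w / beta
        basis.append(v)
    T = np.diag(alphas) + np.diag(betas[: len(alphas) - 1], 1) \
                        + np.diag(betas[: len(alphas) - 1], -1)
    theta, S = np.linalg.eigh(T)             # small tridiagonal problem only
    weights = S[0, :] ** 2                   # projections of v0 onto Ritz vectors
    if grid is None:
        grid = np.linspace(theta.min() - 1, theta.max() + 1, 400)
    lorentz = eta / np.pi / ((grid[:, None] - theta[None, :]) ** 2 + eta ** 2)
    return grid, lorentz @ weights           # broadened spectrum on the grid

# Demo on a random symmetric matrix that is never diagonalized directly.
rng = np.random.default_rng(0)
n = 2000
A = rng.normal(size=(n, n)) / np.sqrt(n)
A = (A + A.T) / 2
v0 = rng.normal(size=n)
w_grid, spec = spectrum_from_matvec(lambda x: A @ x, v0)
print("spectrum evaluated on", len(w_grid), "frequency points; peak near w ~",
      round(w_grid[np.argmax(spec)], 2))
```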
Viglianti, G A; Rubinstein, E P; Graves, K L
1992-01-01
The untranslated leader sequences of rhesus macaque simian immunodeficiency virus mRNAs form a stable secondary structure, TAR. This structure can be modified by RNA splicing. In this study, the role of TAR splicing in virus replication was investigated. The proportion of viral RNAs containing a spliced TAR structure is high early after infection and decreases at later times. Moreover, proviruses containing mutations which prevent TAR splicing are significantly delayed in replication. These mutant viruses require approximately 20 days to achieve half-maximal virus production, in contrast to wild-type viruses, which require approximately 8 days. We attribute this delay to the inefficient translation of unspliced-TAR-containing mRNAs. The molecular basis for this translational effect was examined in in vitro assays. We found that spliced-TAR-containing mRNAs were translated up to 8.5 times more efficiently than were similar mRNAs containing an unspliced TAR leader. Furthermore, these spliced-TAR-containing mRNAs were more efficiently associated with ribosomes. We postulate that the level of TAR splicing provides a balance for the optimal expression of both viral proteins and genomic RNA and therefore ultimately controls the production of infectious virions. PMID:1629957
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as satellites Spitzer or Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that allows to compute the quadrupole and hexadecapole approximations of the finite-source magnification with more efficiency than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
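For orientation, a uniform-source quadrupole correction can be written in the generic Taylor-expansion form A + (rho^2/8) * Laplacian(A); the sketch below evaluates it with finite differences and checks it against a brute-force average over the source disk. This is only a didactic rendering of the idea, not the paper's optimized routines, which use a different formulation and also supply the hexadecapole term.

```python
import numpy as np

def A_point(u):
    """Point-source point-lens magnification."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def A_quadrupole(u, rho, du=1e-4):
    """Uniform-source quadrupole approximation via a second-order Taylor expansion,
    with the Laplacian of the axisymmetric point-source magnification evaluated
    by finite differences (didactic sketch only)."""
    A0 = A_point(u)
    d1 = (A_point(u + du) - A_point(u - du)) / (2 * du)
    d2 = (A_point(u + du) - 2 * A0 + A_point(u - du)) / du**2
    return A0 + rho**2 / 8.0 * (d2 + d1 / u)

def A_finite_bruteforce(u, rho, n_samples=100_000):
    """Average of the point-source magnification over a uniform source disk."""
    rng = np.random.default_rng(0)
    r = np.sqrt(rng.random(n_samples)) * rho
    phi = rng.random(n_samples) * 2 * np.pi
    uu = np.sqrt((u + r * np.cos(phi)) ** 2 + (r * np.sin(phi)) ** 2)
    return A_point(uu).mean()

u, rho = 0.3, 0.05
print("point source     :", round(A_point(u), 5))
print("quadrupole approx:", round(A_quadrupole(u, rho), 5))
print("disk average     :", round(A_finite_bruteforce(u, rho), 5))
```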
Finding minimum-quotient cuts in planar graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J.K.; Phillips, C.A.
Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 - b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem -- also a pseudopolynomial-time algorithm -- guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
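The quantity being minimized is easy to state in code; the brute-force search below (feasible only on toy graphs, which is exactly why the approximation algorithms above matter) uses a small hypothetical weighted graph.

```python
from itertools import combinations

def cut_quotient(vertices, edges, weight, cost, S):
    """Quotient of the cut (S, S-bar): c(S, S-bar) / min(w(S), w(S-bar))."""
    S = set(S)
    Sbar = set(vertices) - S
    c_cut = sum(cost[e] for e in edges if (e[0] in S) != (e[1] in S))
    return c_cut / min(sum(weight[v] for v in S), sum(weight[v] for v in Sbar))

def min_quotient_cut_bruteforce(vertices, edges, weight, cost):
    """Exhaustive search over all nontrivial cuts (exponential; toy graphs only)."""
    best = (float("inf"), None)
    verts = list(vertices)
    for k in range(1, len(verts)):
        for S in combinations(verts, k):
            q = cut_quotient(verts, edges, weight, cost, S)
            best = min(best, (q, S))
    return best

# Small weighted example (a 4-cycle with a chord); unit vertex weights.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
w = {v: 1.0 for v in V}
c = {("a", "b"): 1.0, ("b", "c"): 2.0, ("c", "d"): 1.0, ("d", "a"): 2.0, ("a", "c"): 3.0}
q, S = min_quotient_cut_bruteforce(V, E, w, c)
print(f"minimum quotient {q:.2f} achieved by S = {set(S)}")
```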
Variationally consistent approximation scheme for charge transfer
NASA Technical Reports Server (NTRS)
Halpern, A. M.
1978-01-01
The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle for the amplitude to guarantee that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement, and hence yield a more reliable approximation to the amplitude.
Stachowiak, Jeanne C; Shugard, Erin E; Mosier, Bruce P; Renzi, Ronald F; Caton, Pamela F; Ferko, Scott M; Van de Vreugde, James L; Yee, Daniel D; Haroldsen, Brent L; VanderNoot, Victoria A
2007-08-01
For domestic and military security, an autonomous system capable of continuously monitoring for airborne biothreat agents is necessary. At present, no system meets the requirements for size, speed, sensitivity, and selectivity to warn against and lead to the prevention of infection in field settings. We present a fully automated system for the detection of aerosolized bacterial biothreat agents such as Bacillus subtilis (surrogate for Bacillus anthracis) based on protein profiling by chip gel electrophoresis coupled with a microfluidic sample preparation system. Protein profiling has previously been demonstrated to differentiate between bacterial organisms. With the goal of reducing response time, multiple microfluidic component modules, including aerosol collection via a commercially available collector, concentration, thermochemical lysis, size exclusion chromatography, fluorescent labeling, and chip gel electrophoresis were integrated together to create an autonomous collection/sample preparation/analysis system. The cycle time for sample preparation was approximately 5 min, while total cycle time, including chip gel electrophoresis, was approximately 10 min. Sensitivity of the coupled system for the detection of B. subtilis spores was 16 agent-containing particles per liter of air, based on samples that were prepared to simulate those collected by wetted cyclone aerosol collector of approximately 80% efficiency operating for 7 min.
[Pharmaceutical logistic in turnover of pharmaceutical products of Azerbaijan].
Dzhalilova, K I
2009-11-01
Development of a pharmaceutical logistics system model supports an optimal strategy for pharmaceutical operations. The goal of such systems is to organize the turnover of pharmaceutical products in the required quantity and assortment, at the preset time and place, with the highest possible degree of readiness for consumption, with minimal expense and high-quality service. It is proposed that organization of the optimal turnover chain in the region start from an approximate classification of medicaments by their logistic characteristics. Supplier selection was performed by evaluating timeliness of delivery, quality of delivered products (according to the minimum acceptable level of quality), and the time spent on order delivery.
Karev, Georgy P; Wolf, Yuri I; Berezovskaya, Faina S; Koonin, Eugene V
2004-09-09
The size distribution of gene families in a broad range of genomes is well approximated by a generalized Pareto function. Evolution of ensembles of gene families can be described with Birth, Death, and Innovation Models (BDIMs). Analysis of the properties of different versions of BDIMs has the potential of revealing important features of genome evolution. In this work, we extend our previous analysis of stochastic BDIMs. In addition to the previously examined rational BDIMs, we introduce potentially more realistic logistic BDIMs, in which birth/death rates are limited for the largest families, and show that their properties are similar to those of models that include no such limitation. We show that the mean time required for the formation of the largest gene families detected in eukaryotic genomes is limited by the mean number of duplications per gene and does not increase indefinitely with the model degree. Instead, this time reaches a minimum value, which corresponds to a non-linear rational BDIM with the degree of approximately 2.7. Even for this BDIM, the mean time of the largest family formation is orders of magnitude greater than any realistic estimates based on the timescale of life's evolution. We employed the embedding chains technique to estimate the expected number of elementary evolutionary events (gene duplications and deletions) preceding the formation of gene families of the observed size and found that the mean number of events exceeds the family size by orders of magnitude, suggesting a highly dynamic process of genome evolution. The variance of the time required for the formation of the largest families was found to be extremely large, with the coefficient of variation > 1. This indicates that some gene families might grow much faster than the mean rate such that the minimal time required for family formation is more relevant for a realistic representation of genome evolution than the mean time. We determined this minimal time using Monte Carlo simulations of family growth from an ensemble of simultaneously evolving singletons. In these simulations, the time elapsed before the formation of the largest family was much shorter than the estimated mean time and was compatible with the timescale of evolution of eukaryotes. The analysis of stochastic BDIMs presented here shows that non-linear versions of such models can well approximate not only the size distribution of gene families but also the dynamics of their formation during genome evolution. The fact that only higher degree BDIMs are compatible with the observed characteristics of genome evolution suggests that the growth of gene families is self-accelerating, which might reflect differential selective pressure acting on different genes.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries the product comes from more than one production line, which calls for comparative life tests. Such tests require sampling from the different production lines, and a joint censoring scheme therefore arises. In this article we consider Pareto-distributed lifetimes under a jointly type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of the proposed method.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate an idea. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~ 70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. According to the accuracy of the starting solutions, however, the MCPI may require significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
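The Picard part of the idea can be sketched in a few lines; for simplicity the integral operator below acts on a uniform grid with trapezoidal quadrature rather than on Chebyshev nodes with orthogonal-polynomial updates, and no warm start is used, so it illustrates the path-iteration concept rather than MCPI itself.

```python
import numpy as np

def picard(f, t, x0, iters=40):
    """Picard iteration for dx/dt = f(t, x), x(t[0]) = x0, over the whole segment."""
    x = np.full_like(t, x0, dtype=float)          # cold start: constant initial guess
    for _ in range(iters):
        fx = f(t, x)
        # cumulative trapezoid: integral of f along the current path approximation
        integral = np.concatenate(([0.0],
                                   np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))
        x = x0 + integral                         # Picard update of the whole path
    return x

t = np.linspace(0.0, 2.0, 201)
f = lambda t, x: -x + np.sin(t)                   # simple test ODE
x = picard(f, t, x0=1.0)
exact = 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
print("max error after 40 Picard sweeps:", np.abs(x - exact).max())
```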
Single-chip pulse programmer for magnetic resonance imaging using a 32-bit microcontroller.
Handa, Shinya; Domalain, Thierry; Kose, Katsumi
2007-08-01
A magnetic resonance imaging (MRI) pulse programmer has been developed using a single-chip microcontroller (ADμC7026). The microcontroller includes all the components required for the MRI pulse programmer: a 32-bit RISC CPU core, 62 kbytes of flash memory, 8 kbytes of SRAM, two 32-bit timers, four 12-bit DA converters, and 40 bits of general purpose I/O. An evaluation board for the microcontroller was connected to a host personal computer (PC), an MRI transceiver, and a gradient driver using interface circuitry. Target (embedded) and host PC programs were developed to enable MRI pulse sequence generation by the microcontroller. The pulse programmer achieved a (nominal) time resolution of approximately 100 ns and a minimum time delay between successive events of approximately 9 μs. Imaging experiments using the pulse programmer demonstrated the effectiveness of our approach.
Single-chip pulse programmer for magnetic resonance imaging using a 32-bit microcontroller
NASA Astrophysics Data System (ADS)
Handa, Shinya; Domalain, Thierry; Kose, Katsumi
2007-08-01
A magnetic resonance imaging (MRI) pulse programmer has been developed using a single-chip microcontroller (ADμC7026). The microcontroller includes all the components required for the MRI pulse programmer: a 32-bit RISC CPU core, 62 kbytes of flash memory, 8 kbytes of SRAM, two 32-bit timers, four 12-bit DA converters, and 40 bits of general purpose I/O. An evaluation board for the microcontroller was connected to a host personal computer (PC), an MRI transceiver, and a gradient driver using interface circuitry. Target (embedded) and host PC programs were developed to enable MRI pulse sequence generation by the microcontroller. The pulse programmer achieved a (nominal) time resolution of approximately 100 ns and a minimum time delay between successive events of approximately 9 μs. Imaging experiments using the pulse programmer demonstrated the effectiveness of our approach.
GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.
Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka
2016-01-01
Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. As per results of the evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.
Dual Key Speech Encryption Algorithm Based Underdetermined BSS
Zhao, Huan; Chen, Zuo; Zhang, Xixiang
2014-01-01
When the number of mixed signals is less than the number of source signals, underdetermined blind source separation (BSS) is a significantly difficult problem. Because speech communication involves large amounts of data and real-time requirements, we utilize the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated from a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed algorithm for speech signal encryption can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality. PMID:24955430
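A toy rendering of the mixing idea, with a logistic-map sequence standing in for the chaotic key and arbitrary mixing coefficients (both assumptions made for illustration, not taken from the paper): an eavesdropper sees one mixture of three sources, while the receiver, who can regenerate both keys, recovers the speech.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8000
speech = np.sin(2 * np.pi * 440 * np.arange(n) / 8000)      # stand-in for a speech frame

key_random = rng.standard_normal(n)                         # one-time pad from secret seed
key_chaotic = np.empty(n)                                   # logistic map, x <- 3.99x(1-x)
x = 0.37
for i in range(n):
    x = 3.99 * x * (1 - x)
    key_chaotic[i] = x - 0.5

a = np.array([1.0, 0.8, 0.6])                               # shared mixing coefficients
cipher = a[0] * speech + a[1] * key_random + a[2] * key_chaotic

# Decryption with known keys and coefficients (exact here; the paper's speed-up
# comes from its approximate calculation).
recovered = (cipher - a[1] * key_random - a[2] * key_chaotic) / a[0]
print("max reconstruction error:", np.abs(recovered - speech).max())
```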
NASA Astrophysics Data System (ADS)
Yang, Xiong; Liu, Derong; Wang, Ding
2014-03-01
In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.
NASA Technical Reports Server (NTRS)
Williams, Craig Hamilton
1995-01-01
A simple, analytic approximation is derived to calculate trip time and performance for propulsion systems of very high specific impulse (50,000 to 200,000 seconds) and very high specific power (10 to 1000 kW/kg) for human interplanetary space missions. The approach assumed field-free space, constant thrust/constant specific power, and near straight line (radial) trajectories between the planets. Closed form, one dimensional equations of motion for two-burn rendezvous and four-burn round trip missions are derived as a function of specific impulse, specific power, and propellant mass ratio. The equations are coupled to an optimizing parameter that maximizes performance and minimizes trip time. Data generated for hypothetical one-way and round trip human missions to Jupiter were found to be within 1% and 6% accuracy of integrated solutions respectively, verifying that for these systems, credible analysis does not require computationally intensive numerical techniques.
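To make the flavor of such closed-form estimates concrete, here is a minimal sketch that is not the author's derivation: it assumes field-free space, a straight-line transfer, a symmetric accelerate-then-decelerate (two-burn) profile, constant vehicle mass, and that all of the stated specific power appears as jet power (the power_mass_fraction knob is an assumption of this sketch).

```python
import math

G0 = 9.80665          # m/s^2
AU = 1.496e11         # m

def trip_time_days(distance_au, alpha_kw_per_kg, isp_s, power_mass_fraction=1.0):
    """Very rough constant-acceleration estimate for a two-burn (accelerate/decelerate)
    straight-line transfer, ignoring gravity and propellant-mass depletion."""
    ve = G0 * isp_s                                                  # exhaust velocity, m/s
    jet_power_per_kg = alpha_kw_per_kg * 1e3 * power_mass_fraction   # W per kg of vehicle
    accel = 2.0 * jet_power_per_kg / ve                              # a = F/m = 2*P/(m*ve)
    d = distance_au * AU
    t = 2.0 * math.sqrt(d / accel)       # accelerate half way, decelerate half way
    return t / 86400.0

# Example: a Jupiter-like distance of ~5 AU, Isp = 100,000 s, 100 kW/kg specific power
print(round(trip_time_days(5.0, 100.0, 100_000), 1), "days")
```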
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
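The paper's evolved equation itself is not reproduced in the abstract, so the sketch below only illustrates the workflow under stated assumptions: synthetic data stand in for the numerically modelled baseflow, and the third-party gplearn package is used as one possible genetic-programming/symbolic-regression engine (it is not the authors' tool).

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # third-party symbolic-regression package

# Hypothetical training data: groundwater-table fluctuation (m), catchment area (km^2),
# and minimum daily baseflow (m^3/s) as candidate inputs; simulated baseflow as the target.
rng = np.random.default_rng(0)
n = 500
dh = rng.uniform(0.0, 1.5, n)            # groundwater-table fluctuation
area = np.full(n, 0.043)                 # catchment area (constant for one catchment)
qmin = np.full(n, 0.002)                 # minimum daily baseflow of the record
X = np.column_stack([dh, area, qmin])
y = qmin + 0.8 * area * dh**1.3          # synthetic stand-in for numerically modelled baseflow

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'log'),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X, y)
print(gp._program)        # the evolved empirical equation
print(gp.predict(X[:5]))  # baseflow estimates for the first few records
```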
Excimer laser produced plasmas in copper wire targets and water droplets
NASA Technical Reports Server (NTRS)
Song, Kyo-Dong; Alexander, D. R.
1994-01-01
Elastically scattered incident radiation (ESIR) from a copper wire target illuminated by a KrF laser pulse at lambda = 248 nm shows a distinct two-peak structure which is dependent on the incident energy. The time required to reach the critical electron density (n(sub c) approximately = 1.8 x 10(exp 22) electrons/cu cm) is estimated at 11 ns based on experimental results. Detailed ESIR characteristics for water have been reported previously by the authors. Initiation of the broadband emission for copper plasma begins at 6.5 +/- 1.45 ns after the arrival of the laser pulse. However, the broadband emission occurs at 11 +/- 0.36 ns for water. For a diatomic substance such as water, the electron energy rapidly dissipates due to dissociation of water molecules, which is absent in a monatomic species such as copper. When the energy falls below the excitation energy of the lowest electron state for water, it becomes a subexcitation electron. Lifetimes of the subexcited electrons to the vibrational states are estimated to be of the order of 10(exp -9) s. In addition, the ionization potential of copper (440-530 nm) is approximately 6 eV, which is about two times smaller than the 13 eV ionization potential reported for water. The higher ionization potential contributes to the longer observed delay time for plasma formation in water. After initiation, a longer time is required for copper plasma to reach its peak value. This time delay in reaching the maximum intensity is attributed to the energy loss during the interband transition in copper.
Data inversion algorithm development for the halogen occultation experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore, a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time-constants, and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.
1989-08-01
The goal of this research was to develop a fracture model for length/height ratio ≤ 4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical model. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. The values of length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio ≥ 4, which are also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny to highly elongated. The computer time required is estimated to be less than the time required for the fully numerical model by a factor of 10 or more.
Utilising shade to optimize UV exposure for vitamin D
NASA Astrophysics Data System (ADS)
Turnbull, D. J.; Parisi, A. V.
2008-06-01
Numerous studies have stated that humans need to utilise full sun radiation, at certain times of the day, to assist the body in synthesising the required levels of vitamin D3. The time needed to be spent in the full sun depends on a number of factors, for example, age, skin type, latitude, solar zenith angle. Current Australian guidelines suggest exposure to approximately 1/6 to 1/3 of a minimum erythemal dose (MED), depending on age, would be appropriate to provide adequate vitamin D3 levels. The aim of the study was to determine the exposure times to diffuse solar UV to receive exposures of 1/6 and 1/3 MED for a changing solar zenith angle in order to assess the possible role that diffuse UV (scattered radiation) may play in vitamin D3 effective UV exposures (UVD3). Diffuse and global erythemal UV measurements were conducted at five minute intervals over a twelve month period for a solar zenith angle range of 4° to 80° at a latitude of 27.6° S. For a diffuse UV exposure of 1/3 MED, solar zenith angles smaller than approximately 50° can be utilised for exposure times of less than 10 min. Spectral measurements showed that, for a solar zenith angle of 40°, the UVA (315-400 nm) in the diffuse component of the solar UV is reduced by approximately 62% compared to the UVA in the global UV, whereas UVD3 wavelengths are only reduced by approximately 43%. At certain latitudes, diffuse UV under shade may play an important role in providing the human body with adequate levels of UVD3 (290-315 nm) radiation without experiencing the high levels of UVA observed in full sun.
"Sleeping reactor" irradiations: Shutdown reactor determination of short-lived activation products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jerde, E.A.; Glasgow, D.C.
1998-09-01
At the High-Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory, the principal irradiation system has a thermal neutron flux (φ) of approximately 4 × 10^14 n/cm²·s, permitting the detection of elements via irradiation of 60 s or less. Irradiations of 6 or 7 s are acceptable for detection of elements with half-lives of as little as 30 min. However, important elements such as Al, Mg, Ti, and V have half-lives of only a few minutes. At HFIR, these can be determined with irradiation times of approximately 6 s, but the requirement of immediate counting leads to increased exposure to the high activity produced by irradiation in the high flux. In addition, pneumatic system timing uncertainties (about ±0.5 s) make irradiations of < 6 s less reliable. Therefore, the determination of these ultra-short-lived species in mixed matrices has not generally been made at HFIR. The authors have found that very short lived activation products can be produced easily during the period after reactor shutdown (SCRAM), but prior to the removal of spent fuel elements. During this 24- to 36-h period (dubbed the "sleeping reactor"), neutrons are produced in the beryllium reflector by the reaction ⁹Be(γ,n)⁸Be, the gamma rays principally originating in the spent fuel. Upon reactor SCRAM, the flux drops to approximately 1 × 10^10 n/cm²·s within 1 h. By the time the fuel elements are removed, the flux has dropped to approximately 6 × 10^8 n/cm²·s. Such fluxes are ideal for the determination of short-lived elements such as Al, Ti, Mg, and V. An important feature of the sleeping reactor is a flux that is not constant.
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vaníček, Jiří; Heller, Eric J.
2003-11-01
We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but it also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.
Silicon Carbide Radioisotope Batteries
NASA Technical Reports Server (NTRS)
Rybicki, George C.
2005-01-01
The substantial radiation resistance and large bandgap of SiC semiconductor materials make them an attractive candidate for application in a high efficiency, long life radioisotope battery. To evaluate their potential in this application, simulated batteries were constructed using SiC diodes and the alpha particle emitter Americium Am-241 or the beta particle emitter Promethium Pm-147. The Am-241 based battery showed high initial power output and an initial conversion efficiency of approximately 16%, but the power output decayed 52% in 500 hours due to radiation damage. In contrast, the Pm-147 based battery showed a similar power output level and an initial conversion efficiency of approximately 0.6%, but no degradation was observed in 500 hours. However, the Pm-147 battery required approximately 1000 times the particle fluence of the Am-241 battery to achieve a similar power output. The advantages and disadvantages of each type of battery and suggestions for future improvements will be discussed.
Optimization of Car Body under Constraints of Noise, Vibration, and Harshness (NVH), and Crash
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yang, Ren-Jye; Sobieszczanski-Sobieski, Jaroslaw (Editor)
2000-01-01
To be competitive in today's market, cars have to be as light as possible while meeting the Noise, Vibration, and Harshness (NVH) requirements and conforming to Government-mandated crash survival regulations. The latter are difficult to meet because they involve very compute-intensive, nonlinear analysis: e.g., the code RADIOSS, capable of simulating the dynamics and the geometric and material nonlinearities of a thin-walled car structure in a crash, would require over 12 days of elapsed time for a single design of a 390K elastic-degrees-of-freedom model if executed on a single processor of the state-of-the-art SGI Origin2000 computer. Of course, in optimization that crash analysis would have to be invoked many times. Needless to say, that has rendered such optimization intractable until now. The car finite element model is shown. The advent of computers that comprise large numbers of concurrently operating processors has created a new environment wherein the above optimization, and other engineering problems heretofore regarded as intractable, may be solved. The procedure, shown, is a piecewise-approximation-based method and involves using a sensitivity-based Taylor series approximation model for NVH and a polynomial response surface model for Crash. In that method the NVH constraints are evaluated using a finite element code (MSC/NASTRAN) that yields the constraint values and their derivatives with respect to design variables. The crash constraints are evaluated using the explicit code RADIOSS on the Origin 2000 operating on 256 processors simultaneously to generate data for a polynomial response surface in the design variable domain. The NVH constraints and their derivatives combined with the response surface for the crash constraints form an approximation to the system analysis (surrogate analysis) that enables a cycle of multidisciplinary optimization within move limits. In the inner loop, the NVH sensitivities are recomputed to update the NVH approximation model while keeping the Crash response surface constant. In every outer loop, the Crash response surface approximation is updated, including a gradual increase in the order of the response surface and the response surface extension in the direction of the search. In this optimization task, the NVH discipline has 30 design variables while the crash discipline has 20 design variables. A subset of these design variables (10) is common to both the NVH and crash disciplines. In order to construct a linear response surface for the Crash discipline constraints, a minimum of 21 design points would have to be analyzed using the RADIOSS code. On a single processor of the Origin 2000, that amount of computing would require over 9 months! In this work, these runs were carried out concurrently on the Origin 2000 using multiple processors, ranging from 8 to 16, for each crash (RADIOSS) analysis. Another figure shows the wall time required for a single RADIOSS analysis using a varying number of processors, and provides a comparison of two different common data placement procedures within the allotted memory for each analysis. The initial design is an infeasible design with NVH discipline Static Torsion constraint violations of over 10%. The final optimized design is a feasible design with a weight reduction of 15 kg compared to the initial design.
This work demonstrates how advanced methodology for optimization combined with the technology of concurrent processing enables applications that until now were out of reach because of very long time-to-solution.
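As an illustration of the response-surface surrogate idea described above, here is a minimal sketch under stated assumptions: random numbers stand in for the RADIOSS crash outputs, the design variables are already normalized, and a first-order (linear) surface is fitted by ordinary least squares; it is not the paper's implementation.

```python
import numpy as np

def fit_linear_response_surface(X, y):
    """Fit y ~ b0 + sum_i b_i * x_i by least squares (a first-order response surface).
    X: (n_points, n_vars) sampled crash design points; y: constraint values from the expensive code."""
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def evaluate_surface(coeffs, x):
    """Cheap surrogate evaluation of the constraint at a new design point x."""
    return coeffs[0] + np.dot(coeffs[1:], x)

# Toy example: 20 crash design variables need at least 21 sampled designs for a linear surface.
rng = np.random.default_rng(1)
n_vars, n_points = 20, 25
X = rng.uniform(-1.0, 1.0, size=(n_points, n_vars))        # normalized design variables
true_w = rng.normal(size=n_vars)
y = 2.0 + X @ true_w + 0.01 * rng.normal(size=n_points)    # stand-in for crash-code outputs

coeffs = fit_linear_response_surface(X, y)
print(evaluate_surface(coeffs, np.zeros(n_vars)))           # surrogate prediction at the baseline design
```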
Branching-ratio approximation for the self-exciting Hawkes process
NASA Astrophysics Data System (ADS)
Hardiman, Stephen J.; Bouchaud, Jean-Philippe
2014-12-01
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
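A minimal sketch of a window-count estimator of this kind is given below. It assumes the commonly quoted large-window relation for a stationary Hawkes process, Var[N]/E[N] ≈ 1/(1-n)², so that n ≈ 1 - sqrt(E[N]/Var[N]); the paper's exact estimator may differ in details such as finite-window corrections.

```python
import numpy as np

def branching_ratio_estimate(event_times, window):
    """Estimate the Hawkes branching ratio from event counts in non-overlapping windows,
    using the large-window relation Var[N]/E[N] ~ 1/(1-n)^2, i.e. n ~ 1 - sqrt(E[N]/Var[N])."""
    t = np.asarray(event_times)
    edges = np.arange(t.min(), t.max(), window)
    counts, _ = np.histogram(t, bins=edges)
    mean, var = counts.mean(), counts.var(ddof=1)
    return 1.0 - np.sqrt(mean / var)

# Sanity check with a plain Poisson process (no self-excitation): the estimate should be near 0.
rng = np.random.default_rng(0)
poisson_times = np.cumsum(rng.exponential(scale=1.0, size=100_000))
print(branching_ratio_estimate(poisson_times, window=500.0))
```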
NASA Technical Reports Server (NTRS)
Egelkrout, D. W.; Horne, W. E.
1980-01-01
Electrostatic bonding (ESB) of thin (3 mil) Corning 7070 cover glasses to Ta2O5 AR-coated thin (2 mil) silicon wafers and solar cells is investigated. An experimental program was conducted to establish the effects of variations in pressure, voltage, temperature, time, Ta2O5 thickness, and various prebond glass treatments. Flat wafers without contact grids were used to study the basic effects for bonding to semiconductor surfaces typical of solar cells. Solar cells with three different grid patterns were used to determine additional requirements caused by the raised metallic contacts.
Papadimitropoulos, Adam; Rovithakis, George A; Parisini, Thomas
2007-07-01
In this paper, the problem of fault detection in mechanical systems performing linear motion under the action of friction phenomena is addressed. The friction effects are modeled through the dynamic LuGre model. The proposed architecture is built upon an online neural network (NN) approximator, which requires only the system's position and velocity. The friction internal state is not assumed to be available for measurement. The neural fault detection methodology is analyzed with respect to its robustness and sensitivity properties. Rigorous fault detectability conditions and upper bounds for the detection time are also derived. Extensive simulation results showing the effectiveness of the proposed methodology are provided, including a real case study on an industrial actuator.
ERIC Educational Resources Information Center
Odic, Darko
2018-01-01
Young children can quickly and intuitively represent the number of objects in a visual scene through the Approximate Number System (ANS). The precision of the ANS--indexed as the most difficult ratio of two numbers that children can reliably discriminate--is well known to improve with development: whereas infants require relatively large ratios to…
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1999-01-01
The technical details are summarized below: Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that an incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing.
Wigner phase space distribution via classical adiabatic switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801
2015-09-21
Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if the perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
Berendes, David M; Sumner, Trent A; Brown, Joe M
2017-03-07
Although global access to sanitation is increasing, safe management of fecal waste is a rapidly growing challenge in low- and middle-income countries (LMICs). The goal of this study was to evaluate the current need for fecal sludge management (FSM) in LMICs by region, urban/rural status, and wealth. Recent Demographic and Health Survey data from 58 countries (847 685 surveys) were used to classify households by sanitation facility (facilities needing FSM, sewered facilities, ecological sanitation/other, or no facilities). Onsite piped water infrastructure was quantified to approximate need for wastewater management and downstream treatment. Over all surveyed nations, 63% of households used facilities requiring FSM, totaling approximately 1.8 billion people. Rural areas had similar proportions of toilets requiring FSM as urban areas. FSM needs scaled inversely with wealth: in the poorest quintile, households' sanitation facilities were almost 170 times more likely to require FSM (vs sewerage) than in the richest quintile. About one out of five households needing FSM had onsite piped water infrastructure, indicating domestic or reticulated wastewater infrastructure may be required if lacking for safe management of aqueous waste streams. FSM strategies must be included in future sanitation investment to achieve safe management of fecal wastes and protect public health.
Catling, David C; Glein, Christopher R; Zahnle, Kevin J; McKay, Christopher P
2005-06-01
Life is constructed from a limited toolkit: the Periodic Table. The reduction of oxygen provides the largest free energy release per electron transfer, except for the reduction of fluorine and chlorine. However, the bonding of O2 ensures that it is sufficiently stable to accumulate in a planetary atmosphere, whereas the more weakly bonded halogen gases are far too reactive ever to achieve significant abundance. Consequently, an atmosphere rich in O2 provides the largest feasible energy source. This universal uniqueness suggests that abundant O2 is necessary for the high-energy demands of complex life anywhere, i.e., for actively mobile organisms of approximately 10(-1)-10(0) m size scale with specialized, differentiated anatomy comparable to advanced metazoans. On Earth, aerobic metabolism provides about an order of magnitude more energy for a given intake of food than anaerobic metabolism. As a result, anaerobes do not grow beyond the complexity of uniseriate filaments of cells because of prohibitively low growth efficiencies in a food chain. The biomass cumulative number density, n, at a particular mass, m, scales as n (> m) proportional to m(-1) for aquatic aerobes, and we show that for anaerobes the predicted scaling is n proportional to m(-1.5), close to a growth-limited threshold. Even with aerobic metabolism, the partial pressure of atmospheric O2 (P(O2)) must exceed approximately 10(3) Pa to allow organisms that rely on O2 diffusion to evolve to a size of approximately 10(-3) m. P(O2) in the range of approximately 10(3)-10(4) Pa is needed to exceed the threshold of approximately 10(-2) m size for complex life with circulatory physiology. In terrestrial life, O2 also facilitates hundreds of metabolic pathways, including those that make specialized structural molecules found only in animals. The time scale to reach P(O2) of approximately 10(4) Pa, or "oxygenation time," was long on the Earth (approximately 3.9 billion years), within almost a factor of 2 of the Sun's main sequence lifetime. Consequently, we argue that the oxygenation time is likely to be a key rate-limiting step in the evolution of complex life on other habitable planets. The oxygenation time could preclude complex life on Earth-like planets orbiting short-lived stars that end their main sequence lives before planetary oxygenation takes place. Conversely, Earth-like planets orbiting long-lived stars are potentially favorable habitats for complex life.
Cheng, Kung-Shan; Yuan, Yu; Li, Zhen; Stauffer, Paul R; Maccarini, Paolo; Joines, William T; Dewhirst, Mark W; Das, Shiva K
2009-04-07
In large multi-antenna systems, adaptive controllers can aid in steering the heat focus toward the tumor. However, the large number of sources can greatly increase the steering time. Additionally, controller performance can be degraded due to changes in tissue perfusion which vary non-linearly with temperature, as well as with time and spatial position. The current work investigates whether a reduced-order controller with the assumption of piecewise constant perfusion is robust to temperature-dependent perfusion and achieves steering in a shorter time than required by a full-order controller. The reduced-order controller assumes that the optimal heating setting lies in a subspace spanned by the best heating vectors (virtual sources) of an initial, approximate, patient model. An initial, approximate, reduced-order model is iteratively updated by the controller, using feedback thermal images, until convergence of the heat focus to the tumor. Numerical tests were conducted in a patient model with a right lower leg sarcoma, heated in a 10-antenna cylindrical mini-annular phased array applicator operating at 150 MHz. A half-Gaussian model was used to simulate temperature-dependent perfusion. Simulated magnetic resonance temperature images were used as feedback at each iteration step. Robustness was validated for the controller, starting from four approximate initial models: (1) a 'standard' constant perfusion lower leg model ('standard' implies a model that exactly models the patient with the exception that perfusion is considered constant, i.e., not temperature dependent), (2) a model with electrical and thermal tissue properties varied from 50% higher to 50% lower than the standard model, (3) a simplified constant perfusion pure-muscle lower leg model with +/-50% deviated properties, and (4) a standard model with the tumor position in the leg shifted by 1.5 cm. Convergence to the desired focus of heating in the tumor was achieved for all four simulated models. The controller accomplished satisfactory therapeutic outcomes: approximately 80% of the tumor was heated to temperatures >43 degrees C and approximately 93% was maintained at temperatures <41 degrees C. Compared to the controller without model reduction, an approximately 9-25-fold reduction in convergence time was accomplished using approximately 2-3 orthonormal virtual sources. In the situations tested, the controller was robust to the presence of temperature-dependent perfusion. The results of this work can help to lay the foundation for real-time thermal control of multi-antenna hyperthermia systems in clinical situations where perfusion can change rapidly with temperature.
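The "virtual source" reduction lends itself to a short illustration. The sketch below is only a schematic under stated assumptions: a random matrix stands in for the approximate patient model's antenna-to-temperature response, and the dominant right singular vectors are taken as the reduced set of excitations the controller searches over; the actual iterative controller, model updating, and MR-thermometry feedback of the paper are not reproduced.

```python
import numpy as np

# Hypothetical approximate model: rows = voxel responses, columns = antenna settings.
rng = np.random.default_rng(0)
n_voxels, n_antennas = 5000, 10
H = rng.normal(size=(n_voxels, n_antennas))      # stand-in for the initial patient model

# "Virtual sources": the few right singular vectors that explain most of the heating.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 3                                            # reduced order (2-3 sources in the abstract)
virtual_sources = Vt[:k]                         # each row is one virtual antenna setting

# Any candidate excitation is restricted to the span of the virtual sources,
# so the controller searches over k coefficients instead of n_antennas settings.
coeffs = np.array([1.0, -0.5, 0.2])
excitation = coeffs @ virtual_sources
print(excitation.shape)                          # (10,): full antenna setting from 3 coefficients
```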
Cdc7 is required throughout the yeast S phase to activate replication origins.
Donaldson, A D; Fangman, W L; Brewer, B J
1998-02-15
The long-standing conclusion that the Cdc7 kinase of Saccharomyces cerevisiae is required only to trigger S phase has been challenged by recent data that suggests it acts directly on individual replication origins. We tested the possibility that early- and late-activated origins have different requirements for Cdc7 activity. Cells carrying a cdc7(ts) allele were first arrested in G1 at the cdc7 block by incubation at 37 degrees C, and then were allowed to enter S phase by brief incubation at 23 degrees C. During the S phase, after return to 37 degrees C, early-firing replication origins were activated, but late origins failed to fire. Similarly, a plasmid with a late-activated origin was defective in replication. As a consequence of the origin activation defect, duplication of chromosomal sequences that are normally replicated from late origins was greatly delayed. Early-replicating regions of the genome duplicated at approximately their normal time. The requirements of early and late origins for Cdc7 appear to be temporally rather than quantitatively different, as reducing overall levels of Cdc7 by growth at semi-permissive temperature reduced activation at early and late origins approximately equally. Our results show that Cdc7 activates early and late origins separately, with late origins requiring the activity later in S phase to permit replication initiation.
Multi-level methods and approximating distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
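For readers unfamiliar with the baseline referred to above, here is a minimal sketch of Gillespie's direct method for a toy birth-death system; the multi-level machinery of the paper is not shown, and the rate constants are illustrative.

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Exact (direct-method) simulation of a birth-death process:
    0 -> X at rate k_birth, X -> 0 at rate k_death * x."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x       # propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)      # time to the next reaction
        if rng.random() * a0 < a1:          # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

rng = np.random.default_rng(0)
_, states = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=200.0, rng=rng)
print(states[-1])   # fluctuates around the stationary mean k_birth / k_death = 100
```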
Efficient implementation of neural network deinterlacing
NASA Astrophysics Data System (ADS)
Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee
2009-02-01
Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.
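The core trick, replacing the sigmoid with a low-order polynomial over the input range actually seen at inference time, can be illustrated as follows; the fit range and polynomial degree are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a low-order polynomial to the sigmoid over the range seen by the network.
x = np.linspace(-6.0, 6.0, 2001)
coeffs = np.polyfit(x, sigmoid(x), deg=5)          # degree is an illustrative choice
poly_sigmoid = np.poly1d(coeffs)

def poly_activation(x):
    """Polynomial stand-in for the sigmoid, clipped to stay in [0, 1] outside the fit range."""
    return np.clip(poly_sigmoid(np.clip(x, -6.0, 6.0)), 0.0, 1.0)

test = np.linspace(-6.0, 6.0, 101)
print(np.max(np.abs(poly_activation(test) - sigmoid(test))))   # worst-case approximation error
```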
Inertial energy storage for advanced space station applications
NASA Technical Reports Server (NTRS)
Van Tassel, K. E.; Simon, W. E.
1985-01-01
Because the NASA Space Station will spend approximately one-third of its orbital time in the earth's shadow, depriving it of solar energy and requiring an energy storage system to meet system demands, attention has been given to flywheel energy storage systems. These systems promise high mechanical efficiency, long life, light weight, flexible design, and easily monitored depth of discharge. An assessment is presently made of three critical technology areas: rotor materials, magnetic suspension bearings, and motor-generators for energy conversion. Conclusions are presented regarding the viability of inertial energy storage systems and of problem areas requiring further technology development efforts.
Multi-trip vehicle routing and scheduling problem with time window in real life
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Chiew, Kang-Leng; Sze, Jeeu-Fong
2012-09-01
This paper studies a manpower scheduling problem with multiple maintenance operations and vehicle routing considerations. Service teams located at a common service centre are required to travel to different customer sites. All customers must be served within given time windows, which are known in advance. The scheduling process must take into consideration complex constraints such as a meal break during the team's shift, multiple travelling trips, synchronisation of service teams and working shifts. The main objective of this study is to develop a heuristic that can generate high-quality solutions in a short time for large problem instances. A Two-stage Scheduling Heuristic is developed for different variants of the problem. Empirical results show that the proposed solution performs effectively and efficiently. In addition, our proposed approximation algorithm is very flexible and can be easily adapted to different scheduling environments and operational requirements.
NASA Astrophysics Data System (ADS)
Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian
2018-03-01
We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin²θ ≲ 10⁻⁶, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin²θ ≈ 5 × 10⁻⁹ at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.
He, Pingan; Jagannathan, S
2007-04-01
A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as the saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the "strategic" utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and with novel weight updates, the uniformly ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate one nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or random. Simulation results justify the theoretical analysis.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Near Earth Asteroid Human Mission Possibilities Using Nuclear Thermal Rocket (NTR) Propulsion
NASA Technical Reports Server (NTRS)
Borowski, Stanley; McCurdy, David R.; Packard, Thomas W.
2012-01-01
The NTR is a proven technology that generates high thrust and has a specific impulse (Isp approximately 900 s) twice that of today's best chemical rockets. During the Rover and NERVA (Nuclear Engine for Rocket Vehicle Applications) programs, twenty rocket reactors were designed, built and ground tested. These tests demonstrated: (1) a wide range of thrust; (2) high temperature carbide-based nuclear fuel; (3) sustained engine operation; (4) accumulated lifetime; and (5) restart capability - all the requirements needed for a human mission to Mars. Ceramic metal fuel was also evaluated as a backup option. In NASA's recent Mars Design Reference Architecture (DRA) 5.0 study, the NTR was selected as the preferred propulsion option because of its proven technology, higher performance, lower launch mass, versatile vehicle design, simple assembly, and growth potential. In contrast to other advanced propulsion options, NTP requires no large technology scale-ups. In fact, the smallest engine tested during the Rover program - the 25 klbf 'Pewee' engine - is sufficient for a human Mars mission when used in a clustered engine configuration. The 'Copernicus' crewed NTR Mars transfer vehicle design developed for DRA 5.0 has significant capability that can enable reusable '1-year' round trip human missions to candidate near Earth asteroids (NEAs) like 1991 JW in 2027, or 2000 SG344 and Apophis in 2028. A robotic precursor mission to 2000 SG344 in late 2023 could provide an attractive Flight Technology Demonstration of a small NTR engine that is scalable to the 25 klbf-class engine used for human missions 5 years later. In addition to the detailed scientific data gathered from on-site inspection, human NEA missions would also provide a valuable 'check out' function for key elements of the NTR transfer vehicle (its propulsion module, TransHab and life support systems, etc.) in a 'deep space' environment prior to undertaking the longer duration Mars orbital and landing missions that would follow. The initial mass in low Earth orbit required for a mission to Apophis is approximately 323 t, consisting of the NTR propulsion module (approximately 138 t), the integrated saddle truss and LH2 drop tank assembly (approximately 123 t), and the 6-crew payload element (approximately 62 t). The latter includes a multi-mission Space Excursion Vehicle (MMSEV) used for close-up examination and sample gathering. The total burn time and required restarts on the three 25 klbf 'Pewee-class' engines operating at Isp approximately 906 s are approximately 76.2 minutes and 4, respectively, well below the 2 hours and 27 restarts demonstrated on the NERVA eXperimental Engine, the NRX-XE. The paper examines the benefits, requirements and characteristics of using NTP for the above NEA missions. The impacts on vehicle design of HLV payload volume and lift capability, crew size, and reusability are also quantified.
Simple algorithms for digital pulse-shape discrimination with liquid scintillation detectors
NASA Astrophysics Data System (ADS)
Alharbi, T.
2015-01-01
The development of compact, battery-powered digital liquid scintillation neutron detection systems for field applications requires digital pulse processing (DPP) algorithms with minimum computational overhead. To meet this demand, two DPP algorithms for the discrimination of neutron and γ-rays with liquid scintillation detectors were developed and examined by using a NE213 liquid scintillation detector in a mixed radiation field. The first algorithm is based on the relation between the amplitude of a current pulse at the output of a photomultiplier tube and the amount of charge contained in the pulse. A figure-of-merit (FOM) value of 0.98 with 450 keVee (electron equivalent energy) energy threshold was achieved with this method when pulses were sampled at 250 MSample/s and with 8-bit resolution. Compared to the similar method of charge-comparison this method requires only a single integration window, thereby reducing the amount of computations by approximately 40%. The second approach is a digital version of the trailing-edge constant-fraction discrimination method. A FOM value of 0.84 with an energy threshold of 450 keVee was achieved with this method. In comparison with the similar method of rise-time discrimination this method requires a single time pick-off, thereby reducing the amount of computations by approximately 50%. The algorithms described in this work are useful for developing portable detection systems for applications such as homeland security, radiation dosimetry and environmental monitoring.
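The first algorithm's single-parameter statistic, the ratio of pulse amplitude to integrated charge, together with a Gaussian figure-of-merit, can be illustrated as follows; the pulse shapes, decay constants, and noise level are invented for the toy example and do not correspond to the NE213 measurements.

```python
import numpy as np

def amplitude_to_charge(pulse):
    """Single-parameter PSD statistic: peak amplitude divided by the total pulse integral."""
    pulse = np.asarray(pulse, dtype=float)
    return pulse.max() / pulse.sum()

def figure_of_merit(gamma_vals, neutron_vals):
    """FOM = peak separation / (FWHM_gamma + FWHM_neutron), Gaussian approximation."""
    fwhm = lambda v: 2.355 * np.std(v)
    return abs(np.mean(gamma_vals) - np.mean(neutron_vals)) / (fwhm(gamma_vals) + fwhm(neutron_vals))

# Toy pulses: fast (gamma-like) and slower-tailed (neutron-like) exponential shapes plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 200, 4.0)                 # ns, i.e. 250 MSample/s sampling
def make_pulse(tail_fraction):
    shape = (1 - tail_fraction) * np.exp(-t / 20.0) + tail_fraction * np.exp(-t / 120.0)
    return shape + rng.normal(0, 0.005, t.size)

gammas = np.array([amplitude_to_charge(make_pulse(0.10)) for _ in range(500)])
neutrons = np.array([amplitude_to_charge(make_pulse(0.35)) for _ in range(500)])
print(figure_of_merit(gammas, neutrons))
```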
NASA Astrophysics Data System (ADS)
Kruis, Nathanael J. F.
Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
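As background on the ADI scheme singled out above, here is a minimal sketch of one Peaceman-Rachford ADI step for two-dimensional heat conduction; the grid, soil-like diffusivity, and zero-temperature boundaries are illustrative, and the sketch is unrelated to Kiva's actual implementation.

```python
import numpy as np

def second_difference(n):
    """1-D second-difference operator (interior nodes, zero Dirichlet boundaries)."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def adi_step(u, alpha, dt, h):
    """One Peaceman-Rachford ADI step for du/dt = alpha*(u_xx + u_yy) on an n x n grid of
    interior nodes with zero boundary temperature. Dense solves keep the sketch short;
    a production code would use tridiagonal solvers."""
    n = u.shape[0]
    r = alpha * dt / (2.0 * h * h)
    A = second_difference(n)
    I = np.eye(n)
    # half step: implicit in x (axis 0), explicit in y (axis 1)
    u_half = np.linalg.solve(I - r * A, u + r * (u @ A))
    # half step: implicit in y, explicit in x
    u_new = np.linalg.solve(I - r * A, (u_half + r * (A @ u_half)).T).T
    return u_new

# Toy run: a warm patch of soil diffusing toward cold boundaries.
n, h, alpha, dt = 40, 0.05, 1e-6, 3600.0        # 2 m domain, soil-like diffusivity, 1 h steps
u = np.zeros((n, n))
u[15:25, 15:25] = 10.0                          # initial 10 K temperature anomaly
for _ in range(24):                             # simulate one day
    u = adi_step(u, alpha, dt, h)
print(round(u.max(), 3))
```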
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, that are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made especially for the highly branched molecules to reduce drastically the dimensionality and correspondingly the amount of the tabulated data that is needed to be stored. Despite these approximations, the dependencies between the various geometrical variables can be still well considered, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
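The tabulation idea can be sketched compactly. The example below is a generic illustration, not the authors' code: it assumes a harmonic bending potential in reduced units, tabulates the normalized density sin(θ)·exp(-U(θ)/T) on a grid, and draws trials by inverse-CDF interpolation so that essentially every generated angle would be accepted.

```python
import numpy as np

def build_inverse_cdf_table(k_theta, theta0, temperature, n_grid=2000):
    """Tabulate the bending-angle density p(theta) ~ sin(theta)*exp(-U(theta)/T) for a
    harmonic bending potential and return the grid plus its cumulative distribution."""
    theta = np.linspace(1e-4, np.pi - 1e-4, n_grid)
    u = 0.5 * k_theta * (theta - theta0) ** 2      # bending energy (reduced units)
    w = np.sin(theta) * np.exp(-u / temperature)
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    return theta, cdf

def sample_bending_angles(theta, cdf, n_samples, rng):
    """Draw angles that follow the tabulated density via inverse-CDF lookup,
    so essentially every trial generated this way is accepted."""
    return np.interp(rng.random(n_samples), cdf, theta)

rng = np.random.default_rng(0)
theta_grid, cdf = build_inverse_cdf_table(k_theta=500.0, theta0=np.deg2rad(114.0), temperature=1.0)
samples = sample_bending_angles(theta_grid, cdf, 10_000, rng)
print(np.rad2deg(samples.mean()))   # close to the assumed equilibrium angle of ~114 degrees
```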
MEANS: python package for Moment Expansion Approximation, iNference and Simulation
Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C.; Kirk, Paul D. W.; Stumpf, Michael P. H.
2016-01-01
Motivation: Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system’s moments and apply a closure ansatz to obtain a closed set of differential equations; that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. Results: We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. Availability and implementation: https://github.com/theosysbio/means Contacts: m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153663
MEANS: python package for Moment Expansion Approximation, iNference and Simulation.
Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C; Kirk, Paul D W; Stumpf, Michael P H
2016-09-15
Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system's moments and apply a closure ansatz to obtain a closed set of differential equations; that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. https://github.com/theosysbio/means m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems
NASA Astrophysics Data System (ADS)
Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain
2016-08-01
In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are approximated by one reflected and one transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the GBs that depend either on a paraxial or far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case for which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.
Rapid phenotypic antimicrobial susceptibility testing using nanoliter arrays.
Avesar, Jonathan; Rosenfeld, Dekel; Truman-Rosentsvit, Marianna; Ben-Arye, Tom; Geffen, Yuval; Bercovici, Moran; Levenberg, Shulamit
2017-07-18
Antibiotic resistance is a major global health concern that requires action across all sectors of society. In particular, to allow conservative and effective use of antibiotics clinical settings require better diagnostic tools that provide rapid determination of antimicrobial susceptibility. We present a method for rapid and scalable antimicrobial susceptibility testing using stationary nanoliter droplet arrays that is capable of delivering results in approximately half the time of conventional methods, allowing its results to be used the same working day. In addition, we present an algorithm for automated data analysis and a multiplexing system promoting practicality and translatability for clinical settings. We test the efficacy of our approach on numerous clinical isolates and demonstrate a 2-d reduction in diagnostic time when testing bacteria isolated directly from urine samples.
A simplified Integer Cosine Transform and its application in image compression
NASA Technical Reports Server (NTRS)
Costa, M.; Tong, K.
1994-01-01
A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
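As a toy illustration of the shift-based quantization described above (the coefficient values and the combined factor below are made up, not the actual ICT tables), replacing a divisor with its nearest power of two turns the per-coefficient integer division into a variable right shift; the small bias this introduces is what the decoder compensates for with floating-point arithmetic.

```python
import math

def quantize_exact(coeff, divisor):
    return coeff // divisor                    # one integer division per coefficient

def quantize_shifted(coeff, divisor):
    shift = round(math.log2(divisor))          # divisor approximated by 2**shift
    return coeff >> shift                      # variable right shift, cheap in hardware

coeffs = [1234, -512, 87, 3001]                # hypothetical transform coefficients
divisor = 33                                   # hypothetical combined norm/quant factor (~2**5)
print([quantize_exact(c, divisor) for c in coeffs])    # [37, -16, 2, 90]
print([quantize_shifted(c, divisor) for c in coeffs])  # [38, -16, 2, 93]
```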
Zercher, Florian; Schmidt, Peter; Cieciuch, Jan; Davidov, Eldad
2015-01-01
Over the last decades, large international datasets such as the European Social Survey (ESS), the European Value Study (EVS) and the World Value Survey (WVS) have been collected to compare value means over multiple time points and across many countries. Yet analyzing comparative survey data requires the fulfillment of specific assumptions, i.e., that these values are comparable over time and across countries. Given the large number of groups that can be compared in repeated cross-national datasets, establishing measurement invariance has, however, been considered unrealistic. Indeed, studies that did assess it often failed to establish higher levels of invariance such as scalar invariance. In this paper we first introduce the newly developed approximate approach based on Bayesian structural equation modeling (BSEM) to assess cross-group invariance over countries and time points and contrast the findings with the results from the traditional exact measurement invariance test. BSEM examines whether measurement parameters are approximately (rather than exactly) invariant. We apply BSEM to a subset of items measuring the universalism value from the Portrait Values Questionnaire (PVQ) in the ESS. The invariance of this value is tested simultaneously across 15 ESS countries over six ESS rounds with 173,071 respondents and 90 groups in total. Whereas the use of the traditional approach only legitimates the comparison of latent means of 37 groups, the Bayesian procedure allows the latent mean comparison of 73 groups. Thus, our empirical application demonstrates for the first time the BSEM test procedure on a particularly large set of groups. PMID:26089811
Child psychiatry: what are we teaching medical students?
Dingle, Arden D
2010-01-01
The author describes child and adolescent psychiatry (CAP) undergraduate teaching in American and Canadian medical schools. A survey asking for information on CAP teaching, student interest in CAP, and opinions about the importance of CAP was sent to the medical student psychiatry director at 142 accredited medical schools in the United States and Canada. The results were summarized, and various factors considered relevant to student interest in CAP were analyzed statistically. Approximately 81% of the schools returned surveys. Most teach required CAP didactics in the preclinical and clinical years. Almost 63% of the schools have CAP clinical rotations; most are not required. Twenty-three percent of all medical students have a clinical CAP experience during their psychiatry clerkship. The majority of schools have CAP electives, and approximately 4.8% of students participate. Child and adolescent psychiatry leadership, early exposure to CAP, and CAP clinical experiences were related to student interest in CAP, but these relationships were not statistically significant. The time allotted to teaching CAP in the undergraduate medical curriculum is minimal, consistent with previous survey results. Most schools require didactic instruction averaging about 12 hours and offer elective clinical opportunities. The survey findings should help direct future planning to improve CAP medical student education.
Exploration of warm-up period in conceptual hydrological modelling
NASA Astrophysics Data System (ADS)
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2018-01-01
One of the important issues in hydrological modelling is to specify the initial conditions of the catchment, since they have a major impact on the response of the model. Although this issue should be a high priority among modelers, it has remained unaddressed by the community. The typically suggested warm-up period for hydrological models ranges from one to several years, which may lead to an underuse of data. The model warm-up is an adjustment process for the model to reach an 'optimal' state, where internal stores (e.g., soil moisture) move from the estimated initial condition to an 'optimal' state. This study explores the warm-up period of two conceptual hydrological models, HYMOD and IHACRES, in a southwestern England catchment. A series of hydrologic simulations were performed for different initial soil moisture conditions and different rainfall amounts to evaluate the sensitivity of the warm-up period. Evaluation of the results indicates that both initial wetness and rainfall amount affect the time required for model warm-up, although it depends on the structure of the hydrological model. Approximately one and a half months are required for the model to warm up in HYMOD for our study catchment and climatic conditions. In addition, it requires less time to warm up under wetter initial conditions (i.e., saturated initial conditions). On the other hand, approximately six months is required for warm-up in IHACRES, and the wet or dry initial conditions have little effect on the warm-up period. Instead, initial values that are close to the optimal value result in less warm-up time. These findings have implications for hydrologic model development, specifically in determining soil moisture initial conditions and warm-up periods to make full use of the available data, which is very important for catchments with short hydrological records.
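A hedged sketch of how such a warm-up length can be probed (a single linear soil-moisture store under synthetic forcing, not HYMOD or IHACRES): run the same model from a dry and a wet initial state and record the first time step at which the two trajectories agree within a tolerance, a crude proxy for the required warm-up period.

```python
import numpy as np

def linear_store(s0, rain, k=0.05):
    s, out = s0, []
    for p in rain:
        s = s + p - k * s          # storage gains rainfall, loses a fraction k per day
        out.append(s)
    return np.array(out)

rng = np.random.default_rng(0)
rain = rng.exponential(2.0, size=730)      # two years of synthetic daily rainfall (mm)
dry = linear_store(0.0, rain)              # dry initial condition
wet = linear_store(200.0, rain)            # saturated initial condition
warm_up_days = int(np.argmax(np.abs(dry - wet) < 0.1))
print(f"trajectories converge after ~{warm_up_days} days")
```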
Training Requirements for Visualizing Time and Space at Company and Platoon Level
2007-09-01
vignettes. Participants were given approximately 20 minutes to develop a concept of operations, using whiteboards or butcher paper as necessary (see Figure...was conducted based on workshops with active and retired military personnel (n = 50). The CTA used a representative scenario and supporting...throughout this research effort including design of the scenario and vignettes used in the workshops, participation in and facilitation of the workshops
Pulse power switch development
NASA Astrophysics Data System (ADS)
Harvey, R.; Gallagher, H.; Hansen, S.
1980-01-01
The objective of this study program has been to define an optimum technical approach to the longer range goal of achieving practical high-repetition-rate, high-power spark gap switches. Requirements and possible means of extending the state of the art of crossed-field closing switches, vacuum spark gaps, and pressurized spark gaps are presented with emphasis on reliable, efficient and compact devices operable in burst mode at 250-300 kV, 40-60 kA, and approximately 1 kHz with approximately 50 nsec pulses rising in approximately 3 nsec. Models of these devices are discussed which are based upon published and generated design data and on underlying physical principles. Based upon its relative advantages, limitations and tradeoffs, we conclude that the Hughes Crossatron switch is the nearest term approach to reach the switch goal levels. Theoretical, experimental, and computer simulation models of the plasma show a collective ion acceleration mechanism to be active which is predicted to result in current rise times approaching 10 nsec. A preliminary design concept is presented. For faster rise times we have shown a vacuum surface flashover switch to be an interesting candidate. This device is limited by trigger instabilities and will require further basic development. The problem areas relevant to high pressure spark gaps are reviewed.
Numerical investigations in three-dimensional internal flows
NASA Astrophysics Data System (ADS)
Rose, William C.
1988-08-01
An investigation into the use of computational fluid dynamics (CFD) was performed to examine the expected heat transfer rates that will occur within the NASA-Ames 100 megawatt arc heater nozzle. This nozzle was tentatively designed and identified to provide research for a directly connected combustion experiment specifically related to the National Aerospace Plane Program (NASP) aircraft, and is expected to simulate the flow field entering the combustor section. It was found that extremely fine grids, that is, very small mesh spacing near the wall, are required to accurately model the heat transfer process and, in fact, must contain a point within the laminar sublayer if results are to be taken directly from a numerical simulation code. In the present study, an alternative to this very fine mesh and its attendant increase in computational time was invoked, based on a wall-function method. It was shown that solutions could be obtained that give accurate indications of surface heat transfer rate throughout the nozzle in approximately 1/100 of the computer time required to do the simulation directly without the use of the wall-function implementation. Finally, a maximum heating value in the throat region of the proposed slit nozzle for the 100 megawatt arc heater was shown to be approximately 6 MW per square meter.
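For readers unfamiliar with wall functions, the sketch below shows the generic law-of-the-wall relation on which such methods are typically built (conventional constants, not the specific model used in this study): the near-wall velocity is taken from an algebraic profile rather than resolved on a grid fine enough to reach the laminar sublayer.

```python
import math

# Generic wall-function illustration: viscous sublayer below y+ ~ 11, log layer
# above it; kappa ~ 0.41 and B ~ 5.0 are the conventional log-law constants.
KAPPA, B = 0.41, 5.0

def u_plus(y_plus):
    if y_plus < 11.0:                     # viscous sublayer: u+ = y+
        return y_plus
    return math.log(y_plus) / KAPPA + B   # log-law region

for yp in (1.0, 30.0, 100.0, 300.0):
    print(f"y+ = {yp:6.1f}  ->  u+ = {u_plus(yp):.2f}")
```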
76 FR 3680 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... requirements to provide customers with account information (approximately 683,969 hours) and requirements to update customer account information (approximately 777,436 hours). In addition, Rule 17a-3 contains... customers with account information, and costs for equipment and systems development. The Commission...
2010-07-23
approximately 142 ppm (0.0023 M), therefore approximately 23 mL of 0.100 M hydrochloric acid (HCl) is required per liter of seawater where Cl- is...deionized water to a total volume of 140 liters, and pH adjusted to 7.6 using hydrochloric acid (HCl); approximately 20 mL of diluted HCl (5 mL of... hydrochloric acid was required to reduce pH in a 20 mL sample of Key West seawater to 6.0. This required 4.05E-05 moles of hydrogen ions. Based on
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, Michael F.
2015-10-28
The time independent semiclassical treatment of barrier tunneling has been understood for a very long time. Several semiclassical approaches to time dependent tunneling through barriers have also been presented. These typically involve trajectories for which the position variable is a complex function of time. In this paper, a method is presented that uses only real valued trajectories, thus avoiding the complications that can arise when complex trajectories are employed. This is accomplished by expressing the time dependent wave packet as an integration over momentum. The action function in the exponent in this expression is expanded to second order in the momentum. The expansion is around the momentum, p(sub 0)*, at which the derivative of the real part of the action is zero. The resulting Gaussian integral is then taken. The stationary phase approximation requires that the derivative of the full action is zero at the expansion point, and this leads to a complex initial momentum and complex tunneling trajectories. The “pseudo-stationary phase” approximation employed in this work results in real values for the initial momentum and real valued trajectories. The transmission probabilities obtained are found to be in good agreement with exact quantum results.
Possible consequences of absence of "Jupiters" in planetary systems.
Wetherill, G W
1994-01-01
The formation of the gas giant planets Jupiter and Saturn probably required the growth of massive, approximately 15 Earth-mass cores on a time scale shorter than the approximately 10(exp 7) yr time scale for removal of nebular gas. Relatively minor variations in nebular parameters could preclude the growth of full-size gas giants even in systems in which the terrestrial planet region is similar to our own. Systems containing "failed Jupiters," resembling Uranus and Neptune in their failure to capture much nebular gas, would be expected to contain more densely populated cometary source regions. They will also eject a smaller number of comets into interstellar space. If systems of this kind were the norm, observation of hyperbolic comets would be unexpected. Monte Carlo calculations of the orbital evolution of the cometary source region of such systems (the Kuiper belt) indicate that throughout Earth history the cometary impact flux in their terrestrial planet regions would be approximately 1000 times greater than in our Solar System. It may be speculated that this could frustrate the evolution of organisms that observe and seek to understand their planetary system. For this reason, our observation of these planets in our Solar System may tell us nothing about the probability of similar gas giants occurring in other planetary systems. This situation can be corrected by observation of an unbiased sample of planetary systems.
A real-time approximate optimal guidance law for flight in a plane
NASA Technical Reports Server (NTRS)
Feeley, Timothy S.; Speyer, Jason L.
1990-01-01
A real-time guidance scheme is presented for the problem of maximizing the payload into orbit subject to the equations of motion of a rocket over a nonrotating spherical earth. The flight is constrained to a path in the equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of the problem can be separated into primary and perturbation effects by a small parameter, epsilon, which is the ratio of the atmospheric scale height to the radius of the earth. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form. The neglected perturbation terms are included in the higher-order terms of the expansion, which are determined from the solution of first-order linear partial differential equations requiring only integrations which are quadratures. The quadratures can be performed rapidly with emerging computer capability, so that real-time approximate optimization can be used to construct the launch guidance law. The application of this technique to flight in three dimensions is made apparent from the solution presented.
NASA Astrophysics Data System (ADS)
Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons
2017-06-01
At early project stages, the main CSP plant design parameters, such as turbine capacity, solar field size, and thermal storage capacity, are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing the large number of plant sensitivities required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim significantly increases the speed of energy yield calculations, by a factor of 35 or more, and automates the simulation of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and the derived LCOE calculated by the predictive models deviate by less than ±1.5% from the thermodynamic simulation in EBSILON, and the approach effectively identifies the optimal range of the main design parameters for further, more specific analysis.
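A hedged sketch of the surrogate idea (not CSPsim itself; the parameter ranges and the stand-in yield function are invented): a handful of detailed simulations is used to fit a multiple linear regression, which then predicts the yield of new configurations essentially for free.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_simulation(field_size, storage_h, turbine_mw):
    # placeholder for the slow, detailed thermodynamic yield calculation (GWh/yr)
    return 0.9 * field_size + 12.0 * storage_h + 3.0 * turbine_mw + rng.normal(0, 5)

# design-of-experiments sample: (field size proxy, storage hours, turbine MW)
X = rng.uniform([100, 4, 50], [400, 16, 200], size=(40, 3))
y = np.array([expensive_simulation(*row) for row in X])

# least-squares fit of yield ~ b0 + b1*field + b2*storage + b3*turbine
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

candidate = np.array([1.0, 250.0, 10.0, 120.0])      # one new configuration
print("predicted annual yield:", candidate @ beta)   # cheap surrogate evaluation
```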
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in their place. It is therefore desirable to be able to exploit efficient (time sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefly described in the presentation.
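A minimal sketch of a linear information filter (the standard textbook form, not the authors' TRF implementation): because the filter carries the information matrix Y = P^-1 and the information vector y = Y x, it can be initialized with Y = 0, i.e., with no prior mean or covariance at all.

```python
import numpy as np

def predict(Y, y, F, Q):
    # singular-safe time update in information form; works even when Y = 0
    Finv = np.linalg.inv(F)
    M = Finv.T @ Y @ Finv
    L = np.linalg.inv(np.eye(len(Y)) + M @ Q)
    return L @ M, L @ Finv.T @ y

def update(Y, y, H, R, z):
    # measurement update simply adds information
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

n = 2
Y, y = np.zeros((n, n)), np.zeros(n)            # no initial guess at all
F, Q = np.eye(n), 0.01 * np.eye(n)
H, R = np.eye(n), 0.1 * np.eye(n)
for z in ([1.0, 2.0], [1.1, 1.9], [0.9, 2.1]):  # hypothetical observations
    Y, y = predict(Y, y, F, Q)
    Y, y = update(Y, y, H, R, np.array(z))
print("state estimate:", np.linalg.solve(Y, y)) # recoverable once Y is invertible
```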
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high-fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy, an experimental demonstration of a computational task that takes polynomial time on the quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization, for the case of superconducting circuits, of the ideas behind the boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss a theoretical analysis of the driven evolution of the device, which results in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze the quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the Quantum Monte Carlo method. We also discuss the detailed theoretical modeling required to achieve high-fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
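As a hedged illustration of cross-entropy-style verification (a commonly used linear variant, not necessarily the exact statistic of this work), the sketch below scores samples against the ideal output probabilities: sampling an ideal Porter-Thomas-like distribution gives a score near 1, while uniformly random bitstrings give a score near 0.

```python
import numpy as np

n_qubits = 9
dim = 2 ** n_qubits
rng = np.random.default_rng(2)

# stand-in for the ideal circuit output: a Porter-Thomas-like probability vector
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p_ideal):
    # F = D * <p_ideal(x_i)> - 1 over the observed bitstrings x_i
    return len(p_ideal) * p_ideal[samples].mean() - 1.0

good = rng.choice(dim, size=5000, p=p_ideal)      # sampler following the ideal distribution
noise = rng.integers(0, dim, size=5000)           # fully depolarized "device"
print("ideal-sampler score :", round(linear_xeb(good, p_ideal), 3))   # ~1
print("uniform-noise score :", round(linear_xeb(noise, p_ideal), 3))  # ~0
```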
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D; Fasenfest, B; Rieben, R
2006-09-08
We are concerned with the solution of time-dependent electromagnetic eddy current problems using a finite element formulation on three-dimensional unstructured meshes. We allow for multiple conducting regions, and our goal is to develop an efficient computational method that does not require a computational mesh of the air/vacuum regions. This requires a sophisticated global boundary condition specifying the total fields on the conductor boundaries. We propose a Biot-Savart law based volume-to-surface boundary condition to meet this requirement. This Biot-Savart approach is demonstrated to be very accurate. In addition, this approach can be accelerated via a low-rank QR approximation of the discretized Biot-Savart law.
Brusseau, M. L.; Hatton, J.; DiGuiseppi, W.
2011-01-01
The long-term impact of source-zone remediation efforts was assessed for a large site contaminated by trichloroethene. The impact of the remediation efforts (soil vapor extraction and in-situ chemical oxidation) was assessed through analysis of plume-scale contaminant mass discharge, which was measured using a high-resolution data set obtained from 23 years of operation of a large pump-and-treat system. The initial contaminant mass discharge peaked at approximately 7 kg/d, and then declined to approximately 2 kg/d. This latter value was sustained for several years prior to the initiation of source-zone remediation efforts. The contaminant mass discharge in 2010, measured several years after completion of the two source-zone remediation actions, was approximately 0.2 kg/d, which is ten times lower than the value prior to source-zone remediation. The time-continuous contaminant mass discharge data can be used to evaluate the impact of the source-zone remediation efforts on reducing the time required to operate the pump-and-treat system, and to estimate the cost savings associated with the decreased operational period. While significant reductions have been achieved, it is evident that the remediation efforts have not completely eliminated contaminant mass discharge and associated risk. Remaining contaminant mass contributing to the current mass discharge is hypothesized to comprise poorly-accessible mass in the source zones, as well as aqueous (and sorbed) mass present in the extensive lower-permeability units located within and adjacent to the contaminant plume. The fate of these sources is an issue of critical import to the remediation of chlorinated-solvent contaminated sites, and development of methods to address these sources will be required to achieve successful long-term management of such sites and to ultimately transition them to closure. PMID:22115080
Scalable Prediction of Energy Consumption using Incremental Time Series Clustering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Noor, Muhammad Usman
2013-10-09
Time series datasets are a canonical form of high-velocity Big Data, and are often generated by pervasive sensors, such as those found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.
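A hedged sketch of the cluster-then-predict idea (the paper's incremental algorithm and affinity score are not reproduced; a plain distance threshold stands in for them): meters with similar profiles share a centroid, and the centroid serves as the forecast for each member instead of that meter's own noisy history.

```python
import numpy as np

rng = np.random.default_rng(3)
profiles = rng.gamma(2.0, 1.0, size=(200, 24))          # 200 meters x 24 hourly readings (synthetic)

def assign_incrementally(profiles, threshold=10.0):
    centroids, labels = [], []
    for p in profiles:                                   # one pass over arriving series
        if centroids:
            d = [np.linalg.norm(p - c) for c in centroids]
            j = int(np.argmin(d))
            if d[j] < threshold:                         # close enough: join and refresh cluster
                centroids[j] = 0.9 * centroids[j] + 0.1 * p
                labels.append(j)
                continue
        centroids.append(p.copy())                       # otherwise start a new cluster
        labels.append(len(centroids) - 1)
    return centroids, labels

centroids, labels = assign_incrementally(profiles)
meter = 17
forecast = centroids[labels[meter]]                      # cluster centroid as the prediction
print(f"{len(centroids)} clusters; forecast peak for meter {meter}: {forecast.max():.2f}")
```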
Binarized cross-approximate entropy in crowdsensing environment.
Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana
2017-01-01
Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources impose the use of local analysis methods, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
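A hedged sketch of the procedure as described above (binary differential coding, m-sized vectors, and Hamming distances in a cross-ApEn-style count; the exact (X)BinEn definition may differ in details such as normalization):

```python
import numpy as np

def binarize(x):
    return (np.diff(x) > 0).astype(np.uint8)            # 1 = increase, 0 = decrease

def phi(u, v, m, r):
    U = np.lib.stride_tricks.sliding_window_view(u, m)  # overlapping m-vectors
    V = np.lib.stride_tricks.sliding_window_view(v, m)
    counts = []
    for tmpl in U:
        hamming = (V != tmpl).sum(axis=1)               # Hamming distance to each v-vector
        counts.append((hamming <= r).mean())
    return np.log(np.maximum(counts, 1e-12)).mean()

def xbinen(x, y, m=2, r=0):
    u, v = binarize(x), binarize(y)
    return phi(u, v, m, r) - phi(u, v, m + 1, r)        # cross-ApEn-style difference

rng = np.random.default_rng(4)
bp = np.cumsum(rng.normal(size=500))                    # surrogate blood-pressure series
hr = bp + rng.normal(scale=0.5, size=500)               # correlated surrogate heart-rate series
print("entropy (coupled)  :", round(xbinen(bp, hr), 3))
print("entropy (shuffled) :", round(xbinen(bp, rng.permutation(hr)), 3))
```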
Manuel, Gerald; Lupták, Andrej; Corn, Robert M.
2017-01-01
A two-step templated, ribosomal biosynthesis/printing method for the fabrication of protein microarrays for surface plasmon resonance imaging (SPRI) measurements is demonstrated. In the first step, a sixteen component microarray of proteins is created in microwells by cell free on chip protein synthesis; each microwell contains both an in vitro transcription and translation (IVTT) solution and 350 femtomoles of a specific DNA template sequence that together are used to create approximately 40 picomoles of a specific hexahistidine-tagged protein. In the second step, the protein microwell array is used to contact print one or more protein microarrays onto nitrilotriacetic acid (NTA)-functionalized gold thin film SPRI chips for real-time SPRI surface bioaffinity adsorption measurements. Even though each microwell array element only contains approximately 40 picomoles of protein, the concentration is sufficiently high for the efficient bioaffinity adsorption and capture of the approximately 100 femtomoles of hexahistidine-tagged protein required to create each SPRI microarray element. As a first example, the protein biosynthesis process is verified with fluorescence imaging measurements of a microwell array containing His-tagged green fluorescent protein (GFP), yellow fluorescent protein (YFP) and mCherry (RFP), and then the fidelity of SPRI chips printed from this protein microwell array is ascertained by measuring the real-time adsorption of various antibodies specific to these three structurally related proteins. This greatly simplified two-step synthesis/printing fabrication methodology eliminates most of the handling, purification and processing steps normally required in the synthesis of multiple protein probes, and enables the rapid fabrication of SPRI protein microarrays from DNA templates for the study of protein-protein bioaffinity interactions. PMID:28706572
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Adaptive hybrid simulations for multiscale stochastic reaction networks.
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
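A hedged sketch of the partitioning step only (the threshold rule below is illustrative; the adaptive criteria in the paper are more elaborate): species with high copy numbers are treated as continuous, and a reaction is moved to the continuous part only if every species it changes is continuous.

```python
def partition(copy_numbers, stoichiometry, threshold=100):
    # species above the copy-number threshold are candidates for continuous treatment
    continuous_species = {s for s, n in copy_numbers.items() if n > threshold}
    continuous_reactions = [
        r for r, changed in stoichiometry.items()
        if all(s in continuous_species for s in changed)
    ]
    discrete_reactions = [r for r in stoichiometry if r not in continuous_reactions]
    return continuous_species, continuous_reactions, discrete_reactions

copy_numbers = {"mRNA": 8, "Protein": 4500, "Dimer": 1200}     # hypothetical current state
stoichiometry = {                                              # species changed by each reaction
    "transcription": ["mRNA"],
    "translation": ["Protein"],          # mRNA acts only as a catalyst here
    "dimerization": ["Protein", "Dimer"],
}
print(partition(copy_numbers, stoichiometry))
```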
Missile sizing for ascent-phase intercept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hull, D.G.; Salguero, D.E.
1994-11-01
A computer code has been developed to determine the size of a ground-launched, multistage missile which can intercept a theater ballistic missile before it leaves the atmosphere. Typical final conditions for the interceptor are 450 km range, 60 km altitude, and 80 sec flight time. Given the payload mass (35 kg), which includes a kinetic kill vehicle, and achievable values for the stage mass fractions (0.85), the stage specific impulses (290 sec), and the vehicle density (60 lb/ft(exp 3)), the launch mass is minimized with respect to the stage payload mass ratios, the stage burn times, and the missile angle of attack history subject to limits on the angle of attack (10 deg), the dynamic pressure (60,000 psf), and the maneuver load (200,000 psf deg). For a conical body, the minimum launch mass is approximately 1900 kg. The missile has three stages, and the payload coasts for 57 sec. A trade study has been performed by varying the flight time, the range, and the dynamic pressure limits. With the results of a sizing study for a 70 lb payload and q(sub max) = 35,000 psf, a more detailed design has been carried out to determine heat shield mass, tabular aerodynamics, and altitude dependent thrust. The resulting missile has approximately 100 km less range than the sizing program predicted, primarily because of the additional mass required for heat protection. On the other hand, launching the same missile from an aircraft increases its range by approximately 100 km. Sizing the interceptor for air launch with the same final conditions as the ground-launched missile reduces its launch mass to approximately 1000 kg.
Neural Network and Regression Soft Model Extended for PAX-300 Aircraft Engine
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2002-01-01
In fiscal year 2001, the neural network and regression capabilities of NASA Glenn Research Center's COMETBOARDS design optimization testbed were extended to generate approximate models for the PAX-300 aircraft engine. The analytical model of the engine is defined through nine variables: the fan efficiency factor, the low pressure of the compressor, the high pressure of the compressor, the high pressure of the turbine, the low pressure of the turbine, the operating pressure, and three critical temperatures (T(sub 4), T(sub vane), and T(sub metal)). Numerical Propulsion System Simulation (NPSS) calculations of the thrust-specific fuel consumption (TSFC) as a function of these variables can become time consuming, and numerical instabilities can occur during these design calculations. "Soft" models can alleviate both deficiencies. These approximate models are generated from a set of high-fidelity input-output pairs obtained from the NPSS code and a design-of-experiments strategy. A neural network and a regression model with 45 weight factors were trained for the input/output pairs. Then, the trained models were validated through a comparison with the original NPSS code. Comparisons of TSFC versus the operating pressure and of TSFC versus the three temperatures (T(sub 4), T(sub vane), and T(sub metal)) are depicted in the figures. The overall performance was satisfactory for both the regression and the neural network model. The regression model required fewer calculations than the neural network model, and it produced marginally superior results. Training the approximate methods is time consuming. Once trained, the approximate methods generated the solution with only a trivial computational effort, reducing the solution time from hours to less than a minute.
Factors associated with breastfeeding initiation time in a baby-friendly hospital in Istanbul.
İnal, Sevil; Aydin, Yasemin; Canbulat, Nejla
2016-11-01
To investigate perinatal factors that affect breastfeeding of newborns delivered at a baby-friendly public hospital in Turkey, including the time of the first physical examination by a pediatrician, the first union with their mothers, and the first breastfeeding time after delivery. The research was conducted from May 2nd through June 30th, 2011, in a baby-friendly public hospital in Istanbul. The sample consisted of 194 mothers and their full-term newborns. The data were collected via an observation form developed by the researchers. In analyzing the data, the average, standard deviation, minimum, maximum values, Chi-square, and percentages were used. The results revealed that the first physical examinations of the newborns were performed approximately 53.02 ± 39 min (range, 1-180 min) after birth. The newborns were given to their mothers approximately 69.75 ± 41 min (range, 3-190 min) after birth. Consequently, the first initiated breastfeeding took place approximately 78.58 ± 44 min following birth, and active sucking was initiated after approximately 85.90 ± 54 min. A large percentage of the newborns (64.4%) were not examined by a specialist pediatrician within half an hour of birth, and 74.7% were not united with their mothers within the same period. Also, the newborns who initiated breastfeeding within the first half hour had significantly earlier success with active sucking and required significantly less assistance to achieve successful breastfeeding. The newborns in our study met with their mothers late in the birth ward because examinations of the newborns were delayed. The newborns began initial sucking later, and this chain reaction negatively impacted the breastfeeding success of the newborns. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Casadei, D.
2014-10-01
The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior a very large number of times.
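A hedged numerical sketch of the comparison described above (assuming the limiting reference prior for perfectly known background is the Jeffreys form proportional to 1/sqrt(s + b)): for modest counts, the flat-prior posterior is visibly shifted relative to the reference posterior.

```python
import numpy as np

n, b = 5, 1.5                                     # hypothetical counts and known background
ds = 0.005
s = np.arange(0.0, 25.0, ds)
likelihood = (s + b) ** n * np.exp(-(s + b))      # Poisson likelihood up to a constant

def normalize(density):
    return density / (density.sum() * ds)

post_ref = normalize(likelihood / np.sqrt(s + b))     # limiting reference (Jeffreys) prior
post_flat = normalize(likelihood)                     # flat prior

mean_ref = (s * post_ref).sum() * ds
mean_flat = (s * post_flat).sum() * ds
print(f"posterior mean of the signal: reference {mean_ref:.2f}, flat {mean_flat:.2f}")
```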
Precision of Sensitivity in the Design Optimization of Indeterminate Structures
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.
2006-01-01
Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.
NASA Astrophysics Data System (ADS)
Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina
2018-03-01
This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the determination of resource models. The elements of those models are generalized network models, the number of which can be too large to allow analysis of each element. The problem is therefore to approximate the original resource model by a manageable number of simpler time models.
Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing
Bates, John B.
2003-04-29
Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of at most approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.
Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing
Bates, John B.
2002-01-01
Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of at most approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.
Fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing
Bates, John B.
2003-05-13
Systems and methods are described for fabrication of highly textured lithium cobalt oxide films by rapid thermal annealing. A method of forming a lithium cobalt oxide film includes depositing a film of lithium cobalt oxide on a substrate; rapidly heating the film of lithium cobalt oxide to a target temperature; and maintaining the film of lithium cobalt oxide at the target temperature for a target annealing time of at most approximately 60 minutes. The systems and methods provide advantages because they require less time to implement and are therefore less costly than previous techniques.
Real-time simulation of an F110/STOVL turbofan engine
NASA Technical Reports Server (NTRS)
Drummond, Colin K.; Ouzts, Peter J.
1989-01-01
A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach for component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. Simulation validation was performed by comparing output from the ADSIM simulation with the output of a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.
Point-ahead limitation on reciprocity tracking. [in earth-space optical link
NASA Technical Reports Server (NTRS)
Shapiro, J. H.
1975-01-01
The average power received at a spacecraft from a reciprocity-tracking transmitter is shown to be the free-space diffraction-limited result times a gain-reduction factor that is due to the point-ahead requirement. For a constant-power transmitter, the gain-reduction factor is approximately equal to the appropriate spherical-wave mutual-coherence function. For a constant-average-power transmitter, an exact expression is obtained for the gain-reduction factor.
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes on the CDC 6600 computer.
Cryptographic Techniques for Privacy Preserving Identity
2011-05-13
information is often sufficient to match an individual to their pseudonym, for example, as in the case of the Netflix Prize movie rental dataset [71]. It was...shown that knowledge of only a couple approximate movie rental dates (as might be revealed by simply mentioning what one has watched recently) is...government censor may require Google or another popular blog host to reveal the login times of the top suspects, which could be correlated with the
Mode behavior in ultralarge ring lasers.
Hurst, Robert B; Dunn, Robert W; Schreiber, K Ulrich; Thirkettle, Robert J; MacDonald, Graeme K
2004-04-10
Contrary to expectations based on mode spacing, single-mode operation in very large He-Ne ring lasers may be achieved at intracavity power levels up to approximately 0.15 times the saturation intensity for the He-Ne transition. Homogeneous line broadening at a high total gas pressure of 4-6 Torr allows a single-peaked gain profile that suppresses closely spaced multiple modes. At startup, decay of initial multiple modes may take tens of seconds. The single remaining mode in each direction persists metastably as the cavity is detuned by many times the mode frequency spacing. A theoretical explanation requires the gain profile to be concave down and to satisfy an inequality related to slope and saturation at the operating frequency. Calculated metastable frequency ranges are > 150 MHz at 6 Torr and depend strongly on pressure. Examples of unusual stable mode configurations are shown, with differently numbered modes in the two directions and with multiple modes at a spacing of approximately 100 MHz.
The distribution of stars most likely to harbor intelligent life.
Whitmire, Daniel P; Matese, John J
2009-09-01
Simple heuristic models and recent numerical simulations show that the probability of habitable planet formation increases with stellar mass. We combine those results with the distribution of main-sequence stellar masses to obtain the distribution of stars most likely to possess habitable planets as a function of stellar lifetime. We then impose the self-selection condition that intelligent observers can only find themselves around a star with a lifetime greater than the time required for that observer to have evolved, T(i). This allows us to obtain the stellar timescale number distribution for a given value of T(i). Our results show that for habitable planets with a civilization that evolved at time T(i) = 4.5 Gyr the median stellar lifetime is 13 Gyr, corresponding approximately to a stellar type of G5, with two-thirds of the stars having lifetimes between 7 and 30 Gyr, corresponding approximately to spectral types G0-K5. For other values of T(i) the median stellar lifetime changes by less than 50%.
Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography.
Zhang, Xuanxuan; Liu, Fei; Zuo, Simin; Shi, Junwei; Zhang, Guanglei; Bai, Jing; Luo, Jianwen
2015-01-01
Dynamic fluorescence molecular tomography (DFMT) is a potential approach for drug delivery, tumor detection, diagnosis, and staging. The purpose of DFMT is to quantify the changes of fluorescent agents in the body, which offer important information about the underlying physiological processes. However, the conventional method requires that the fluorophore concentrations to be reconstructed remain stationary during the data collection period. Thus, it cannot offer the dynamic information of fluorophore concentration variation within the data collection period. In this paper, a method is proposed to reconstruct the fluorophore concentration variation instead of the fluorophore concentration itself through a linear approximation. The fluorophore concentration variation rate is introduced by the linear approximation as a new unknown term to be reconstructed and is used to obtain the time courses of fluorophore concentration. Simulation and phantom studies are performed to validate the proposed method. The results show that the method is able to reconstruct the fluorophore concentration variation rates and the time courses of fluorophore concentration with relative errors of less than 0.0218.
GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering
Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka
2016-01-01
Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In evaluation tests involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
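As a concrete reference point for the discrete approximation mentioned above, the sketch below computes the standard equal-probability Gamma rate categories (each represented by its median, then rescaled to unit mean); this is the widely used ASRV discretization whose accuracy the paper examines, not the authors' SSRV code.

```python
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k):
    # mean-1 Gamma: shape = alpha, scale = 1/alpha; take the median of each of
    # the k equal-probability bins, then rescale so the mean category rate is 1
    quantiles = [(2 * i + 1) / (2 * k) for i in range(k)]
    rates = gamma.ppf(quantiles, a=alpha, scale=1.0 / alpha)
    return rates * k / rates.sum()

for k in (4, 8, 16):
    print(k, "classes:", [round(r, 3) for r in discrete_gamma_rates(alpha=0.5, k=k)])
```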
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging. PMID:23227108
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SP(N)). In the XFEM scheme of the SP(N) equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging.
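A minimal sketch of the signed-distance ingredient only (the geometry is made up, and the enrichment step itself is omitted): for a circular inclusion, phi(x) = ||x - c|| - r is negative inside, positive outside, and zero on the boundary, which is what lets the interface be represented without a conforming internal mesh; enriched basis functions are then commonly built by multiplying standard shape functions by a function of |phi|.

```python
import numpy as np

def signed_distance(points, center=(0.0, 0.0), radius=1.0):
    # negative inside the inclusion, zero on its boundary, positive outside
    return np.linalg.norm(points - np.asarray(center), axis=1) - radius

pts = np.array([[0.0, 0.0], [0.5, 0.5], [2.0, 0.0]])
print(signed_distance(pts))   # approximately [-1.0, -0.29, 1.0]
```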
Complete ISOPHOT (C200) Maps of a Nearby Prototypical GMC: W3 (Spring) or NGC7538 (Fall)
NASA Technical Reports Server (NTRS)
Sanders, David B.
2001-01-01
We were originally awarded Priority 3 time (approximately 60,000 sec) with the Infrared Space Observatory (ISO) to obtain a complete ISOPHOT (PHT32-C200) map of a nearby prototypical giant molecular cloud (GMC). Following the fall launch and revised estimates for the sensitivity of the ISOPHOT detectors, our program was modified to fit within the time constraints while still meeting the main science requirements. The revised program requested long strip maps of our fall target (NGC7538) using sequences of PHT37/38/39 observations with LWS observations of the brightest regions. The large number of AOTs required to cover each GMC meant that our observations had to be spread over four separate proposals (PROP-01, PROP-02, PROP-03, PROP-04), which together comprise a single observing program. Our program was executed in early 1997; nearly 50,000 sec of data were obtained, including all of our requested ISOPHOT C200 observations. None of the LWS data were taken.
Fetterman, J Gregor; Killeen, P Richard
2010-09-01
Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hoffnagle, John; Chen, Hongbing; Lee, Jim; Rella, Chris; Kim-Hak, David; Winkler, Renato; Markovic, Milos; Veres, Patrick
2017-04-01
Halogen radical species, such as chlorine and bromine atoms and their oxides, can greatly affect the chemical composition of the troposphere. Hydrogen chloride is the dominant (gas-phase) contributor to the tropospheric chlorine inventory. Real-time in situ observations of HCl can provide an important window into the complex photochemical reaction pathways of chlorine in the atmosphere, including heterogeneous reactions on aerosol surfaces. In this work, we report a novel, commercially available HCl gas-phase analyzer (G2108, Picarro Inc., Santa Clara, CA, USA) based on cavity ring-down spectroscopy (CRDS) in the near-infrared, and discuss its performance. With a measurement interval of approximately 2 seconds, a precision of better than 40 parts per trillion (1 sigma, 30 seconds), and a response time of approximately 1-2 minutes (10-90% rise time or 90-10% fall time), this analyzer is well suited for measurements of atmospherically relevant concentrations of HCl in both laboratory and field settings. CRDS provides very stable measurements and low drift, requiring infrequent calibration of the instrument, so the analyzer can be operated remotely for extended periods of time. We also present results from a laboratory intercomparison of the Picarro G2108 analyzer and an iodide ion time-of-flight chemical ionization mass spectrometer (CIMS), and the results of the analyzer time response tests.
Approximate optimal guidance for the advanced launch system
NASA Technical Reports Server (NTRS)
Feeley, T. S.; Speyer, J. L.
1993-01-01
A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman, or dynamic programming, equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme over alternative numerical iterative optimization schemes because of the unreliable convergence properties of those iterative schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and in parallel. Even if the approximate solution is not nearly optimal, the zeroth-order solution of this technique always provides a path which satisfies the terminal constraints. Results of two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.
A tunable hole-burning filter for lidar applications
NASA Astrophysics Data System (ADS)
Billmers, R. I.; Davis, J.; Squicciarini, M.
The fundamental physical principles for the development of a 'hole-burning' optical filter based on saturable absorption in dye-doped glasses are outlined. A model was developed to calculate the required pump intensity, throughput, and linewidth for this type of filter. Rhodamine 6G, operating at 532 nm, was found to require a 'warm-up' time of 110 pulses and a pump intensity of 100 kW/sq cm per pulse. The linewidth was calculated to be approximately 15 GHz at 77 K with a throughput of at least 25 percent and five orders of magnitude noise suppression. A 'hole-burning' filter offers significant advantages over current filter technology, including tunability over a 10-nm bandwidth, perfect wavelength and bandwidth matching to the transmitting laser in a pulsed lidar system, transform limited response times, and moderately high throughputs (at least 25 percent).
Spacelab Mission Implementation Cost Assessment (SMICA)
NASA Technical Reports Server (NTRS)
Guynes, B. V.
1984-01-01
A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center are revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operations at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.
Cold Atomic Hydrogen, Narrow Self-Absorption, and the Age of Molecular Clouds
NASA Technical Reports Server (NTRS)
Goldsmith, Paul F.
2006-01-01
This viewgraph presentation reviews the history of, and current work on, HI and its importance in star formation. From many observations of HI Narrow Self-Absorption (HINSA), the following conclusions are drawn: Local molecular clouds have HI well mixed with their molecular constituents. This HI is cold, quiescent, and must be well shielded from the UV radiation field. The density and fractional abundance (with respect to H2) of the cold HI are close to steady-state values. The time required to convert these starless clouds from a purely HI initial state to their observed present composition is a few to ten million years. This timescale is a lower limit: if dense clouds are being swept up from lower-density regions by shocks, the time to accumulate enough material to reach A(sub v) of approximately 1 and provide the required shielding may be comparable or longer.
Traffic shaping and scheduling for OBS-based IP/WDM backbones
NASA Astrophysics Data System (ADS)
Elhaddad, Mahmoud S.; Melhem, Rami G.; Znati, Taieb; Basak, Debashis
2003-10-01
We introduce Proactive Reservation-based Switching (PRS) -- a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity, or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.
Modeling pattern in collections of parameters
Link, W.A.
1999-01-01
Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models,' in which parameters are regarded as random variables with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
Rapid Generation of Superheated Steam Using a Water-containing Porous Material
NASA Astrophysics Data System (ADS)
Mori, Shoji; Okuyama, Kunito
Heat treatment by superheated steam has been utilized in several industrial fields, including sterilization, desiccation, and cooking. In particular, cooking by superheated steam is receiving increased attention because it has the advantages of reducing the salt and fat contents in foods as well as suppressing the oxidation of vitamin C and fat. In this application, quick start-up and cut-off responses are required. Most electrically energized steam generators require a relatively long time to generate superheated steam due to the large heat capacities of the water in the container and of the heater. Zhao and Liao (2002) introduced a novel process for rapid vaporization of subcooled liquid, in which a low-thermal-conductivity porous wick containing water is heated by a downward-facing grooved heating block in contact with the upper surface of the wick structure. They showed that saturated steam is generated within approximately 30 seconds from room-temperature water at a heat flux of 41.2 kW/m2. In order to quickly generate superheated steam of approximately 300°C, which is required for cooking, the heat capacity of the heater should be as small as possible and the imposed heat flux should be high enough that the porous wick dries out in the vicinity of the contact with the heater and the resulting heater temperature becomes much higher than the saturation temperature. The present paper proposes a simply structured generator to quickly produce superheated steam. A fine wire heater is simply wound in a spiral in contact with the inside wall of a hollow porous material. The start-up and cut-off responses and the rate of energy conversion of the input power are investigated experimentally. Superheated steam of 300°C is produced in approximately 19 seconds from room-temperature water for an input power of 300 W. The maximum rate of energy conversion in the steady state is approximately 0.9.
Mehr, Chelsea R; Gupta, Rajan; von Recklinghausen, Friedrich M; Szczepiorkowski, Zbigniew M; Dunbar, Nancy M
2013-06-01
Transfusion of plasma and red blood cell (RBC) units in a balanced ratio approximating 1:1 has been shown in retrospective studies to be associated with improved outcomes for trauma patients. Our low-volume rural trauma center uses a trauma-activated transfusion algorithm. Plasma is thawed upon activation to avoid wastage. However, the time required for plasma thawing has made achievement of a 1:1 ratio early in resuscitation challenging. In this study, the time required for plasma thawing is characterized, and a potential solution is proposed. A retrospective chart study of 38 moderately and massively transfused (≥6 U in the first 24 hours) trauma patients admitted from January 2008 to March 2012 was performed. We evaluated the time required to dispense plasma and the number of RBCs dispensed before plasma in these patients. The average time between the dispense of RBCs and plasma was 26 minutes (median, 28; range, 0-48 minutes). The average number of RBCs dispensed before plasma was 8 U (median, 7 U; range, 0-24 U). Nearly one third of massively transfused patients had 10 RBCs or greater dispensed before plasma was available. There exists the potential for delayed plasma availability owing to time required for thawing, which may compromise the ability to provide balanced plasma to RBC transfusion to trauma patients. Maintenance of a thawed Group AB plasma inventory may not be operationally feasible for rural centers with low trauma volumes. Use of a thawed Group A plasma inventory is a potential alternative to ensure rapid plasma availability. Therapeutic study, level V.
Middleton, Christopher P.; Senerchia, Natacha; Stein, Nils; Akhunov, Eduard D.; Keller, Beat
2014-01-01
Using Roche/454 technology, we sequenced the chloroplast genomes of 12 Triticeae species, including bread wheat, barley and rye, as well as the diploid progenitors and relatives of bread wheat Triticum urartu, Aegilops speltoides and Ae. tauschii. Two wild tetraploid taxa, Ae. cylindrica and Ae. geniculata, were also included. Additionally, we incorporated wild Einkorn wheat Triticum boeoticum and its domesticated form T. monococcum and two Hordeum spontaneum (wild barley) genotypes. Chloroplast genomes were used for overall sequence comparison, phylogenetic analysis and dating of divergence times. We estimate that barley diverged from rye and wheat approximately 8–9 million years ago (MYA). The genome donors of hexaploid wheat diverged between 2.1–2.9 MYA, while rye diverged from Triticum aestivum approximately 3–4 MYA, more recently than previously estimated. Interestingly, the A genome taxa T. boeoticum and T. urartu were estimated to have diverged approximately 570,000 years ago. As these two have a reproductive barrier, the divergence time estimate also provides an upper limit for the time required for the formation of a species boundary between the two. Furthermore, we conclusively show that the chloroplast genome of hexaploid wheat was contributed by the B genome donor and that this unknown species diverged from Ae. speltoides about 980,000 years ago. Additionally, sequence alignments identified a translocation of a chloroplast segment to the nuclear genome which is specific to the rye/wheat lineage. We propose the presented phylogeny and divergence time estimates as a reference framework for future studies on Triticeae. PMID:24614886
Centrifugal contactor operations for UREX process flowsheet. An update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereira, Candido; Vandegrift, George F.
2014-08-01
The uranium extraction (UREX) process separates uranium, technetium, and a fraction of the iodine from the other components of the irradiated fuel in nitric acid solution. In May 2012, the time, material, and footprint requirements for treatment of 260 L batches of a solution containing 130 g-U/L were evaluated for two commercial annular centrifugal contactors from CINC Industries. These calculated values were based on the expected volume and concentration of fuel arising from treatment of a single target solution vessel (TSV). The general conclusions of that report were that a CINC V-2 contactor would occupy a footprint of 3.2 m2 (0.25 m x 15 m) if each stage required twice the nominal footprint of an individual stage, and approximately 1,131 minutes, or nearly 19 hours, is required to process all of the feed solution. A CINC V-5 would require approximately 9.9 m2 (0.4 m x 25 m) of floor space but would require only 182 minutes, or approximately 3 hours, to process the spent target solution. Subsequent comparison with the Modular Caustic Side Solvent Extraction Unit (MCU) at Savannah River Site (SRS) in October 2013 suggested that a more compact arrangement is feasible, and the linear dimension for the CINC V-5 may be reduced to about 8 m; a comparable reduction for the CINC V-2 yields a length of 5 m. That report also described an intermediate-scale (10 cm) contactor design developed by Argonne in the early 1980s that would better align with the SHINE operations as they stood in May 2012. In this report, we revisit the previous evaluation of contactor operations after discussions with CINC Industries and analysis of the SHINE process flow diagrams for the cleanup of the TSV, which were not available at the time of the first assessment.
Enhanced hypervelocity launcher: Capabilities to 16 km/s
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhabildas, L.C.; Kmetyk, L.N.; Reinhart, W.D.
1993-12-31
A systematic study is described which has led to the successful launch of thin flier plates to velocities of 16 km/s. The energy required to launch a flier plate to 16 km/s is approximately 10 to 15 times the energy required to melt and vaporize the plate. The energy must, therefore, be deposited in a well-controlled manner to prevent melt or vaporization. This is achieved by using a graded-density assembly to impact a stationary flier plate; upon impact, time-dependent, structured, high-pressure pulses are generated and used to propel the plates to hypervelocities without melt or fracture. In previous studies, a graded-density impact at 7.3 km/s was used to launch a 0.5 mm thick plate to a velocity of over 12 km/s. If impact techniques alone were to be used to achieve flier-plate velocities approaching 16 km/s, this would require that the graded-density impact occur at approximately 10 km/s. In this paper, we describe a new technique that has been implemented to enhance the performance of the Sandia hypervelocity launcher. This technique, which creates an impact-generated acceleration reservoir, has allowed the launch of 0.5 mm to 1.0 mm thick plates to record velocities up to 15.8 km/s. In these experiments, both titanium (Ti-6Al-4V) and aluminum (6061-T6) alloys were used for the flier-plate material. These are the highest metallic projectile plate velocities ever achieved for masses in the range of 0.1 g to 1 g.
QPROP: A Schrödinger-solver for intense laser atom interaction
NASA Astrophysics Data System (ADS)
Bauer, Dieter; Koval, Peter
2006-03-01
The QPROP package is presented. QPROP has been developed to study laser-atom interaction in the nonperturbative regime where nonlinear phenomena such as above-threshold ionization, high-order harmonic generation, and dynamic stabilization are known to occur. In the nonrelativistic regime and within the single-active-electron approximation, these phenomena can be studied with QPROP in the most rigorous way by solving the time-dependent Schrödinger equation in three spatial dimensions. Because QPROP is optimized for the study of quantum systems that are spherically symmetric in their initial, unperturbed configuration, all wavefunctions are expanded in spherical harmonics. Time-propagation of the wavefunctions is performed using a split-operator approach. Photoelectron spectra are calculated employing a window-operator technique. Besides the solution of the time-dependent Schrödinger equation in the single-active-electron approximation, QPROP allows the study of many-electron systems via the solution of the time-dependent Kohn-Sham equations. Program summary: Program title: QPROP. Catalogue number: ADXB. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXB. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer on which program has been tested: PC Pentium IV, Athlon. Operating system: Linux. Program language used: C++. Memory required to execute with typical data: Memory requirements depend on the number of propagated orbitals and on the size of the orbitals. For instance, time-propagation of a hydrogenic wavefunction in the perturbative regime requires about 64 KB RAM (4 radial orbitals with 1000 grid points). Propagation in the strongly nonperturbative regime providing energy spectra up to high energies may need 60 radial orbitals, each with 30000 grid points, i.e. about 30 MB. Examples are given in the article. No. of bits in a word: Real and complex valued numbers of double precision are used. No. of lines in distributed program, including test data, etc.: 69 995. No. of bytes in distributed program, including test data, etc.: 2 927 567. Peripheral used: Disk for input-output, terminal for interaction with the user. CPU time required to execute test data: Execution time depends on the size of the propagated orbitals and the number of time-steps. Distribution format: tar.gz. Nature of the physical problem: Atoms put into the strong field of modern lasers display a wealth of novel phenomena that are not accessible to conventional perturbation theory, where the external field is considered small compared to inner-atomic forces. Hence, the full ab initio solution of the time-dependent Schrödinger equation is desirable but in full dimensionality only feasible for no more than two (active) electrons. If many-electron effects come into play or effective ground-state potentials are needed, (time-dependent) density functional theory may be employed. QPROP aims at providing tools for (i) the time-propagation of the wavefunction according to the time-dependent Schrödinger equation, (ii) the time-propagation of Kohn-Sham orbitals according to the time-dependent Kohn-Sham equations, and (iii) the energy analysis of the final one-electron wavefunction (or the Kohn-Sham orbitals). Method of solution: An expansion of the wavefunction in spherical harmonics leads to a coupled set of equations for the radial wavefunctions. These radial wavefunctions are propagated using a split-operator technique and the Crank-Nicolson approximation for the short-time propagator.
The initial ground state is obtained via imaginary-time propagation for spherically symmetric (but otherwise arbitrary) effective potentials. Excited states can be obtained through the combination of imaginary-time propagation and orthogonalization. For the Kohn-Sham scheme a multipole expansion of the effective potential is employed. Wavefunctions can be analyzed using the window-operator technique, facilitating the calculation of electron spectra, either angle-resolved or integrated. Restrictions on the complexity of the problem: The coupling of the atom to the external field is treated in the dipole approximation. The time-dependent Schrödinger solver is restricted to the treatment of a single active electron. As concerns the time-dependent density functional mode of QPROP, the Hartree potential (accounting for the classical electron-electron repulsion) is expanded up to the quadrupole. Only the monopole term of the Krieger-Li-Iafrate exchange potential is currently implemented. As in any nontrivial optimization problem, convergence to the optimal many-electron state (i.e. the ground state) is not automatically guaranteed. External routines/libraries used: The program uses the well-established libraries BLAS, LAPACK, and F2C.
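As an illustration of the Crank-Nicolson short-time propagator described above (QPROP itself is C++; the following Python sketch is only a minimal one-dimensional radial analogue with an assumed soft-core potential, not the distributed code):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# radial grid and an illustrative soft-core potential (atomic units)
n, dr, dt = 2000, 0.1, 0.02
r = dr * np.arange(1, n + 1)
V = -1.0 / np.sqrt(r**2 + 2.0)

# H = -(1/2) d^2/dr^2 + V(r), second-order finite differences
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dr**2
H = -0.5 * lap + diags(V)

# Crank-Nicolson short-time propagator:
#   (1 + i H dt/2) psi_new = (1 - i H dt/2) psi_old
I = identity(n, format="csc")
A = (I + 0.5j * dt * H).tocsc()
B = (I - 0.5j * dt * H).tocsc()

# start from a normalized Gaussian radial packet and propagate a few steps
psi = np.exp(-(r - 20.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dr)
for _ in range(100):
    psi = spsolve(A, B @ psi)

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dr)  # stays ~1 (unitary scheme)
```

In practice one would factorize the left-hand matrix once and reuse it every step; the sketch keeps the structure explicit instead.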
Manned Mars flyby mission and configuration concept
NASA Technical Reports Server (NTRS)
Young, Archie; Meredith, Ollie; Brothers, Bobby
1986-01-01
A concept is presented for a flyby mission of the planet Mars. The mission was sized for the 2001 time period, has a crew of three, uses all-propulsive maneuvers, and requires 442 days. Such a flyby mission results in significantly smaller vehicles than would a landing mission, but of course loses the value of the landing and the associated knowledge and prestige. Stay time in the planet's vicinity is limited to the swingby trajectory, but considerable time still exists for enroute science and research experiments. All-propulsive braking was used in the concept due to unacceptable g-levels associated with aerobraking on this trajectory. LEO departure weight for the concept is approximately 594,000 pounds.
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
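As a concrete illustration of the approximate entropy statistic mentioned above, the following sketch implements the standard two-parameter estimator ApEn(m, r); the parameter choices and the synthetic pulsatile series are illustrative, not taken from the cited work:

```python
import numpy as np

def approximate_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus-style estimator).

    r is taken as r_frac times the series standard deviation, a common choice
    for hormone concentration-time series.
    """
    x = np.asarray(x, dtype=float)
    n, r = len(x), r_frac * np.std(x)

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # C_i^m(r): fraction of templates within r of template i (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# illustrative use on a noisy, pulsatile-looking synthetic series
t = np.arange(0, 24, 0.1)
series = np.sin(2 * np.pi * t / 2.0) ** 8 + 0.1 * np.random.randn(len(t))
print("ApEn:", approximate_entropy(series))
```

Lower ApEn values indicate more regular (more predictable) secretion patterns; higher values indicate greater irregularity.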
Effect of design selection on response surface performance
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1993-01-01
The mathematical formulation of the engineering optimization problem is given. Evaluation of the objective function and constraint equations can be very expensive in a computational sense. Thus, it is desirable to use as few evaluations as possible in obtaining its solution. One approach is to develop approximations to the objective function and/or constraint equations and then to solve the problem using the approximations in place of the original functions. These approximations are referred to as response surfaces. The desirability of using response surfaces depends upon the number of functional evaluations required to build the response surfaces compared to the number required in the direct solution of the problem without approximations. The present study is concerned with evaluating the performance of response surfaces so that a decision can be made as to their effectiveness in optimization applications. In particular, this study focuses on how the quality of the approximations is affected by design selection. Polynomial approximations and neural net approximations are considered.
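As a minimal illustration of a polynomial response surface of the kind examined here, the sketch below fits a full quadratic surface to a handful of sampled design points by least squares; the test objective and the random design are illustrative stand-ins, not the study's functions or designs:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    # stand-in for a costly structural or aerodynamic analysis (illustrative only)
    return (x[:, 0] - 1.0) ** 2 + 2.0 * (x[:, 1] + 0.5) ** 2 + 0.3 * x[:, 0] * x[:, 1]

def quadratic_basis(x):
    # full quadratic basis in two design variables: 1, x1, x2, x1^2, x2^2, x1*x2
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x2**2, x1 * x2])

# design selection: here a simple random design over the design space
X_design = rng.uniform(-2.0, 2.0, size=(15, 2))
y_design = expensive_objective(X_design)

# fit the response surface coefficients by least squares
coef, *_ = np.linalg.lstsq(quadratic_basis(X_design), y_design, rcond=None)

# the cheap surrogate can now stand in for the objective inside an optimizer
X_test = rng.uniform(-2.0, 2.0, size=(200, 2))
err = np.abs(quadratic_basis(X_test) @ coef - expensive_objective(X_test))
print("max surrogate error on test points:", err.max())
```

How the 15 design points are chosen (random, factorial, D-optimal, and so on) is exactly the design-selection question the study addresses; swapping the sampling line changes the quality of the fitted surface.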
Foltin, R W; Rolls, B J; Moran, T H; Kelly, T H; McNelis, A L; Fischman, M W
1992-02-01
Six subjects participated in a residential study assessing the effects of covert macronutrient and energy manipulations during three required-eating occasions (breakfast, lunch, and afternoon snack) on total macronutrient and energy intakes. Overall, energy content of the occasions varied between approximately 3000 and approximately 7000 kJ (approximately 700 and approximately 1700 kcal) with the majority of the differential derived from either fat or carbohydrate (CHO). Each condition (high, medium, and low fat; high, medium, and low CHO; and no required eating) was examined for 2 d. Subjects compensated for the energy content of the required occasions such that only under the low-CHO condition (11,297 +/- 3314 kJ) was total daily energy intake lower than that observed in the absence of required occasions (13,297 +/- 1356 kJ). Only total energy intake under the high-fat condition (12,326 +/- 2548 kJ) was significantly different from its matched CHO condition (high-CHO condition: 14,665 +/- 2686 kJ). In contrast to the clear evidence for caloric compensation, there were no differential effects of condition on macronutrient intake, ie, there was no macronutrient compensation.
Temporal resolution improvement using PICCS in MDCT cardiac imaging.
Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang
2009-06-01
The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a most recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120 degrees, which is roughly 50% of the standard short-scan angular range (approximately 240 degrees for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at heart rate of 83 beats per minute (bpm) using 400 ms gantry rotation time and the second animal was scanned at 94 bpm using 350 ms gantry rotation time, respectively. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. This method also enables dual-source MDCT scanner to achieve higher temporal resolution without further hardware modifications.
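For reference, the PICCS reconstruction is commonly written as a prior-image-constrained l1 minimization of the form below (the notation is assumed here, not quoted from this article):

```latex
\mathbf{x}^{\star} = \arg\min_{\mathbf{x}}
  \Big[ \alpha \,\| \Psi_{1}(\mathbf{x}-\mathbf{x}_{\mathrm{P}}) \|_{1}
        + (1-\alpha)\,\| \Psi_{2}\,\mathbf{x} \|_{1} \Big]
  \quad \text{subject to} \quad A\mathbf{x} = \mathbf{y},
```

where x_P is a prior image reconstructed from the more complete data, Psi_1 and Psi_2 are sparsifying transforms (typically spatial gradients), A is the system matrix for the limited, approximately 120-degree projection set, y is the measured data, and alpha balances the prior-image and sparsity terms.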
Screening and Spectral Summing of LANL Empty Waste Drums - 13226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruetzmacher, Kathleen M.; Bustos, Roland M.; Ferran, Scott G.
2013-07-01
Empty 55-gallon drums that formerly held transuranic (TRU) waste (often over-packed in 85-gallon drums) are generated at LANL and require radiological characterization for disposition. These drums are typically measured and analyzed individually using high-purity germanium (HPGe) gamma detectors. This approach can be resource and time intensive. For a project requiring several hundred drums to be characterized in a short time frame, an alternative approach was developed. The approach utilizes a combination of field screening and spectral summing that was required to be technically defensible and meet the Nevada Nuclear Security Site (NNSS) Waste Acceptance Criteria (WAC). In the screening phase of the operation, the drums were counted for 300 seconds (compared to 600 seconds for the typical approach) and checked against Low Level (LL)/TRU thresholds established for each drum configuration and detector. Multiple TRU nuclides and multiple gamma rays for each nuclide were evaluated using an automated spreadsheet utility that can process data from up to 42 drums at a time. Screening results were reviewed by an expert analyst to confirm the field LL/TRU determination. The spectral summing analysis technique combines spectral data (channel-by-channel) associated with a group of individual waste containers, producing a composite spectrum. The grouped drums must meet specific similarity criteria. Another automated spreadsheet utility was used to spectral-sum data from an unlimited number of similar drums grouped together. The composite spectrum represents a virtual combined drum for the group of drums and was analyzed using the SNAP(TM)/Radioassay Data Sheet (RDS)/Batch Data Report (BDR) method. The activity results for a composite virtual drum were divided equally amongst the individual drums to generate characterization results for each individual drum in the group. An initial batch of approximately 500 drums was measured and analyzed in less than 2 months in 2011. A second batch of approximately 500 more drums was measured and analyzed during the following 2 1/2 months. Four different HPGe detectors were employed for the operation. The screening and spectral summing approach can reduce the overall measurement and analysis time required. However, developing the technical details and automation spreadsheets requires a significant amount of expert time prior to beginning field operations and must be considered in the overall project schedule. This approach has continued to be used for characterizing several hundred more empty drums in 2012 and is planned to continue in 2013.
The convergence rate of approximate solutions for nonlinear scalar conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1991-01-01
The convergence rate of approximate solutions to the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L(sup 2)-stability requirement. It is assumed that the approximate solutions are Lip(sup +)-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved that, for Lip(sup +)-stable approximate solutions, the Lip' convergence rate to the entropy solution is of the same order as the Lip'-consistency. The Lip'-convergence rate is then converted into stronger L(sup p) convergence rate estimates.
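In one common formulation (the notation here is assumed rather than quoted from the paper), the Lip(sup +) stability requirement is the one-sided Lipschitz bound

```latex
\|v(\cdot,t)\|_{\mathrm{Lip}^{+}}
  := \operatorname*{ess\,sup}_{x \neq y}
     \left[ \frac{v(x,t)-v(y,t)}{x-y} \right]_{+}
  \;\le\; \mathrm{Const}, \qquad 0 \le t \le T,
```

which mimics Oleinik's E-condition for the entropy solution; the main theorem then bounds the Lip' (dual-Lipschitz) error at time T by a constant multiple of the Lip' size of the truncation error, and this bound is subsequently converted into L(sup p) error estimates.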
Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973
Westfall, Arthur O.
1976-01-01
A time of travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel times and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.
Femtosecond/picosecond time-resolved fluorescence study of hydrophilic polymer fine particles.
Nanjo, Daisuke; Hosoi, Haruko; Fujino, Tatsuya; Tahara, Tahei; Korenaga, Takashi
2007-03-22
A femtosecond/picosecond time-resolved fluorescence study of hydrophilic polymer fine particles (polyacrylamide, PAAm) is reported. The ultrafast fluorescence dynamics of the polymer/water solution was monitored using a fluorescent probe molecule (C153). In the femtosecond time-resolved fluorescence measurement at 480 nm, slowly decaying components having lifetimes of tau(1) approximately 53 ps and tau(2) approximately 5 ns were observed in addition to a rapid fluorescence decay. Picosecond time-resolved fluorescence spectra of the C153/PAAm/H2O solution were also measured. In the time-resolved fluorescence spectra of C153/PAAm/H2O, a peak shift from 490 to 515 nm was measured, which can be assigned to the solvation dynamics of the polymer fine particles. The fluorescence peak shift was related to the solvation response function, and two time constants were determined (tau(3) approximately 50 ps and tau(4) approximately 467 ps). Therefore, the tau(1) component observed in the femtosecond time-resolved fluorescence measurement was assigned to solvation dynamics observed only in the presence of polymer fine particles. Rotational diffusion measurements were also carried out on the basis of the picosecond time-resolved fluorescence spectra. In the C153/PAAm/H2O solution, anisotropy decay having two different time constants was also derived (tau(6) approximately 76 ps and tau(7) approximately 676 ps), indicating the presence of two different microscopic molecular environments around the polymer surface. Using the Stokes-Einstein-Debye (SED) equation, the microscopic viscosity around the polymer surface was evaluated. For the region that gave a rotational diffusion time of tau(6) approximately 76 ps, the calculated viscosity is approximately 1.1 cP, and for tau(7) approximately 676 ps it is approximately 10 cP. The calculated viscosity values clearly revealed that there are two different molecular environments around the polyacrylamide fine particles.
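The microviscosity values quoted here presumably follow the Stokes-Einstein-Debye relation in its simplest stick-boundary form (written below in assumed notation):

```latex
\tau_{\mathrm{rot}} = \frac{\eta \, V_{\mathrm{h}}}{k_{\mathrm{B}} T}
\qquad \Longrightarrow \qquad
\eta \approx \frac{k_{\mathrm{B}} T \, \tau_{\mathrm{rot}}}{V_{\mathrm{h}}},
```

with V_h the hydrodynamic volume of the C153 probe; inserting tau(6) of approximately 76 ps and tau(7) of approximately 676 ps then yields the two viscosity estimates of roughly 1.1 cP and 10 cP, their ratio tracking the ratio of the rotational times.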
NASA Astrophysics Data System (ADS)
Hermanns, S.; Balzer, K.; Bonitz, M.
2013-03-01
The nonequilibrium description of quantum systems requires, for more than two or three particles, the use of a reduced description to be numerically tractable. Two possible approaches are based on either reduced density matrices or nonequilibrium Green functions (NEGF). Both concepts are formulated in terms of hierarchies of coupled equations—the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy for the reduced density operators and the Martin-Schwinger-hierarchy (MS) for the Green functions, respectively. In both cases, similar approximations are introduced to decouple the hierarchy, yet still many questions regarding the correspondence of both approaches remain open. Here we analyze this correspondence by studying the generalized Kadanoff-Baym ansatz (GKBA) that reduces the NEGF to a single-time theory. Starting from the BBGKY-hierarchy we present the approximations that are necessary to recover the GKBA result both, with Hartree-Fock propagators (HF-GKBA) and propagators in second Born approximation. To test the quality of the HF-GKBA, we study the dynamics of a 4-electron Hubbard nanocluster starting from a strong nonequilibrium initial state and compare to exact results and the Wang-Cassing approximation to the BBGKY hierarchy presented recently by Akbari et al. [1].
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the stable and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
The carbohydrate maintenance properties of an experimental sports drink.
White, J. A.; Ford, M. A.
1984-01-01
The effects of an experimental sports drink (Q) were compared with a commercial sports drink (D) of proven ergogenic efficacy. Seven highly trained subjects performed two hours of cycle ergometry exercise at approximately 65% maximal aerobic power (VO2 max) while receiving levels of Q and D in quantities designed to supply approximately 28% of the total energy requirement of the exercise task. Both Q and D formulations were supplied at 15 minute intervals at 16 degrees C, in volumes required to provide equivalent carbohydrate loads from two products of differing concentrations and compositions. Q was equally as effective as D in terms of the maintenance of plasma glucose concentrations during exercise, while selected physiological indices of work performance favoured Q. However, the time course of plasma glucose concentration changes during and after exercise indicated a trend towards more rapid uptake and assimilation of carbohydrate in the case of Q. The findings suggest that Q may provide a more readily available carbohydrate source during exercise and may enhance work performance through its ergogenic properties. PMID:6466932
Lee, Yong Ju; Jung, Byeong Su; Kim, Kee-Tae; Paik, Hyun-Dong
2015-09-01
A predictive model was developed to describe the growth of Staphylococcus aureus in raw pork by using the Integrated Pathogen Modeling Program (IPMP) 2013 and a polynomial model as a secondary predictive model. S. aureus requires approximately 180 h to reach 5-6 log CFU/g at 10 °C. At 15 °C and 25 °C, approximately 48 and 20 h, respectively, are required to reach levels capable of causing food poisoning. Data predicted using the Gompertz model were the most accurate in this study. For the lag time (LT) model, the bias factor (Bf) and accuracy factor (Af) values were both 1.014, showing that the predictions were within a reliable range. For the specific growth rate (SGR) model, Bf and Af were 1.188 and 1.190, respectively. Additionally, both the Bf and Af values of the LT and SGR models were close to 1, indicating that the IPMP Gompertz model is more adequate for predicting the growth of S. aureus on raw pork than the other models. Copyright © 2015 Elsevier Ltd. All rights reserved.
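As a sketch of the quantities involved (illustrative parameter values and synthetic data only, not the paper's fitted values or measurements), the modified Gompertz curve and the bias/accuracy factors can be computed as follows:

```python
import numpy as np

def gompertz_log_count(t, n0, a, mu_max, lam):
    """Modified Gompertz growth curve (Zwietering form), log10 CFU/g versus time (h).

    n0: initial log10 count, a: asymptotic log10 increase,
    mu_max: maximum specific growth rate (log10 units/h), lam: lag time (h).
    """
    return n0 + a * np.exp(-np.exp(mu_max * np.e / a * (lam - t) + 1.0))

def bias_accuracy_factors(predicted, observed):
    """Bias factor Bf and accuracy factor Af comparing predictions with observations."""
    ratio = np.log10(np.asarray(predicted) / np.asarray(observed))
    return 10 ** ratio.mean(), 10 ** np.abs(ratio).mean()

# illustrative synthetic check at 15 degrees C (assumed parameters, not fitted ones)
t = np.array([0, 12, 24, 36, 48, 60])
observed = np.array([3.0, 3.1, 3.9, 4.8, 5.6, 6.0])
predicted = gompertz_log_count(t, n0=3.0, a=3.2, mu_max=0.09, lam=14.0)
print("Bf, Af =", bias_accuracy_factors(predicted, observed))
```

Bf near 1 indicates no systematic over- or under-prediction, while Af near 1 indicates small absolute prediction error; in the study both factors are applied to the LT and SGR quantities rather than to raw counts.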
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
Molecular dissection of botulinum neurotoxin reveals interdomain chaperone function.
Fischer, Audrey; Montal, Mauricio
2013-12-01
Clostridium botulinum neurotoxin (BoNT) is a multi-domain protein made up of the approximately 100 kDa heavy chain (HC) and the approximately 50 kDa light chain (LC). The HC can be further subdivided into two halves: the N-terminal translocation domain (TD) and the C-terminal Receptor Binding Domain (RBD). We have investigated the minimal requirements for channel activity and LC translocation. We utilize a cellular protection assay and a single channel/single molecule LC translocation assay to characterize in real time the channel and chaperone activities of BoNT/A truncation constructs in Neuro 2A cells. The unstructured, elongated belt region of the TD is demonstrated to be dispensable for channel activity, although may be required for productive LC translocation. We show that the RBD is not necessary for channel activity or LC translocation, however it dictates the pH threshold of channel insertion into the membrane. These findings indicate that each domain functions as a chaperone for the others in addition to their individual functions, working in concert to achieve productive intoxication. Copyright © 2013 Elsevier Ltd. All rights reserved.
High temperature thermal energy storage in steel and sand
NASA Technical Reports Server (NTRS)
Turner, R. H.
1979-01-01
The technical and economic potential for high-temperature (343 C, 650 F) thermal energy storage in hollow steel ingots, in pipes embedded in concrete, and in pipes buried in sand was evaluated. Because it was determined that concrete would separate from the pipes due to thermal stresses, concrete was replaced by sand, which is free from thermal stresses. Variations of the steel ingot concept were not cost effective compared to the sand-pipe approach; therefore, the sand-pipe thermal storage unit (TSU) was evaluated in depth to assess the approximate tube spacing requirements consistent with different system performance characteristics and also the attendant system costs. For large TSUs which do not require fast response times, the sand-pipe approach offers attractive possibilities. A pipe diameter of about 9 cm (3.5 in) and a pipe spacing of approximately 25 cm (10 in), with sand filling the interspaces, appears appropriate. Such a TSU system designed for an 8 hour charge/discharge cycle has an energy unit storage cost (CE) of $2.63/kWhr-t and a power unit storage cost (Cp) of $42/kW-t (in 1977 dollars).
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
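A minimal sketch of the central result (the expected, discounted time spent in each state of a continuous-time Markov model) is given below; the three-state generator, discount rate, and horizon are illustrative assumptions, not the hepatitis B model of the applied example:

```python
import numpy as np
from scipy.linalg import expm, solve

# illustrative generator for a 3-state model (healthy, ill, dead); rates per year
Q = np.array([[-0.15,  0.10, 0.05],
              [ 0.00, -0.30, 0.30],
              [ 0.00,  0.00, 0.00]])

p0 = np.array([1.0, 0.0, 0.0])   # everyone starts healthy
horizon = 10.0                   # years
rho = np.log(1.03)               # continuous-time equivalent of 3% annual discounting

# expected discounted time in each state:
#   integral_0^T exp(-rho t) p0' exp(Q t) dt = p0' (Q - rho I)^{-1} (exp((Q - rho I) T) - I)
M = Q - rho * np.eye(3)
occupancy = p0 @ solve(M, expm(M * horizon) - np.eye(3))

print("discounted person-years per state:", occupancy)
# the state occupancies sum to the discounted horizon (1 - e^{-rho T}) / rho
print("check:", occupancy.sum(), (1 - np.exp(-rho * horizon)) / rho)
```

Age-dependent transition rates can be handled by splitting the horizon into age bands with piecewise-constant generators and chaining the same formula across bands.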
Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution of the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N(sup 3)) operations and O(N(sup 2)) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
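The quantity being evaluated is the standard RPA correlation energy on the imaginary-frequency axis, commonly written as (notation assumed here, not quoted from the paper):

```latex
E_{\mathrm{c}}^{\mathrm{RPA}}
  = \frac{1}{2\pi} \int_{0}^{\infty} \mathrm{d}\omega \;
    \mathrm{Tr}\!\left[ \ln\!\bigl( 1 - \chi^{0}(\mathrm{i}\omega)\, v \bigr)
    + \chi^{0}(\mathrm{i}\omega)\, v \right],
```

where chi^0(i omega) is the non-interacting density response and v the Coulomb operator; the RI with the overlap metric, the imaginary-time representation of chi^0, and sparse linear algebra are what reduce the cost of evaluating this trace for large systems.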
NASA Technical Reports Server (NTRS)
Ha, Kong Q.; Femiano, Michael D.; Mosier, Gary E.
2004-01-01
In this paper, we present an optimal open-loop slew trajectory algorithm developed at GSFC for the so-called "Yardstick design" of the James Webb Space Telescope (JWST). JWST is an orbiting infrared observatory featuring a lightweight, segmented primary mirror approximately 6 meters in diameter and a sunshield approximately the size of a tennis court. This large, flexible structure will have a significant number of lightly damped, dominant flexible modes. With very stringent requirements on pointing accuracy and image quality, it is important that slewing be done within the required time constraint and with minimal induced vibration in order to maximize observing efficiency. With reaction wheels as control actuators, initial wheel speeds as well as individual wheel torque and momentum limits become dominant constraints on slew performance. These constraints must be taken into account when performing slews to ensure that unexpected reaction wheel saturation does not occur, since such saturation leads to control failure in accurately tracking the commanded motion and produces high-frequency torque components capable of exciting structural modes. A minimum-time constraint is also included and coupled with the reaction wheel limit constraints in the optimization to minimize both the effect of the control torque on the flexible-body motion and the maneuver time. The optimization is over slew command parameters, such as maximum slew velocity and acceleration, for a given redundant reaction wheel configuration and is based on the dynamic interaction between the spacecraft and reaction wheel motion. The analytical development of the slew algorithm to generate desired slew position, rate, and acceleration profiles to command a feedback/feedforward control system is described. High-fidelity simulation and experimental results are presented to show that the developed slew law achieves the objectives.
Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations
NASA Astrophysics Data System (ADS)
Padrino, Juan C.; Zhang, Duan Z.
2016-11-01
The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x t(sup -1/4) rather than x t(sup -1/2) as in the traditional theory. This early-time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions that are easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.
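One way to make the early-time statement concrete (the notation below is assumed, chosen only to be consistent with the x t(sup -1/4) similarity variable): for times short compared with the channel diffusion time, the pore density approximately obeys a time-fractional diffusion equation of order one half,

```latex
\frac{\partial^{1/2} \rho}{\partial t^{1/2}}
  = D_{\mathrm{eff}} \, \frac{\partial^{2} \rho}{\partial x^{2}},
```

whose self-similar solutions depend on x and t only through the combination x t(sup -1/4), in contrast with the classical combination x t(sup -1/2) for ordinary diffusion.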
Fabricatore, Anthony N; Sarwer, David B; Wadden, Thomas A; Combs, Christopher J; Krasucki, Jennifer L
2007-09-01
Many bariatric surgery programs require that candidates undergo a preoperative mental health evaluation. Candidates may be motivated to suppress or exaggerate psychiatric symptoms (i.e., engage in impression management) if they believe doing so will enhance their chances of receiving a recommendation to proceed with surgery. 237 candidates for bariatric surgery completed the Beck Depression Inventory-II (BDI-II) as part of their preoperative psychological evaluation (Time 1). They also completed the BDI-II approximately 2-4 weeks later, for research purposes, after they had received the mental health professional's unconditional recommendation to proceed with surgery (Time 2). There was a small but statistically significant increase in mean BDI-II scores from Time 1 to Time 2 (11.4 vs 12.7, P<.001). Clinically significant changes, defined as a change from one range of symptom severity to another, were observed in 31.2% of participants, with significant increases in symptoms occurring nearly twice as often as reductions (20.7% vs 10.5%, P<.008). Demographic variables were largely unrelated to changes in BDI-II scores from Time 1 to Time 2. Approximately one-third of bariatric surgery candidates reported a clinically significant change in depressive symptoms after receiving psychological "clearance" for surgery. Possible explanations for these findings include measurement error, impression management, and true changes in psychiatric status.
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Pérez-Hernández, Guillermo; Noé, Frank
2016-12-13
Analysis of molecular dynamics, for example using Markov models, often requires the identification of order parameters that are good indicators of the rare events, i.e., good reaction coordinates. Recently, it has been shown that time-lagged independent component analysis (TICA) finds the linear combinations of input coordinates that optimally represent the slow kinetic modes and may serve to define reaction coordinates between the metastable states of the molecular system. A limitation of the method is that both computing time and memory requirements scale with the square of the number of input features. For large protein systems, this hinders the use of extensive feature sets such as the distances between all pairs of residues or even heavy atoms. Here we derive a hierarchical TICA (hTICA) method that approximates the full TICA solution by a hierarchical, divide-and-conquer calculation. By using hTICA on distances between heavy atoms we identify previously unknown relaxation processes in the bovine pancreatic trypsin inhibitor.
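As a rough illustration of the underlying (non-hierarchical) TICA step, the slow linear combinations are the solutions of a generalized eigenvalue problem built from the instantaneous and time-lagged covariances of mean-free features. This is a minimal sketch of standard TICA, not the hTICA algorithm; the lag time and the toy feature matrix are placeholders.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag):
    """Minimal TICA: X is (n_frames, n_features); returns eigenvalues and
    linear combinations (columns of the second output), slowest first."""
    X = X - X.mean(axis=0)                  # mean-free features
    X0, Xt = X[:-lag], X[lag:]
    C0 = X0.T @ X0 / len(X0)                # instantaneous covariance
    Ct = X0.T @ Xt / len(X0)                # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                  # symmetrize (reversible estimate)
    evals, evecs = eigh(Ct, C0)             # generalized eigenvalue problem
    order = np.argsort(evals)[::-1]         # largest eigenvalue = slowest mode
    return evals[order], evecs[:, order]

# Toy data standing in for pairwise-distance features: slow drift plus noise.
rng = np.random.default_rng(1)
X = 0.01 * np.cumsum(rng.standard_normal((5000, 10)), axis=0) \
    + rng.standard_normal((5000, 10))
evals, modes = tica(X, lag=50)
print("leading TICA eigenvalues:", np.round(evals[:3], 3))
```

The quadratic cost mentioned in the abstract is visible here: both covariance matrices are n_features by n_features, which is what the hierarchical divide-and-conquer construction works around.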
Evaluation of the eigenvalue method in the solution of transient heat conduction problems
NASA Astrophysics Data System (ADS)
Landry, D. W.
1985-01-01
The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
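The essence of the eigenvalue method is that, once the heat equation is spatially discretized into du/dt = Au, the time dependence is handled exactly through the eigendecomposition of A instead of by time stepping. A minimal 1-D sketch (uniform grid, zero end temperatures, assumed diffusivity) follows; it illustrates the idea rather than the multidimensional implementation evaluated here.

```python
import numpy as np

# 1-D heat equation u_t = kappa * u_xx on (0, L) with u = 0 at both ends,
# semi-discretized with central differences: du/dt = A u.
kappa, L, n = 1.0e-4, 1.0, 49            # assumed diffusivity, length, interior points
h = L / (n + 1)
x = np.linspace(h, L - h, n)
A = kappa / h**2 * (np.diag(-2.0 * np.ones(n))
                    + np.diag(np.ones(n - 1), 1)
                    + np.diag(np.ones(n - 1), -1))

# Eigenvalue method: A = V diag(lam) V^T (A is symmetric), so
# u(t) = V exp(lam t) V^T u(0) -- the time derivative is treated exactly.
lam, V = np.linalg.eigh(A)
u0 = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)     # initial temperature
t = 200.0
u_t = V @ (np.exp(lam * t) * (V.T @ u0))

# Analytic PDE solution for this initial condition; the small residual error
# comes from the spatial discretization only, not from the time treatment.
u_exact = (np.exp(-kappa * np.pi**2 * t) * np.sin(np.pi * x)
           + 0.5 * np.exp(-kappa * 9 * np.pi**2 * t) * np.sin(3 * np.pi * x))
print("max |error| vs analytic solution:", np.abs(u_t - u_exact).max())
```

The cost is the full eigendecomposition of A, which is exactly what limits the method on finely meshed grids, as noted above.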
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
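A rough sketch of the first ingredient, a radial basis function surrogate that replaces repeated time history analyses with a cheap prediction, is given below. The Gaussian width, training points, and the placeholder "response" function are assumptions for illustration; the actual RBF-BPSO coupling of the paper is not reproduced.

```python
import numpy as np

def rbf_fit(X, y, width=0.3, reg=1e-8):
    """Fit a Gaussian RBF interpolant: solve (Phi + reg*I) w = y."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * width**2))
    return np.linalg.solve(Phi + reg * np.eye(len(X)), y)

def rbf_predict(Xq, X, w, width=0.3):
    d2 = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width**2)) @ w

# Placeholder "expensive analysis": a smooth response of two design variables.
# In the paper this role is played by a full time history analysis.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(60, 2))
y_train = np.sin(3.0 * X_train[:, 0]) * np.cos(2.0 * X_train[:, 1])

w = rbf_fit(X_train, y_train)
X_test = rng.uniform(0.0, 1.0, size=(5, 2))
print("surrogate :", np.round(rbf_predict(X_test, X_train, w), 3))
print("reference :", np.round(np.sin(3.0 * X_test[:, 0]) * np.cos(2.0 * X_test[:, 1]), 3))
```

In the hybrid method, an optimizer such as BPSO queries this kind of surrogate instead of the exact analysis for most candidate designs, which is where the reported speedup comes from.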
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
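A feel for iterative value updates in adaptive dynamic programming can be had from their simplest exact special case: value iteration for a discrete-time linear-quadratic problem, where the iterative value function V_k(x) = x^T P_k x converges to the optimal one. This is only an illustrative special case with assumed system matrices, not the GPI algorithm of the paper, which targets nonlinear systems and explicitly accounts for approximation errors.

```python
import numpy as np

# Discrete-time LQ value iteration: V_k(x) = x' P_k x with
# P_{k+1} = Q + A' (P_k - P_k B (R + B' P_k B)^{-1} B' P_k) A.
# Assumed plant and cost matrices (placeholders).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = np.zeros((2, 2))                                   # V_0 = 0
for k in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy (policy improvement)
    P_next = Q + A.T @ P @ (A - B @ K)                 # value update step
    if np.max(np.abs(P_next - P)) < 1e-10:
        break
    P = P_next
print(f"converged after {k} iterations; optimal cost matrix P =\n{np.round(P, 4)}")
```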
Regulation of substrate use during the marathon.
Spriet, Lawrence L
2007-01-01
The energy required to run a marathon is mainly provided through oxidative phosphorylation in the mitochondria of the active muscles. Small amounts of energy from substrate phosphorylation are also required during transitions and short periods when running speed is increased. The three inputs for adenosine triphosphate production in the mitochondria include oxygen, free adenosine diphosphate and inorganic phosphate, and reducing equivalents. The reducing equivalents are derived from the metabolism of fat and carbohydrate (CHO), which are mobilised from intramuscular stores and also delivered from adipose tissue and liver, respectively. The metabolism of fat and CHO is tightly controlled at several regulatory sites during marathon running. Slower, recreational runners run at 60-65% maximal oxygen uptake (VO(2max)) for approximately 3:45:00 and faster athletes run at 70-75% for approximately 2:45:00. Both groups rely heavily on fat and CHO fuels. However, elite athletes run marathons at speeds requiring between 80% and 90% VO(2max), and finish in times between 2:05:00 and 2:20:00. They are highly adapted to oxidise fat and must do so during training. However, they compete at such high running speeds, that CHO oxidation (also highly adapted) may be the exclusive source of energy while racing. Further work with elite athletes is needed to examine this possibility.
NASA Technical Reports Server (NTRS)
Ghandi, P.; Annuar, A.; Lansbury, G. B.; Stern, D.; Alexander, D. M.; Bauer, F. E.; Bianchi, S.; Boggs, S. E.; Boorman, P. G.; Brandt, W. N.;
2017-01-01
We present NuSTAR X-ray observations of the active galactic nucleus (AGN) in NGC 7674. The source shows a flat X-ray spectrum, suggesting that it is obscured by Compton-thick gas columns. Based upon long-term flux dimming, previous work suggested the alternate possibility that the source is a recently switched-off AGN with the observed X-rays being the lagged echo from the torus. Our high-quality data show the source to be reflection-dominated in hard X-rays, but with a relatively weak neutral Fe K(alpha) emission line (equivalent width [EW] of approximately 0.4 keV) and a strong Fe XXVI ionized line (EW approximately 0.2 keV). We construct an updated long-term X-ray light curve of NGC 7674 and find that the observed 2-10 keV flux has remained constant for the past approximately 20 yr, following a high-flux state probed by Ginga. Light travel time arguments constrain the minimum radius of the reflector to be approximately 3.2 pc under the switched-off AGN scenario, approximately 30 times larger than the expected dust sublimation radius, rendering this possibility unlikely. A patchy Compton-thick AGN (CTAGN) solution is plausible, requiring a minimum line-of-sight column density (N(sub H)) of 3 x 10(exp 24) cm(exp -2) at present, and yields an intrinsic 2-10 keV luminosity of (3-5) x 10(exp 43) erg s(exp -1). Realistic uncertainties span the range of approximately (1-13) x 10(exp 43) erg s(exp -1). The source has one of the weakest fluorescence lines amongst bona fide CTAGN, and is potentially a local analogue of bolometrically luminous systems showing complex neutral and ionized Fe emission. It exemplifies the difficulty of identification and proper characterization of distant CTAGN based on the strength of the neutral Fe K line.
Time-optimal control with finite bandwidth
NASA Astrophysics Data System (ADS)
Hirose, M.; Cappellaro, P.
2018-04-01
Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.
Galactic archaeology in action space
NASA Astrophysics Data System (ADS)
Sanderson, Robyn
2009-05-01
Working in action space offers an instructive alternative view of the process of hierarchical assembly in galaxies, but performing the necessary canonical transformation formally requires both complete phase space information of a stellar population and knowledge of the correct galactic potential, neither of which is generally available. I use the approximate-action method pioneered by MacMillan and Binney (2008) to examine the remnant of a late-time merger in M31, which was modeled by Fardal et al. (2007).
2015-09-01
million cells each. These 4 canard meshes were then overset with the 10 background projectile body mesh using the Chimera procedure [29]. The final... Chimera-overlapped mesh for each of the 2 (fin cant) models consists of approximately 43 million cells. A circumferential cross section (Fig. 4... Chimera procedure requires proper transfer of information between the background mesh and the canard meshes at every time step. However, the advantage
Kopp, Robert E; Kirschvink, Joseph L; Hilburn, Isaac A; Nash, Cody Z
2005-08-09
Although biomarker, trace element, and isotopic evidence have been used to claim that oxygenic photosynthesis evolved by 2.8 giga-annum before present (Ga) and perhaps as early as 3.7 Ga, a skeptical examination raises considerable doubt about the presence of oxygen producers at these times. Geological features suggestive of oxygen, such as red beds, lateritic paleosols, and the return of sedimentary sulfate deposits after an approximately 900-million year hiatus, occur shortly before the approximately 2.3-2.2 Ga Makganyene "snowball Earth" (global glaciation). The massive deposition of Mn, which has a high redox potential, practically requires the presence of environmental oxygen after the snowball. New age constraints from the Transvaal Supergroup of South Africa suggest that all three glaciations in the Huronian Supergroup of Canada predate the snowball event. A simple cyanobacterial growth model incorporating the range of C, Fe, and P fluxes expected during a partial glaciation in an anoxic world with high-Fe oceans indicates that oxygenic photosynthesis could have destroyed a methane greenhouse and triggered a snowball event on time-scales as short as 1 million years. As the geological evidence requiring oxygen does not appear during the Pongola glaciation at 2.9 Ga or during the Huronian glaciations, we argue that oxygenic cyanobacteria evolved and radiated shortly before the Makganyene snowball.
Classification of rollovers according to crash severity.
Digges, K; Eigen, A
2006-01-01
NASS/CDS 1995-2004 was used to classify rollovers according to severity. The rollovers were partitioned into two classes - rollover as the first event and rollover preceded by an impact with a fixed or non-fixed object. The populations of belted and unbelted were examined separately and combined. The average injury rate for the unbelted was five times that for the belted. Approximately 21% of the severe injuries suffered by belted occupants were in crashes with harmful events prior to the rollover that produced severe damage to the vehicle. This group carried a much higher injury risk than the average. A planar damage measure in addition to the rollover measure was required to adequately capture the crash severity of this population. For rollovers as the first event, approximately 1% of the serious injuries to belted occupants occurred during the first quarter-turn. Rollovers that were arrested during the 1st quarter-turn carried a higher injury rate than average. The number of quarter-turns was grouped in various ways, including the number of times the vehicle roof faces the ground (number of vehicle inversions). The number of vehicle inversions was found to be a statistically significant injury predictor for 78% of the belted and unbelted populations with MAIS 3+F injuries in rollovers. The remaining 22% required crash severity metrics in addition to the number of vehicle inversions.
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1990-01-01
We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
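One concrete piece of the rate monotonic theory mentioned above is the Liu-Layland utilization bound, a sufficient analytical test for schedulability of periodic hard real-time tasks. The task set below is a made-up example, not one from the DAA work.

```python
# Rate monotonic schedulability: n periodic tasks with worst-case execution
# times C_i and periods T_i are schedulable (sufficient condition) if the
# total utilization does not exceed n * (2**(1/n) - 1).
def rm_schedulable(tasks):
    """tasks: list of (C_i, T_i) pairs; returns (utilization, bound, verdict)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2.0 ** (1.0 / n) - 1.0)
    return u, bound, u <= bound

# Hypothetical task set: (execution time, period) in milliseconds.
tasks = [(10, 50), (15, 100), (20, 200)]
u, bound, ok = rm_schedulable(tasks)
print(f"utilization = {u:.3f}, bound = {bound:.3f}, schedulable by RM bound: {ok}")
```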
Resource Limitation Issues In Real-Time Intelligent Systems
NASA Astrophysics Data System (ADS)
Green, Peter E.
1986-03-01
This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real-time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyer belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that can combine heuristic and approximate evidence in making real-time decisions.
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited.
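The core idea, a finite random feature map approximating a kernel followed by an incremental linear update of constant cost, can be sketched as below. This is a generic random-Fourier-feature plus recursive-least-squares illustration with assumed kernel lengthscale and regularization, not the exact Incremental Sparse Spectrum GPR update rule.

```python
import numpy as np

class IncrementalRFFRegressor:
    """Random Fourier features approximating an RBF kernel, updated by
    recursive least squares so every update costs O(D^2) regardless of t."""
    def __init__(self, n_inputs, n_features=200, lengthscale=0.5, reg=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_inputs)) / lengthscale
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.w = np.zeros(n_features)            # regression weights
        self.P = np.eye(n_features) / reg        # running inverse Gram matrix

    def _phi(self, x):
        return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

    def update(self, x, y):
        phi = self._phi(x)
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)            # Sherman-Morrison gain
        self.w += k * (y - phi @ self.w)
        self.P -= np.outer(k, Pphi)

    def predict(self, x):
        return self._phi(x) @ self.w

# Streaming toy problem: learn y = sin(3x) one noisy sample at a time.
model = IncrementalRFFRegressor(n_inputs=1)
rng = np.random.default_rng(1)
for _ in range(2000):
    x = rng.uniform(-2.0, 2.0, 1)
    model.update(x, np.sin(3.0 * x[0]) + 0.05 * rng.standard_normal())
print("prediction at x = 1.0:", round(float(model.predict(np.array([1.0]))), 3),
      " target:", round(float(np.sin(3.0)), 3))
```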
Accuracy of theory for calculating electron impact ionization of molecules
NASA Astrophysics Data System (ADS)
Chaluvadi, Hari Hara Kumar
The study of electron impact single ionization of atoms and molecules has provided valuable information about fundamental collisions. The most detailed information is obtained from triple differential cross sections (TDCS) in which the energy and momentum of all three final state particles are determined. These cross sections are much more difficult for theory since the detailed kinematics of the experiment become important. There are many theoretical approximations for ionization of molecules. One of the successful methods is the molecular 3-body distorted wave (M3DW) approximation. One of the strengths of the DW approximation is that it can be applied for any energy and any size molecule. One of the approximations that has been made to significantly reduce the required computer time is the OAMO (orientation averaged molecular orbital) approximation. In this dissertation, the accuracy of the M3DW-OAMO is tested for different molecules. Surprisingly, the M3DW-OAMO approximation yields reasonably good agreement with experiment for ionization of H2 and N2. On the other hand, the M3DW-OAMO results for ionization of CH4, NH3 and DNA derivative molecules did not agree very well with experiment. Consequently, we proposed the M3DW with a proper average (PA) calculation. In this dissertation, it is shown that the M3DW-PA calculations for CH4 and SF6 are in much better agreement with experimental data than the M3DW-OAMO results.
[Transfusional requirements for escharectomy in burned children].
Julia, Analía R; Basílico, Hugo; Magaldi, Gustavo; Demirdjian, Graciela
2010-02-01
Early excision has considerably improved outcome in extensive burns, but massive resections usually mean copious bleeding that must be conveniently corrected. The purpose of this study was to measure blood component use during escharectomies in children. All pediatric patients with acute burns excised at the Burn Unit of the Hospital Garrahan during one year were included. Volume of blood component used during and immediately after surgery was analyzed and related to percent excised, time post-burn, and the coexistence of infection and autograft at the time of excision. Ninety-four surgeries in 51 children aged 0-14 years with total burned body surface areas of 5-80% who underwent resections of 3-70% were studied. Total blood use (intra + post-operatively) was 2.07 ml/kg/%excised for red blood cells (60% during surgery) and 0.7 ml/kg/% excised for plasma. Only 12% of patients required platelet transfusion. There was no significant requirement variation with the existence of infection, grafting or time post-burn. Approximately 2 ml/kg/% excised of red blood cells (2/3 for surgery) and 1 ml/kg/% excised of plasma are needed for escharectomies in children. The need for platelets must be judged considering the individual patient.
Chip-LC-MS for label-free profiling of human serum.
Horvatovich, Peter; Govorukhina, Natalia I; Reijmers, Theo H; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer
2007-12-01
The discovery of biomarkers in easily accessible body fluids such as serum is one of the most challenging topics in proteomics, requiring highly efficient separation and detection methodologies. Here, we present the application of a microfluidics-based LC-MS system (chip-LC-MS) to the label-free profiling of immunodepleted, trypsin-digested serum in comparison to conventional capillary LC-MS (cap-LC-MS). Both systems proved to have a repeatability of approximately 20% RSD for peak area, all sample preparation steps included, while repeatability of the LC-MS part by itself was less than 10% RSD for the chip-LC-MS system. Importantly, the chip-LC-MS system had a two times higher resolution in the LC dimension and resulted in a lower average charge state of the tryptic peptide ions generated in the ESI interface when compared to cap-LC-MS, while requiring approximately 30 times less (~5 pmol) sample. In order to characterize both systems for their capability to find discriminating peptides in trypsin-digested serum samples, five out of ten individually prepared, identical sera were spiked with horse heart cytochrome c. A comprehensive data processing methodology was applied, including 2-D smoothing, resolution reduction, peak picking, time alignment, and matching of the individual peak lists to create an aligned peak matrix amenable to statistical analysis. Statistical analysis by supervised classification and variable selection showed that both LC-MS systems could discriminate the two sample groups. However, the chip-LC-MS system allowed 55% of the overall signal to be assigned to selected peaks, against 32% for the cap-LC-MS system.
Comparison of universal approximators incorporating partial monotonicity by structure.
Minin, Alexey; Velikova, Marina; Lang, Bernhard; Daniels, Hennie
2010-05-01
Neural networks applied in control loops and safety-critical domains have to meet more requirements than just the overall best function approximation. On the one hand, a small approximation error is required; on the other hand, the smoothness and the monotonicity of selected input-output relations have to be guaranteed. Otherwise, the stability of most of the control laws is lost. In this article we compare two neural network-based approaches incorporating partial monotonicity by structure, namely the Monotonic Multi-Layer Perceptron (MONMLP) network and the Monotonic MIN-MAX (MONMM) network. We show the universal approximation capabilities of both types of network for partially monotone functions. On a number of datasets, we investigate the advantages and disadvantages of these approaches related to approximation performance, training of the model and convergence.
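The MIN-MAX construction can be made concrete in a few lines: with non-negative weights, the maximum over groups of minima of linear units is non-decreasing in every input, whatever the parameter values. The sketch below uses random parameters purely to show the structural guarantee; it is not the trained MONMM model of the paper, and partial monotonicity would simply mean constraining only the weights of the monotone inputs.

```python
import numpy as np

def monmm_forward(x, W, b):
    """Monotone MIN-MAX network: f(x) = max_k min_j (W[k, j] . x + b[k, j]).
    With W >= 0 elementwise, f is non-decreasing in every component of x."""
    planes = np.einsum("kji,i->kj", W, x) + b   # all linear units, shape (K, J)
    return planes.min(axis=1).max()             # min within groups, max across groups

rng = np.random.default_rng(0)
K, J, d = 3, 4, 2
W = np.abs(rng.standard_normal((K, J, d)))      # non-negativity enforces monotonicity
b = rng.standard_normal((K, J))

# Structural check: increasing an input coordinate never decreases the output.
x = rng.standard_normal(d)
for delta in (0.0, 0.5, 1.0, 2.0):
    print(f"x[0] + {delta:3.1f} -> f = {monmm_forward(x + np.array([delta, 0.0]), W, b):.4f}")
```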
Satellite test of the isotropy of the one-way speed of light using ExTRAS
NASA Technical Reports Server (NTRS)
Wolf, Peter
1995-01-01
A test of the second postulate of special relativity, the universality of the speed of light, using the ExTRAS (Experiment on Timing Ranging and Atmospheric Sounding) payload to be flown on board a Russian Meteor-3M satellite (launch date January 1997) is proposed. The propagation time of a light signal transmitted from one point to another without reflection would be measured directly by comparing the phases of two hydrogen maser clocks, one on board and one on the ground, using laser or microwave time transfer systems. An estimated uncertainty budget of the proposed measurements is given, resulting in an expected sensitivity of the experiment of delta c/c less than 8 x 10(exp -10), which would be an improvement by a factor of approximately 430 over previous direct measurements and by a factor of approximately 4 over the best indirect measurement. The proposed test would require no equipment additional to what is already planned and so is of inherently low cost. It could be carried out by anyone having access to a laser or microwave ground station and a hydrogen maser.
Jagannathan, Sarangapani; He, Pingan
2008-12-01
In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with tracking error as the feedback variable whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third critic NN approximates certain strategic utility function and its output is employed for tuning action NN weights in order to attain the near-optimal control action. Both the NN control methods present a well-defined controller design and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase and the NN weights can be initialized at zero or random. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting the closed-loop stability.
Results of Ponseti Brasil Program: Multicentric Study in 1621 Feet: Preliminary Results.
Nogueira, Monica P; Queiroz, Ana C D B F; Melanda, Alessandro G; Tedesco, Ana P; Brandão, Antonio L G; Beling, Claudio; Violante, Francisco H; Brandão, Gilberto F; Ferreira, Laura F A; Brambila, Leandro S; Leite, Leopoldina M; Zabeu, Jose L; Kim, Jung H; Fernandes, Kalyana E; Arima, Marcia A S; Aguilar, Maria D P Q; Farias Filho, Orlando C D; Oliveira Filho, Oscar B D A; Pinho, Solange D S; Moulin, Paulo; Volpi, Reinaldo; Fox, Mark; Greenwald, Miles F; Lyle, Brandon; Morcuende, Jose A
The Ponseti method has been shown to be the most effective treatment for congenital clubfoot. The current challenge is to establish sustainable national clubfoot treatment programs that utilize the Ponseti method and integrate it within a nation's governmental health system. The Brazilian Ponseti Program (Programa Ponseti Brasil) has increased awareness of the utility of the Ponseti method and has trained >500 Brazilian orthopaedic surgeons in it. A group of 18 of those surgeons had been able to reproduce the Ponseti clubfoot treatment, and compiled their initial results through a structured spreadsheet. The study compiled 1040 patients for a total of 1621 feet. The average follow-up time was 2.3 years, with an average correction time of approximately 3 months. Patients required an average of 6.40 casts to achieve correction. This study demonstrates that good initial correction rates are reproducible after training; from 1040 patients only 1.4% required a posteromedial release. Level IV.
Lynch, T Sean; Bedi, Asheesh; Larson, Christopher M
2017-04-01
Historically, athletic hip injuries have garnered little attention; however, these injuries account for approximately 6% of all sports injuries and their prevalence is increasing. At times, the diagnosis and management of hip injuries can be challenging and elusive for the team physician. Hip injuries are seen in high-level athletes who participate in cutting and pivoting sports that require rapid acceleration and deceleration. Described previously as the "sports hip triad," these injuries consist of adductor strains, osteitis pubis, athletic pubalgia, or core muscle injury, often with underlying range-of-motion limitations secondary to femoroacetabular impingement. These disorders can happen in isolation but frequently occur in combination. To add to the diagnostic challenge, numerous intra-articular disorders and extra-articular soft-tissue restraints about the hip can serve as pain generators, in addition to referred pain from the lumbar spine, bowel, bladder, and reproductive organs. Athletic hip conditions can be debilitating and often require a timely diagnosis to provide appropriate intervention.
Modeling Soil Moisture in Support of the Revegetation of Military Lands in Arid Regions.
NASA Astrophysics Data System (ADS)
Caldwell, T. G.; McDonald, E. V.; Young, M. H.
2003-12-01
The National Training Center (NTC), the Army's primary mechanized maneuver training facility, covers approximately 2600 km2 within the Mojave Desert in southern California, and is the subject of ongoing studies to support the sustainability of military lands in desert environments. Revegetation of these lands by the Integrated Training Areas Management (ITAM) Program requires the identification of optimum growing conditions to reestablish desert vegetation from seed and seedling, especially with regard to the timing and abundance of plant-available water. Water content, soil water potential, and soil temperature were continuously monitored and used to calibrate the Simultaneous Heat And Water (SHAW) model at 3 re-seeded sites. Modeled irrigation scenarios were used to further evaluate the most effective volume, frequency, and timing of irrigation required to maximize revegetation success and minimize water use. Surface treatments including straw mulch, gravel mulch, soil tackifier and plastic sheet
Practical low-cost stereo head-mounted display
NASA Astrophysics Data System (ADS)
Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.
1991-08-01
A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. Monochrome displays provide 720 by 280 pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation, and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opdyke, B.N.; Walker, J.C.G.
Differences in the rate of coral reef carbonate deposition from the Pleistocene to the Holocene may account for the Quaternary variation of atmospheric CO[sub 2]. Volumes of carbonate associated with Holocene reefs require an average deposition rate of 2.0 [times] 10[sup 13] mol/yr for the past 5 ka. In light of combined riverine, mid-ocean ridge, and ground-water fluxes of calcium to the oceans of 2.3 [times] 10[sup 13] mol/yr, the current flux of calcium carbonate to pelagic sediments must be far below the Pleistocene average of 1.2 [times] 10[sup 13] mol/yr. The authors suggest that sea-level change shifts the locus of carbonate deposition from the deep sea to the shelves as the normal glacial-interglacial pattern of deposition of Quaternary global carbonates. To assess the impact of these changes on atmospheric CO[sub 2], a simple numerical simulation of the global carbon cycle was developed. Atmospheric CO[sub 2] as well as calcite saturation depth and sediment responses to these carbonate deposition changes are examined. Atmospheric CO[sub 2] changes close to those observed in the Vostok ice core, [approximately] 80 ppm CO[sub 2], for the Quaternary are observed as well as the approximate depth changes in percent carbonate of sediments measured in the Pacific Ocean over the same time interval.
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-square-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required through a norm form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. In this scheme, to avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only its first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, "explosion of terms," brought by the recursive virtual controller design is overcome. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
NASA Technical Reports Server (NTRS)
Warming, Robert F.; Beam, Richard M.
1986-01-01
A hyperbolic initial-boundary-value problem can be approximated by a system of ordinary differential equations (ODEs) by replacing the spatial derivatives by finite-difference approximations. The resulting system of ODEs is called a semidiscrete approximation. A complication is the fact that more boundary conditions are required for the spatially discrete approximation than are specified for the partial differential equation. Consequently, additional numerical boundary conditions are required, and improper treatment of these additional conditions can lead to instability. For a linear initial-boundary-value problem (IBVP) with homogeneous analytical boundary conditions, the semidiscrete approximation results in a system of ODEs of the form du/dt = Au whose solution can be written as u(t) = exp(At)u(0). Lax-Richtmyer stability requires that the matrix norm of exp(At) be uniformly bounded for 0 less than or equal to t less than or equal to T independent of the spatial mesh size. Although the classical Lax-Richtmyer stability definition involves a conventional vector norm, there is no known algebraic test for the uniform boundedness of the matrix norm of exp(At) for hyperbolic IBVPs. An alternative but more complicated stability definition is used in the theory developed by Gustafsson, Kreiss, and Sundstrom (GKS). The two methods are compared.
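The Lax-Richtmyer condition above can be probed numerically: build the semidiscrete operator A for a model hyperbolic problem with a particular boundary closure and watch whether the matrix norm of exp(At) stays bounded as the mesh is refined. The sketch below uses first-order upwind differencing of u_t = -u_x with a homogeneous inflow condition as an assumed, well-behaved example; an improper numerical boundary treatment would instead show the norm growing with the number of grid points.

```python
import numpy as np
from scipy.linalg import expm

def semidiscrete_operator(n):
    """First-order upwind discretization of u_t = -u_x on (0, 1] with the
    inflow condition u(0, t) = 0 folded into the matrix."""
    h = 1.0 / n
    A = np.zeros((n, n))
    for j in range(n):
        A[j, j] = -1.0 / h            # -(u_j - u_{j-1}) / h
        if j > 0:
            A[j, j - 1] = 1.0 / h     # at j = 0 the neighbor is the zero inflow value
    return A

# Lax-Richtmyer-type check: the norm of exp(At) should stay bounded (here <= 1)
# for 0 <= t <= T, uniformly as the mesh is refined.
T = 1.0
for n in (20, 40, 80, 160):
    A = semidiscrete_operator(n)
    norms = [np.linalg.norm(expm(A * t), 2) for t in np.linspace(0.0, T, 6)]
    print(f"n = {n:4d}   max ||exp(At)|| over [0, T] = {max(norms):.4f}")
```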
A Methodology for Writing High Quality Requirement Specifications and for Evaluating Existing Ones
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hammer, Theodore
1999-01-01
Requirements development and management have always been critical in the implementation of software systems; engineers are unable to build what analysts cannot define. It is generally accepted that the earlier in the life cycle potential risks are identified, the easier it is to eliminate or manage the conditions that introduce that risk. Problems that are not found until testing are approximately 14 times more costly to fix than if the problem was found in the requirements phase. The requirements specification, as the first tangible representation of the capability to be produced, establishes the basis for all of the project's engineering, management, and assurance functions. If the quality of the requirements specification is poor, it can give rise to risks in all areas of the project. Recently, automated tools have become available to support requirements management. The use of these tools not only provides support in the definition and tracing of requirements, but it also opens the door to effective use of metrics in characterizing and assessing the quality of the requirement specifications.
Alkaline thermal sludge hydrolysis.
Neyens, E; Baeyens, J; Creemers, C
2003-02-28
The waste activated sludge (WAS) treatment of wastewater produces excess sludge which needs further treatment prior to disposal or incineration. A reduction in the amount of excess sludge produced and the increased dewaterability of the sludge are, therefore, the subject of renewed attention and research. A lot of research covers the nature of the sludge solids and associated water. An improved dewaterability requires the disruption of the sludge cell structure. Previous investigations are reviewed in the paper. Thermal hydrolysis is recognized as having the best potential to meet the objectives, and acid thermal hydrolysis is most frequently used, despite its serious drawbacks (corrosion, required post-neutralization, solubilization of heavy metals and phosphates, etc.). Alkaline thermal hydrolysis has been studied to a lesser extent, and is the subject of the detailed laboratory-scale research reported in this paper. After assessing the effect of monovalent/divalent cations (respectively, K(+)/Na(+) and Ca(2+)/Mg(2+)) on the sludge dewaterability, only the use of Ca(2+) appears to offer the best solution. The lesser effects of K(+), Na(+) and Mg(2+) confirm previous experimental findings. As a result of the experimental investigations, it can be concluded that alkaline thermal hydrolysis using Ca(OH)(2) is efficient in reducing the residual sludge amounts and in improving the dewaterability. The objectives are fully met at a temperature of 100 degrees C, at a pH of approximately 10, and for a 60-min reaction time, where moreover all pathogens are killed. Under these optimum conditions, the rate of mechanical dewatering increases (the capillary suction time (CST) value is decreased from approximately 34 s for the initial untreated sample to approximately 22 s for the hydrolyzed sludge sample) and the amount of DS to be dewatered is reduced to approximately 60% of the initial untreated amount. The DS-content of the dewatered cake will be increased from 28% (untreated) to 46%. Finally, the mass and energy balances of a wastewater treatment plant with/without advanced sludge treatment (AST) are compared. The data clearly illustrate the benefits of using an alkaline AST-step in the system.
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
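For context, one classical way to obtain constant time and memory per sample in online piecewise linear approximation is the "fan" (swing filter) idea: keep only the current segment origin and the steepest and shallowest admissible slopes, and emit a segment when the fan becomes empty. The sketch below is that generic technique with an assumed error tolerance; it is not the specific algorithm proposed in the paper.

```python
def fan_pla(samples, eps):
    """Online piecewise linear approximation with max error eps per segment.
    Keeps O(1) state: the segment origin plus lower/upper admissible slopes."""
    segments = []                              # (t_start, y_start, t_end, y_end)
    it = iter(enumerate(samples))
    t0, y0 = next(it)
    lo, hi = float("-inf"), float("inf")       # admissible slope fan
    t_prev = t0
    for t, y in it:
        dt = t - t0
        new_lo = max(lo, (y - eps - y0) / dt)
        new_hi = min(hi, (y + eps - y0) / dt)
        if new_lo > new_hi:                    # fan empty: close current segment
            slope = 0.5 * (lo + hi)
            segments.append((t0, y0, t_prev, y0 + slope * (t_prev - t0)))
            t0, y0 = t, y                      # restart the fan at the current sample
            lo, hi = float("-inf"), float("inf")
        else:
            lo, hi = new_lo, new_hi
        t_prev = t
    slope = 0.5 * (lo + hi) if lo > float("-inf") else 0.0
    segments.append((t0, y0, t_prev, y0 + slope * (t_prev - t0)))
    return segments

# Toy signal: a ramp followed by a plateau, approximated within +/- 0.1.
signal = [0.1 * i for i in range(30)] + [5.0] * 30
print(len(fan_pla(signal, eps=0.1)), "segments")
```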
Prah, Douglas; Ahunbay, Ergun; Li, X. Allen
2016-01-01
“Burst‐mode” modulated arc therapy (hereafter referred to as “mARC”) is a form of volumetric‐modulated arc therapy characterized by variable gantry rotation speed, static MLCs while the radiation beam is on, and MLC repositioning while the beam is off. We present our clinical experience with the planning techniques and plan quality assurance measurements of mARC delivery. Clinical mARC plans for five representative cases (prostate, low‐dose‐rate brain, brain with partial‐arc vertex fields, pancreas, and liver SBRT) were generated using a Monte Carlo–based treatment planning system. A conventional‐dose‐rate flat 6 MV and a high‐dose‐rate non‐flat 7 MV beam are available for planning and delivery. mARC plans for intact‐prostate cases can typically be created using one 360° arc, and treatment times per fraction seldom exceed 6 min using the flat beam; using the nonflat beam results in slightly higher MU per fraction, but also in delivery times less than 4 min and with reduced mean dose to distal organs at risk. mARC also has utility in low‐dose‐rate brain irradiation; mARC fields can be designed which deliver a uniform 20 cGy dose to the PTV in approximately 3‐minute intervals, making it a viable alternative to conventional 3D CRT. For brain cases using noncoplanar arcs, delivery time is approximately six min using the nonflat beam. For pancreas cases using the nonflat beam, two overlapping 360° arcs are required, and delivery times are approximately 10 min. For liver SBRT, the time to deliver 800 cGy per fraction is at least 12 min. Plan QA measurements indicate that the mARC delivery is consistent with the plan calculation for all cases. mARC has been incorporated into routine practice within our clinic; currently, on average approximately 15 patients per day are treated using mARC; and with the exception of LDR brain cases, all are treated using the nonflat beam.
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and an angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to those of an explicit MacCormack PNS code and existing implicit PNS solvers.
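The flavor of flux-difference splitting can be seen in its simplest setting, the linear advection equation u_t + a u_x = 0, where the flux difference at each interface is split into positive and negative wave-speed parts and only the upwind part updates a cell. This scalar sketch (with an assumed wave speed, grid, and periodic boundaries) illustrates only the FDS building block, not the 3-D PNS algorithm with Vigneron splitting.

```python
import numpy as np

# First-order flux-difference-splitting (upwind) scheme for u_t + a*u_x = 0.
# The interface flux difference a*(u[i+1] - u[i]) is split into a^+ and a^-
# parts; each cell is updated only by the contributions travelling into it.
a, nx, cfl = 1.0, 200, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / abs(a)
u = np.exp(-200.0 * (x - 0.3) ** 2)             # initial Gaussian pulse

a_plus, a_minus = max(a, 0.0), min(a, 0.0)
for _ in range(int(round(0.4 / dt))):           # advect to t = 0.4 (periodic domain)
    du = np.roll(u, -1) - u                     # u[i+1] - u[i]
    u = u - dt / dx * (a_minus * du + a_plus * np.roll(du, 1))

print("pulse peak moved from x = 0.30 to x =", round(float(x[np.argmax(u)]), 3))
```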
NASA Technical Reports Server (NTRS)
Borowski, Stanley K.; McCurdy, David R.; Packard, Thomas W.
2009-01-01
This paper summarizes Phase I and II analysis results from NASA's recent Mars DRA 5.0 study which re-examined mission, payload and transportation system requirements for a human Mars landing mission in the post-2030 timeframe. Nuclear thermal rocket (NTR) propulsion was again identified as the preferred in-space transportation system over chemical/aerobrake because of its higher specific impulse (I(sub sp)) capability, increased tolerance to payload mass growth and architecture changes, and lower total initial mass in low Earth orbit (IMLEO), which is important for reducing the number of Ares-V heavy lift launches and overall mission cost. DRA 5.0 features a long surface stay (approximately 500 days) split mission using separate cargo and crewed Mars transfer vehicles (MTVs). All vehicles utilize a common core propulsion stage with three 25 klbf composite fuel NERVA-derived NTR engines (T(sub ex) approximately 2650 - 2700 K, p(sub ch) approximately 1000 psia, epsilon approximately 300:1, I(sub sp) approximately 900 - 910 s, engine thrust-to-weight ratio approximately 3.43) to perform all primary mission maneuvers. Two cargo flights, utilizing 1-way minimum energy trajectories, pre-deploy a cargo lander to the surface and a habitat lander into a 24-hour elliptical Mars parking orbit where it remains until the arrival of the crewed MTV during the next mission opportunity (approximately 26 months later). The cargo payload elements aerocapture (AC) into Mars orbit and are enclosed within a large triconic-shaped aeroshell which functions as a payload shroud during launch, then as an aerobrake and thermal protection system during Mars orbit capture and subsequent entry, descent and landing (EDL) on Mars. The all-propulsive crewed MTV is a 0-gE vehicle design that utilizes a fast conjunction trajectory that allows approximately 6-7 month 1-way transit times to and from Mars. Four 12.5 kW(sub e) per 125 square meter rectangular photovoltaic arrays provide the crewed MTV with approximately 50 kW(sub e) of electrical power in Mars orbit for crew life support and spacecraft subsystem needs. Vehicle assembly involves autonomous Earth orbit rendezvous and docking between the propulsion stages, in-line propellant tanks and payload elements. Nine Ares-V launches -- five for the two cargo MTVs and four for the crewed MTV -- deliver the key components for the three MTVs. Details on mission, payload, engine and vehicle characteristics and requirements are presented and the results of key trade studies are discussed.
Garrido, Terhilda; Kumar, Sudheen; Lekas, John; Lindberg, Mark; Kadiyala, Dhanyaja; Whippy, Alan; Crawford, Barbara; Weissberg, Jed
2014-01-01
Using electronic health records (EHR) to automate publicly reported quality measures is receiving increasing attention and is one of the promises of EHR implementation. Kaiser Permanente has fully or partly automated six of the 13 Joint Commission measure sets. We describe our experience with automation and the resulting time savings: a reduction by approximately 50% of abstractor time required for one measure set alone (the surgical care improvement project). However, our experience illustrates the gap between the current and desired states of automated public quality reporting, which has important implications for measure developers, accrediting entities, EHR vendors, public/private payers, and government.
High-order cyclo-difference techniques: An alternative to finite differences
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Otto, John C.
1993-01-01
The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.
Mission activities planning for a Hermes mission by means of AI-technology
NASA Technical Reports Server (NTRS)
Pape, U.; Hajen, G.; Schielow, N.; Mitschdoerfer, P.; Allard, F.
1993-01-01
Mission Activities Planning is a complex task to be performed by mission control centers. AI technology can offer attractive solutions to the planning problem. This paper presents the use of a new AI-based Mission Planning System for crew activity planning. Based on a HERMES servicing mission to the COLUMBUS Man Tended Free Flyer (MTFF) with complex time and resource constraints, approximately 2000 activities with 50 different resources have been generated, processed, and planned with parametric variation of operationally sensitive parameters. The architecture, as well as the performance, of the mission planning system is discussed. An outlook is given on future planning scenarios, their requirements, and how a system like MARS can fulfill those requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormier, V.F.; Kim, W.; Mandal, B.
A method for computing seismic wavefields in a high-frequency approximation is proposed based on the integration of the kinematic ray tracing equations and a new set of differential equations for the dynamic properties of the wavefront, which the authors call the vicinity ray tracing (VRT) equations. These equations are directly obtained from the Hamiltonian in ray centered coordinates, using no paraxial approximations. This system is comparable to the standard dynamic ray tracing (DRT) system, but it is specified by fewer equations (four versus eight in 3-D) and only requires the specification of velocity and its first spatial derivative along a ray. The VRT equations describe the trajectory of a ray in ray centered coordinates of a reference ray. Quantities obtained from vicinity ray tracing can be used to determine wavefront curvature, geometric spreading, travel time to a receiver near the reference ray, and the KMAH index of the reference ray with greater numerical precision than is possible by differencing kinematically traced rays. Since second spatial derivatives of velocity are not required by the new technique, parameterization of the medium is simplified, and reflection and transmission of beams can be calculated by applying Snell's law to both vicinity and central rays. Conversion relations between VRT and DRT can be used to determine the paraxial vicinity of DRT, in which the errors of the paraxial approximations of DRT remain small. Because no paraxial approximations are made, the superposition of the Gaussian beams defined from the vicinity rays should exhibit a much slower breakdown in accuracy as the scale length of the medium given by V/Delta v approaches the beamwidth.
Signal to noise ratio of energy selective x-ray photon counting systems with pileup.
Alvarez, Robert E
2014-11-01
To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, "Near optimal energy selective x-ray imaging system performance with simple detectors," Med. Phys. 37, 822-841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramér-Rao lower bound (CRLB) for larger counts is tested. The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated.
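The nonparalyzable dead-time model assumed above has a simple closed form for the recorded count rate, m = n / (1 + n*tau), which can be checked directly against a Monte Carlo of Poisson arrivals. The sketch below is that generic check with assumed rates, dead time, and exposure, not the full energy-spectrum and SNR derivation of the paper.

```python
import numpy as np

def recorded_counts(true_rate, tau, t_total, rng):
    """Monte Carlo of a nonparalyzable counter: a photon is recorded only if it
    arrives at least tau after the previously *recorded* photon."""
    n_events = rng.poisson(true_rate * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n_events))
    recorded, last = 0, -np.inf
    for t in arrivals:
        if t - last >= tau:                  # detector is live again
            recorded += 1
            last = t
    return recorded

rng = np.random.default_rng(0)
tau, t_total = 1e-7, 0.1                     # assumed dead time [s] and exposure [s]
for true_rate in (1e6, 5e6, 1e7):            # incident photons per second
    m_mc = recorded_counts(true_rate, tau, t_total, rng) / t_total
    m_an = true_rate / (1.0 + true_rate * tau)   # nonparalyzable formula
    print(f"n = {true_rate:.0e}/s   Monte Carlo: {m_mc:.3e}/s   analytic: {m_an:.3e}/s")
```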
Signal to noise ratio of energy selective x-ray photon counting systems with pileup
Alvarez, Robert E.
2014-01-01
Purpose: To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. Methods: An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, “Near optimal energy selective x-ray imaging system performance with simple detectors,” Med. Phys. 37, 822–841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramér–Rao lower bound (CRLB) for larger counts is tested. Results: The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number of photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. Conclusions: The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated. PMID:25370642
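The idealized detector model described above (nonparalyzable response, delta-function pulses) can be illustrated with a short Monte Carlo sketch. This is not the paper's code; the rate, dead time, and duration below are hypothetical values chosen only so that the expected number of photons per dead time falls in the range discussed, and the simulated recorded rate is compared with the standard nonparalyzable dead-time expression m = n/(1 + n*tau).

```python
import numpy as np

rng = np.random.default_rng(0)

def recorded_counts(true_rate, dead_time, duration):
    """Monte Carlo of an idealized nonparalyzable counting detector.

    Photons arrive as a Poisson process; an event is recorded only if it
    falls outside the dead time started by the previously *recorded* event.
    """
    n_arrivals = rng.poisson(true_rate * duration)
    arrivals = np.sort(rng.uniform(0.0, duration, n_arrivals))
    recorded = 0
    last = -np.inf
    for t in arrivals:
        if t - last >= dead_time:   # detector is live again
            recorded += 1
            last = t
    return recorded

true_rate = 5.0e6      # photons per second (hypothetical)
dead_time = 100e-9     # seconds, so eta = 0.5 photons per dead time
duration = 0.01        # seconds

mc_rate = recorded_counts(true_rate, dead_time, duration) / duration
analytic = true_rate / (1.0 + true_rate * dead_time)   # nonparalyzable model
print(f"Monte Carlo rate: {mc_rate:.3e}/s, analytic: {analytic:.3e}/s")
```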
Raising the Gangdese Mountains in southern Tibet
NASA Astrophysics Data System (ADS)
Zhu, Di-Cheng; Wang, Qing; Cawood, Peter A.; Zhao, Zhi-Dan; Mo, Xuan-Xue
2017-01-01
The surface uplift of mountain belts is in large part controlled by the effects of crustal thickening and mantle dynamic processes (e.g., lithospheric delamination or slab breakoff). Understanding the history and driving mechanism of uplift of the southern Tibetan Plateau requires accurate knowledge of crustal thickening over time. Here we determine spatial and temporal variations in crustal thickness using whole-rock La/Yb ratios of intermediate intrusive rocks from the Gangdese arc. Our results show that the crust was likely of normal thickness (approximately 37 km) prior to approximately 70 Ma but began to thicken locally at approximately 70-60 Ma. The crust reached (58-50) ± 10 km at 55-45 Ma, extending over 400 km along the strike of the arc. This thickening was likely due to magmatic underplating as a consequence of rollback and then breakoff of the subducting Neo-Tethyan slab. The crust attained a thickness of 68 ± 12 km at approximately 20-10 Ma, as a consequence of underthrusting of India and associated thrust faulting. The Gangdese Mountains in southern Tibet broadly attained an elevation of >4000 m at approximately 55-45 Ma as a result of isostatic surface uplift driven by crustal thickening and slab breakoff and reached their present-day elevation by 20-10 Ma. Our paleoelevation estimates are consistent not only with the C-O isotope-based paleoaltimetry but also with the carbonate-clumped isotope paleothermometer, exemplifying the promise of reconstructing paleoelevation in time and space for ancient orogens through a combination of magmatic composition and Airy isostatic compensation.
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
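The construction can be illustrated on a concrete kernel: expanding the Laplace transform of an (assumed) exponential memory kernel about s = 0 gives the hierarchy of small-s approximations, where each additional power of s corresponds to one more time derivative in the governing equation and the leading term alone recovers the Markovian limit. This SymPy sketch is only an illustration of the expansion, not of the full formalism of the paper.

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# Hypothetical memory kernel and its Laplace transform, tau/(s*tau + 1).
K = sp.exp(-t / tau)
K_hat = sp.laplace_transform(K, t, s, noconds=True)

# Small-s (near-origin) expansion: the n-th level keeps terms through s**n.
# Each extra power of s corresponds to one more time derivative in the
# governing equation; the n = 0 term alone gives the Markovian limit.
for n in range(4):
    print(n, sp.series(K_hat, s, 0, n + 1).removeO())
```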
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. P. Jensen; Toto, T.
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
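As a rough sketch of the interpolation step described above (not the actual VAP code), the following assumes two hypothetical soundings already placed on a common 332-level height grid and linearly interpolates them in time onto a 1-minute grid; the temperature profiles and launch times are placeholders.

```python
import numpy as np

# Hypothetical: two consecutive soundings already interpolated onto the same
# fixed height grid (the VAP uses 332 levels up to roughly 40 km).
heights = np.linspace(0.0, 40.0e3, 332)             # m
t0, t1 = 0.0, 6 * 3600.0                            # launch times, s
temp0 = 288.0 - 6.5e-3 * heights                    # placeholder profiles, K
temp1 = 290.0 - 6.4e-3 * heights

# 1-minute output grid between the two launches.
t_out = np.arange(t0, t1 + 60.0, 60.0)

# Linear interpolation in time, independently at every height level.
w = ((t_out - t0) / (t1 - t0))[:, None]             # shape (ntime, 1)
temp_interp = (1.0 - w) * temp0[None, :] + w * temp1[None, :]

print(temp_interp.shape)    # (ntime, 332) continuous time-height field
```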
Ho, W C; Uniyal, S; Zhou, H; Morris, V L; Chan, B M C
2005-03-01
In a previous study, we showed that stimulation of chemotaxis in rat pheochromocytoma PC12 cells by nerve growth factor (NGF) and epidermal growth factor (EGF) requires activation of the RAS-ERK signaling pathway. In this study, we compared the threshold levels of ERK activation required for EGF- and NGF-stimulated chemotaxis in PC12 cells. The threshold ERK activity required for NGF to stimulate chemotaxis was approximately 30% lower than that for EGF. PD98059 treatment inhibited EGF stimulation of growth and chemotaxis; however, stimulation of chemotaxis required an EGF concentration approximately 10 times higher than for stimulation of PC12 cell growth. Thus, ERK-dependent cellular functions can be differentially elicited by the concentration of EGF. Also, treatment of PC12 cells with the PI3-K inhibitor LY294002 reduced ERK activation by NGF; thus, higher NGF concentrations were required to initiate chemotaxis and to achieve the same maximal chemotactic response seen in untreated PC12 cells. Therefore, the threshold NGF concentration to stimulate chemotaxis could be adjusted by the crosstalk between the ERK and PI3-K pathways, and the contributions of PI3-K and ERK to signal chemotaxis varied with the concentrations of NGF used. In comparison, LY294002 treatment had no effect on ERK activation by EGF, but the chemotactic response was reduced at all the concentrations of EGF tested, indicating that NGF and EGF differed in the utilization of ERK and PI3-K to signal chemotaxis in PC12 cells.
Sedentary behaviours among Australian adolescents.
Hardy, Louise L; Dobbins, Timothy; Booth, Michael L; Denney-Wilson, Elizabeth; Okely, Anthony D
2006-12-01
To describe the prevalence and distribution (by demographic characteristics and body mass index [BMI] category) of sedentary behaviour among Australian adolescents aged 11-15 years. Cross-sectional representative population survey of school students (n = 2,750) in New South Wales, conducted in 2004. Students' self-reported time spent during a usual week in five categories of sedentary behaviour (small screen recreation [SSR], education, cultural, social and non-active travel). Height and weight were measured. Grade 6, 8 and 10 students spent approximately 34 hours, 41 hours and 45 hours/week of their discretionary time, respectively, engaged in sedentary behaviour. Urban students and students from Asian-speaking backgrounds spent significantly more time sedentary than students from rural areas or other cultural backgrounds. SSR accounted for 60% and 54% of sedentary behaviour among primary and high school students, respectively. Overweight and obese students spent more time in SSR than healthy weight students. Out-of-school hours educational activities accounted for approximately 20% of sedentary behaviour and increased with age. Girls spent twice the time in social activities compared with boys. Time spent in cultural activities declined with age. Sedentary behaviours among young people differ according to sex, age and cultural background. At least half of all time spent in sedentary behaviours was spent engaged in SSR. BMI was significantly associated with sedentary behaviour among some children, but not consistently across age groups. A clear understanding of young people's patterns of sedentary behaviour is required to develop effective and sustainable intervention programs to promote healthy living.
Strategic Methodologies in Public Health Cost Analyses.
Whittington, Melanie; Atherly, Adam; VanRaemdonck, Lisa; Lampe, Sarah
The National Research Agenda for Public Health Services and Systems Research states the need for research to determine the cost of delivering public health services in order to assist the public health system in communicating financial needs to decision makers, partners, and health reform leaders. The objective of this analysis is to compare 2 cost estimation methodologies, public health manager estimates of employee time spent and activity logs completed by public health workers, to understand to what degree manager surveys could be used in lieu of more time-consuming and burdensome activity logs. Employees recorded their time spent on communicable disease surveillance for a 2-week period using an activity log. Managers then estimated time spent by each employee on a manager survey. Robust and ordinary least squares regression was used to measure the agreement between the time estimated by the manager and the time recorded by the employee. The 2 outcomes for this study included time recorded by the employee on the activity log and time estimated by the manager on the manager survey. This study was conducted in local health departments in Colorado. Forty-one Colorado local health departments (82%) agreed to participate. Seven of the 8 models showed that managers underestimate their employees' time, especially for activities on which an employee spent little time. Manager surveys can best estimate time for time-intensive activities, such as total time spent on a core service or broad public health activity, and yet are less precise when estimating discrete activities. When Public Health Services and Systems Research researchers and health departments are conducting studies to determine the cost of public health services, there are many situations in which managers can closely approximate the time required and produce a relatively precise approximation of cost without as much time investment by practitioners.
MacDonald, Sharyn L S; Cowan, Ian A; Floyd, Richard A; Graham, Rob
2013-10-01
Accurate and transparent measurement and monitoring of radiologist workload is highly desirable for management of daily workflow in a radiology department, and for informing decisions on department staffing needs. It offers the potential for benchmarking between departments and assessing future national workforce and training requirements. We describe a technique for quantifying, with minimum subjectivity, all the work carried out by radiologists in a tertiary department. Six broad categories of clinical activities contributing to radiologist workload were identified: reporting, procedures, trainee supervision, clinical conferences and teaching, informal case discussions, and administration related to referral forms. Time required for reporting was measured using data from the radiology information system. Other activities were measured by observation and timing by observers, and based on these results and extensive consultation, the time requirements and frequency of each activity was agreed on. An activity list was created to record this information and to calculate the total clinical hours required to meet the demand for radiologist services. Diagnostic reporting accounted for approximately 35% of radiologist clinical time; procedures, 23%; trainee supervision, 15%; conferences and tutorials, 14%; informal case discussions, 10%; and referral-related administration, 3%. The derived data have been proven reliable for workload planning over the past 3 years. A transparent and robust method of measuring radiologists' workload has been developed, with subjective assessments kept to a minimum. The technique has value for daily workload and longer term planning. It could be adapted for widespread use.
ERIC Educational Resources Information Center
Moffat, Alistair; And Others
1994-01-01
Describes an approximate document ranking process that uses a compact array of in-memory, low-precision approximations for document length. Combined with another rule for reducing the memory required by partial similarity accumulators, the approximation heuristic allows the ranking of large document collections using less than one byte of memory…
The role of adsorbed water on the friction of a layer of submicron particles
Sammis, Charles G.; Lockner, David A.; Reches, Ze’ev
2011-01-01
Anomalously low values of friction observed in layers of submicron particles deformed in simple shear at high slip velocities are explained as the consequence of a one nanometer thick layer of water adsorbed on the particles. The observed transition from normal friction with an apparent coefficient near μ = 0.6 at low slip speeds to a coefficient near μ = 0.3 at higher slip speeds is attributed to competition between the time required to extrude the water layer from between neighboring particles in a force chain and the average lifetime of the chain. At low slip speeds the time required for extrusion is less than the average lifetime of a chain so the particles make contact and lock. As slip speed increases, the average lifetime of a chain decreases until it is less than the extrusion time and the particles in a force chain never come into direct contact. If the adsorbed water layer enables the otherwise rough particles to rotate, the coefficient of friction will drop to μ = 0.3, appropriate for rotating spheres. At the highest slip speeds particle temperatures rise above 100°C, the water layer vaporizes, the particles contact and lock, and the coefficient of friction rises to μ = 0.6. The observed onset of weakening at slip speeds near 0.001 m/s is consistent with the measured viscosity of a 1 nm thick layer of adsorbed water, with a minimum particle radius of approximately 20 nm, and with reasonable assumptions about the distribution of force chains guided by experimental observation. The reduction of friction and the range of velocities over which it occurs decrease with increasing normal stress, as predicted by the model. Moreover, the analysis predicts that this high-speed weakening mechanism should operate only for particles with radii smaller than approximately 1 μm. For larger particles the slip speed required for weakening is so large that frictional heating will evaporate the adsorbed water and weakening will not occur.
Guidance, Navigation, and Control Performance for the GOES-R Spacecraft
NASA Technical Reports Server (NTRS)
Chapel, Jim; Stancliffe, Devin; Bevacqua, Tim; Winkler, Stephen; Clapp, Brian; Rood, Tim; Gaylor, David; Freesland, Doug; Krimchansky, Alexander
2014-01-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites. The series represents a dramatic increase in Earth observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands. GOES-R also provides unprecedented availability, with less than 120 minutes per year of lost observation time. This paper presents the Guidance Navigation & Control (GN&C) requirements necessary to realize the ambitious pointing, knowledge, and Image Navigation and Registration (INR) objectives of GOES-R. Because the suite of instruments is sensitive to disturbances over a broad spectral range, a high fidelity simulation of the vehicle has been created with modal content over 500 Hz to assess the pointing stability requirements. Simulation results are presented showing acceleration, shock response spectra (SRS), and line of sight (LOS) responses for various disturbances from 0 Hz to 512 Hz. Simulation results demonstrate excellent performance relative to the pointing and pointing stability requirements, with LOS jitter for the isolated instrument platform of approximately 1 micro-rad. Attitude and attitude rate knowledge are provided directly to the instrument with an accuracy defined by the Integrated Rate Error (IRE) requirements. The data are used internally for motion compensation. The final piece of the INR performance is orbit knowledge, which GOES-R achieves with GPS navigation. Performance results are shown demonstrating compliance with the 50 to 75 m orbit position accuracy requirements. As presented in this paper, the GN&C performance supports the challenging mission objectives of GOES-R.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
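The multigrid-as-preconditioner versus multigrid-as-solver comparison can be sketched on a much simpler model problem than the coupled IB equations. The example below assumes the pyamg package is available and uses algebraic multigrid on a 2-D Poisson operator together with SciPy's GMRES, purely as an illustration of the pattern, not of the paper's geometric multigrid or Lagrangian-Eulerian solver.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import gmres

# Model elliptic subproblem: 2-D Poisson operator (a stand-in for the far more
# involved coupled IB system analyzed in the paper).
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.random.default_rng(1).standard_normal(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)        # classical AMG hierarchy

# (a) multigrid used directly as the solver
x_mg = ml.solve(b, tol=1e-8)

# (b) multigrid used as a preconditioner for a Krylov method (GMRES)
M = ml.aspreconditioner(cycle='V')
x_kry, info = gmres(A, b, M=M)

print(np.linalg.norm(A @ x_mg - b), np.linalg.norm(A @ x_kry - b), info)
```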
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku
2007-10-01
A sensitive method for the determination of polyamines in mammalian cells was described using electrospray ionization and a time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employed partial purification and derivatization of polyamines, but allowed measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied for the determination of reduced spermidine and spermine contents in cultured cells under the inhibition of aminopropyltransferases. In addition, a new, appropriate internal standard was proposed for the tracer experiment using (15)N-labeled polyamines.
Isotalo, Aarno E.; Wieselquist, William A.
2015-05-15
A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented and the implementation of CRAM to the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster due to not requiring similar substepping as the original one.
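For orientation, CRAM evaluates the matrix exponential acting on a nuclide vector through a partial-fraction expansion, which reduces to a handful of linear solves. The sketch below shows only that structure; alpha0, alpha, and theta are inputs that must be taken from the published CRAM coefficient sets (e.g., the order-16 set), and the polynomial-feed and adjoint extensions described above are not reproduced here.

```python
import numpy as np

def cram_expm_times_vector(A, n0, dt, alpha0, alpha, theta):
    """Approximate n(dt) = expm(A*dt) @ n0 via the CRAM partial-fraction form:

        expm(A*dt) n0 ~= alpha0*n0 + 2*Re( sum_k alpha_k (A*dt - theta_k I)^{-1} n0 )

    alpha0 (real) and alpha, theta (complex arrays, one entry per conjugate
    pole pair) are the published CRAM coefficients; they are required inputs
    because placeholder values would give meaningless results.  A is the
    burnup (decay/transmutation) matrix and n0 the nuclide vector.
    """
    At = A * dt
    ident = np.eye(A.shape[0])
    n = alpha0 * n0.astype(float)
    for a_k, t_k in zip(alpha, theta):
        n = n + 2.0 * np.real(a_k * np.linalg.solve(At - t_k * ident, n0))
    return n

# Usage sketch (coefficients deliberately not filled in):
#   n_end = cram_expm_times_vector(A, n0, dt, alpha0, alpha, theta)
# with alpha0, alpha, theta taken from the literature.
```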
Fully decoupled monolithic projection method for natural convection problems
NASA Astrophysics Data System (ADS)
Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il
2017-04-01
To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term with incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally employed to a global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
NASA Astrophysics Data System (ADS)
Noah, Joyce E.
Time correlation functions of density fluctuations of liquids at equilibrium can be used to relate the microscopic dynamics of a liquid to its macroscopic transport properties. Time correlation functions are especially useful since they can be generated in a variety of ways, from scattering experiments to computer simulation to analytic theory. The kinetic theory of fluctuations in equilibrium liquids is an analytic theory for calculating correlation functions using memory functions. In this work, we use a diagrammatic formulation of the kinetic theory to develop a series of binary collision approximations for the collisional part of the memory function. We define binary collisions as collisions between two distinct density fluctuations whose identities are fixed during the duration of a collision. R approximations are for the short time part of the memory function, and build upon the work of Ranganathan and Andersen. These approximations have purely repulsive interactions between the fluctuations. The second type of approximation, RA approximations, is for the longer time part of the memory function, where the density fluctuations now interact via repulsive and attractive forces. Although RA approximations are a natural extension of R approximations, they permit two density fluctuations to become trapped in the wells of the interaction potential, leading to long-lived oscillatory behavior, which is unphysical. Therefore we consider S approximations which describe binary particles which experience the random effect of the surroundings while interacting via repulsive or repulsive and attractive interactions. For each of these approximations for the memory function we numerically solve the kinetic equation to generate correlation functions. These results are compared to molecular dynamics results for the correlation functions. Comparing the successes and failures of the different approximations, we conclude that R approximations give more accurate intermediate and long time results while RA and S approximations do particularly well at predicting the short time behavior. Lastly, we also develop a series of non-graphically derived approximations and use an optimization procedure to determine the underlying memory function from the simulation data. These approaches provide valuable information about the memory function that will be used in the development of future kinetic theories.
Utilising shade to optimize UV exposure for vitamin D
NASA Astrophysics Data System (ADS)
Turnbull, D. J.; Parisi, A. V.
2008-01-01
Numerous studies have stated that humans need to utilise full sun radiation, at certain times of the day, to assist the body in synthesising the required levels of vitamin D3. The time needed to be spent in the full sun depends on a number of factors, for example, age, skin type, latitude, solar zenith angle. Current Australian guidelines suggest exposure to approximately 1/6 to 1/3 of a minimum erythemal dose (MED), depending on age, would be appropriate to provide adequate vitamin D3 levels. The aim of the study was to determine the exposure times to diffuse solar UV to receive exposures of 1/6 and 1/3 MED for a changing solar zenith angle in order to assess the possible role that diffuse UV (scattered radiation) may play in vitamin D3 effective UV exposures (UVD3). Diffuse and global erythemal UV measurements were conducted at five minute intervals over a twelve month period for a solar zenith angle range of 4° to 80° at a latitude of 27.6° S. For diffuse UV exposures of 1/6 and 1/3 MED, solar zenith angles smaller than 60° and 50° respectively can be utilised for exposure times of less than 10 min. Spectral measurements showed that, for a solar zenith angle of 40°, the UVA (315-400 nm) in the diffuse component of the solar UV is reduced by approximately 62% compared to the UVA in the global UV, whereas UVD3 wavelengths are only reduced by approximately 43%. At certain latitudes, diffuse UV under shade may play an important role in providing the human body with adequate levels of UVD3 (290-330 nm) radiation without experiencing the high levels of damaging UVA observed in full sun.
Passive radiation shielding considerations for the proposed space elevator
NASA Astrophysics Data System (ADS)
Jorgensen, A. M.; Patamia, S. E.; Gassend, B.
2007-02-01
The Earth's natural van Allen radiation belts present a serious hazard to space travel in general, and to travel on the space elevator in particular. The average radiation level is sufficiently high that it can cause radiation sickness, and perhaps death, for humans spending more than a brief period of time in the belts without shielding. The exact dose and the level of the related hazard depend on the type of radiation, the intensity of the radiation, the length of exposure, and on any shielding introduced. For the space elevator the radiation concern is particularly critical since it passes through the most intense regions of the radiation belts. The only humans who have ever traveled through the radiation belts have been the Apollo astronauts. They received radiation doses up to approximately 1 rem over a time interval less than an hour. A vehicle climbing the space elevator travels approximately 200 times slower than the moon rockets did, which would result in an extremely high dose up to approximately 200 rem under similar conditions, in a timespan of a few days. Technological systems on the space elevator, which spend prolonged periods of time in the radiation belts, may also be affected by the high radiation levels. In this paper we will give an overview of the radiation belts in terms relevant to space elevator studies. We will then compute the expected radiation doses, and evaluate the required level of shielding. We concentrate on passive shielding using aluminum, but also look briefly at active shielding using magnetic fields. We also look at the effect of moving the space elevator anchor point and increasing the speed of the climber. Each of these mitigation mechanisms will result in a performance decrease, cost increase, and technical complications for the space elevator.
Forbes, Thomas P.; Staymates, Matthew
2017-01-01
Venturi-assisted ENTrainment and Ionization (VENTI) was developed, demonstrating efficient entrainment, collection, and transport of remotely sampled vapors, aerosols, and dust particulate for real-time mass spectrometry (MS) detection. Integrating the Venturi and Coandă effects at multiple locations generated flow and analyte transport from non-proximate locations and more importantly enhanced the aerodynamic reach at the point of collection. Transport through remote sampling probes up to 2.5 m in length was achieved with residence times on the order of 10^-2 s to 10^-1 s and Reynolds numbers on the order of 10^3 to 10^4. The Venturi-assisted entrainment successfully enhanced vapor collection and detection by greater than an order of magnitude at 20 cm stand-off (limit of simple suction). This enhancement is imperative, as simple suction restricts sampling to the immediate vicinity, requiring close proximity to the vapor source. In addition, the overall aerodynamic reach distance was increased by approximately 3-fold over simple suction under the investigated conditions. Enhanced aerodynamic reach was corroborated and observed with laser-light sheet flow visualization and schlieren imaging. Coupled with atmospheric pressure chemical ionization (APCI), the detection of a range of volatile chemical vapors; explosive vapors; explosive, narcotic, and mustard gas surrogate (methyl salicylate) aerosols; and explosive dust particulate was demonstrated. Continuous real-time Venturi-assisted monitoring of a large room (approximately 90 m^2 area, 570 m^3 volume) was demonstrated for a 60-minute period without the remote sampling probe, exhibiting detection of chemical vapors and methyl salicylate at approximately 3 m stand-off distances within 2 minutes of exposure. PMID:28107830
Forbes, Thomas P; Staymates, Matthew
2017-03-08
Venturi-assisted ENTrainment and Ionization (VENTI) was developed, demonstrating efficient entrainment, collection, and transport of remotely sampled vapors, aerosols, and dust particulate for real-time mass spectrometry (MS) detection. Integrating the Venturi and Coandă effects at multiple locations generated flow and analyte transport from non-proximate locations and more importantly enhanced the aerodynamic reach at the point of collection. Transport through remote sampling probes up to 2.5 m in length was achieved with residence times on the order of 10^-2 s to 10^-1 s and Reynolds numbers on the order of 10^3 to 10^4. The Venturi-assisted entrainment successfully enhanced vapor collection and detection by greater than an order of magnitude at 20 cm stand-off (limit of simple suction). This enhancement is imperative, as simple suction restricts sampling to the immediate vicinity, requiring close proximity to the vapor source. In addition, the overall aerodynamic reach distance was increased by approximately 3-fold over simple suction under the investigated conditions. Enhanced aerodynamic reach was corroborated and observed with laser-light sheet flow visualization and schlieren imaging. Coupled with atmospheric pressure chemical ionization (APCI), the detection of a range of volatile chemical vapors; explosive vapors; explosive, narcotic, and mustard gas surrogate (methyl salicylate) aerosols; and explosive dust particulate was demonstrated. Continuous real-time Venturi-assisted monitoring of a large room (approximately 90 m^2 area, 570 m^3 volume) was demonstrated for a 60-min period without the remote sampling probe, exhibiting detection of chemical vapors and methyl salicylate at approximately 3 m stand-off distances within 2 min of exposure.
Abbasi, U M; Chand, F; Bhanger, M I; Memon, S A
1986-02-01
A simple and rapid method is described for the direct thermometric determination of milligram amounts of methyl dopa, propranolol hydrochloride, 1-phenyl-3-methylpyrazolone (MPP) and 2,3-dimethyl-1-phenylpyrazol-5-one (phenazone) in the presence of excipients. The compounds are reacted with N-bromosuccinimide and the heat of reaction is used to determine the end-point of the titration. The time required is approximately 2 min, and the accuracy is analytically acceptable.
Parallel algorithm for computation of second-order sequential best rotations
NASA Astrophysics Data System (ADS)
Redif, Soydan; Kasap, Server
2013-12-01
Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.
Investigation of geomagnetic field forecasting and fluid dynamics of the core
NASA Technical Reports Server (NTRS)
Benton, E. R. (Principal Investigator)
1981-01-01
The magnetic determination of the depth of the core-mantle boundary using MAGSAT data is discussed. Refinements to the approach of using the pole-strength of Earth to evaluate the radius of the Earth's core-mantle boundary are reported. The downward extrapolation through the electrically conducting mantle was reviewed. Estimates of an upper bound for the time required for Earth's liquid core to overturn completely are presented. High order analytic approximations to the unsigned magnetic flux crossing the Earth's surface are also presented.
Budinich, M
1996-02-15
Unsupervised learning applied to an unstructured neural network can give approximate solutions to the traveling salesman problem. For 50 cities in the plane, this algorithm performs like the elastic net of Durbin and Willshaw (1987), and its performance improves as the number of cities increases, becoming better than simulated annealing for problems with more than 500 cities. In all the tests this algorithm requires a fraction of the time taken by simulated annealing.
NASA Technical Reports Server (NTRS)
Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank
1992-01-01
A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.
Statistical Attitude Determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2010-01-01
All spacecraft require attitude determination at some level of accuracy. This can be a very coarse requirement of tens of degrees, in order to point solar arrays at the sun, or a very fine requirement in the milliarcsecond range, as required by the Hubble Space Telescope. A toolbox of attitude determination methods, applicable across this wide range, has been developed over the years. There have been many advances in the thirty years since the publication of Reference, but the fundamentals remain the same. One significant change is that onboard attitude determination has largely superseded ground-based attitude determination, due to the greatly increased power of onboard computers. The availability of relatively inexpensive radiation-hardened microprocessors has led to the development of "smart" sensors, with autonomous star trackers being the first spacecraft application. Another new development is attitude determination using interferometry of radio signals from the Global Positioning System (GPS) constellation. This article reviews both the classic material and these newer developments, with emphasis on methods suitable for use onboard a spacecraft. We discuss both "single frame" methods that are based on measurements taken at a single point in time, and sequential methods that use information about spacecraft dynamics to combine the information from a time series of measurements.
NASA Astrophysics Data System (ADS)
Chan, Kwai H.; Lau, Rynson W.
1996-09-01
Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, which are mainly based on the corresponding-pair mapping of feature points, feature vectors or feature patches (mostly triangular or quadrilateral). However, very often warping of an image object with an arbitrary shape is required. This requires a warping technique which is based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature point or feature vector based techniques are used, approximation of the object boundary by points or vectors is required. In this case, the matching process of the corresponding pairs will be very time consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow a more flexible control of image warping. Two morphological operators are used as contour determinators. The erosion operator is used to warp image contents which are inside a user-specified contour, while the dilation operator is used to warp image contents which are outside of the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied with robust feature extractors such as deformable templates or active contour models.
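A minimal sketch of the "contour determinator" idea, assuming SciPy is available: erosion and dilation of a user-specified binary contour mask split the image into an inner region, an outer region, and a transition band. The mask and structuring element below are hypothetical, and the warp itself (displacing pixels of each region) is omitted because it is application-specific.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

# Hypothetical binary mask of the user-specified contour (a filled disc).
yy, xx = np.mgrid[0:256, 0:256]
mask = (xx - 128) ** 2 + (yy - 128) ** 2 < 60 ** 2

struct = np.ones((5, 5), dtype=bool)                  # structuring element

inner = binary_erosion(mask, structure=struct)        # content warped "inside"
outer = ~binary_dilation(mask, structure=struct)      # content warped "outside"
band = binary_dilation(mask, structure=struct) & ~inner   # transition band

# An actual warp would now displace the pixels of `inner` and `outer`
# separately and blend across `band`; that step is omitted here.
print(mask.sum(), inner.sum(), outer.sum(), band.sum())
```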
A 16X16 Discrete Cosine Transform Chip
NASA Astrophysics Data System (ADS)
Sun, M. T.; Chen, T. C.; Gottlieb, A.; Wu, L.; Liou, M. L.
1987-10-01
Among various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered to be the most effective method and has been widely used in the laboratory as well as in the marketplace. DCT is computationally intensive. For video application at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture will be presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of required memory. The resultant circuit only uses memory, shift registers, and adders. No multipliers are required. It achieves high speed performance with a very regular and efficient integrated circuit realization. The chip accepts 0-bit input and produces 14-bit DCT coefficients. 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out using a 2-μm CMOS technology with a symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0
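The row-column decomposition used by the chip can be sketched in floating point: the 2-D DCT of a 16x16 block is computed as a 1-D DCT along the rows followed by a 1-D DCT along the columns. This shows only the mathematical structure; the distributed-arithmetic, bit-serial hardware realization is not modeled.

```python
import numpy as np
from scipy.fftpack import dct

def dct2_16x16(block):
    """2-D DCT of a 16x16 block by row-column decomposition:
    a 1-D DCT along the rows followed by a 1-D DCT along the columns."""
    rows = dct(block, type=2, norm='ortho', axis=1)
    return dct(rows, type=2, norm='ortho', axis=0)

block = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
coeffs = dct2_16x16(block)

# Sanity check: the separable transform gives the same result in either order.
alt = dct(dct(block, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)
print(np.allclose(coeffs, alt))
```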
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
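For reference, the standard nudging baseline mentioned above can be sketched in a few lines on a low-dimensional chaotic system. The example below is a Lorenz-63 twin experiment in which only the x component is observed and the model is relaxed toward the observations; the gain, noise level, and forward-Euler stepping are arbitrary choices, and the time-delay generalization of Rey et al. is not implemented here.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, nsteps, k = 0.01, 4000, 20.0        # k: nudging (relaxation) gain
rng = np.random.default_rng(0)

truth = np.array([1.0, 1.0, 1.0])       # "truth" run generating the observations
model = np.array([5.0, -5.0, 20.0])     # badly initialized model state
err = np.empty(nsteps)

for n in range(nsteps):
    truth = truth + dt * lorenz(truth)                      # forward Euler, for brevity
    obs_x = truth[0] + 0.1 * rng.standard_normal()          # noisy observation of x only
    nudge = np.array([k * (obs_x - model[0]), 0.0, 0.0])    # relax the observed component
    model = model + dt * (lorenz(model) + nudge)
    err[n] = np.linalg.norm(model - truth)

# The late-time error should be far below the initial mismatch once the
# model synchronizes with the observed component.
print(err[:5].mean(), err[-100:].mean())
```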
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
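The pair of techniques can be illustrated generically: once a convolution kernel is represented as a short sum of complex exponentials, the convolution can be advanced recursively from one time step to the next using only the current state. The coefficients, time step, and drive waveform below are placeholders, not a real Earth-impedance fit.

```python
import numpy as np

# Assumed exponential fit of the kernel: K(t) = sum_i c_i * exp(-a_i * t).
# These coefficients are placeholders, not a fit to a ground impedance.
c = np.array([1.0, 0.5])
a = np.array([2.0e7, 5.0e6])              # 1/s

dt = 1.0e-9                               # time step, s
nt = 4000
t = np.arange(nt) * dt
v = np.exp(-((t - 50e-9) / 10e-9) ** 2)   # hypothetical EMP-like drive

# Brute-force convolution: O(nt^2), needs the whole history at every step.
K = (c[:, None] * np.exp(-np.outer(a, t))).sum(axis=0)
brute = np.convolve(K, v)[:nt] * dt

# Recursive convolution: O(nt), keeps one state S_i per exponential term.
# For v held constant over a step:
#   S_i(n+1) = exp(-a_i dt) S_i(n) + (c_i/a_i) (1 - exp(-a_i dt)) v_n
decay = np.exp(-a * dt)
gain = c / a * (1.0 - decay)
S = np.zeros_like(a)
rec = np.zeros(nt)
for n in range(nt):
    S = decay * S + gain * v[n]
    rec[n] = S.sum()

# The two results should agree to within the time-discretization error.
print(np.max(np.abs(rec - brute)))
```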
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high accuracy result (relative error approximately 0.2%) only for a low optical path (approximately 10^-2). As the error rapidly grows with optical thickness, a full VRTE solution is required for the practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either reflected or transmitted polarization components of radiation.
Flexible scheme to truncate the hierarchy of pure states.
Zhang, P-P; Bentley, C D B; Eisfeld, A
2018-04-07
The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
Flexible scheme to truncate the hierarchy of pure states
NASA Astrophysics Data System (ADS)
Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.
2018-04-01
The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu
2015-09-21
Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing-based studies. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
NASA Technical Reports Server (NTRS)
Ransome, Peter D.
1988-01-01
A digital satellite beacon receiver is described which provides measurement information down to a carrier/noise density ratio approximately 15 dB below that required by a conventional (phase locked loop) design. When the beacon signal fades, accuracy degrades gracefully, and is restored immediately (without hysteresis) on signal recovery, even if the signal has faded into the noise. Benefits of the digital processing approach used include the minimization of operator adjustments, stability of the phase measuring circuits with time, repeatability between units, and compatibility with equipment not specifically designed for propagation measuring. The receiver has been developed for the European Olympus satellite which has continuous wave (CW) beacons at 12.5 and 29.7 GHz, and a switched polarization beacon at 19.8 GHz approximately, but the system can be reconfigured for CW and polarization-switched beacons at other frequencies.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L(1) error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long time behavior. These are based on Laguerre and exponential series.
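As a scalar illustration of the Padé idea (not the operator construction analyzed in the paper), the sketch below builds a [4/4] Padé approximant to sqrt(1 + x) from its Taylor coefficients using scipy.interpolate.pade and compares it with the exact square root.

```python
import numpy as np
from scipy.interpolate import pade
from scipy.special import binom

# Taylor coefficients of sqrt(1 + x) about x = 0: binom(1/2, k) x^k.
order = 8
an = [binom(0.5, k) for k in range(order + 1)]

# [4/4] Pade approximant: a ratio of degree-4 polynomials matching the series.
p, q = pade(an, 4)

x = np.linspace(-0.9, 0.9, 5)
approx = p(x) / q(x)
exact = np.sqrt(1.0 + x)
print(np.max(np.abs(approx - exact)))   # far smaller than the truncated Taylor error
```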
[Rapid 3-Dimensional Models of Cerebral Aneurysm for Emergency Surgical Clipping].
Konno, Takehiko; Mashiko, Toshihiro; Oguma, Hirofumi; Kaneko, Naoki; Otani, Keisuke; Watanabe, Eiju
2016-08-01
We developed a method for manufacturing solid models of cerebral aneurysms, with a shorter printing time than that involved in conventional methods, using a compact 3D printer with acrylonitrile-butadiene-styrene (ABS) resin. We further investigated the application and utility of this printing system in emergency clipping surgery. A total of 16 patients diagnosed with acute subarachnoid hemorrhage resulting from cerebral aneurysm rupture were enrolled in the present study. Emergency clipping was performed on the day of hospitalization. Digital Imaging and Communications in Medicine (DICOM) data obtained from computed tomography angiography (CTA) scans were edited and converted to stereolithography (STL) file formats, followed by the production of 3D models of the cerebral aneurysm by using the 3D printer. The mean time from hospitalization to the commencement of surgery was 242 min, whereas the mean time required for manufacturing the 3D model was 67 min. The average cost of each 3D model was 194 Japanese Yen. The time required for manufacturing the 3D models shortened to approximately 1 hour with increasing experience of producing 3D models. Favorable impressions of the use of the 3D models in clipping were reported by almost all neurosurgeons included in this study. Although 3D printing is often considered to involve huge costs and long manufacturing times, the method used in the present study requires a shorter time and lower costs than conventional methods for manufacturing 3D cerebral aneurysm models, thus making it suitable for use in emergency clipping.
Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...
2016-05-20
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
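The reverse-accumulation idea credited to Christianson (1994) can be sketched in a few lines: rather than taping every forward iteration, the adjoint of a converged fixed point x = f(x, p) is itself obtained by a fixed-point iteration that reuses only the converged state. The code below is a minimal illustration under assumed notation, not the OpenAD-based land ice implementation.

```python
import numpy as np

def fixed_point(f, x0, p, tol=1e-12, max_iter=500):
    """Solve x = f(x, p) by simple iteration."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x, p)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def adjoint_fixed_point(dfdx_T, dfdp_T, wbar, tol=1e-12, max_iter=500):
    """Reverse accumulation (Christianson-1994 style) at a converged state:
    iterate  xbar <- wbar + (df/dx)^T xbar  to convergence, then return
    pbar = (df/dp)^T xbar, the gradient of the objective w.r.t. parameters.

    dfdx_T, dfdp_T : transposed Jacobians evaluated at the fixed point
    wbar           : adjoint of the objective with respect to the state
    """
    xbar = np.zeros_like(wbar)
    for _ in range(max_iter):
        xbar_new = wbar + dfdx_T @ xbar
        if np.linalg.norm(xbar_new - xbar) < tol:
            break
        xbar = xbar_new
    return dfdp_T @ xbar_new
```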
A Fast Code for Jupiter Atmospheric Entry
NASA Technical Reports Server (NTRS)
Tauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq; Arnold, James (Technical Monitor)
1998-01-01
A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation, and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry; the calculation required 3.5 sec of CPU time on a workstation. The computed surface recessions from ablation were compared with the flight values at six body stations. On average, the predicted recession was 12.5% higher than the measured values. The forebody's mass loss was overpredicted by 5.5%, and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was considered satisfactory, especially in view of the code's fast running time and the methods' approximations.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-09-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results; it opens the way for future studies over large spatial domains, allows easier parameterization of tree species, enables faster identification of areas with interesting simulation results, and facilitates comparisons with large-scale datasets and the results of other forest models.
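A minimal sketch of the core GAPPARD post-processing step as described above: the output of a single deterministic, undisturbed run (indexed by patch age) is weighted by the patch-age distribution implied by a constant annual disturbance probability. The function and variable names, and the geometric age distribution, are illustrative assumptions.

```python
import numpy as np

def expected_output(undisturbed, p_dist):
    """GAPPARD-style post-processing (sketch).

    undisturbed : array of length T; output of a deterministic, undisturbed
                  run as a function of patch age (years since disturbance)
    p_dist      : annual probability of a stand-replacing disturbance
    Returns the expected value of the output over the patch-age distribution.
    """
    ages = np.arange(len(undisturbed))
    # Geometric patch-age distribution: probability a patch was last disturbed
    # `a` years ago, truncated at the run length and renormalised.
    weights = p_dist * (1.0 - p_dist) ** ages
    weights /= weights.sum()
    return np.sum(weights * undisturbed)

# Example: a placeholder biomass-vs-age trajectory, 0.5% disturbance per year.
biomass_by_age = np.linspace(0.0, 250.0, 400)
print(expected_output(biomass_by_age, p_dist=0.005))
```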
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
NASA Technical Reports Server (NTRS)
Justice, Charles; Mason, Norm; Taggart, Doug
1994-01-01
As of 1 Oct. 1993, the US Coast Guard (USCG) supports and operates fifteen Loran-C chains. With the introduction of the Global Positioning System (GPS) and the termination of the Department of Defense (DOD) overseas need for Loran-C, the USCG will cease operating the three remaining overseas chains by 31 Dec. 1994. Following this date, the USCG Loran-C system will consist of twelve chains. Since 1971, management of time synchronization of the Loran-C system has been conducted under a Memorandum of Agreement between the US Naval Observatory (USNO) and the USCG. The requirement to maintain synchronization with Coordinated Universal Time (UTC) was initially specified as +/- 25 microseconds. This tolerance was rapidly lowered to +/- 2.5 microseconds in 1974. To manage this synchronization requirement, the USCG incorporated administrative practices which kept the USNO apprised of all aspects of the master timing path. This included procedures for responding to timing path failures, timing adjustments, and time steps. Conducting these aspects of time synchronization depended on message traffic between the various master stations and the USNO. To determine clock adjustments, the USCG relied upon the USNO's Series 4 and 100 updates so that the characteristics of the master clock could be plotted and controls appropriately applied. In 1987, Public Law 100-223, under the Airport and Airway Improvement Act Amendment, reduced the synchronization tolerance to approximately 100 nanoseconds for chains serving the National Airspace System (NAS). This action caused changes in the previous administrative procedures and techniques. The actions taken by the USCG to meet the requirements of this law are presented.
Spectral methods for time dependent problems
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1990-01-01
Spectral approximations are reviewed for time dependent problems. Some basic ingredients from the spectral Fourier and Chebyshev approximation theory are discussed. A brief survey is made of hyperbolic and parabolic time dependent problems, which are dealt with by both the energy method and the related Fourier analysis. These ideas are combined in the study of the accuracy, stability, and convergence of the spectral Fourier approximation to time dependent problems.
A comparison of polynomial approximations and artificial neural nets as response surfaces
NASA Technical Reports Server (NTRS)
Carpenter, William C.; Barthelemy, Jean-Francois M.
1992-01-01
Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net, and the number of designs needed to train an approximation is discussed.
A quantum relaxation-time approximation for finite fermion systems
NASA Astrophysics Data System (ADS)
Reinhard, P.-G.; Suraud, E.
2015-03-01
We propose a relaxation-time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in a relaxation-time approximation inspired by the corresponding semi-classical picture. The method involves estimating microscopic relaxation rates/times, which are presently taken from well-established semi-classical experience. The relaxation-time approximation implies evaluation of the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales efficiently compete with ionization for dissipating the available excitation energy. Technical details on the actual procedure to implement a working recipe of such a quantum relaxation approximation are given in appendices for completeness.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Bovino, S; Zhang, P; Kharchenko, V; Dalgarno, A
2011-07-14
In this paper, we report our investigation of the translational energy relaxation of fast S((1)D) atoms in a Xe thermal bath. The interaction potential of Xe-S was constructed using ab initio methods. Total and differential cross sections were then calculated. The latter have been incorporated into the construction of the kernel of the Boltzmann equation describing the energy relaxation process. The solution of the Boltzmann equation was obtained and results were compared with those reported in experiments [G. Nan, and P. L. Houston, J. Chem. Phys. 97, 7865 (1992)]. Good agreement with the measured time-dependent relative velocity of fast S((1)D) atoms was obtained except at long relaxation times. The discrepancy may be due to the error accumulation caused by the use of the hard-sphere approximation and the Monte Carlo analysis of the experimental data. Our accurate description of the energy relaxation process led to an increase in the number of collisions required to achieve equilibrium by an order of magnitude compared to the number given by the hard-sphere approximation.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
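A minimal sketch of the offline/online split for the affine case A(mu) = A0 + mu*A1: offline, snapshots at selected parameter values span W(sub N) and all parameter-independent quantities are projected once; online, only an N x N system is assembled and solved per new parameter value. The a posteriori error bounds of component (ii) are not reproduced here, and the toy problem and names are assumptions.

```python
import numpy as np

def offline(A0, A1, f, ell, mus_train):
    """Offline stage: snapshot solutions at training parameters span the
    reduced basis; parameter-independent quantities are projected once."""
    snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in mus_train]
    W, _ = np.linalg.qr(np.column_stack(snapshots))   # orthonormal basis, n x N
    return W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f, W.T @ ell

def online(A0_N, A1_N, f_N, ell_N, mu):
    """Online stage: assemble and solve only an N x N system per new
    parameter, so the cost is independent of the full dimension n."""
    u_N = np.linalg.solve(A0_N + mu * A1_N, f_N)
    return ell_N @ u_N                                # linear-functional output

# Tiny illustration on a random symmetric positive definite toy problem.
rng = np.random.default_rng(0)
n = 200
A0 = np.eye(n)
A1 = np.diag(rng.uniform(0.1, 1.0, n))
f, ell = rng.standard_normal(n), rng.standard_normal(n)
A0_N, A1_N, f_N, ell_N = offline(A0, A1, f, ell, mus_train=[0.1, 1.0, 5.0, 10.0])
print(online(A0_N, A1_N, f_N, ell_N, mu=2.5))
```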
An efficient and robust method for predicting helicopter rotor high-speed impulsive noise
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
A new formulation for the Ffowcs Williams-Hawkings quadrupole source, which is valid for a far-field in-plane observer, is presented. The far-field approximation is new and unique in that no further approximation of the quadrupole source strength is made and integrands with r(exp -2) and r(exp -3) dependence are retained. This paper focuses on the development of a retarded-time formulation in which time derivatives are analytically taken inside the integrals to avoid unnecessary computational work when the observer moves with the rotor. The new quadrupole formulation is similar to Farassat's thickness and loading formulation 1A. Quadrupole noise prediction is carried out in two parts: a preprocessing stage in which the previously computed flow field is integrated in the direction normal to the rotor disk, and a noise computation stage in which quadrupole surface integrals are evaluated for a particular observer position. Preliminary predictions for hover and forward flight agree well with experimental data. The method is robust and requires computer resources comparable to thickness and loading noise prediction.
Cole, Brian; Lei, Jonathan; DiLazaro, Tom; Schilling, Bradley; Goldberg, Lew
2009-11-01
Optical triggering via direct bleaching of a Cr:YAG saturable absorber was applied to a monolithic Nd:YAG/Cr:YAG laser crystal. The method uses a single laser diode bar to bleach a thin sheet within the saturable absorber from a direction orthogonal to the lasing axis. By timing the Q-switch to the point of steepest slope (dT/dt) of the transmission change during bleaching, the pulse-to-pulse timing jitter showed a 13.2x reduction in standard deviation, from 132 ns for free-running operation to 10 ns with optical triggering. We measured that a fluence of 60 kW/cm(2) was sufficient to enable optical triggering, where a diode appropriately sized for the length of the Cr:YAG (approximately 3 mm) would then require only approximately 150 W of optical power over a 1-2 μs duration to enable effective jitter reduction. Additionally, we measured an increase in optical-to-optical efficiency with optical triggering, where the efficiency improved from 12% to 13.5%.
Alkylation effects on the energy transfer of highly vibrationally excited naphthalene.
Hsu, Hsu Chen; Tsai, Ming-Tsang; Dyakov, Yuri A; Ni, Chi-Kung
2011-11-04
The energy transfer of highly vibrationally excited isomers of dimethylnaphthalene and 2-ethylnaphthalene in collisions with krypton was investigated using crossed molecular beam/time-of-flight mass spectrometer/time-sliced velocity map ion imaging techniques at a collision energy of approximately 300 cm(-1). Angular-resolved energy-transfer distribution functions were obtained directly from the images of inelastic scattering. The results show that alkyl-substituted naphthalenes transfer more vibrational energy to translational energy than unsubstituted naphthalene. Alkylation enhances the V→T energy transfer in the range -ΔE(d)=-100~-1500 cm(-1) by approximately a factor of 2. However, the maximum values of V→T energy transfer for alkyl-substituted naphthalenes are about 1500~2000 cm(-1), which is similar to that of naphthalene. The lack of rotation-like wide-angle motion of the aromatic ring and the absence of enhancement in very large V→T energy transfer, as in supercollisions, indicate that very large V→T energy transfer requires special vibrational motions. This transfer cannot be achieved by the low-frequency vibrational motions of alkyl groups. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mello, Cesar; Ribeiro, Diórginis; Novaes, Fábio; Poppi, Ronei J
2005-10-01
Use of classical microbiological methods to differentiate bacteria that cause gastroenteritis is cumbersome but usually very efficient. The high cost of reagents and the time required for such identifications, approximately four days, can have serious consequences, however, particularly when the patients are children, the elderly, or adults with low resistance. The search for new methods enabling rapid and reagentless differentiation of these microorganisms is, therefore, extremely relevant. In this work the main microorganisms responsible for gastroenteritis, Escherichia coli, Salmonella choleraesuis, and Shigella flexneri, were studied. For each microorganism sixty different dispersions were prepared in physiological solution. The Raman spectra of these dispersions were recorded using a diode laser operating in the near infrared region. Partial least-squares (PLS) discriminant analysis was used to differentiate among the bacteria by use of their respective Raman spectra. This approach enabled correct classification of 100% of the bacteria evaluated and unknown samples from the clinical environment, in less time (approximately 10 h), by use of a low-cost, portable Raman spectrometer, which can be easily used in intensive care units and clinical environments.
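A minimal sketch of PLS discriminant analysis of spectra, assuming scikit-learn and synthetic data in place of the measured Raman spectra: class labels are one-hot encoded, a PLS regression is fitted, and samples are classified by the largest predicted response.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Placeholder "spectra": 60 dispersions per class, 3 classes, 500 wavenumber bins.
n_per_class, n_classes, n_bins = 60, 3, 500
X = rng.standard_normal((n_per_class * n_classes, n_bins))
y = np.repeat(np.arange(n_classes), n_per_class)
X += y[:, None] * 0.2                  # inject a small class-dependent offset

Y = np.eye(n_classes)[y]               # one-hot encoding of the bacterial species
pls = PLSRegression(n_components=5).fit(X, Y)

# Classify each spectrum by the class with the largest predicted response.
pred = np.argmax(pls.predict(X), axis=1)
print("training accuracy:", np.mean(pred == y))
```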
The measured temperature and pressure of EDC37 detonation products
NASA Astrophysics Data System (ADS)
Ferguson, J. W.; Richley, J. C.; Sutton, B. D.; Price, E.; Ota, T. A.
2017-01-01
We present the experimentally determined temperature and pressure of the detonation products of EDC37, an HMX-based conventional high explosive. These measurements were performed on a series of cylinder tests. The temperature measurements were undertaken at the end of the cylinder with optical fibres observing the bare explosive through a LiF window. The temperature of the products was measured for approximately 2 µs using single colour pyrometry, multicolour pyrometry and also using time integrated optical emission spectroscopy, with the results from all three methods being broadly consistent. The peak temperature was found to be ≈ 3600 K, dropping to ≈ 2400 K at the end of the measurement window. The spectroscopy was time integrated and showed that the emission spectra can be approximated using a grey body curve between 520 and 800 nm, with no emission or absorption lines being observed. The pressure was obtained using an analytical method which requires the velocity of the expanding cylinder wall and the velocity of detonation. The pressure drops from an initial CJ value of ≈ 38 GPa to ≈ 4 GPa after 2 µs.
Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.
Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M
2011-02-01
To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost effective compared to sequential surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
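To make the flavor of the algorithm concrete, the sketch below combines stochastic gradients with a BFGS-style curvature estimate and an identity regularization that keeps the Hessian approximation well conditioned. It is a simplified illustration under assumed parameter names, not a faithful transcription of the RES update.

```python
import numpy as np

def regularized_stochastic_bfgs_step(w, B, grad_fn, batch, eta=0.05, delta=1e-2, eps=1e-8):
    """One simplified regularized stochastic BFGS-style step (illustrative).

    w       : current iterate
    B       : current Hessian approximation (symmetric positive definite)
    grad_fn : grad_fn(w, batch) -> stochastic gradient on a mini-batch
    """
    g = grad_fn(w, batch)
    # Descent direction from the regularized curvature estimate.
    d = -np.linalg.solve(B + delta * np.eye(len(w)), g)
    w_new = w + eta * d

    # Curvature pair evaluated on the same mini-batch to reduce noise.
    g_new = grad_fn(w_new, batch)
    s, y = w_new - w, g_new - g
    if y @ s > eps:                      # keep B positive definite
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return w_new, B
```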
ADHD and math - The differential effect on calculation and estimation.
Ganor-Stern, Dana; Steinhorn, Ofir
2018-05-31
Adults with ADHD were compared to controls when solving multiplication problems exactly and when estimating the results of multidigit multiplication problems relative to reference numbers. The ADHD participants were slower than controls in the exact calculation and in the estimation tasks, but not less accurate. The ADHD participants were similar to controls in showing enhanced accuracy and speed for smaller problem sizes, for trials in which the reference numbers were smaller (vs. larger) than the exact answers and for reference numbers that were far (vs. close) from the exact answer. The two groups similarly used the approximated calculation and the sense of magnitude strategies. They differed however in strategy execution, mainly of the approximated calculation strategy, which requires working memory resources. The increase in reaction time associated with using the approximated calculation strategy was larger for the ADHD compared to the control participants. Thus, ADHD seems to selectively impair calculation processes in estimation tasks that rely on working memory, but it does not hamper estimation skills that are based on sense of magnitude. The educational implications of these findings are discussed. Copyright © 2018. Published by Elsevier B.V.
Guidance of a Solar Sail Spacecraft to the Sun-Earth L(2) Point.
NASA Astrophysics Data System (ADS)
Hur, Sun Hae
The guidance of a solar sail spacecraft along a minimum-time path from an Earth orbit to a region near the Sun-Earth L_2 libration point is investigated. Possible missions to this point include a spacecraft "listening" for possible extra-terrestrial electromagnetic signals and a science payload to study the geomagnetic tail. A key advantage of the solar sail is that it requires no fuel. The control variables are the sail angles relative to the Sun-Earth line. The thrust is very small, on the order of 1 mm/s^2, and its magnitude and direction are highly coupled. Despite this limited controllability, the "free" thrust can be used for a wide variety of terminal conditions including halo orbits. If the Moon's mass is lumped with the Earth, there are quasi-equilibrium points near L_2. However, they are unstable so that some form of station keeping is required, and the sail can provide this without any fuel usage. In the two-dimensional case, regulating about a nominal orbit is shown to require less control and result in smaller amplitude error response than regulating about a quasi-equilibrium point. In the three-dimensional halo orbit case, station keeping using periodically varying gains is demonstrated. To compute the minimum-time path, the trajectory is divided into two segments: the spiral segment and the transition segment. The spiral segment is computed using a control law that maximizes the rate of energy increase at each time. The transition segment is computed as the solution of the time-optimal control problem from the endpoint of the spiral to the terminal point. It is shown that the path resulting from this approximate strategy is very close to the exact optimal path. For the guidance problem, the approximate strategy in the spiral segment already gives a nonlinear full-state feedback law. However, for large perturbations, follower guidance using an auxiliary propulsion is used for part of the spiral. In the transition segment, neighboring extremal feedback guidance using the solar sail, with feedforward control only near the terminal point, is used to correct perturbations in the initial conditions.
Su, Alvin W; McIntosh, Amy L; Schueler, Beth A; Milbrandt, Todd A; Winkler, Jennifer A; Stans, Anthony A; Larson, A Noelle
Intraoperative C-arm fluoroscopy and low-dose O-arm are both reasonable means to assist in screw placement for idiopathic scoliosis surgery. Both using pediatric low-dose O-arm settings and minimizing the number of radiographs during C-arm fluoroscopy guidance decrease patient radiation exposure and its deleterious biological effects, which may be associated with cancer risk. We hypothesized that the radiation dose for C-arm-guided fluoroscopy is no less than that of low-dose O-arm scanning for placement of pedicle screws. A multicenter matched-control cohort study of 28 patients in total was conducted. Fourteen patients who underwent O-arm-guided pedicle screw insertion for spinal fusion surgery in 1 institution were matched to another 14 patients who underwent C-arm fluoroscopy guidance in the other institution in terms of age at surgery, body weight, and number of imaged spine levels. The total effective dose was compared. A low-dose pediatric protocol was used for all O-arm scans with an effective dose of 0.65 mSv per scan. The effective dose of C-arm fluoroscopy was determined using anthropomorphic phantoms that represented the thoracic and lumbar spine in anteroposterior and lateral views, respectively. The clinical outcome and complications of all patients were documented. The mean total effective dose for the O-arm group was approximately 4 times higher than that of the C-arm group (P<0.0001). The effective dose for the C-arm patients had high variability based on fluoroscopy time and did not correlate with the number of imaged spine levels or body weight. The effective dose of 1 low-dose pediatric O-arm scan approximated 85 seconds of C-arm fluoroscopy time. All patients had satisfactory clinical outcomes without major complications that required returning to the operating room. Radiation exposure required for O-arm scans can be higher than that required for C-arm fluoroscopy, but it depends on fluoroscopy time. Inclusion of more medical centers and surgeons will better account for the variability of C-arm dose due to distinct patient characteristics, surgeons' preferences, and individual institutions' protocols. Level III, case-control study.
Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria
ERIC Educational Resources Information Center
Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.
2004-01-01
The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.
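As a concrete reference point for the calculations being approximated, the sketch below compares the exact monoprotic weak-acid treatment (charge and mass balance combine to a cubic in [H+]) with the familiar classical approximation [H+] ≈ (Ka·Ca)^(1/2); the acid and concentration are illustrative, and this is not the unified-approximations procedure itself.

```python
import numpy as np

def hydrogen_ion_exact(Ka, Ca, Kw=1.0e-14):
    """Exact [H+] for a monoprotic weak acid HA of analytical concentration Ca.
    Charge balance [H+] = [A-] + [OH-] and mass balance Ca = [HA] + [A-]
    combine to:  [H+]^3 + Ka*[H+]^2 - (Ka*Ca + Kw)*[H+] - Ka*Kw = 0
    """
    roots = np.roots([1.0, Ka, -(Ka * Ca + Kw), -Ka * Kw])
    real_roots = roots.real[np.abs(roots.imag) < 1e-9]
    return float(real_roots[real_roots > 0].max())

Ka, Ca = 1.8e-5, 0.10              # illustrative: a 0.10 M acetic-acid-like acid
h_exact = hydrogen_ion_exact(Ka, Ca)
h_classical = np.sqrt(Ka * Ca)     # classical approximation: Ca >> [H+] >> [OH-]
print(f"pH exact = {-np.log10(h_exact):.3f}, pH approx = {-np.log10(h_classical):.3f}")
```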
Fast accretion of the Earth with a late Moon-forming giant impact
Yu, Gang; Jacobsen, Stein B.
2011-01-01
Constraints on the formation history of the Earth are critical for understanding planet formation processes. 182Hf-182W chronometry of terrestrial rocks points to accretion of Earth in approximately 30 Myr after the formation of the solar system, immediately followed by the Moon-forming giant impact (MGI). Nevertheless, some N-body simulations and 182Hf-182W and 87Rb-87Sr chronology of some lunar rocks have been used to argue for a later formation of the Moon at 52 to > 100 Myr. This discrepancy is often explained by metal-silicate disequilibrium during giant impacts. Here we describe a model of the 182W isotopic evolution of the accreting Earth, including constraints from partitioning of refractory siderophile elements (Ni, Co, W, V, and Nb) during core formation, which can explain the discrepancy. Our modeling shows that the concentrations of the siderophile elements of the mantle are consistent with high-pressure metal-silicate equilibration in a terrestrial magma ocean. Our analysis shows that the timing of the MGI is inversely correlated with the time scale of the main accretion stage of the Earth. Specifically, the earliest time the MGI could have taken place, right at approximately 30 Myr, corresponds to the end of main-stage accretion at approximately 30 Myr. A late MGI (> 52 Myr) requires the main stage of the Earth’s accretion to be completed rapidly in < 10.7 ± 2.5 Myr. These are the two end-member solutions, and a continuum of solutions exists between these extremes. PMID:22006299
The Use of a Pseudo Noise Code for DIAL Lidar
NASA Technical Reports Server (NTRS)
Burris, John F.
2010-01-01
Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future space-borne CO2 remote sensing instruments, such as the CO2 Laser Sounder, for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range-resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacings such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at a much higher duty cycle (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and offline CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
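A minimal sketch of the range-resolving principle behind PN-code modulation: the received count series is the transmitted code convolved with the atmospheric return, so cross-correlating with the code recovers a range profile even at high duty cycle. The code length, range bins, and noise level below are illustrative assumptions, not instrument parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative pseudo-random 0/1 return-to-zero chip sequence.
n_chips = 1024
code = rng.integers(0, 2, n_chips).astype(float)

# Toy atmospheric return: a few scattering layers at different range bins.
impulse_response = np.zeros(200)
impulse_response[[20, 75, 130]] = [1.0, 0.6, 0.3]

received = np.convolve(code, impulse_response)[:n_chips]   # overlapping returns
received += rng.poisson(0.2, n_chips)                       # background counts

# Cross-correlating with the (mean-removed) code recovers the range profile.
recovered = np.correlate(received, code - code.mean(), mode="full")[n_chips - 1:][:200]
print("strongest range bins:", np.argsort(recovered)[-3:])
```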
Tether-Cutting Energetics of a Solar Quiet Region Prominence Eruption
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, Ronald L.
2003-01-01
We study the morphology and energetics of a slowly evolving quiet-region solar prominence eruption occurring on 1999 February 8-9 in the solar north polar crown region, using soft X-ray data from the soft X-ray telescope (SXT) on Yohkoh and Fe XV EUV 284 Angstrom data from the EUV Imaging Telescope (EIT) on the Solar and Heliospheric Observatory (SOHO). After rising at approximately 1 kilometer per second for about six hours, the prominence accelerates to a velocity of approximately 10 kilometers per second, leaving behind EUV and soft X-ray loop arcades of a weak flare in its source region. Intensity dimmings occur in the eruption region cospatially in EUV and soft X-rays, indicating that the dimmings result from a depletion of material. Over the first two hours of the prominence's rapid rise, flare-like brightenings occur beneath the rising prominence that might correspond to tether-cutting magnetic reconnection. These brightenings have heating requirements of up to approximately 10(exp 28)-10(exp 29) ergs, which is comparable to the mechanical energy required for the rising prominence over the same time period. If the ratio of mechanical energy to heating energy remains constant through the early phase of the eruption, then we infer that coronal signatures of the tether cutting may not be apparent at or shortly after the start of the fast phase in this or similar low-energy eruptions, since the plasma-heating energy levels would not exceed that of the background corona.
NASA Technical Reports Server (NTRS)
Fukazawa, Yasushi; Ohashi, Takaya; Fabian, Andrew C.; Canizares, Claude R.; Ikebe, Yasushi; Makishima, Kazuo; Mushotzky, Richard F.; Yamashita, Koujun
1994-01-01
Spatially resolved energy spectra in the energy range 0.5-10 keV have been measured for the Centaurus cluster of galaxies with the Advanced Satellite for Cosmology and Astrophysics (ASCA). Within 10 arcmin (200 kpc) of the cluster center, the helium-like iron K emission line exhibits a dramatic increase toward the center, rising from an equivalent width of approximately 500 eV to approximately 1500 eV, corresponding to an abundance change from 0.3 to 1.0 solar. The presence of strong iron L lines indicates an additional cool component (kT approximately 1 keV) within 10 arcmin of the center. The cool component requires absorption in excess of the galactic value, and this excess absorption increases towards the central region of the cluster. In the surrounding region with radius greater than 10 arcmin, the spectra are well described by a single-temperature thermal model with kT approximately 4 keV and spatially uniform abundances at about 0.3-0.4 times solar. The detection of metal-rich hot and cool gas in the cluster center implies a complex nature of the central cluster gas, which is likely to be related to the presence of the central cD galaxy NGC 4696.
Investigating the two-moment characterisation of subcellular biochemical networks.
Ullah, Mukhtar; Wolkenhauer, Olaf
2009-10-07
While ordinary differential equations (ODEs) form the conceptual framework for modelling many cellular processes, specific situations demand stochastic models to capture the influence of noise. The most common formulation of stochastic models for biochemical networks is the chemical master equation (CME). While stochastic simulations are a practical way to realise the CME, analytical approximations offer more insight into the influence of noise. Towards that end, the two-moment approximation (2MA) is a promising addition to the established analytical approaches including the chemical Langevin equation (CLE) and the related linear noise approximation (LNA). The 2MA approach directly tracks the mean and (co)variance, which are coupled in general. This coupling is not obvious in the CME and CLE and is ignored by the LNA and conventional ODE models. We extend previous derivations of the 2MA by allowing (a) non-elementary reactions and (b) relative concentrations. Often, several elementary reactions are approximated by a single step. Furthermore, practical situations often require the use of relative concentrations. We investigate the applicability of the 2MA approach to the well-established fission yeast cell cycle model. Our analytical model reproduces the clustering of cycle times observed in experiments. This is explained through multiple resettings of M-phase promoting factor (MPF), caused by the coupling between mean and (co)variance, near the G2/M transition.
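For readers unfamiliar with the 2MA, a hedged sketch of the standard coupled moment equations for a network with stoichiometric vectors v_r and propensities a_r (obtained by a second-order Taylor expansion of the CME moments) is given below; the paper's extensions to non-elementary reactions and relative concentrations are not reproduced here.

```latex
% \mu: mean, C: covariance; \nabla a_r and H_r are the gradient and Hessian of a_r at \mu.
\frac{d\mu}{dt} \approx \sum_r v_r\left[a_r(\mu)+\tfrac12\operatorname{tr}\!\big(H_r C\big)\right],
\qquad
\frac{dC}{dt} \approx J C + C J^{\mathsf T}
 + \sum_r v_r v_r^{\mathsf T}\left[a_r(\mu)+\tfrac12\operatorname{tr}\!\big(H_r C\big)\right],
\qquad
J=\sum_r v_r\,\nabla a_r(\mu)^{\mathsf T}.
```

The trace terms are what couple the mean to the (co)variance and distinguish the 2MA from the LNA and from deterministic ODE models.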
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions
NASA Technical Reports Server (NTRS)
Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.;
2008-01-01
Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.
Hadron mass and decay constant predictions of the valence approximation to lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weingarten, D.
1993-05-01
A key goal of the lattice formulation of QCD is to reproduce the masses and decay constants of the low-lying baryons and mesons. Lattice QCD mass and decay constant predictions for the real world are supposed to be obtained from masses and decay constants calculated with finite lattice spacing and finite lattice volume by taking the limits of zero spacing and infinite volume. In addition, since the algorithms used for hadron mass and decay constant calculations become progressively slower for small quark masses, results are presently found with quark masses much larger than the expected values of the up and down quark masses. Predictions for the properties of hadrons containing up and down quarks then require a further extrapolation to small quark masses. The author reports here mass and decay constant predictions combining all three extrapolations for Wilson quarks in the valence (quenched) approximation. This approximation may be viewed as replacing the momentum and frequency dependent color dielectric constant arising from quark-antiquark vacuum polarization with its zero-momentum, zero-frequency limit. These calculations used approximately one year of machine time on the GF11 parallel computer running at a sustained rate of between 5 and 7 Gflops.
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
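A minimal sketch of the LVQ1 prototype update typically used for this kind of training-set compression, with assumed data shapes: each sample pulls its nearest prototype closer when the labels match and pushes it away otherwise, so a small codebook can stand in for a large set of training vectors.

```python
import numpy as np

def lvq1_train(X, y, n_prototypes_per_class, alpha=0.05, epochs=20, seed=0):
    """Learning vector quantization (LVQ1): compress (X, y) into a small codebook."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_prototypes_per_class, replace=False)
        protos.append(X[idx])
        labels.append(np.full(n_prototypes_per_class, c))
    P = np.vstack(protos).astype(float)
    L = np.concatenate(labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((P - X[i]) ** 2, axis=1))   # nearest prototype
            step = alpha if L[j] == y[i] else -alpha          # attract or repel
            P[j] += step * (X[i] - P[j])
    return P, L   # compressed training set (codebook) and its labels
```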
Earthquake models using rate and state friction and fast multipoles
NASA Astrophysics Data System (ADS)
Tullis, T.
2003-04-01
The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws, having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on the time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s, as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. Modeling significant-sized earthquakes therefore requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the square of the number N of elements, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem, in which the influences of sufficiently remote elements are grouped together and the elements are indexed such that the computations are more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
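For reference, a standard Dieterich-Ruina form of the rate-and-state friction law described above is sketched below (the direct effect scaled by a, an evolving state variable theta with characteristic slip distance D_c, and the aging evolution law); the simulations may use a different member of this family.

```latex
\mu \;=\; \mu_0 + a\,\ln\!\frac{V}{V_0} + b\,\ln\!\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} \;=\; 1-\frac{V\theta}{D_c} \quad\text{(aging law)},
```

where V is the slip velocity and V_0 a reference velocity; at steady state theta = D_c/V, which encodes the fading memory over the slip distance D_c.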
Effect of design selection on response surface performance
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1993-01-01
Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net and the number of designs needed to train an approximation is discussed.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Cheng, Kung-Shan; Dewhirst, Mark W; Stauffer, Paul R; Das, Shiva
2010-03-01
This paper investigates overall theoretical requirements for reducing the times required for the iterative learning of a real-time image-guided adaptive control routine for multiple-source heat applicators, as used in hyperthermia and thermal ablative therapy for cancer. Methods for partial reconstruction of the physical system with and without model reduction to find solutions within a clinically practical timeframe were analyzed. A mathematical analysis based on the Fredholm alternative theorem (FAT) was used to compactly analyze the existence and uniqueness of the optimal heating vector under two fundamental situations: (1) noiseless partial reconstruction and (2) noisy partial reconstruction. These results were coupled with a method for further acceleration of the solution using virtual source (VS) model reduction. The matrix approximation theorem (MAT) was used to choose the optimal vectors spanning the reduced-order subspace to reduce the time for system reconstruction and to determine the associated approximation error. Numerical simulations of the adaptive control of hyperthermia using VS were also performed to test the predictions derived from the theoretical analysis. A thigh sarcoma patient model surrounded by a ten-antenna phased-array applicator was used for this purpose. The impacts of convective cooling from blood flow and of a sudden increase in perfusion in muscle and tumor were also simulated. By the FAT, partial system reconstruction conducted directly in the full space of the physical variables, such as the phases and magnitudes of the heat sources, cannot guarantee reconstructing the optimal system to determine the globally optimal setting of the heat sources. A remedy for this limitation is to conduct the partial reconstruction within a reduced-order subspace spanned by the first few eigenvectors of the true system matrix with the largest eigenvalues. By the MAT, this VS subspace is the optimal one when the goal is to maximize the average tumor temperature. When more than six sources are present, the number of learning steps required for a nonlinear scheme is theoretically smaller than that of a linear one; however, a finite number of iterative corrections is necessary for a single learning step of a nonlinear algorithm. Thus, the actual computational workload for a nonlinear algorithm is not necessarily less than that required by a linear algorithm. Based on the analysis presented herein, obtaining a unique globally optimal heating vector for a multiple-source applicator within the constraints of real-time clinical hyperthermia treatments and thermal ablative therapies appears attainable using partial reconstruction with a minimum-norm least-squares method with supplemental equations. One way to supplement equations is the inclusion of a method of model reduction.
To flap or not to flap: a discussion between a fish and a jellyfish
NASA Astrophysics Data System (ADS)
Martin, Nathan; Roh, Chris; Idrees, Suhail; Gharib, Morteza
2016-11-01
Fish and jellyfish are known to swim by flapping and by periodically contracting respectively, but which is the more effective propulsion mechanism? In an attempt to answer this question, an experimental comparison is made between simplified versions of these motions to determine which generates the greatest thrust for the least power. The flapping motion is approximated by pitching plates while periodic contractions are approximated by clapping plates. A machine is constructed to operate in either a flapping or a clapping mode between Reynolds numbers 1,880 and 11,260 based on the average plate tip velocity and span. The effect of the total sweep angle, total sweep time, plate flexibility, and duty cycle are investigated. The average thrust generated and power required per cycle are compared between the two modes when their total sweep angle and total sweep time are identical. In general, operating in the clapping mode required significantly more power to generate a similar thrust compared to the flapping mode. However, modifying the duty cycle for clapping caused the effectiveness to approach that of flapping with an unmodified duty cycle. These results suggest that flapping is the more effective propulsion mechanism within the range of Reynolds numbers tested. This work was supported by the Charyk Bio-inspired Laboratory at the California Institute of Technology, the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144469, and the Summer Undergraduate Research Fellowships program.
Evaluation of nonlinear structural dynamic responses using a fast-running spring-mass formulation
NASA Astrophysics Data System (ADS)
Benjamin, A. S.; Altman, B. S.; Gruda, J. D.
In today's world, accurate finite-element simulations of large nonlinear systems may require meshes composed of hundreds of thousands of degrees of freedom. Even with today's fast computers and the promise of ever-faster ones in the future, central processing unit (CPU) expenditures for such problems could be measured in days. Many contemporary engineering problems, such as those found in risk assessment, probabilistic structural analysis, and structural design optimization, cannot tolerate the cost or turnaround time for such CPU-intensive analyses, because these applications require a large number of cases to be run with different inputs. For many risk assessment applications, analysts would prefer running times to be measurable in minutes. There is therefore a need for approximation methods which can solve such problems far more efficiently than the very detailed methods and yet maintain an acceptable degree of accuracy. For this purpose, we have been working on two methods of approximation: neural networks and spring-mass models. This paper presents our work and results to date for spring-mass modeling and analysis, since we are further along in this area than in the neural network formulation. It describes the physical and numerical models contained in a code we developed called STRESS, which stands for 'Spring-mass Transient Response Evaluation for structural Systems'. The paper also presents results for a demonstration problem, and compares these with results obtained for the same problem using PRONTO3D, a state-of-the-art finite element code which was also developed at Sandia.
RighTime: A real time clock correcting program for MS-DOS-based computer systems
NASA Technical Reports Server (NTRS)
Becker, G. Thomas
1993-01-01
A computer program is described which effectively eliminates the shortcomings of the DOS system clock in PC/AT-class computers. RighTime is a small, sophisticated memory-resident program that automatically corrects both the DOS system clock and the hardware 'CMOS' real time clock (RTC) in real time. RighTime learns what corrections are required without operator interaction beyond the occasional accurate time set. Both warm (power on) and cool (power off) errors are corrected, usually yielding better than one part per million accuracy in the typical desktop computer with no additional hardware, and RighTime increases the system clock resolution from approximately 0.0549 second to 0.01 second. Program tools are also available which allow visualization of RighTime's actions, verification of its performance, and display of its history log, and which provide data for graphing of the system clock behavior. The program has found application in a wide variety of industries, including astronomy, satellite tracking, communications, broadcasting, transportation, public utilities, manufacturing, medicine, and the military.
Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards
2013-01-01
Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
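For reference, the exact l-fold cross-validation that the paper accelerates costs roughly O(n^3) per fold for kernel ridge regression. A naive NumPy baseline of that exact procedure (not the multi-level circulant approximation; kernel, regularization, and fold count below are arbitrary assumptions) might look like this:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfold_krr_mse(X, y, lam=1e-2, gamma=0.1, l=5, seed=0):
    """Exact l-fold CV for kernel ridge regression (the O(n^3)-per-fold baseline)."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    errs = []
    for fold in np.array_split(idx, l):
        tr = np.setdiff1d(idx, fold)
        K = rbf_kernel(X[tr], X[tr], gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(tr)), y[tr])  # cubic solve
        pred = rbf_kernel(X[fold], X[tr], gamma) @ alpha
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errs))

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(kfold_krr_mse(X, y))
```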
The health status of asylum seekers screened by Auckland Public Health in 1999 and 2000.
Hobbs, Mark; Moor, Catherine; Wansbrough, Tony; Calder, Lester
2002-08-23
Approximately 1500 to 1800 applications for refugee status are made to the New Zealand Immigration Service each year. Approximately one third of these asylum seekers receive health screening from Auckland Public Health. We report here key findings from this screening programme for the period 1999 to 2000. The files of patients attending the Auckland Public Health Protection Asylum Seekers Screening Clinic at Green Lane Hospital were reviewed. Data on demographics, medical examination, diagnostic testing and referrals were analysed. Nine hundred people, mainly from Middle Eastern countries, received screening. Important findings were: symptoms of psychological illness (38.4%); Mantoux skin test positivity (36.4%); active tuberculosis (0.6%); TB infection requiring chemoprophylaxis (18%) or chest X-ray monitoring (15%); gut parasite infection; carrier state for alpha and beta thalassaemia and the heterozygous states for HbS and HbE; incomplete immunisation; and the need for referral to a secondary care service (32.6%). Immigrant communities in New Zealand have special healthcare needs, as well as experiencing language barriers, cultural differences and economic difficulties. Healthcare providers should be alert to these needs. Appropriate resources are required to address these issues in a timely fashion.
Discussion-preliminary review of the safety aspects of the crossunder line, Project CG-884. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, S.S.
1960-12-19
In order to reduce both charge-discharge shutdown time and the number of manhours of radiation exposure, Project CG-884 is being completed at the B, D, DR, F and R Reactors. This consists essentially of installing a large drain line at the bottom of one rear reactor riser. This drain line passes to a control valve and then to the effluent line beyond the downcomer. This system by-passes the crossover downcomer part of the effluent system and eliminates the need for intermittent rear crossheader valving during reactor charge-discharge procedures. Two aspects of this system have been considered: its basic design requirements, and operating restrictions to ensure adequate process tube cooling. Because of the complexity of the reactor flow system, approximate solutions were used to compare different methods or degrees of operation and establish limits. Despite these approximations, there was sufficient difference in the case results to justify the specific conclusions presented in this report. This report should serve the dual purpose of providing design requirements for the crossunder and also providing the technical criteria necessary for the operating standards for the use of this new system.
NASA Astrophysics Data System (ADS)
Tavousi, A.; Mansouri-Birjandi, M. A.
2018-02-01
Implementing intensity-dependent Kerr-like nonlinearity in octagonal-shape photonic crystal ring resonators (OSPCRRs), a new class of optical analog-to-digital converters (ADCs) with low power consumption is presented. Due to its size-dependent refractive index, silicon (Si) nanocrystal is used as the nonlinear medium in the proposed ADC. The coding system of the optical ADC is based on successive-like approximations, which require only one quantization level to represent each single bit, unlike conventional ADCs that require at least two distinct levels for each bit. Each bit of the optical ADC is formed by a vertical alignment of double rings of OSPCRRs (DR-OSPCRR), and cascading m DR-OSPCRRs forms an m-bit ADC. By investigating different parameters of the DR-OSPCRR, such as the refractive indices of the rings, the lattice refractive index, and the waveguide-to-ring and ring-to-ring coupling coefficients, the ADC's threshold power is tuned. Increasing the number of bits of the ADC increases its overall power consumption. One can arrange to have any number of bits for this ADC, as long as the power levels are treated carefully. Finite-difference time-domain (FDTD) in-house codes were used to evaluate the ADC's effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALLAWAY WS; HUBER HJ
Based on an ENRAF waste surface measurement taken February 1, 2009, double-shell tank (DST) 241-AN-106 (AN-106) contained approximately 278.98 inches (793 kgal) of waste. A zip cord measurement from the tank on February 1, 2009, indicated a settled solids layer of 91.7 inches in height (280 kgal). The supernatant layer in February 2009, by difference, was approximately 187 inches deep (514 kgal). Laboratory results from AN-106 grab samples taken February 1, 2009 (see Table 2) indicated the supernatant was below the chemistry limit that applied at the time as identified in HNF-SD-WM-TSR-006, Tank Farms Technical Safety Requirements, Administrative Control (AC) 5.16, 'Corrosion Mitigation Controls.' (The limits have since been removed from the Technical Safety Requirements (TSR) and are captured in OSD-T-151-00007, Operating Specifications for the Double-Shell Storage Tanks.) Problem evaluation request WRPS-PER-2009-0218 was submitted February 9, 2009, to document the finding that the supernatant chemistry for grab samples taken from the middle and upper regions of the supernatant was noncompliant with the chemistry control limits. The lab results for the samples taken from the bottom region of the supernatant met AC 5.16 limits.
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Test techniques for determining laser ranging system performance
NASA Technical Reports Server (NTRS)
Zagwodzki, T. W.
1981-01-01
Procedures and results of an ongoing test program intended to evaluate laser ranging system performance levels in the field as well as in the laboratory are summarized. Tests show that laser ranging system design requires consideration of time biases and RMS jitters of individual system components. All simple Q switched lasers tested were found to be inadequate for 10 centimeter ranging systems. Timing discriminators operating over a typical 100:1 dynamic signal range may introduce as much as 7 to 9 centimeters of range bias. Time interval units commercially available today are capable of half centimeter performance and are adequate for all field systems currently deployed. Photomultipliers tested show typical tube time biases of one centimeter with single photoelectron transit time jitter of approximately 10 centimeters. Test results demonstrate that NASA's Mobile Laser Ranging System (MOBLAS) receiver configuration is limiting system performance below the 100 photoelectron level.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from this data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source location nor an origin time estimate, nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales from geologic core samples to global seismology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberg, S.E.
1998-05-01
When certain organometallic compounds are photoexcited in room temperature alkane solution, they are able to break or activate the C-H bonds of the solvent. Understanding this potentially practical reaction requires a detailed knowledge of the entire reaction mechanism. Because of the dynamic nature of chemical reactions, time-resolved spectroscopy is commonly employed to follow the important events that take place as reactants are converted to products. For the organometallic reactions examined here, the electronic/structural characteristics of the chemical systems along with the time scales for the key steps in the reaction make ultrafast UV/Vis and IR spectroscopy along with nanosecond Step-Scan FTIR spectroscopy the ideal techniques to use for this study. An initial study of the photophysics of (non-activating) model metal carbonyls centering on the photodissociation of M(CO)6 (M = Cr, W, Mo) was carried out in alkane solutions using ultrafast IR spectroscopy. Next, picosecond UV/vis studies of the C-H bond activation reaction of Cp*M(CO)2 (M = Rh, Ir), conducted in room temperature alkane solution, are described in an effort to investigate the origin of the low quantum yield for bond cleavage (approximately 1%). To monitor the chemistry that takes place in the reaction after CO is lost, a system with higher quantum yield is required. The reaction of Tp*Rh(CO)2 (Tp* = HB-Pz3*, Pz* = 3,5-dimethylpyrazolyl) in alkanes has a quantum yield of approximately 30%, making time resolved spectroscopic measurements possible. From ultrafast IR experiments, two subsequently formed intermediates were observed. The nature of these intermediates is discussed and the first comprehensive reaction mechanism for a photochemical C-H activating organometallic complex is presented.
About recent star formation rates inferences
NASA Astrophysics Data System (ADS)
Cerviño, M.; Bongiovanni, A.; Hidalgo, S.
2017-03-01
Star formation rate (SFR) inferences are based on the so-called constant SFR approximation, for which synthesis models are required to provide a calibration; we aim to study the key points of this approximation in order to produce accurate SFR inferences. We use the intrinsic algebra of synthesis models and explore how the SFR can be inferred from the integrated light without any assumption about the underlying star formation history (SFH). We show that the constant SFR approximation is actually a simplified expression of deeper characteristics of synthesis models: it is a characterization of the evolution of single stellar populations (SSPs), with the SSPs acting as a sensitivity curve over which different measures of the SFH can be obtained. As results, we find that (1) the best age at which to calibrate SFR indices is the age of the observed system (i.e., about 13 Gyr for z = 0 systems); (2) constant SFR and steady-state luminosities are not requirements to calibrate the SFR; (3) it is not possible to define a single SFR time scale over which the recent SFH is averaged. We suggest using typical SFR indices (ionizing flux, UV fluxes) together with non-typical ones (optical/IR fluxes) to correct the SFR for the contribution of the old component of the SFH, and we show how to use galaxy colors to quote age ranges where the recent component of the SFH is stronger or weaker than the older component. Particular values of SFR calibrations are (almost) not affected by this work, but the meaning of what is obtained by SFR inferences is. In this framework, results such as the correlation of SFR time scales with galaxy colors, or the sensitivity of different SFR indices to short- and long-scale variations in the SFH, fit naturally. In addition, the present framework provides a theoretical guideline for optimizing the available information from data/numerical experiments to improve the accuracy of SFR inferences. More information in Cerviño, Bongiovanni & Hidalgo, A&A 588, 108C (2016).
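The calibration algebra referred to above can be written schematically as the usual SSP convolution; the notation below is assumed for illustration and is not copied from the paper.

```latex
% Sketch of the calibration algebra (notation assumed here):
% \ell_{\rm SSP}(\tau) is the luminosity per unit mass of a single stellar
% population of age \tau, and \psi(t) is the star formation history.
\[
L_{\rm obs} \;=\; \int_{0}^{T_{\rm gal}} \psi\,(T_{\rm gal}-\tau)\,
\ell_{\rm SSP}(\tau)\,\mathrm{d}\tau ,
\]
% so that, under the constant-SFR approximation \psi = \mathrm{const.},
\[
{\rm SFR} \;\approx\; \frac{L_{\rm obs}}
{\int_{0}^{T_{\rm gal}} \ell_{\rm SSP}(\tau)\,\mathrm{d}\tau } .
\]
% The SSP luminosity thus acts as the sensitivity kernel over the SFH, and the
% calibration depends on the age T_{\rm gal} chosen as the integration limit,
% which is why the best calibration age is the age of the observed system.
```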
A top-down approach for approximate data anonymisation
NASA Astrophysics Data System (ADS)
Li, JianQiang; Yang, Ji-Jiang; Zhao, Yu; Liu, Bo
2013-08-01
Data sharing in today's information society poses a threat to individual privacy and organisational confidentiality. k-anonymity is a widely adopted model to prevent the owner of a record being re-identified. By generalising and/or suppressing certain portions of the released dataset, it guarantees that no records can be uniquely distinguished from at least other k-1 records. A key requirement for the k-anonymity problem is to minimise the information loss resulting from data modifications. This article proposes a top-down approach to solve this problem. It first considers each record as a vertex and the similarity between two records as the edge weight to construct a complete weighted graph. Then, an edge cutting algorithm is designed to divide the complete graph into multiple trees/components. The large components, with size bigger than 2k-1, are subsequently split to guarantee that each resulting component has a vertex number between k and 2k-1. Finally, the generalisation operation is applied to the vertices in each component (i.e. equivalence class) to make sure all the records inside have identical quasi-identifier values. We prove that the proposed approach has polynomial running time and theoretical performance guarantee O(k). The empirical experiments show that our approach results in substantial improvements over the baseline heuristic algorithms, as well as the bottom-up approach with the same approximate bound O(k). Compared to the baseline bottom-up O(log k)-approximation algorithm, when the required k is smaller than 50, the adopted top-down strategy makes our approach achieve similar performance in terms of information loss while spending much less computing time. It demonstrates that our approach would be the best choice for the k-anonymity problem when both the data utility and runtime need to be considered, especially when k is set to a value smaller than 50 and the record set is big enough for the runtime to have to be taken into account.
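The grouping-plus-generalisation idea can be illustrated with the simplified greedy sketch below. It is only a stand-in under assumed numeric quasi-identifiers, not the published edge-cutting algorithm with its O(k) guarantee.

```python
import numpy as np

def topdown_groups(records, k):
    """Greedy sketch: split records into groups of size k..2k-1 by similarity.

    records: (n, d) numeric quasi-identifier matrix. Returns a list of index
    arrays. Simplified stand-in for the paper's edge-cutting of a complete
    weighted graph; assumes n >= k.
    """
    remaining = list(range(len(records)))
    groups = []
    while remaining:
        if len(remaining) < 2 * k:            # last group takes everything left
            groups.append(np.array(remaining))
            break
        seed = remaining[0]
        d = np.linalg.norm(records[remaining] - records[seed], axis=1)
        take = [remaining[i] for i in np.argsort(d)[:k]]   # seed + k-1 neighbours
        groups.append(np.array(take))
        remaining = [r for r in remaining if r not in take]
    return groups

def generalise(records, groups):
    """Replace each quasi-identifier with its group-wide [min, max] range."""
    out = {}
    for g in groups:
        lo, hi = records[g].min(axis=0), records[g].max(axis=0)
        for i in g:
            out[int(i)] = (tuple(lo), tuple(hi))
    return out

# Toy usage: 20 records with two numeric quasi-identifiers, k = 4.
recs = np.random.default_rng(0).integers(0, 100, size=(20, 2)).astype(float)
groups = topdown_groups(recs, k=4)
print([len(g) for g in groups])               # every group size is in [4, 7]
print(generalise(recs, groups)[0])            # generalised values of record 0
```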
Ares V: Shifting the Payload Design Paradigm
NASA Technical Reports Server (NTRS)
Sumrall, Phil; Creech, Steve; Cockrell, Charles E.
2009-01-01
NASA is designing the Ares V heavy-lift cargo launch vehicle to send more crew and cargo to more places on the lunar surface than the 1960s-era Saturn V and to provide ongoing support for a permanent lunar outpost. This uncrewed cargo vehicle is designed to operate together with the Ares I crew vehicle (Figure 1). In addition to this role, however, its unmatched mass and volume capability represent a national asset for exploration, science, and commerce. The Ares V also enables or significantly enhances a large class of space missions not thought possible by scientists and engineers since the Saturn V program ended over 30 years ago. Compared to current systems, it will offer approximately five times the mass and volume to most orbits and locations. This should allow prospective mission planners to build robust payloads with margins that are three to five times the industry norm. The space inside the planned payload shroud has enough usable volume to launch the volumetric equivalent of approximately 10 Apollo Lunar Modules or approximately five equivalent Hubble Space Telescopes. This mass and volume capability to low-Earth orbit (LEO) enables a host of new scientific and observation platforms, such as telescopes, satellites, planetary and solar missions, as well as being able to provide the lift for future large in-space infrastructure missions, such as space based solar power and mining, Earth asteroid defense, propellant depots, etc. In addition, payload designers may also have the option of simplifying their designs or employing Ares V's payload as dumb mass to reduce technical and operational risk. The Ares V team is engaging the potential payload community now, two to three years before System Requirements Review (SRR), in order to better understand the additional requirements from the payload community that could be accommodated in the Ares V design in its conceptual phase. This paper will discuss the Ares V reference mission and capability, as well as its potential to perform other missions in the future.
Simulation studies of carbon nanotube field-effect transistors
NASA Astrophysics Data System (ADS)
John, David Llewellyn
Simulation studies of carbon nanotube field-effect transistors (CNFETs) are presented using models of increasing rigour and versatility that have been systematically developed. Firstly, it is demonstrated how one may compute the standard tight-binding band structure. From this foundation, a self-consistent solution for computing the equilibrium energy band diagram of devices with Schottky-barrier source and drain contacts is developed. While this does provide insight into the likely behaviour of CNFETs, a non-equilibrium model is required in order to predict the current-voltage relation. To this end, the effective-mass approximation is utilized, where a parabolic fit to the band structure is used in order to develop a Schrodinger-Poisson solver. This model is employed to predict both DC behaviour and switching times for CNFETs, and was one of the first models that captured quantum effects, such as tunneling and resonance, in these devices. In addition, this model has been used in order to validate compact models that incorporated tunneling via the WKB approximation. A modified WKB derivation is provided in order to account for the non-zero reflection of carriers above a potential energy step. In order to allow for greater flexibility in the CNFET geometries, and to lift the effective-mass approximation, a non-equilibrium Green's function method is finally developed, which uses an atomistic tight-binding Hamiltonian to model doped-contact, as opposed to Schottky-barrier-contact, devices. This approach benefits by being able to account for both inter- and intra-band tunneling, and by utilizing a quadratic matrix equation in order to improve the computation time for the required self-energy matrices. Within this technique, an expression for the local inter-atomic current is derived in order to provide more detailed information than the usual compact expression for the terminal current. With this final model, an investigation is presented into the effects of geometrical variations, contact thicknesses, and azimuthal variation in the surface potential of the nanotube.
NASA Astrophysics Data System (ADS)
Pawar, V.; Weaver, C.; Jani, S.
2011-05-01
Zirconium and particularly the Zr-2.5 wt%Nb (Zr2.5Nb) alloy are useful for engineering bearing applications because they can be oxidized in air to form a hard surface ceramic. Oxidized zirconium (OxZr), due to its abrasion-resistant ceramic surface and biocompatible substrate alloy, has been used as a bearing surface in total joint arthroplasty for several years. OxZr is characterized by a hard zirconium oxide (oxide) formed on Zr2.5Nb using a one-step thermal oxidation carried out in air. Because the oxide is only at the surface, the bulk material behaves like a metal, with high toughness. The oxide, furthermore, exhibits high adhesion to the substrate because of an oxygen-rich diffusion hardened zone (DHZ) interposed between the oxide and the substrate. In this study, we demonstrate a two-step process that forms a thicker DHZ, and thus a greater depth of hardening, than can be obtained using a one-step oxidation process. The first step is thermal oxidation in air and the second step is a heat treatment in vacuum. The second step drives oxygen from the oxide formed in the first step deeper into the substrate to form a thicker DHZ. During the process only a portion of the oxide is dissolved. This new composition (DHOxZr) has an approximately 4-6 μm oxide similar to that of OxZr. The nano-hardness of the oxide is similar but the DHZ is approximately 10 times thicker. The stoichiometry of the oxide is similar and a secondary phase rich in oxygen is present through the entire thickness. Due to the increased depth of hardening, the critical load required for the onset of oxide cracking is approximately 1.6 times more than that of the oxide of OxZr. This new composition has the potential to be used as a bearing surface in applications where a greater depth of hardening is required.
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1999-01-01
Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that an incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing. In order for large scale optimization to become routine, the benefits of parallel architectures should be exploited. Although the flow solver has been parallelized using compiler directives, the parallel efficiency is under 50 percent. Clearly, parallel versions of the codes will have an immediate impact on the ability to design realistic configurations on fine meshes, and this effort is currently underway.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-10
...: Tissue Adhesive With Adjunct Wound Closure Device Intended for the Topical Approximation of Skin... Document: Tissue Adhesive with Adjunct Wound Closure Device Intended for the Topical Approximation of Skin... intended for the topical approximation of skin may comply with the requirement of special controls for...
NASA Technical Reports Server (NTRS)
Mckhann, G.
1977-01-01
Solar array power systems for the space construction base are discussed. Nickel cadmium and nickel hydrogen batteries are equally attractive relative to regenerative fuel cell systems at 5 years life. Further evaluation of energy storage system life (low orbit conditions) is required. Shuttle and solid polymer electrolyte fuel cell technology appears adequate; large units (approximately four times shuttle) are most appropriate and should be studied for a 100 KWe SCB system. A conservative NiH2 battery DOD (18.6%) was elected due to lack of test data and offers considerable improvement potential. Multiorbit load averaging and reserve capacity requirements limit nominal DOD to 30% to 50% maximum, independent of life considerations.
Continuous control of chaos based on the stability criterion.
Yu, Hong Jie; Liu, Yan Zhu; Peng, Jian Hua
2004-06-01
A method of chaos control based on a stability criterion is proposed in the present paper. This method can stabilize chaotic systems onto a desired periodic orbit by a small, time-continuous nonlinear feedback perturbation. This method does not require linearization of the system around the stabilized orbit, and only an approximate location of the desired periodic orbit is required, which can be automatically detected in the control process. The control can be started at any moment by choosing an appropriate perturbation restriction condition. It seems that more flexibility and convenience are the main advantages of this method. Discussions of the control of the attitude motion of a spacecraft, the Rössler system, and two coupled Duffing oscillators are given as numerical examples.
Adaptive control based on retrospective cost optimization
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)
2012-01-01
A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo--deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a ``mix table,'' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in $O(\text{number of voxels} \times \log(\text{number of mixtures}))$ time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
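A hash-based sketch of mix-table construction is shown below: quantizing the volume fractions lets near-identical compositions share one table entry in expected O(number of voxels) time. This is illustrative only, with assumed data structures, and is not the sorted O(V log M) algorithm implemented in ADVANTG.

```python
def build_mix_table(voxel_fractions, tol=1e-4):
    """Build a mix table mapping each voxel to a mixture index.

    voxel_fractions: list of dicts {material_id: volume_fraction}, one per voxel.
    Compositions whose fractions agree within `tol` share one mixture entry.
    """
    key_to_index = {}      # hashed composition -> mixture index
    mix_table = []         # mixture index -> canonical composition
    voxel_to_mix = []      # voxel index -> mixture index
    for comp in voxel_fractions:
        # Quantize fractions so near-identical compositions hash identically.
        key = tuple(sorted((m, round(f / tol)) for m, f in comp.items() if f > 0))
        idx = key_to_index.get(key)
        if idx is None:
            idx = len(mix_table)
            key_to_index[key] = idx
            mix_table.append(dict(comp))
        voxel_to_mix.append(idx)
    return mix_table, voxel_to_mix

# Toy usage: three voxels, two of which reference the same mixture.
voxels = [{1: 1.0}, {1: 0.6, 2: 0.4}, {1: 0.6, 2: 0.4}]
table, index = build_mix_table(voxels)
print(len(table), index)   # 2 mixtures, voxel indices [0, 1, 1]
```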
Yang, Yu-Chiao; Wei, Ming-Chi
2018-06-30
This study compared the use of ultrasound-assisted supercritical CO2 (USC-CO2) extraction to obtain apigenin-rich extracts from Scutellaria barbata D. Don with that of conventional supercritical CO2 (SC-CO2) extraction and heat-reflux extraction (HRE), conducted in parallel. This green procedure yielded 20.1% and 31.6% more apigenin than conventional SC-CO2 extraction and HRE, respectively. Moreover, the extraction time required by the USC-CO2 procedure, which used milder conditions, was approximately 1.9 times and 2.4 times shorter than that required by conventional SC-CO2 extraction and HRE, respectively. Furthermore, the theoretical solubility of apigenin in the supercritical fluid system was obtained from the USC-CO2 dynamic extraction curves and was in good agreement with the calculated values for the three empirical density-based models. The second-order kinetics model was further applied to evaluate the kinetics of USC-CO2 extraction. The results demonstrated that the selected model allowed the evaluation of the extraction rate and extent of USC-CO2 extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
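The second-order kinetics model invoked above is commonly written as follows; the notation is assumed here, and the linearized form is what is typically fitted to dynamic extraction curves.

```latex
% A common form of the second-order extraction kinetics model (assumed here):
% C_t is the extract concentration at time t, C_s its saturation value, and
% k the second-order rate constant.
\[
\frac{\mathrm{d}C_t}{\mathrm{d}t} \;=\; k\,\bigl(C_s - C_t\bigr)^{2}
\qquad\Longrightarrow\qquad
\frac{t}{C_t} \;=\; \frac{1}{k\,C_s^{2}} \;+\; \frac{t}{C_s},
\]
% so k and C_s follow from a linear fit of t/C_t against t.
```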
An electrophysiological study of the mental rotation of polygons.
Pierret, A; Peronnet, F; Thevenet, M
1994-05-09
Reaction times and event-related potentials (ERPs) were recorded during a task requiring subjects to decide whether two sequentially presented polygons had the same shape regardless of differences in orientation. Reaction times increased approximately linearly with angular departure from upright orientation, which suggests that mental rotation was involved in the comparison process. The ERPs showed, between 665 and 1055 ms, a late posterior negativity also increasing with angular disparity from upright, which we assumed to reflect mental rotation. Two other activities were exhibited, from 265 to 665 ms, which may be related either to an evaluation of the stimulus or a predetermination of its orientation, and from 1055 to 1600 ms attributed to the decision process.
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voyles, Jimmy
Individual datastreams from instrumentation at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility fixed and mobile research observatories (sites) are collected and routed to the ARM Data Center (ADC). The Data Management Facility (DMF), a component of the ADC, executes datastream processing in near-real time. Processed data are then delivered approximately daily to the ARM Data Archive, also a component of the ADC, where they are made freely available to the research community. For each instrument, ARM calculates the ratio of the actual number of processed data records received daily at the ARM Data Archive to the expected number of data records. DOE requires national user facilities to report time-based operating data.
Wurden, Glen A.
1999-01-01
Radiation-hard, steady-state imaging bolometer. A bolometer employing infrared (IR) imaging of a segmented-matrix absorber of plasma radiation in a cooled-pinhole camera geometry is described. The bolometer design parameters are determined by modeling the temperature of the foils from which the absorbing matrix is fabricated by using a two-dimensional time-dependent solution of the heat conduction equation. The resulting design will give a steady-state bolometry capability, with approximately 100 Hz time resolution, while simultaneously providing hundreds of channels of spatial information. No wiring harnesses will be required, as the temperature-rise data will be measured via an IR camera. The resulting spatial data may be used to tomographically investigate the profile of plasmas.
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Knox, C. E.
1983-01-01
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
Threats to information security of real-time disease surveillance systems.
Henriksen, Eva; Johansen, Monika A; Baardsgaard, Anders; Bellika, Johan G
2009-01-01
This paper presents the main results from a qualitative risk assessment of information security aspects for a new real-time disease surveillance approach in general, and for the Snow surveillance system in particular. All possible security threats and acceptable solutions, and the implications these solutions had for the design of the system, were discussed. Approximately 30 threats were identified. None of these was originally assigned an unacceptably high risk level, but two received a medium risk level, of which one was concluded to be unacceptable after further investigation. Of the remaining low risk threats, some have severe consequences and thus require particular assessment. Since it is very important to identify and solve all security threats before real-time solutions can be used on a wide scale, additional investigations are needed.
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay-Rivas, E.
1977-01-01
The considered scheme makes it possible to determine an unstable steady state solution in cases in which, because of lack of symmetry, such a solution cannot be obtained analytically, and other time integration or relaxation schemes, because of instability, fail to converge. The iterative solution of a single complex equation is discussed and a nonlinear system of equations is considered. Described applications of the scheme are related to a steady state solution with shear instability, an unstable nonlinear Ekman boundary layer, and the steady state solution of a baroclinic atmosphere with asymmetric forcing. The scheme makes use of forward and backward time integrations of the original spatial differential operators and of an approximation of the adjoint operators. Only two computations of the time derivative per iteration are required.
Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints
NASA Technical Reports Server (NTRS)
Calise, A. J.; Corban, J. E.
1990-01-01
The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.
NASA Astrophysics Data System (ADS)
Kaila, M. M.; Russell, G. J.
2000-12-01
We have designed a liquid nitrogen cooled detector where a thermoelectric feedback is combined with electrothermal feedback to produce an improvement of three orders of magnitude in the response time of the detector. We have achieved this by considering a parallel resistance combination of thermoelectric and High Temperature Superconductor (HTSC) material legs of an approximate geometry 1 mm × 2 mm × 1 μm operated at 80 K. One end of this thermocouple acts as the sensitive area where the radiation is absorbed. The other end remains unexposed and stays basically at substrate temperature. It is found that micron thick films in our bolometer produce characteristics very close to those found for nanometer thick films required in semiconductor detectors and Low Temperature Superconductor (LTSC) bolometers.
Traveltime and dispersion in the Potomac River, Cumberland, Maryland, to Washington, D.C.
Taylor, Kenneth R.; James, Robert W.; Helinsky, Bernard M.
1985-01-01
A travel-time and dispersion study using rhodamine dye was conducted on the Potomac River between Cumberland, Maryland, and Washington, D.C., a distance of 189 miles. The flow during the study was at approximately the 90-percent flow-duration level. A similar study was conducted by Wilson and Forrest in 1964 at a flow duration of approximately 60 percent. The two sets of data were used to develop a generalized procedure for predicting travel-times and downstream concentrations resulting from spillage of water-soluble substances at any point along the river. The procedure will allow the user to calculate travel-time and concentration data for almost any spillage problem that occurs during periods of relatively steady flow between 50- and 95-percent flow duration. A new procedure for calculating unit peak concentration was derived. The new procedure depends on an analogy between a time-concentration curve and a scalene triangle. As a result of this analogy, the unit peak concentration can be expressed in terms of the length of the dye or contaminant cloud. The new procedure facilitates the calculation of unit peak concentration for long reaches of river. Previously, there was no way to link unit peak concentration curves for studies in which the river was divided into subreaches for study. Variable dispersive characteristics caused mainly by low-head dams precluded useful extrapolation of the unit peak-concentration attenuation curves, as has been done in previous studies. The procedure is applied to a hypothetical situation in which 20,000 pounds of contaminant is spilled at a railroad crossing at Magnolia, West Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach Point of Rocks, Maryland (110 river miles downstream), are 295, 375, and 540 hours respectively, during a period when flow is at the 80-percent flow-duration level. The peak conservative concentration would be approximately 340 micrograms per liter at Point of Rocks.
UNAERO: A package of FORTRAN subroutines for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1985-01-01
This report serves as an instruction and maintenance manual for a collection of CDC CYBER FORTRAN IV subroutines for approximating the unsteady aerodynamic forces in the time domain. The result is a set of constant-coefficient first-order differential equations that approximate the dynamics of the vehicle. Provisions are included for adjusting the number of modes used for calculating the approximations so that an accurate approximation is generated. The number of data points at different values of reduced frequency can also be varied to adjust the accuracy of the approximation over the reduced-frequency range. The denominator coefficients of the approximation may be calculated by means of a gradient method or a least-squares approximation technique. Both the approximation methods use weights on the residual error. A new set of system equations, at a different dynamic pressure, can be generated without the approximations being recalculated.
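As a hedged illustration of the kind of least-squares rational-function fit such packages perform, the sketch below fits a common Roger-type approximation with fixed lag roots to tabulated aerodynamic data. The model form, variable names, and toy data are assumptions for illustration; they do not reproduce the UNAERO formulation, which also solves for the denominator coefficients by a gradient or least-squares method.

```python
import numpy as np

def roger_fit(k_red, Q_tab, lags):
    """Least-squares fit of a Roger-type rational function approximation.

    Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Aj*(ik)/(ik + b_j),
    with the lag roots b_j (`lags`) held fixed; only the numerator
    coefficients are solved for, by stacking real and imaginary parts.
    """
    s = 1j * np.asarray(k_red)                     # ik at tabulated reduced frequencies
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    A = np.vstack([M.real, M.imag])
    rhs = np.concatenate([Q_tab.real, Q_tab.imag])
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef

# Toy data synthesized from known coefficients, then recovered by the fit.
k_red = np.linspace(0.01, 1.0, 20)
s = 1j * k_red
lags = [0.2, 0.6]
Q_tab = 1.0 + 0.5 * s - 0.1 * s**2 + 0.3 * s / (s + 0.2) + 0.2 * s / (s + 0.6)
print(roger_fit(k_red, Q_tab, lags))   # ~ [1.0, 0.5, -0.1, 0.3, 0.2]
```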
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, J; Bradshaw, B; Godette, K
Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point by point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications were recorded that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
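A rough sketch of the matching step described above (PCA of registered contours followed by a Mahalanobis-distance search over the library) is given below. The array layout, component count, and toy data are assumptions, not details taken from the abstract.

```python
import numpy as np

def pca_basis(library, n_comp=10):
    """library: (n_cases, n_points) matrix of registered contour coordinates."""
    mean = library.mean(axis=0)
    _, _, Vt = np.linalg.svd(library - mean, full_matrices=False)
    return mean, Vt[:n_comp]                   # mean shape and principal directions

def best_match(library, test, n_comp=10):
    """Return the index of the library case most similar to `test`."""
    mean, W = pca_basis(library, n_comp)
    scores = (library - mean) @ W.T            # PCA scores of the library cases
    t = (test - mean) @ W.T                    # PCA scores of the test case
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))  # needs n_cases > n_comp
    d2 = [float((t - s) @ cov_inv @ (t - s)) for s in scores]  # squared Mahalanobis
    return int(np.argmin(d2))

# Toy usage with synthetic "contours" (120 library cases, 200 points each).
rng = np.random.default_rng(0)
library = rng.standard_normal((120, 200))
test = library[42] + 0.05 * rng.standard_normal(200)
print(best_match(library, test))               # expected to return 42
```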
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2008-03-01
The differential interference contrast (DIC) microscope is commonly used for the visualization of live biological specimens. It enables the viewing of transparent specimens while preserving their viability, being a non-invasive modality. Fertility clinics often use the DIC microscope for evaluation of human embryo quality. Towards quantification and reconstruction of the visualized specimens, an image formation model for DIC imaging is sought and the interaction of light waves with biological matter is examined. In many image formation models the light-matter interaction is expressed via the first Born approximation. The validity region of this approximation is defined by a theoretical bound which limits its use to very small specimens with low dielectric contrast. In this work the Born approximation is investigated via the Helmholtz equation, which describes the interaction between the specimen and light. A solution on the lens field is derived using the Gauss-Legendre quadrature formulation. This numerical scheme is considered both accurate and efficient and has significantly shortened the computation time as compared to integration methods that required a great amount of sampling to satisfy the Whittaker-Shannon sampling theorem. By comparing the numerical results with the theoretical values it is shown that the theoretical bound is not directly relevant to microscopic imaging and is far too limiting. The exhaustive numerical experiments show that the Born approximation is inappropriate for modeling the visualization of thick human embryos.
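As a minimal sketch of the quadrature scheme mentioned above, the following uses NumPy's Gauss-Legendre nodes and weights to integrate an oscillatory stand-in for a lens-field integrand; the integrand and point count are illustrative assumptions only.

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=64):
    """Integrate f over [a, b] with n-point Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # map [-1, 1] -> [a, b]
    return xr * np.sum(w * f(xm + xr * x))

# Oscillatory test integrand (stand-in for a scattered-field integral).
k = 50.0
approx = gauss_legendre_integral(lambda x: np.cos(k * x), 0.0, 1.0, n=64)
exact = np.sin(k) / k
print(approx, exact)                            # the two values agree closely
```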
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
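The statement about convergence rates can be summarized, under assumed notation (G the exact forward map, G_N its surrogate, mu_0 the prior, and pi^y, pi^y_N the corresponding posteriors for data y), roughly as follows; this is a paraphrase of the type of bound described, not a verbatim result from the paper.

```latex
% Schematic form of the surrogate-posterior error bounds (notation assumed):
\[
\|G - G_N\|_{L^2_{\mu_0}} \le \varepsilon
\quad\Longrightarrow\quad
D_{\mathrm{KL}}\!\left(\pi^y \,\|\, \pi^y_N\right) \le C\,\varepsilon^{2},
\qquad
d_{\mathrm{Hell}}\!\left(\pi^y, \pi^y_N\right) \le C'\,\varepsilon ,
\]
% i.e. a forward-model error of order \varepsilon yields a posterior error of
% order \varepsilon^2 in the KL sense, the "two times faster" convergence noted
% in the abstract.
```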
An Ultraviolet-Excess Optical Candidate for the Luminous Globular Cluster X-Ray Source in NGC 1851
NASA Technical Reports Server (NTRS)
Deutsch, Eric W.; Anderson, Scott F.; Margon, Bruce; Downes, Ronald A.
1996-01-01
The intense, bursting X-ray source in the globular cluster NGC 1851 was one of the first cluster sources discovered, but has remained optically unidentified for 25 years. We report here on results from Hubble Space Telescope WFPC2 multicolor images in NGC 1851. Our high spatial resolution images resolve approximately 200 objects in the 3 minute radius Einstein X-ray error circle, 40 times as many as in previous ground-based work. A color-magnitude diagram of the cluster clearly reveals a markedly UV-excess object with B approximately 21, (U - B) approximately -0.9, only 2 minutes from the X-ray position. The UV-excess candidate is 0.12 minutes distant from a second, unremarkable star that is 0.5 mag brighter in B; thus ground-based studies of this field are probably impractical. Three other UV-excess objects are also present among the approximately 16,000 objects in the surveyed region of the cluster, leaving an approximately 5% probability that a UV-excess object has fallen in the X-ray error circle by chance. No variability of the candidate is seen in these data, although a more complete study is required. If this object is in fact the counterpart of the X-ray source, previous inferences that some globular cluster X-ray sources are optically subluminous with respect to low-mass X-ray binaries in the field are now strengthened.
Xiaofeng Yang; Guanghao Sun; Ishibashi, Koichiro
2017-07-01
The non-contact measurement of the respiration rate (RR) and heart rate (HR) using a Doppler radar has attracted increasing attention in the field of home healthcare monitoring because it places an extremely low burden on patients and is unobtrusive and unconstrained. Most previous studies have performed frequency-domain analysis of radar signals to detect the respiration and heartbeat frequencies. However, these procedures required long time windows (approximately 30 s) to obtain a high-resolution spectrum. In this study, we propose a time-domain peak detection algorithm for the fast acquisition of the RR and HR within a breathing cycle (approximately 5 s), including inhalation and exhalation. Signal pre-processing using an analog band-pass filter (BPF) that extracts respiration and heartbeat signals was performed. Thereafter, the HR and RR were calculated using a peak position detection method, which was carried out via LABVIEW. To evaluate the measurement accuracy, we measured the HR and RR of seven subjects in the laboratory. As references for HR and RR, the subjects wore contact sensors, i.e., an electrocardiograph (ECG) and a respiration band. The time-domain peak-detection algorithm based on the Doppler radar exhibited significant correlation coefficients of 0.92 for HR and 0.99 for RR against the ECG and respiration band, respectively.
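A minimal digital analogue of the processing chain described above (band-pass filtering followed by peak-position detection) might look like the SciPy sketch below. The filter orders, cutoff bands, and synthetic signal are assumptions; the paper's implementation uses an analog BPF and LabVIEW rather than this code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def rate_from_peaks(x, fs, lo, hi, min_dist_s):
    """Band-pass filter x and estimate a rate in events per minute."""
    sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    peaks, _ = find_peaks(y, distance=int(min_dist_s * fs))
    if len(peaks) < 2:
        return None
    return 60.0 / (np.mean(np.diff(peaks)) / fs)   # 60 / mean peak interval (s)

# Synthetic demo: 0.25 Hz respiration plus a weaker 1.2 Hz heartbeat and noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
x += 0.05 * np.random.default_rng(0).standard_normal(t.size)
print("RR ~", rate_from_peaks(x, fs, 0.1, 0.5, 2.0))   # breaths per minute
print("HR ~", rate_from_peaks(x, fs, 0.8, 2.5, 0.4))   # beats per minute
```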
Search for possible solar influences in Ra-226 decays
NASA Astrophysics Data System (ADS)
Stancil, Daniel D.; Balci Yegen, Sümeyra; Dickey, David A.; Gould, Chris R.
Measurements of Ra-226 activity from eight HPGe gamma ray detectors at the NC State University PULSTAR Reactor were analyzed for evidence of periodic variations, with particular attention to annual variations. All measurements were made using the same reference source, and data sets were of varying length taken over the time period from September 1996 through August 2014. Clear evidence of annual variations was observed in data from four of the detectors. Short time periodograms from the data sets suggest temporal variability of both the amplitude and frequency of these variations. The annual variations in two of the data sets show peak values near the first of February, while surprisingly, the annual variations in the other two are roughly out of phase with the first two. Three of the four detectors exhibited annual variations over approximately the same time period. A joint statistic constructed by combining spectra from these three shows peaks approximating the frequencies of solar r-mode oscillations with νR = 11.74 cpy, m = 1, and l = 3, 5, 6. The fact that similar variations were not present in all detectors covering similar time periods rules out variations in activity as the cause, and points to differing sensitivities to unspecified environmental parameters instead. In addition to seasonal variations, the modulation of environmental parameters by solar processes remains a possible explanation of periodogram features, but without requiring new physics.
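For irregularly sampled activity residuals such as these, annual periodicities are commonly searched for with a Lomb-Scargle periodogram. The sketch below, using entirely synthetic data, illustrates that kind of scan around one cycle per year; it is not the analysis pipeline used in the study.

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical, synthetic decay-rate residuals at irregular times t (in days),
# with a weak annual modulation buried in noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 6 * 365.25, 800))
y = 1e-3 * np.cos(2 * np.pi * t / 365.25) + 1e-3 * rng.standard_normal(t.size)
y -= y.mean()                                   # remove the mean before the scan

# Scan frequencies up to ~14 cycles per year and report the strongest peak.
freqs_cpy = np.linspace(0.2, 14.0, 4000)        # cycles per year
w = 2 * np.pi * freqs_cpy / 365.25              # angular frequency in rad/day
power = lombscargle(t, y, w)
print("strongest peak at %.2f cycles/yr" % freqs_cpy[np.argmax(power)])
```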
Frozen Gaussian approximation for 3D seismic tomography
NASA Astrophysics Data System (ADS)
Chai, Lihui; Tong, Ping; Yang, Xu
2018-05-01
Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and perform seismic tomography at high frequency. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA, and with this reformulation one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.