A Comparison of Two Skip Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Rea, Jeremy R.; Putnam, Zachary R.
2007-01-01
The Orion capsule vehicle will have a Lift-to-Drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A Skip-Trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns anytime during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided Skip-Trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC, and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms will be presented in this paper. Each algorithm has been implemented in a high-fidelity, 6 degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges including direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and Reaction Control System (RCS).
Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control system saturation. The comparison of the performance criteria has led to a downselect and guidance merger that will take the best ideas from each algorithm to create one skip entry guidance algorithm for the Orion vehicle.
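The Monte Carlo comparison the abstract describes can be illustrated with a minimal sketch. Everything below is a stand-in: the miss-distance model and the `correction_gain` knobs are hypothetical and are not NSEG or PredGuid internals; the actual study runs each dispersed case through the 6-DOF ANTARES simulation.

```python
import random
import statistics

def run_case(guidance, fpa_err, dens_scale):
    """Toy stand-in for one Monte Carlo case: here 'miss distance' (nm)
    is a synthetic function of the dispersed inputs, scaled down by the
    guidance's (hypothetical) correction gain."""
    raw_miss = 400.0 * abs(fpa_err) + 80.0 * abs(dens_scale - 1.0)
    return raw_miss * (1.0 - guidance["correction_gain"])

def monte_carlo(guidance, n=2000, seed=0):
    """Sample dispersions, run each case, and summarize miss distance."""
    rng = random.Random(seed)
    misses = []
    for _ in range(n):
        fpa_err = rng.gauss(0.0, 0.05)      # entry flight-path-angle error, deg
        dens_scale = rng.gauss(1.0, 0.10)   # atmospheric density multiplier
        misses.append(run_case(guidance, fpa_err, dens_scale))
    return {"mean": statistics.mean(misses),
            "p99": sorted(misses)[int(0.99 * n)]}

# Hypothetical tuning knobs standing in for the two competing algorithms.
nseg = {"correction_gain": 0.90}
predguid = {"correction_gain": 0.93}
stats_a, stats_b = monte_carlo(nseg), monte_carlo(predguid)
```

Comparing `stats_a` and `stats_b` across many such criteria (fuel, g-load, heat rate) is what drives the downselect described above.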
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
NASA is scheduled to launch the Orion spacecraft atop the Space Launch System on Exploration Mission 1 in late 2018. When Orion returns from its lunar sortie, it will encounter Earth's atmosphere with speeds in excess of 11 kilometers per second, and Orion will attempt its first precision-guided skip entry. A suite of flight software algorithms collectively called the Entry Monitor has been developed in order to enhance crew situational awareness and enable high levels of onboard autonomy. The Entry Monitor determines the vehicle capability footprint in real-time, provides manual piloting cues, evaluates landing target feasibility, predicts the ballistic instantaneous impact point, and provides intelligent recommendations for alternative landing sites if the primary landing site is not achievable. The primary engineering challenge of the Entry Monitor lies in the algorithmic implementation: making a highly reliable, efficient set of algorithms suitable for onboard application.
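The ballistic instantaneous impact point mentioned in this abstract is the point the vehicle would reach if all lift were removed. A minimal sketch of the idea, not the Entry Monitor's algorithm: flat planet, exponential atmosphere, drag-only point mass, with illustrative constants throughout.

```python
import math

def ballistic_iip_downrange(alt, vel, fpa_deg, beta=400.0, dt=0.05):
    """Propagate a drag-only (lift = 0) point mass to the surface and
    return the downrange flown (m). beta is the ballistic coefficient
    m/(Cd*A) in kg/m^2; the value here is illustrative."""
    g, rho0, hs = 9.81, 1.225, 7200.0       # gravity, sea-level density, scale height
    gamma = math.radians(fpa_deg)           # flight-path angle; negative = descending
    downrange = 0.0
    while alt > 0.0:
        rho = rho0 * math.exp(-alt / hs)
        drag = 0.5 * rho * vel * vel / beta          # deceleration along velocity
        vel += (-drag - g * math.sin(gamma)) * dt
        gamma += (-g * math.cos(gamma) / vel) * dt   # gravity turn steepens descent
        downrange += vel * math.cos(gamma) * dt
        alt += vel * math.sin(gamma) * dt
    return downrange
```

A steeper entry flight-path angle should yield a shorter ballistic downrange, which is the kind of sensitivity an impact-point predictor exposes to the crew.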
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2011 CFR
2011-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria
Code of Federal Regulations, 2012 CFR
2012-04-01
... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...
17 CFR Appendix A to Part 38 - Guidance on Compliance With Designation Criteria
Code of Federal Regulations, 2011 CFR
2011-04-01
...-matching algorithm and order entry procedures. An application involving a trade-matching algorithm that is... algorithm. (b) A designated contract market's specifications on initial and periodic objective testing and...
17 CFR Appendix A to Part 38 - Guidance on Compliance With Designation Criteria
Code of Federal Regulations, 2012 CFR
2012-04-01
...-matching algorithm and order entry procedures. An application involving a trade-matching algorithm that is... algorithm. (b) A designated contract market's specifications on initial and periodic objective testing and...
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on numerical integration of trajectories onboard in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo heritage lateral error (or azimuth error) deadbands in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
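The Apollo-heritage deadband logic this paper contrasts itself against can be sketched in a few lines. The deadband schedule, gains, and sign convention below are illustrative assumptions, not values from Apollo or from this paper.

```python
def update_bank_sign(bank_sign, azimuth_error_deg, velocity):
    """Apollo-heritage lateral logic sketch: command a bank reversal when
    the azimuth error to the target drifts outside a deadband that
    shrinks as velocity decreases. Sign convention (assumed): positive
    bank_sign pushes azimuth error positive."""
    deadband = 0.5 + 1.5 * (velocity / 7500.0)   # deg; hypothetical schedule
    if azimuth_error_deg * bank_sign > deadband:
        bank_sign = -bank_sign                   # reverse back toward the target
    return bank_sign
```

Because reversals fire only when the error crosses the deadband, their total count depends on the dispersions encountered, which is exactly the non-determinism the paper's fixed-count method removes.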
Trajectory Guidance for Mars Robotic Precursors: Aerocapture, Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.; Zumwalt, Carlie; Garcia-Llama, Eduardo; Powell, Richard; Shidner, Jeremy
2011-01-01
Future crewed missions to Mars require improvements in landed mass capability beyond that which is possible using state-of-the-art Mars Entry, Descent, and Landing (EDL) systems. Current systems are capable of an estimated maximum landed mass of 1-1.5 metric tons (MT), while human Mars studies require 20-40 MT. A set of technologies was investigated by the EDL Systems Analysis (SA) project to assess the performance of candidate EDL architectures. A single architecture was selected for the design of a robotic precursor mission, entitled Exploration Feed Forward (EFF), whose objective is to demonstrate these technologies. In particular, inflatable aerodynamic decelerators (IADs) and supersonic retro-propulsion (SRP) have been shown to have the greatest mass benefit and extensibility to future exploration missions. In order to evaluate these technologies and develop the mission, candidate guidance algorithms have been coded into the simulation for the purposes of studying system performance. These guidance algorithms include aerocapture, entry, and powered descent. The performance of the algorithms for each of these phases in the presence of dispersions has been assessed using a Monte Carlo technique.
Entry Guidance for the Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Lu, Ping
1999-01-01
The X-33 Advanced Technology Demonstrator is a half-scale prototype developed to test the key technologies needed for a full-scale single-stage reusable launch vehicle (RLV). The X-33 is a suborbital vehicle that will be launched vertically and land horizontally. The goals of this research were to develop an alternate entry guidance scheme for the X-33 in parallel to the actual X-33 entry guidance algorithms, provide a comparative and complementary study, and identify potential new ways to improve entry guidance performance. Toward these goals, the nominal entry trajectory is defined by a piecewise linear drag-acceleration-versus-energy profile, which is in turn obtained by the solution of a semi-analytical parameter optimization problem. The closed-loop guidance is accomplished by tracking the nominal drag profile on-board, primarily with bank-angle modulation. The bank angle is commanded by a single full-envelope nonlinear trajectory control law. Near the end of the entry flight, the guidance logic is switched to heading control in order to meet strict conditions at the terminal area energy management interface. Two methods, one based on ground-track control and the other on heading control, were proposed and examined for this phase of entry guidance, where lateral control is emphasized. Trajectory dispersion studies were performed to evaluate the effectiveness of the entry guidance algorithms against a number of uncertainties, including those in the propulsion system, atmospheric properties, winds, aerodynamics, and propellant loading. Finally, a new trajectory-regulation method is introduced at the end as a promising precision entry guidance method. Its guidance principle is very different, and preliminary application in X-33 entry guidance simulation showed high precision that is difficult to achieve with existing methods.
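The piecewise-linear drag-versus-energy profile and its tracking by bank modulation can be sketched as follows. The node values, gain, and the simple proportional law are illustrative assumptions; they are not the X-33 profile or the paper's nonlinear control law.

```python
import bisect
import math

# Reference profile: normalized energy -> drag acceleration (m/s^2).
# Node values are illustrative only.
E_NODES = [0.0, 0.3, 0.6, 1.0]
D_NODES = [5.0, 12.0, 20.0, 8.0]

def reference_drag(e):
    """Piecewise-linear drag-vs-energy lookup along the nominal profile."""
    i = min(bisect.bisect_right(E_NODES, e), len(E_NODES) - 1)
    e0, e1 = E_NODES[i - 1], E_NODES[i]
    d0, d1 = D_NODES[i - 1], D_NODES[i]
    return d0 + (d1 - d0) * (e - e0) / (e1 - e0)

def bank_command(e, drag_meas, k=0.05):
    """Track the profile with bank modulation: when measured drag is too
    high, rotate the lift vector up (smaller |bank|) to loft the
    trajectory and shed drag, and vice versa. Gain is illustrative."""
    err = drag_meas - reference_drag(e)
    cos_bank = max(-1.0, min(1.0, 0.5 + k * err))
    return math.degrees(math.acos(cos_bank))
```

With zero drag error the sketch holds a 60-degree bank; positive error shallows the bank, negative error deepens it, which is the qualitative behavior of drag-profile tracking.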
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
On-Board Entry Trajectory Planning Expanded to Sub-orbital Flight
NASA Technical Reports Server (NTRS)
Lu, Ping; Shen, Zuojun
2003-01-01
A methodology for on-board planning of sub-orbital entry trajectories is developed. The algorithm is able to generate, in a time frame consistent with the on-board environment, a three-degree-of-freedom (3DOF) feasible entry trajectory, given the boundary conditions and vehicle modeling. This trajectory is then tracked by feedback guidance laws which issue guidance commands. The current trajectory planning algorithm complements the recently developed method for on-board 3DOF entry trajectory generation for orbital missions, and provides full-envelope autonomous adaptive entry guidance capability. The algorithm is validated and verified by extensive high fidelity simulations using a sub-orbital reusable launch vehicle model and difficult mission scenarios including failures and aborts.
Yilmaz, Emel Maden; Güntert, Peter
2015-09-01
An algorithm, CYLIB, is presented for converting molecular topology descriptions from the PDB Chemical Component Dictionary into CYANA residue library entries. The CYANA structure calculation algorithm uses torsion angle molecular dynamics for the efficient computation of three-dimensional structures from NMR-derived restraints. For this, the molecules have to be represented in torsion angle space with rotations around covalent single bonds as the only degrees of freedom. The molecule must be given a tree structure of torsion angles connecting rigid units composed of one or several atoms with fixed relative positions. Setting up CYANA residue library entries therefore involves, besides straightforward format conversion, the non-trivial step of defining a suitable tree structure of torsion angles, and of re-ordering the atoms in a way that is compatible with this tree structure. This can be done manually for small numbers of ligands, but the process is time-consuming and error-prone. An automated method is necessary in order to handle the large number of different potential ligand molecules to be studied in drug design projects. Here, we present an algorithm for this purpose, and show that CYANA structure calculations can be performed with almost all small molecule ligands and non-standard amino acid residues in the PDB Chemical Component Dictionary.
Genetic Algorithm-Based Optimization to Match Asteroid Energy Deposition Curves
NASA Technical Reports Server (NTRS)
Tarano, Ana; Mathias, Donovan; Wheeler, Lorien; Close, Sigrid
2018-01-01
An asteroid entering Earth's atmosphere deposits energy along its path due to thermal ablation and dissipative forces that can be measured by ground-based and spaceborne instruments. Inference of pre-entry asteroid properties and characterization of the atmospheric breakup is facilitated by using an analytic fragment-cloud model (FCM) in conjunction with a Genetic Algorithm (GA). This optimization technique is used to inversely solve for the asteroid's entry properties, such as diameter, density, strength, velocity, entry angle, and strength scaling, from simulations using FCM. Fitness evaluation of these parameters involves minimizing the error between the physics-based calculated energy deposition and that observed for the meteors. This steady-state GA provided sets of solutions in agreement with the literature for the Chelyabinsk, Russia (2013) and Tagish Lake, Canada (2000) meteors, which were used as case studies to validate the optimization routine. The assisted exploration and exploitation of this multi-dimensional search space enables inference and uncertainty analysis that can inform studies of near-Earth asteroids and consequently improve risk assessment.
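The inverse-solution loop described above can be sketched with a steady-state GA fitting a toy deposition model. The bell-shaped curve below is a stand-in for the fragment-cloud model, and the two fitted parameters, operators, and tolerances are all illustrative.

```python
import random

ALTS = [20 + 2 * i for i in range(16)]      # altitude grid, km

def toy_deposition(diameter, strength):
    """Toy stand-in for the fragment-cloud model: a bell-shaped
    energy-deposition curve whose height tracks diameter and whose
    peak altitude tracks strength (illustrative physics only)."""
    peak_alt = 25.0 + 0.5 * strength
    return [diameter * 2.718 ** (-((h - peak_alt) / 6.0) ** 2) for h in ALTS]

def fitness(params, observed):
    """Negative squared error between model and observation (higher is better)."""
    model = toy_deposition(*params)
    return -sum((m - o) ** 2 for m, o in zip(model, observed))

def steady_state_ga(observed, pop_size=30, iters=3000, seed=1):
    """Steady-state GA: each step breeds one child (blend crossover plus
    Gaussian mutation) and replaces the worst member if the child is fitter."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 20.0), rng.uniform(0.0, 20.0)) for _ in range(pop_size)]
    for _ in range(iters):
        a, b = rng.sample(pop, 2)
        child = tuple(0.5 * (x + y) + rng.gauss(0.0, 0.3) for x, y in zip(a, b))
        worst = min(range(pop_size), key=lambda i: fitness(pop[i], observed))
        if fitness(child, observed) > fitness(pop[worst], observed):
            pop[worst] = child
    return max(pop, key=lambda p: fitness(p, observed))

observed = toy_deposition(10.0, 8.0)        # synthetic "observed" meteor curve
best = steady_state_ga(observed)
```

Run against a synthetic curve with known parameters, the GA should recover values close to those used to generate it, which mirrors the validation-by-case-study approach in the abstract.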
NASA Astrophysics Data System (ADS)
Li, Shuang; Peng, Yuming
2012-01-01
In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.
Automated identification of drug and food allergies entered using non-standard terminology.
Epstein, Richard H; St Jacques, Paul; Stockin, Michael; Rothman, Brian; Ehrenfeld, Jesse M; Denny, Joshua C
2013-01-01
An accurate computable representation of food and drug allergy is essential for safe healthcare. Our goal was to develop a high-performance, easily maintained algorithm to identify medication and food allergies and sensitivities from unstructured allergy entries in electronic health record (EHR) systems. An algorithm was developed in Transact-SQL to identify ingredients to which patients had allergies in a perioperative information management system. The algorithm used RxNorm and natural language processing techniques developed on a training set of 24 599 entries from 9445 records. Accuracy, specificity, precision, recall, and F-measure were determined for the training dataset and repeated for the testing dataset (24 857 entries from 9430 records). Accuracy, precision, recall, and F-measure for medication allergy matches were all above 98% in the training dataset and above 97% in the testing dataset for all allergy entries. Corresponding values for food allergy matches were above 97% and above 93%, respectively. Specificities of the algorithm were 90.3% and 85.0% for drug matches and 100% and 88.9% for food matches in the training and testing datasets, respectively. The algorithm had high performance for identification of medication and food allergies. Maintenance is practical, as updates are managed through upload of new RxNorm versions and additions to companion database tables. However, direct entry of codified allergy information by providers (through autocompleters or drop lists) is still preferred to post-hoc encoding of the data. Data tables used in the algorithm are available for download. A high performing, easily maintained algorithm can successfully identify medication and food allergies from free text entries in EHR systems.
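The token-matching step at the heart of this kind of algorithm can be sketched in Python. The published algorithm runs in Transact-SQL against RxNorm; the mini-dictionary and its variant spellings below are hypothetical stand-ins for illustration only.

```python
import re

# Hypothetical mini-dictionary standing in for RxNorm ingredient names
# and their common free-text variants and misspellings.
INGREDIENTS = {
    "penicillin": {"penicillin", "pcn", "pencillin"},
    "sulfamethoxazole": {"sulfa", "sulfamethoxazole", "bactrim"},
    "peanut": {"peanut", "peanuts"},
}

def match_allergy(free_text):
    """Map one unstructured allergy entry to standard ingredient names:
    lowercase, strip punctuation, then look each token up in the
    variant dictionary. A sketch of the post-hoc encoding step."""
    tokens = re.findall(r"[a-z]+", free_text.lower())
    hits = set()
    for token in tokens:
        for ingredient, variants in INGREDIENTS.items():
            if token in variants:
                hits.add(ingredient)
    return sorted(hits)
```

Maintenance in this shape amounts to updating the variant dictionary, which parallels the abstract's point that uploads of new RxNorm versions keep the real algorithm current.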
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
Navigation Strategy for the Mars 2001 Lander Mission
NASA Technical Reports Server (NTRS)
Mase, Robert A.; Spencer, David A.; Smith, John C.; Braun, Robert D.
2000-01-01
The Mars Surveyor Program (MSP) is an ongoing series of missions designed to robotically study, map and search for signs of life on the planet Mars. The MSP 2001 project will advance the effort by sending an orbiter, a lander and a rover to the red planet in the 2001 opportunity. Each vehicle will carry a science payload that will investigate the Martian environment on both a global and on a local scale. Although this mission will not directly search for signs of life, or cache samples to be returned to Earth, it will demonstrate certain enabling technologies that will be utilized by the future Mars Sample Return missions. One technology that is needed for the Sample Return mission is the capability to place a vehicle on the surface within several kilometers of the targeted landing site. The MSP'01 Lander will take the first major step towards this type of precision landing at Mars. Significant reduction of the landed footprint will be achieved through two technology advances. The first, and most dramatic, is hypersonic aeromaneuvering; the second is improved approach navigation. As a result, the guided entry will produce a footprint that is only tens of kilometers, which is an order of magnitude improvement over the Pathfinder and Mars Polar Lander ballistic entries. This reduction will significantly enhance scientific return by enabling the potential selection of otherwise unreachable landing sites with unique geologic interest and public appeal. A landed footprint reduction from hundreds to tens of kilometers is also a milestone on the path towards human exploration of Mars, where the desire is to place multiple vehicles within several hundred meters of the planned landing site. Hypersonic aeromaneuvering is an extension of the atmospheric flight goals of the previous landed missions, Pathfinder and Mars Polar Lander (MPL), that utilizes aerodynamic lift and an autonomous guidance algorithm while in the upper atmosphere.
The onboard guidance algorithm will control the direction of the lift vector, via bank angle modulation, to keep the vehicle on the desired trajectory. While numerous autonomous guidance algorithms have been developed for use during hypersonic flight at Earth, this will be the first flight of an autonomously directed lifting entry vehicle at Mars. However, without sufficient control and knowledge of the atmospheric entry conditions, the guidance algorithm will not perform effectively. The goal of the interplanetary navigation strategy is to deliver the spacecraft to the desired entry condition with sufficient accuracy and knowledge to enable satisfactory guidance algorithm performance. Specifically, the entry flight path angle error must not exceed 0.27 deg at a 3-sigma confidence level. Entry errors contribute directly to the size of the landed footprint, and the most significant component is entry flight path angle. The size of the entry corridor is limited on the shallow side by integrated heating constraints, and on the steep side by deceleration (g-load) and terminal descent propellant. In order to meet this tight constraint it is necessary to place a targeting maneuver seven hours prior to the time of entry. At this time the trajectory knowledge will be quite accurate, and the effects of maneuver execution errors will be small. The drawback is that entry accuracy is dependent on the success of this final late maneuver. Because propulsive maneuvers are critical events, it is desirable to minimize their occurrence and provide the flight team with as much response time as possible in the event of a spacecraft fault. A mission-critical maneuver at Entry - 7 hours does not provide much fault tolerance, and it is desirable to provide a strategy that minimizes reliance on this maneuver. This paper will focus on the improvements in interplanetary navigation that will decrease entry errors and reduce the landed footprint, even in the absence of aeromaneuvering.
The easiest to take advantage of are improvements in the knowledge of the Mars ephemeris and gravity field due to the MGS and MSP'98 missions. Improvements in data collection and reduction techniques such as "precision ranging" and near-simultaneous tracking will also be utilized. In addition to precise trajectory control, a robust strategy for communications and flight operations must also be demonstrated. The result is a navigation and communications strategy on approach that utilizes optimal maneuver placement to take advantage of trajectory knowledge, minimizes risk for the flight operations team, is responsive to spacecraft hardware limitations, and achieves the entry corridor. The MSP 2001 mission is managed at JPL under the auspices of the Mars Exploration Directorate. The spacecraft flight elements are built and managed by Lockheed-Martin Astronautics in Denver, Colorado.
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
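The pressure-based reconstruction described above can be sketched as a least-squares fit of flow state to measured port pressures. The forebody model, port angles, and search grid below are illustrative assumptions (a modified-Newtonian-style 2-D sketch), not the MEADS flight algorithm or the MSL port layout.

```python
import math

# Surface inclination of each pressure port to the freestream at zero
# angle of attack (deg -> rad). Values are illustrative.
PORT_ANGLES = [math.radians(a) for a in (70, 60, 50, 40)]

def modeled_pressures(q, alpha, p_inf=0.0):
    """Sketch surface-pressure model: Cp = 2*sin^2(local incidence),
    so p_i = p_inf + q * 2 * sin^2(theta_i + alpha)."""
    return [p_inf + q * 2.0 * math.sin(t + alpha) ** 2 for t in PORT_ANGLES]

def estimate_alpha_q(measured, p_inf=0.0):
    """Grid search over angle of attack; for each candidate alpha the
    best-fit dynamic pressure q follows in closed form, because the
    model is linear in q. Returns (alpha_deg, q) minimizing residual."""
    best = None
    for a_tenths in range(-200, 201):                  # -20..+20 deg, 0.1-deg steps
        alpha = math.radians(a_tenths / 10.0)
        basis = [2.0 * math.sin(t + alpha) ** 2 for t in PORT_ANGLES]
        num = sum(b * (m - p_inf) for b, m in zip(basis, measured))
        den = sum(b * b for b in basis)
        q = num / den
        resid = sum((p_inf + q * b - m) ** 2 for b, m in zip(basis, measured))
        if best is None or resid < best[0]:
            best = (resid, alpha, q)
    return math.degrees(best[1]), best[2]
```

Feeding the model's own output back through the estimator should recover the generating state, the basic consistency check for this kind of reconstruction.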
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. JR-tree delays extending the tree to deep levels in order to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
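The skyline (Pareto-optimal) set the abstract describes has a compact naive definition, which structures like JR-tree exist to accelerate. A minimal sketch, assuming "larger is better" in every attribute:

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every attribute and
    strictly better in at least one (here: larger is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(entries):
    """Naive O(n^2) skyline: keep every entry no other entry dominates.
    Index structures such as JR-tree accelerate this computation and
    its continuous (dynamically changing population) variant."""
    return [e for e in entries if not any(dominates(o, e) for o in entries)]

pts = [(9, 1), (7, 5), (3, 8), (5, 5), (1, 9)]
# (5, 5) is dominated by (7, 5); the other four points are Pareto optimal.
```

The quadratic cost of the naive scan, re-paid on every population change, is what makes the continuous case hard and motivates the tree-based approach.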
PredGuid+A: Orion Entry Guidance Modified for Aerocapture
NASA Technical Reports Server (NTRS)
Lafleur, Jarret
2013-01-01
PredGuid+A software was developed to enable a unique numerical predictor-corrector aerocapture guidance capability that builds on heritage Orion entry guidance algorithms. The software can be used for both planetary entry and aerocapture applications. Furthermore, PredGuid+A implements a new Delta-V minimization guidance option that can take the place of traditional targeting guidance and can result in substantial propellant savings. PredGuid+A allows the user to set a mode flag and input a target orbit's apoapsis and periapsis. Using bank angle control, the guidance will then guide the vehicle to the appropriate post-aerocapture orbit using one of two algorithms: Apoapsis Targeting or Delta-V Minimization (as chosen by the user). Recently, the PredGuid guidance algorithm was adapted for use in skip-entry scenarios for NASA's Orion multi-purpose crew vehicle (MPCV). To leverage flight heritage, most of Orion's entry guidance routines are adapted from the Apollo program.
Lunar Entry Downmode Options for Orion
NASA Technical Reports Server (NTRS)
Smith, Kelly; Rea, Jeremy
2016-01-01
Traditional ballistic entry does not scale well to higher-energy entry trajectories. The Clutch algorithm is a two-stage approach with a capture stage and a load-relief stage. Clutch may offer expansion of the operational entry corridor and is a candidate solution for Exploration Mission-2's degraded entry mode.
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
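The alternating recover-then-refit loop described in this abstract can be illustrated with a generic baseline. The sketch below is hard-impute matrix completion (alternating a rank-r SVD with re-imposing the observed entries); it is not the paper's sparse-representation method, which additionally minimizes representation coefficients and errors.

```python
import numpy as np

def complete_lowrank(X, mask, rank, iters=500):
    """Hard-impute sketch of matrix completion: alternately (1) take the
    best rank-r approximation of the current estimate and (2) reset the
    observed entries to their measured values. mask is True where X is
    observed; missing entries of X are ignored."""
    Y = np.where(mask, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        Y = np.where(mask, X, L)                   # keep observed entries fixed
    return Y
```

On a synthetic low-rank matrix with a modest fraction of entries hidden, the missing values should be recovered closely, which is the property the paper's high-rank setting deliberately breaks.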
A trajectory generation framework for modeling spacecraft entry in MDAO
NASA Astrophysics Data System (ADS)
D'Souza, Sarah N.; Sarigul-Klijn, Nesrin
2016-04-01
In this paper a novel trajectory generation framework was developed that optimizes trajectory event conditions for use in a Generalized Entry Guidance algorithm. The framework was developed to be adaptable via the use of high-fidelity equations of motion and drag-based analytical bank profiles. Within this framework, a novel technique was implemented that resolved the sensitivity of the bank profile to atmospheric non-linearities. The framework's adaptability was established by running two different entry bank conditions. Each case yielded a reference trajectory and a set of transition event conditions that are flight-feasible and implementable in a Generalized Entry Guidance algorithm.
NASA Technical Reports Server (NTRS)
Dieriam, Todd A.
1990-01-01
Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and guidance necessary for a 0.5 to 1.5 lift-to-drag ratio vehicle to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion involving the dynamic pressure, which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.
Houston, Charles; Tzortzis, Konstantinos N; Roney, Caroline; Saglietto, Andrea; Pitcher, David S; Cantwell, Chris D; Chowdhury, Rasheda A; Ng, Fu Siong; Peters, Nicholas S; Dupont, Emmanuel
2018-06-01
Fibrillation is the most common arrhythmia observed in clinical practice. Understanding of the mechanisms underlying its initiation and maintenance remains incomplete. Functional re-entries are potential drivers of the arrhythmia. Two main concepts are still debated, the "leading circle" and the "spiral wave or rotor" theories. The homogeneous subclone of the HL1 atrial-derived cardiomyocyte cell line, HL1-6, spontaneously exhibits re-entry on a microscopic scale due to its slow conduction velocity and the presence of triggers, making it possible to examine re-entry at the cellular level. We therefore investigated the re-entry cores in cell monolayers through the use of fluorescence optical mapping at high spatiotemporal resolution in order to obtain insights into the mechanisms of re-entry. Re-entries in HL1-6 myocytes required at least two triggers and a minimum colony area to initiate (3.5 to 6.4 mm²). After electrical activity was completely stopped and re-started by varying the extracellular K⁺ concentration, re-entries never returned to the same location, while 35% of triggers re-appeared at the same position. A conduction delay algorithm also allows visualisation of the core of the re-entries. This work has revealed that the core of re-entries consists of conduction blocks constituted by lines and/or groups of cells rather than the round area assumed by the other concepts of functional re-entry. This highlights the importance of experimentation at the microscopic level in the study of re-entry mechanisms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Accurate predictor-corrector skip entry guidance for low lift-to-drag ratio spacecraft
NASA Astrophysics Data System (ADS)
Enmi, Y.; Qian, W.; He, K.; Di, D.
2018-06-01
This paper develops a numerical predictor-corrector skip entry guidance for vehicles with a low lift-to-drag (L/D) ratio during the skip entry phase of a Moon return mission. The guidance method is composed of two parts: trajectory planning before entry and closed-loop guidance during skip entry. The result of trajectory planning before entry provides an initial value for the predictor-corrector algorithm in closed-loop guidance, enabling fast convergence. The magnitude of the bank angle, which is parameterized as a linear function of the range-to-go, is modulated to satisfy the downrange requirements. The sign of the bank angle is determined by the bank-reversal logic. The predictor-corrector algorithm is repeatedly applied onboard in each guidance cycle to realize closed-loop guidance in the skip entry phase. The effectiveness of the proposed guidance is validated by simulations in nominal conditions, including skip entry, loft entry, and direct entry, as well as simulations in dispersed conditions considering the combined disturbances of the entry interface state, the aerodynamic coefficients, the air density, and the mass of the vehicle.
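The predictor-corrector loop described above can be illustrated in miniature. The sketch below is not the paper's algorithm: it uses simplified planar entry dynamics over an exponential atmosphere, a constant bank magnitude in place of the linear-in-range-to-go profile, and a bisection corrector; all vehicle and entry-state numbers are illustrative placeholders.

```python
import math

# Simplified planar entry dynamics over a spherical, non-rotating Earth
# with an exponential atmosphere. Vehicle and entry numbers below are
# illustrative placeholders, not data for any real spacecraft.
G0, RE = 9.81, 6.371e6
RHO0, HSCALE = 1.225, 7200.0
MASS, SREF, CD, LOD = 9000.0, 19.6, 1.3, 0.3

def predict_range(bank_deg, v0=7500.0, gamma0=math.radians(-2.0),
                  h0=100e3, dt=0.5, h_final=15e3, t_max=5000.0):
    """Predictor: integrate the trajectory down to h_final, return downrange (m)."""
    cosb = math.cos(math.radians(bank_deg))
    v, gamma, h, s, t = v0, gamma0, h0, 0.0, 0.0
    while h > h_final and t < t_max:
        r = RE + h
        rho = RHO0 * math.exp(-h / HSCALE)
        a_d = 0.5 * rho * v * v * SREF * CD / MASS   # drag deceleration
        a_l = LOD * a_d                              # lift acceleration
        v += (-a_d - G0 * math.sin(gamma)) * dt
        gamma += (a_l * cosb / v + (v / r - G0 / v) * math.cos(gamma)) * dt
        h += v * math.sin(gamma) * dt
        s += v * math.cos(gamma) * dt
        t += dt
    return s

def correct_bank(target_range, lo=5.0, hi=85.0, iters=40):
    """Corrector: bisect on bank-angle magnitude to null the range error
    (more bank -> less lift-up -> steeper trajectory -> shorter range)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predict_range(mid) > target_range:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A flight algorithm would instead re-solve for the bank profile parameters each guidance cycle from the navigated state, and add the bank-reversal logic for crossrange control.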
Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data
Bergeron, Bryan P.; Greenes, Robert A.
1987-01-01
Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.
A Note on Alternating Minimization Algorithm for the Matrix Completion Problem
Gamarnik, David; Misra, Sidhant
2016-06-06
Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message-passing-type updates, performs significantly better.
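The first (plain least-squares) variant of alternating minimization is easy to sketch in the rank-one, positive-bounded-entries setting the authors analyze. This is an illustrative example, not the authors' code; the matrix size and sampling rate are arbitrary choices.

```python
import numpy as np

# Rank-one matrix with positive bounded entries, partially revealed.
rng = np.random.default_rng(0)
n = 30
u_true = rng.uniform(1.0, 2.0, n)
v_true = rng.uniform(1.0, 2.0, n)
M = np.outer(u_true, v_true)
mask = rng.random((n, n)) < 0.5          # revealed entries

# Alternating minimization: fix v and solve each u[i] in closed form
# (least squares over that row's revealed entries), then swap roles.
u, v = np.ones(n), np.ones(n)
for _ in range(50):
    for i in range(n):
        js = mask[i]
        u[i] = M[i, js] @ v[js] / (v[js] @ v[js])
    for j in range(n):
        ks = mask[:, j]
        v[j] = M[ks, j] @ u[ks] / (u[ks] @ u[ks])

# relative reconstruction error over ALL entries, revealed or not
err = np.max(np.abs(np.outer(u, v) - M)) / M.max()
```

With dense-enough sampling the revealed-entry graph is well connected, and the iteration converges rapidly from the all-ones initialization.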
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its capability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Afzal, Zubair; Pons, Ewoud; Kang, Ning; Sturkenboom, Miriam C J M; Schuemie, Martijn J; Kors, Jan A
2014-11-29
In order to extract meaningful information from electronic medical records, such as signs and symptoms, diagnoses, and treatments, it is important to take into account the contextual properties of the identified information: negation, temporality, and experiencer. Most work on automatic identification of these contextual properties has been done on English clinical text. This study presents ContextD, an adaptation of the English ConText algorithm to the Dutch language, and a Dutch clinical corpus. We created a Dutch clinical corpus containing four types of anonymized clinical documents: entries from general practitioners, specialists' letters, radiology reports, and discharge letters. Using a Dutch list of medical terms extracted from the Unified Medical Language System, we identified medical terms in the corpus with exact matching. The identified terms were annotated for negation, temporality, and experiencer properties. To adapt the ConText algorithm, we translated English trigger terms to Dutch and added several general and document-specific enhancements, such as negation rules for general practitioners' entries and a regular-expression-based temporality module. The ContextD algorithm utilized 41 unique triggers to identify the contextual properties in the clinical corpus. For the negation property, the algorithm obtained an F-score from 87% to 93% for the different document types. For the experiencer property, the F-score was 99% to 100%. For the historical and hypothetical values of the temporality property, F-scores ranged from 26% to 54% and from 13% to 44%, respectively. ContextD showed good performance in identifying negation and experiencer property values across all Dutch clinical document types. Accurate identification of the temporality property proved to be difficult and requires further work. The anonymized and annotated Dutch clinical corpus can serve as a useful resource for further algorithm development.
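The core scope rule of a ConText-style algorithm can be sketched compactly: a negation trigger negates any target term found within a fixed token window to its right. The triggers and window size below are illustrative stand-ins, not the actual ContextD rule set.

```python
import re

# Illustrative Dutch negation triggers and scope window (NOT the real
# ContextD trigger list, which has 41 triggers and richer rules).
NEG_TRIGGERS = {"geen", "niet", "zonder"}
SCOPE = 5  # tokens to the right of a trigger

def negated_terms(text, terms):
    """Return the subset of `terms` that falls inside the scope of a
    negation trigger in `text` (simple forward-window scope rule)."""
    tokens = re.findall(r"\w+", text.lower())
    negated = set()
    for i, tok in enumerate(tokens):
        if tok in NEG_TRIGGERS:
            window = tokens[i + 1 : i + 1 + SCOPE]
            negated.update(t for t in terms if t in window)
    return negated
```

The real algorithm additionally handles pseudo-triggers, scope-terminating terms, and the temporality and experiencer properties.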
Chemotherapy Order Entry by a Clinical Support Pharmacy Technician in an Outpatient Medical Day Unit
Neville, Heather; Broadfield, Larry; Harding, Claudia; Heukshorst, Shelley; Sweetapple, Jennifer; Rolle, Megan
2016-01-01
Background: Pharmacy technicians are expanding their scope of practice, often in partnership with pharmacists. In oncology, such a shift in responsibilities may lead to workflow efficiencies, but may also cause concerns about patient risk and medication errors. Objectives: The primary objective was to compare the time spent on order entry and order-entry checking before and after training of a clinical support pharmacy technician (CSPT) to perform chemotherapy order entry. The secondary objectives were to document workflow interruptions and to assess medication errors. Methods: This before-and-after observational study investigated chemotherapy order entry for ambulatory oncology patients. Order entry was performed by pharmacists before the process change (phase 1) and by 1 CSPT after the change (phase 2); order-entry checking was performed by a pharmacist during both phases. The tasks were timed by an independent observer using a personal digital assistant. A convenience sample of 125 orders was targeted for each phase. Data were exported to Microsoft Excel software, and timing differences for each task were tested with an unpaired t test. Results: Totals of 143 and 128 individual orders were timed for order entry during phase 1 (pharmacist) and phase 2 (CSPT), respectively. The mean total time to perform order entry was greater during phase 1 (1:37 min versus 1:20 min; p = 0.044). Totals of 144 and 122 individual orders were timed for order-entry checking (by a pharmacist) in phases 1 and 2, respectively, and there was no difference in mean total time for order-entry checking (1:21 min versus 1:20 min; p = 0.69). There were 33 interruptions not related to order entry (totalling 39:38 min) during phase 1 and 25 interruptions (totalling 30:08 min) during phase 2. Three errors were observed during order entry in phase 1 and one error during order-entry checking in phase 2; the errors were rated as having no effect on patient care. 
Conclusions: Chemotherapy order entry by a trained CSPT appeared to be just as safe and efficient as order entry by a pharmacist. Changes in pharmacy technicians’ scope of practice could increase the amount of time available for pharmacists to provide direct patient care in the oncology setting. PMID:27402999
Six-degree-of-freedom guidance and control-entry analysis of the HL-20
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1993-01-01
The ability of the HL-20 lifting body to fly has been evaluated for an automated entry from atmospheric interface to landing. This evaluation was required to demonstrate not only that successful touchdown conditions would be possible for this low lift-to-drag-ratio vehicle, but also that the vehicle would not exceed its design dynamic pressure limit of 400 psf during entry. This dynamic pressure constraint, coupled with limited available pitch-control authority at low supersonic speeds, restricts the maneuvering capability available for the HL-20 to acquire the runway. One result of this analysis was that this restricted maneuvering capability does not allow the use of a model-following atmospheric entry-guidance algorithm, such as that used by the Space Shuttle, but instead requires a more adaptable guidance algorithm. Therefore, for this analysis, a predictor-corrector guidance algorithm was developed that would provide successful touchdown conditions while not violating the dynamic pressure constraint. A flight-control system was designed and incorporated, along with the predictor-corrector guidance algorithm, into a six-degree-of-freedom simulation, which showed that the HL-20 remained controllable and could reach the landing site and execute a successful landing under all off-nominal conditions simulated.
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MATLAB functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to detect violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparison of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher-order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt or Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
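The analytical-solution approach for first-order schemes can be shown on a minimal two-state example. VisKin itself is a MATLAB/Visual Basic package; this Python sketch only mirrors the underlying linear-algebra idea, and the rate values are made up.

```python
import numpy as np

# Two-state scheme A <-> B with forward/reverse rates (illustrative values).
kf, kr = 2.0, 1.0
K = np.array([[-kf,  kr],
              [ kf, -kr]])        # rate matrix: d/dt [A, B] = K @ [A, B]
p0 = np.array([1.0, 0.0])         # all molecules start in state A

# Analytical solution via eigendecomposition: p(t) = V exp(L t) V^-1 p0.
lam, V = np.linalg.eig(K)
Vinv = np.linalg.inv(V)

def p(t):
    """State populations at time t, exact for any first-order scheme."""
    return (V * np.exp(lam * t)) @ (Vinv @ p0)
```

Because the solution is closed-form in the eigenvalues, rates and amplitudes at many ligand concentrations can be compared without repeated numerical integration.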
Negatu, Beyene; Vermeulen, Roel; Mekonnen, Yalemtshay; Kromhout, Hans
2016-07-01
To develop an inexpensive and easily adaptable semi-quantitative exposure assessment method to characterize exposure to pesticides in applicators and re-entry farmers and farm workers in Ethiopia. Two specific semi-quantitative exposure algorithms for pesticide applicators and re-entry workers were developed and applied to 601 farm workers employed in 3 distinctly different farming systems [small-scale irrigated, large-scale greenhouses (LSGH), and large-scale open (LSO)] in Ethiopia. The algorithm for applicators was based on exposure-modifying factors including application methods, farm layout (open or closed), pesticide mixing conditions, cleaning of spraying equipment, intensity of pesticide application per day, utilization of personal protective equipment (PPE), personal hygienic behavior, annual frequency of application, and duration of employment at the farm. The algorithm for re-entry work was based on an expert-based re-entry exposure intensity score, utilization of PPE, personal hygienic behavior, annual frequency of re-entry work, and duration of employment at the farm. The algorithms allowed estimation of daily, annual, and cumulative lifetime exposure for applicators and re-entry workers by farming system, by gender, and by age group. For all metrics, highest exposures occurred in LSGH for both applicators and female re-entry workers. For male re-entry workers, highest cumulative exposure occurred in LSO farms. Female re-entry workers appeared to have higher daily and annual exposure than male re-entry workers, but their cumulative exposures were similar because males had, on average, longer tenure. Factors related to intensity of exposure (such as application method and farm layout) were the main driving factors for estimated potential exposure. Use of personal protection, hygienic behavior, and duration of employment in surveyed farm workers contributed less to the contrast in exposure estimates.
This study indicated that farmers' and farm workers' exposure to pesticides can be inexpensively characterized, ranked, and classified. Our method could be extended to assess exposure to specific active ingredients provided that detailed information on pesticides used is available. The resulting exposure estimates will consequently be used in occupational epidemiology studies in Ethiopia and other similar countries with few resources. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Obtaining orthotropic elasticity tensor using entries zeroing method.
NASA Astrophysics Data System (ADS)
Gierlach, Bartosz; Danek, Tomasz
2017-04-01
A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except isotropic, this problem is nonlinear. A common method of obtaining an effective tensor is choosing a non-trivial symmetry class and minimizing the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In the orthotropic form of the tensor, 24 out of 36 entries are zero. The idea is to minimize the sum of the squared entries which are supposed to be equal to zero, through a rotation calculated with an optimization algorithm, in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid choosing a local minimum, we apply PSO several times, and only when we obtain similar results a third time do we consider the value correct and finish the computations. To analyze the obtained results, a Monte Carlo method was used. After thousands of single runs of the PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated; each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement.
Each of these tensors was the subject of PSO-based optimization delivering a quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease computation time, enabling different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entry values were obtained. For the entries which were to be zeroed, we observe almost normal distributions with mean equal to zero, or sums of two normal distributions with opposite means. Non-zero entries show different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite the less complex target function in the optimization compared with the common approach, the entries zeroing method provides results which can be used to obtain an orthotropic tensor with good reliability. A modification of the method could also yield a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
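The entries-zeroing idea can be miniaturized by replacing the fourth-order elasticity tensor with a second-order symmetric tensor, for which the "orthotropic" analogue is simply a diagonal matrix: rotate so that the entries that should vanish are driven to zero. The sketch below uses a standard global-best PSO over quaternion space; all parameters and the test tensor are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def quat_to_rot(q):
    """Unit-quaternion to 3x3 rotation matrix (q is normalized here)."""
    w, x, y, z = q / (np.linalg.norm(q) + 1e-12)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# Toy "measured" tensor: a diagonal tensor observed in an unknown frame.
R0 = quat_to_rot(rng.normal(size=4))
A = R0 @ np.diag([3.0, 2.0, 1.0]) @ R0.T

def objective(q):
    """Sum of squared entries that should be zero (here: off-diagonals)."""
    B = quat_to_rot(q) @ A @ quat_to_rot(q).T
    off = B - np.diag(np.diag(B))
    return np.sum(off**2)

# Global-best PSO with standard inertia/acceleration constants.
n_p, iters = 40, 400
pos = rng.normal(size=(n_p, 4))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(x) for x in pos])
g = pbest[pbest_val.argmin()].copy()
gval = pbest_val.min()
for _ in range(iters):
    r1, r2 = rng.random((2, n_p, 1))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([objective(x) for x in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < gval:
        gval = pbest_val.min()
        g = pbest[pbest_val.argmin()].copy()
```

For the real problem the objective sums the 24 entries of the rotated 6x6 elasticity matrix that vanish in the orthotropic frame, but the quaternion parameterization and PSO loop carry over unchanged.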
Attitude determination with three-axis accelerometer for emergency atmospheric entry
NASA Technical Reports Server (NTRS)
Garcia-Llama, Eduardo (Inventor)
2012-01-01
Two algorithms are disclosed that, with the use of a 3-axis accelerometer, can determine the angles of attack, sideslip, and roll of a capsule-type spacecraft prior to entry (at very high altitudes, where the atmospheric density is still very low) and during entry. The invention relates to emergency situations in which no reliable attitude and attitude rate are available. Provided that the spacecraft would not attempt a guided entry without reliable attitude information, the objective of the entry system in such a case would be to attempt a safe ballistic entry. A ballistic entry requires three controlled phases to be executed in sequence: first, cancel initial rates in case the spacecraft is tumbling; second, maneuver the capsule to a heat-shield-forward attitude, preferably the trim attitude, to counteract the heat-rate and heat-load buildup; and third, impart a ballistic bank or roll rate to null the average lift vector in order to prevent prolonged lift-down situations. Knowing the attitude, and hence the attitude rate, will allow the control system (nominal or backup, automatic or manual) to cancel any initial angular rates. Also, since a heat-shield-forward attitude and the trim attitude can be specified in terms of the angles of attack and sideslip, being able to determine the current attitude in terms of these angles will allow the control system to maneuver the vehicle to the desired attitude. Finally, being able to determine the roll angle will allow for control of the ballistic roll rate during entry.
Two variants of minimum discarded fill ordering
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai
1991-01-01
It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral, or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings used for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high-quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the "preferred" direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited for large problems arising from Navier-Stokes applications. Some interesting pictures of the orderings are presented using a visualization tool.
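Since the MUM ordering is described as closely related to the minimum degree algorithm, the purely symbolic baseline that MDF and MUM refine with numerical fill criteria can be sketched as follows. This is an illustration of greedy minimum-degree elimination, not the authors' implementation.

```python
def minimum_degree_order(adj):
    """Greedy minimum-degree elimination ordering on an undirected graph.

    `adj` maps each node to the set of its neighbours. At each step the
    node of smallest current degree is eliminated, and its remaining
    neighbours are joined into a clique (the symbolic fill-in), mimicking
    one step of sparse factorization. MDF/MUM instead weigh candidate
    pivots by the numerical size of the fill they would discard.
    """
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    order = []
    while adj:
        u = min(adj, key=lambda v: len(adj[v]))   # minimum-degree pivot
        nbrs = adj.pop(u)
        for v in nbrs:
            adj[v].discard(u)
        for v in nbrs:                            # clique fill among neighbours
            adj[v].update(nbrs - {v})
        order.append(u)
    return order
```

On a path graph this eliminates from an endpoint inward, producing no fill at all, which is the behaviour a good ordering should preserve.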
Implementation of home-based medication order entry at a community hospital.
Thorne, Alicia; Williamson, Sarah; Jellison, Tara; Jellison, Chris
2009-11-01
The implementation of a home-based order-entry program at a community hospital is described. Parkview Hospital is a 600-bed, community-based facility located in Fort Wayne, Indiana, that provides 24-hour pharmacy services. The main purpose of establishing a home-based order-entry program was to provide extra pharmacist coverage in the event of a spontaneous order surge, in an effort to maintain excellent customer service. A virtual private network (VPN) was created to ensure the security and confidentiality of patients' health care information. The names of volunteer pharmacists who met specific criteria and who were capable of performing home-based order entry were collected. These pharmacists were trained and tested in the home-based order-entry process. When home-based order entry is needed, the lead pharmacist contacts the pharmacists on the list by telephone. If available, the pharmacists (maximum of three) are notified to log into the Internet, access the VPN, and perform order entry with the same vigilance, confidentiality, and care as they would onsite. Home-based order entry is discontinued when off-trigger points are met. Pharmacists entering orders from home are paid for the time spent conducting order entry. Pharmacists reported that it was easy to contact home-based order-entry volunteers through the program, that there were no problems logging into the VPN, and that turnaround time was close to our target of 25 minutes. A community-based hospital successfully implemented a home-based medication order-entry program. The program alleviated the shortage of pharmacists during spontaneous surges of medication orders.
NASA Technical Reports Server (NTRS)
Davy, W. C.; Green, M. J.; Lombard, C. K.
1981-01-01
The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.
A Frequency-Domain Substructure System Identification Algorithm
NASA Technical Reports Server (NTRS)
Blades, Eric L.; Craig, Roy R., Jr.
1996-01-01
A new frequency-domain system identification algorithm is presented for system identification of substructures, such as payloads to be flown aboard the Space Shuttle. In the vibration test, all interface degrees of freedom where the substructure is connected to the carrier structure are either subjected to active excitation or are supported by a test stand with the reaction forces measured. The measured frequency-response data is used to obtain a linear, viscous-damped model with all interface degree-of-freedom entries included. This model can then be used to validate analytical substructure models. This procedure makes it possible to obtain not only the fixed-interface modal data associated with a Craig-Bampton substructure model, but also the data associated with constraint modes. With this proposed algorithm, multiple-boundary-condition tests are not required, and test-stand dynamics are accounted for without requiring a separate modal test or finite element modeling of the test stand. Numerical simulations are used in examining the algorithm's ability to estimate valid reduced-order structural models. The algorithm's performance when frequency-response data covering narrow and broad frequency bandwidths is used as input is explored. Its performance when noise is added to the frequency-response data, and the use of different least squares solution techniques, are also examined. The identified reduced-order models are compared for accuracy with other test-analysis models, and a formulation for a Craig-Bampton test-analysis model is also presented.
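The underlying idea of fitting a viscous-damped model to measured frequency-response data by least squares can be shown on a single-degree-of-freedom toy problem. The actual algorithm identifies full interface-DOF matrices; here all numbers are synthetic and the model is deliberately minimal.

```python
import numpy as np

# Synthetic frequency-response data for a single-DOF viscous-damped
# oscillator: H(w) = 1 / (k - m w^2 + i c w). All values are made up.
m_true, c_true, k_true = 2.0, 0.3, 50.0
w = np.linspace(0.5, 10.0, 60)
H = 1.0 / (k_true - m_true * w**2 + 1j * c_true * w)

# 1/H(w) = k - m w^2 + i c w is LINEAR in (m, c, k), so the model
# parameters follow from a single complex-valued least-squares solve.
A = np.column_stack([-w**2, 1j * w, np.ones_like(w)])
coef, *_ = np.linalg.lstsq(A, 1.0 / H, rcond=None)
m_est, c_est, k_est = coef.real
```

The multi-DOF case replaces the three scalars with mass, damping, and stiffness matrices, which is where the choice of least-squares solution technique examined in the paper starts to matter.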
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; O'Keefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.
Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis
NASA Technical Reports Server (NTRS)
Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo
2010-01-01
The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance, and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.
NASA Astrophysics Data System (ADS)
Gramajo, German G.
This thesis presents an algorithm for a search and coverage mission that has increased autonomy in generating an ideal trajectory while explicitly considering the available energy in the optimization. Further, current algorithms used to generate trajectories depend on the operator providing a discrete set of turning rate requirements to obtain an optimal solution. This work proposes an additional modification to the algorithm so that it optimizes the trajectory for a range of turning rates instead of a discrete set. This thesis conducts an evaluation of the algorithm with variation in turn duration, entry-heading angle, and entry point. Comparative studies of the algorithm against an existing method indicate improved autonomy in choosing the optimization parameters while producing trajectories with better coverage area and closer final distance to the desired terminal point.
A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osei-Kuffuor, Daniel; Fattebert, Jean-Luc
2014-01-01
Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
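The selected-inverse idea can be illustrated in a few lines. This is a hedged serial sketch with NumPy, not the authors' parallel implementation: each requested entry of inv(S) is approximated by inverting only a local principal submatrix around it, which is accurate when the entries of the inverse decay away from the diagonal (as for well-conditioned, near-diagonal overlap matrices).

```python
import numpy as np

def selected_inverse_entries(S, pairs, radius=5):
    """Approximate selected entries of inv(S) by inverting the local
    principal submatrix surrounding each requested (i, j) pair."""
    n = S.shape[0]
    out = {}
    for i, j in pairs:
        lo = max(0, min(i, j) - radius)
        hi = min(n, max(i, j) + radius + 1)
        # invert only the local block; O(radius^3) instead of O(n^3)
        block_inv = np.linalg.inv(S[lo:hi, lo:hi])
        out[(i, j)] = block_inv[i - lo, j - lo]
    return out
```

For a tridiagonal overlap matrix with small off-diagonal coupling, the truncation error falls off geometrically with the block radius, which is the decay property the abstract's cutoff parameter exploits.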
Radiation Modeling in Shock-Tubes and Entry Flows
2009-09-01
...the MSRO surface, the local spherical coordinate system with a normal n is entered... for each simulated photon group... There are two algorithms. In the first... all surfaces of the spatial finite-difference mesh should be calculated. This is illustrated in Figure... (RTO-EN-AVT-162)
NASA Astrophysics Data System (ADS)
Ulrich, Steve; de Lafontaine, Jean
2007-12-01
Upcoming landing missions to Mars will require on-board guidance and control systems in order to meet the scientific requirement of landing safely within hundreds of meters of the target of interest. More specifically, in the longitudinal plane, the first objective of the entry guidance and control system is to bring the vehicle to its specified velocity at the specified altitude (as required for safe parachute deployment), while the second objective is to reach the target position in the longitudinal plane. This paper proposes an improvement to the robustness of the constant flight path angle guidance law for achieving the first objective. The improvement consists of combining this guidance law with a novel adaptive control scheme derived from the so-called Simple Adaptive Control (SAC) technique. Monte Carlo simulation results are shown to demonstrate the accuracy and robustness of the proposed guidance and adaptive control system.
On-Board Generation of Three-Dimensional Constrained Entry Trajectories
NASA Technical Reports Server (NTRS)
Shen, Zuojun; Lu, Ping; Jackson, Scott (Technical Monitor)
2002-01-01
A methodology for very fast design of 3DOF entry trajectories subject to all common inequality and equality constraints is developed. The approach makes novel use of the well-known quasi-equilibrium glide phenomenon in lifting entry as a centerpiece for conveniently enforcing the inequality constraints, which are otherwise difficult to handle. The algorithm is able to generate a complete feasible 3DOF entry trajectory, given the entry conditions, values of constraint parameters, and final conditions, in about 2 seconds on a PC. Numerical simulations with the X-33 vehicle model for various entry missions to land at Kennedy Space Center will be presented.
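The quasi-equilibrium glide (QEG) condition sets the flight-path-angle rate to roughly zero, so the vertical lift acceleration must balance gravity minus the centrifugal relief: (L/m) cos(sigma) = g - v^2/r. A minimal sketch of how this fixes the drag level, and hence deceleration, along the glide (the constants and the 60 km reference altitude below are illustrative assumptions, not values from the paper):

```python
import math

G0 = 9.81          # m/s^2, surface gravity
RE = 6.378e6       # m, Earth radius

def qeg_drag_accel(v, ld_ratio, bank_deg=0.0, r=RE + 60e3):
    """Drag acceleration implied by the quasi-equilibrium glide
    condition (L/m) cos(sigma) = g - v^2 / r for a vehicle with the
    given lift-to-drag ratio and bank angle."""
    g = G0 * (RE / r) ** 2
    lift_accel = g - v**2 / r                         # required vertical lift
    vert_ld = ld_ratio * math.cos(math.radians(bank_deg))
    return lift_accel / vert_ld                       # a_D = a_L / (L/D cos sigma)
```

Because the drag level on the QEG is an explicit function of velocity, path constraints such as g-load and heat rate translate into simple bounds in the velocity-altitude plane, which is what makes the inequality constraints convenient to enforce.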
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
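The nonlinear weighted least-squares step can be sketched with a toy pressure model. This is illustrative only: the quadratic-cosine surface pressure model and the identity weights below are assumptions, not the flight algorithm's high-fidelity pressure distribution. A Gauss-Newton iteration recovers dynamic pressure and angle of attack from a handful of port pressures.

```python
import numpy as np

def estimate_qbar_alpha(thetas, pressures, q0=1.0, a0=0.0, iters=20):
    """Gauss-Newton weighted least squares for dynamic pressure qbar and
    angle of attack alpha, using a toy modified-Newtonian-style surface
    pressure model  p_i = qbar * cos^2(theta_i - alpha)."""
    q, a = q0, a0
    W = np.eye(len(thetas))                 # measurement weights (identity here)
    for _ in range(iters):
        c = np.cos(thetas - a)
        r = pressures - q * c**2            # residuals
        # Jacobian of the model w.r.t. (qbar, alpha)
        J = np.column_stack([c**2, q * np.sin(2 * (thetas - a))])
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        q, a = q + dx[0], a + dx[1]
    return q, a
```

With the estimated dynamic pressure in hand, density follows from the inertial velocity, which is the coupling to the navigation state that the paper describes.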
NASA Technical Reports Server (NTRS)
Millman, Daniel R.
2017-01-01
Flush Air Data Systems (FADS) are becoming more prevalent on re-entry vehicles, as evidenced by the Mars Science Laboratory and the Orion Multipurpose Crew Vehicle. A FADS consists of flush-mounted pressure transducers located at various locations on the forebody of a flight vehicle or the heat shield of a re-entry capsule. A pressure model converts the pressure readings into useful air data quantities. Two algorithms for converting pressure readings to air data have become predominant: the iterative Least Squares State Estimator (LSSE) and the Triples Algorithm. What follows herein is a new algorithm that takes advantage of the best features of both the Triples Algorithm and the LSSE. This approach employs the potential flow model and strategic differencing of the Triples Algorithm to obtain the flight angles; however, the requirements on port placement are far less restrictive, allowing for configurations that are considered optimal for a FADS.
A 3D sequence-independent representation of the protein data bank.
Fischer, D; Tsai, C J; Nussinov, R; Wolfson, H
1995-10-01
Here we address the following questions. How many structurally different entries are there in the Protein Data Bank (PDB)? How do the proteins populate the structural universe? To investigate these questions a structurally non-redundant set of representative entries was selected from the PDB. Construction of such a dataset is not trivial: (i) the considerable size of the PDB requires a large number of comparisons (there were more than 3250 structures of protein chains available in May 1994); (ii) the PDB is highly redundant, containing many structurally similar entries, not necessarily with significant sequence homology; and (iii) there is no clear-cut definition of structural similarity, which depends on the criteria and methods used. Here, we analyze structural similarity ignoring protein topology. To date, representative sets have been selected either by hand, by sequence comparison techniques which ignore the three-dimensional (3D) structures of the proteins, or by sequence comparisons followed by linear structural comparison (i.e. the topology, or the sequential order of the chains, is enforced in the structural comparison). Here we describe a 3D sequence-independent, automated and efficient method to obtain a representative set of protein molecules from the PDB which contains all unique structures and is structurally non-redundant. The method has two novel features. The first is the use of strictly structural criteria in the selection process, without taking the sequence information into account. To this end we employ a fast structural comparison algorithm which requires on average approximately 2 s per pairwise comparison on a workstation. The second novel feature is the iterative application of a heuristic clustering algorithm that greatly reduces the number of comparisons required. We obtain a representative set of 220 chains with resolution better than 3.0 Å, or 268 chains including lower-resolution entries, NMR entries and models. The resulting set can serve as a basis for extensive structural classification and studies of 3D recurring motifs and of sequence-structure relationships. The clustering algorithm succeeds in classifying into the same structural family chains with no significant sequence homology, e.g. all the globins in one single group, all the trypsin-like serine proteases in another, and all the immunoglobulin-like folds in a third. In addition, unexpected structural similarities of interest have been automatically detected between pairs of chains. A cluster analysis of the representative structures demonstrates the way the "structural universe" is populated.
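A clustering scheme that avoids all-pairs comparison can be sketched generically. The "leader" variant below is a hedged illustration of the comparison-saving idea, not the authors' algorithm: each new chain is compared only to the current cluster representatives rather than to every previously seen chain.

```python
def leader_cluster(items, distance, cutoff):
    """Heuristic 'leader' clustering: compare each new item only to the
    current cluster representatives, greatly reducing the number of
    pairwise comparisons versus an all-pairs scheme."""
    reps, clusters = [], []
    for x in items:
        for k, rep in enumerate(reps):
            if distance(x, rep) <= cutoff:
                clusters[k].append(x)   # close enough: join this cluster
                break
        else:
            reps.append(x)              # no match: x starts a new cluster
            clusters.append([x])
    return reps, clusters
```

With a 2 s structural comparison per pair, reducing comparisons from O(N^2) to roughly O(N * number of clusters) is what makes a 3250-chain dataset tractable.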
High-resolution melting (HRM) for genotyping bovine ephemeral fever virus (BEFV).
Erster, Oran; Stram, Rotem; Menasherow, Shopia; Rubistein-Giuni, Marisol; Sharir, Binyamin; Kchinich, Evgeni; Stram, Yehuda
2017-02-02
In recent years there have been several major outbreaks of bovine ephemeral disease in the Middle East, including Israel. Such occurrences raise the need for quick identification of the viruses responsible for the outbreaks, in order to rapidly identify the entry of viruses that do not belong to the Middle East BEFV lineage. This challenge was met by the development of a high-resolution melt (HRM) assay. The assay is based on the viral G gene sequence and the generation of an algorithm that calculates and evaluates the GC content of various fragments. The algorithm was designed to scan 50- to 200-base-long segments in a sliding-window manner, then compare and rank them using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), according to the differences in GC content of homologous fragments. Two fragments were selected, based on a match to the analysis criteria in terms of size and GC content. These fragments were successfully used in the analysis to differentiate between different virus lineages, thus facilitating assignment of the viruses' geographical origins. Moreover, the assay could be used for differentiating infected from vaccinated animals (DIVA). The new algorithm may therefore be useful for the development of improved genotyping studies for other viruses and possibly other microorganisms. Copyright © 2016. Published by Elsevier B.V.
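The sliding-window GC-content scan can be sketched directly. The ranking step below simply maximizes the GC difference between homologous windows of one size; the paper's full TOPSIS ranking over the 50- to 200-base window range is not reproduced here.

```python
def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def most_discriminating_window(seq_a, seq_b, win=50):
    """Slide a fixed-size window over two aligned homologous sequences
    and return the start position (and value) of the largest GC-content
    difference -- the fragment best suited to separate lineages by
    melting behaviour."""
    n = min(len(seq_a), len(seq_b))
    best, best_diff = 0, -1.0
    for s in range(n - win + 1):
        diff = abs(gc_content(seq_a[s:s + win]) - gc_content(seq_b[s:s + win]))
        if diff > best_diff:
            best, best_diff = s, diff
    return best, best_diff
```

A larger GC difference between lineages shifts the fragment's melting temperature further apart, which is what HRM genotyping detects.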
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-15
... calculating the ratio between (i) entered orders, weighted by the distance of the order from the national best... with an ``Order Entry Ratio'' of more than 100. The Order Entry Ratio is calculated, and the Excess Order Fee imposed, on a monthly basis. For each MPID, the Order Entry Ratio is the ratio of (i) the MPID...
Derikx, Joep P M; Erdkamp, Frans L G; Hoofwijk, A G M
2013-01-01
An electronic health record (EHR) should provide 4 key functionalities: (a) documenting patient data; (b) facilitating computerised provider order entry; (c) displaying the results of diagnostic research; and (d) providing support for healthcare providers in the clinical decision-making process. Computerised provider order entry into the EHR enables the electronic receipt and transfer of orders to ancillary departments, which can take the place of handwritten orders. By classifying the computerised provider order entries according to disorders, digital care pathways can be created. Such care pathways could result in faster and improved diagnostics. Communicating by means of an electronic instruction document that is linked to a computerised provider order entry facilitates the provision of healthcare in a safer, more efficient and auditable manner. The implementation of a full-scale EHR has been delayed as a result of economic, technical and legal barriers, as well as some resistance by physicians.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
... . NASDAQ has safeguards in place to protect the market from inadvertent entry of large orders. Each member that requests connectivity through an order entry port is required to specify the maximum order size... and procedures in place to ensure the proper entry and monitoring of orders entered into NASDAQ...
Mixed results in the safety performance of computerized physician order entry.
Metzger, Jane; Welebob, Emily; Bates, David W; Lipsitz, Stuart; Classen, David C
2010-04-01
Computerized physician order entry is a required feature for hospitals seeking to demonstrate meaningful use of electronic medical record systems and qualify for federal financial incentives. A national sample of sixty-two hospitals voluntarily used a simulation tool designed to assess how well safety decision support worked when applied to medication orders in computerized order entry. The simulation detected only 53 percent of the medication orders that would have resulted in fatalities and 10-82 percent of the test orders that would have caused serious adverse drug events. It is important to ascertain whether actual implementations of computerized physician order entry are achieving goals such as improved patient safety.
17 CFR 10.7 - Date of entry of orders.
Code of Federal Regulations, 2010 CFR
2010-04-01
17 CFR § 10.7: Date of entry of orders. Title 17, Commodity and Securities Exchanges; COMMODITY FUTURES TRADING COMMISSION RULES OF PRACTICE, General Provisions. In computing any period of time involving the date of...
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David
2017-01-01
Background: As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored; only the GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective: The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of the PII while registering a subject. Methods: According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required or optional PII fields. The performance of the proposed algorithm was also tested in the registering system of study subjects. Results: There are 127,700 error-planted subjects, of whom 114,464 (89.64%) can still be identified as the previous subject and the remaining 13,236 (10.36%, 13,236/127,700) are discriminated as new subjects. As expected, 100% of non-identified subjects had errors within the required PII fields. The possibility that a subject is identified is related to the count and type of incorrect PII fields. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and type of the incorrect PII fields. The best situation is to precisely find the exact incorrect PII fields, and the worst is to shrink the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and performs as an effective tool. Conclusions: The GUID system has high error tolerance and may correctly identify and associate a subject even with a few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
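The set-theoretic error check can be illustrated with a toy scheme. The field names, combination patterns, and match threshold below are invented for illustration and are not the GUID system's actual patterns; the essential property is preserved: only hash codes of field combinations are ever compared, never the PII itself, and the combos that fail to match narrow down which fields are questionable.

```python
import hashlib

# Hypothetical combination patterns: each is a subset of PII fields
# hashed together. The real system enumerates many more patterns.
COMBOS = [
    ("first", "last", "dob"),
    ("first", "dob", "zip"),
    ("last", "dob", "city"),
]

def hash_combo(record, combo):
    """One-way hash of a normalized field combination."""
    payload = "|".join(record[f].strip().lower() for f in combo)
    return hashlib.sha256(payload.encode()).hexdigest()

def codes(record):
    return {combo: hash_combo(record, combo) for combo in COMBOS}

def check_entry(stored, record, min_matches=1):
    """Compare hash codes only; flag fields that appear exclusively in
    non-matching combos as questionable entries."""
    new = codes(record)
    matched = [c for c in COMBOS if stored[c] == new[c]]
    if len(matched) < min_matches:
        return None, set()                  # treated as a new subject
    consistent = set().union(*matched)      # fields vouched for by a match
    questionable = set()
    for c in COMBOS:
        if stored[c] != new[c]:
            questionable |= set(c) - consistent
    return "same-subject", questionable
```

A typo in one optional field breaks only the combos containing it, so the subject is still identified and the faulty field is localized by set difference.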
Fumis, Renata Rego Lins; Costa, Eduardo Leite Vieira; Martins, Paulo Sergio; Pizzo, Vladimir; Souza, Ivens Augusto; Schettino, Guilherme de Paula Pinto
2014-01-01
To evaluate the satisfaction of the intensive care unit staff with a computerized physician order entry system and to compare perceptions of its relevance among intensive care unit healthcare workers. We performed a cross-sectional survey to assess the satisfaction of the intensive care unit staff with the computerized physician order entry in a 30-bed medical/surgical adult intensive care unit using a self-administered questionnaire. The questions used for grading satisfaction levels were answered according to a numerical scale that ranged from 1 point (low satisfaction) to 10 points (high satisfaction). The majority of the respondents (n=250) were female (66%) and between 30 and 35 years of age (69%). The overall satisfaction with the computerized physician order entry scored 5.74±2.14 points. Satisfaction was lower among physicians (n=42) than among nurses, nurse technicians, respiratory therapists, clinical pharmacists and diet specialists (4.62±1.79 versus 5.97±2.14, p<0.001); satisfaction decreased with age (p<0.001). Physicians scored lower concerning the potential of the computerized physician order entry for improving patient safety (5.45±2.20 versus 8.09±2.21, p<0.001) and the ease of using the computerized physician order entry (3.83±1.88 versus 6.44±2.31, p<0.001). The characteristics independently associated with satisfaction were the system's user-friendliness, accuracy, capacity to provide clear information, and fast response time. Six months after its implementation, healthcare workers were satisfied, albeit not entirely, with the computerized physician order entry. The overall users' satisfaction with the computerized physician order entry was lower among physicians compared to other healthcare professionals.
The factors associated with satisfaction included the belief that digitalization decreased the workload and contributed to the intensive care unit quality with a user-friendly and accurate system and that digitalization provided concise information within a reasonable time frame.
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B trajectory from entry through landing. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
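Wald's sequential probability ratio test, the core of this FDI scheme, accumulates a log-likelihood ratio between the no-failure and failure hypotheses and stops at thresholds set by the desired error rates. A minimal sketch for a Gaussian mean-shift failure signature (the parameter values are illustrative, not the shuttle IMU modeling constants):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.
    alpha/beta are the target false-alarm and missed-detection rates."""
    upper = math.log((1 - beta) / alpha)    # accept H1 (declare failure)
    lower = math.log(beta / (1 - alpha))    # accept H0 (no failure)
    llr = 0.0
    for k, x in enumerate(samples, 1):
        # Gaussian log-likelihood-ratio increment
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "undecided", len(samples)
```

The appeal for redundancy management is that the test decides as soon as the evidence is sufficient, rather than after a fixed window.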
Ordering the Senses in a Monolingual Dictionary Entry.
ERIC Educational Resources Information Center
Gold, David L.
1986-01-01
Reviews issues to be considered in determining the order of meanings for a lexeme in a dictionary entry and compares techniques for deciding order. Types of ordering include importance, frequency, logical ordering, dominant meaning, syntactic, and historical. (MSE)
Multidimensional FEM-FCT schemes for arbitrary time stepping
NASA Astrophysics Data System (ADS)
Kuzmin, D.; Möller, M.; Turek, S.
2003-05-01
The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
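The low-order method's construction, elimination of negative off-diagonal entries of the discrete transport operator by adding artificial diffusion (discrete upwinding), can be written directly. The dense NumPy sketch below illustrates the rule; a production FEM code would of course work on sparse matrices.

```python
import numpy as np

def low_order_operator(K):
    """Discrete upwinding: add a symmetric artificial-diffusion matrix D
    with zero row sums so that L = K + D has no negative off-diagonal
    entries. D[i,j] = max(0, -K[i,j], -K[j,i]) for i != j."""
    n = K.shape[0]
    D = np.zeros_like(K)
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = max(0.0, -K[i, j], -K[j, i])
    np.fill_diagonal(D, -D.sum(axis=1))   # zero row sums => conservation
    return K + D
```

Because D has zero row sums, adding it leaves the row sums of K (and hence conservation) untouched, while the sign condition on the off-diagonals is what yields the positivity-preserving low-order scheme that the limiter then blends with the high-order one.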
NASA Technical Reports Server (NTRS)
D'souza, Sarah N.; Kinney, David J.; Garcia, Joseph A.; Sarigul-Klijn, Nesrin
2014-01-01
The state-of-the-art in vehicle design decouples flight-feasible trajectory generation from the optimization process of an entry spacecraft shape. The disadvantage of this decoupled process is seen when a particular aeroshell does not meet in-flight requirements once integrated into Guidance, Navigation, and Control simulations. It is postulated that the integration of a guidance algorithm into the design process will provide a real-time, rapid trajectory generation technique to enhance the robustness of vehicle design solutions. The potential benefit of this integration is a reduction in design cycles (possible cost savings) and increased accuracy in the aerothermal environment (possible mass savings). This work examines two aspects: 1) the performance of a reference tracking guidance algorithm for five different geometries with the same reference trajectory, and 2) the potential for mass savings from improved aerothermal predictions. An Apollo Derived Guidance (ADG) algorithm is used in this study. The baseline geometry and five test case geometries were flown using the same baseline trajectory. The guided trajectory results are compared to separate trajectories determined in a vehicle optimization study conducted for NASA's Mars Entry, Descent, and Landing System Analysis. This study revealed several aspects regarding the potential gains and required developments for integrating a guidance algorithm into the vehicle optimization environment. First, the generation of flight-feasible trajectories is only as good as the robustness of the guidance algorithm. The set of dispersed geometries modelled aerodynamic dispersions ranging from ±1% to ±17%, and a single extreme case was modelled in which the aerodynamics were approximately 80% lower than those of the baseline geometry. The ADG, as expected, was able to guide the vehicle into the aeroshell separation box at the target location for dispersions up to 17%, but failed for the 80% dispersion case. Finally, the results revealed that including flight-feasible trajectories for a set of dispersed geometries has the potential to save up to 430 kg of mass.
Re-Entry Point Targeting for LEO Spacecraft using Aerodynamic Drag
NASA Technical Reports Server (NTRS)
Omar, Sanny; Bevilacqua, Riccardo; Fineberg, Laurence; Treptow, Justin; Johnson, Yusef; Clark, Scott
2016-01-01
Most Low Earth Orbit (LEO) spacecraft do not have thrusters and re-enter the atmosphere at random locations and uncertain times. These objects pose a risk to persons, property, and other satellites, a concern that has grown with the recent increase in small satellites. This presentation describes a NASA-funded project to design a retractable drag device that expedites de-orbit and targets a re-entry location through modulation of the drag area; the re-entry point targeting algorithm is discussed here.
2nd International Planetary Probe Workshop
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Martinez, Ed; Arcadi, Marla
2005-01-01
Included are presentations from the 2nd International Planetary Probe Workshop. The purpose of the second workshop was to continue to unite the community of planetary scientists, spacecraft engineers and mission designers and planners; whose expertise, experience and interests are in the areas of entry probe trajectory and attitude determination, and the aerodynamics/aerothermodynamics of planetary entry vehicles. Mars lander missions and the first probe mission to Titan made 2004 an exciting year for planetary exploration. The Workshop addressed entry probe science, engineering challenges, mission design and instruments, along with the challenges of reconstruction of the entry, descent and landing or the aerocapture phases. Topics addressed included methods, technologies, and algorithms currently employed; techniques and results from the rich history of entry probe science such as PAET, Venera/Vega, Pioneer Venus, Viking, Galileo, Mars Pathfinder and Mars MER; upcoming missions such as the imminent entry of Huygens and future Mars entry probes; and new and novel instrumentation and methodologies.
NASA Technical Reports Server (NTRS)
Blissit, J. A.
1986-01-01
Using analysis results from the POST trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of ±30% density biases, two density profile models, ±15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Keshtkaran, Ali; Barati, Omid
2016-09-01
This study aimed to identify the functional requirements of computerized provider order entry software and design this software in Iran. This study was conducted using review documentation, interview, and focus group discussions in Shiraz University of Medical Sciences, as the medical pole in Iran, in 2013-2015. The study sample consisted of physicians (n = 12) and nurses (n = 2) in the largest hospital in the southern part of Iran and information technology experts (n = 5) in Shiraz University of Medical Sciences. Functional requirements of the computerized provider order entry system were examined in three phases. Finally, the functional requirements were distributed in four levels, and accordingly, the computerized provider order entry software was designed. The software had seven main dimensions: (1) data entry, (2) drug interaction management system, (3) warning system, (4) treatment services, (5) ability to write in software, (6) reporting from all sections of the software, and (7) technical capabilities of the software. The nurses and physicians emphasized quick access to the computerized provider order entry software, order prescription section, and applicability of the software. The software had some items that had not been mentioned in other studies. Ultimately, the software was designed by a company specializing in hospital information systems in Iran. This study was the first specific investigation of computerized provider order entry software design in Iran. Based on the results, it is suggested that this software be implemented in hospitals.
Simultaneous tensor decomposition and completion using factor priors.
Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark
2014-03-01
The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
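To make the factorization-versus-completion trade-off concrete, here is a minimal sketch of the simpler matrix case that tensor completion extends: low-rank completion by iterative truncated-SVD imputation. The rank-2 synthetic data, sampling rate, and iteration count are hypothetical illustrations, not the STDC method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a rank-2 matrix with ~30% of entries missing.
A = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))
mask = rng.random(A.shape) < 0.7          # True = observed entry
Z = np.where(mask, A, 0.0)                # initialize missing entries at 0

# Iterative hard-thresholded SVD: project onto rank-2 matrices, then
# restore the observed entries, and repeat.
for _ in range(200):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    low_rank = (U[:, :2] * s[:2]) @ Vt[:2]
    Z = np.where(mask, A, low_rank)

# Relative error on the entries that were never observed.
rel_err = np.linalg.norm((Z - A)[~mask]) / np.linalg.norm(A[~mask])
```

With a correctly chosen rank this converges to an accurate reconstruction; the overfitting risk the abstract describes appears when the assumed rank is wrong, which is exactly the gap the factor priors in STDC are meant to close.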
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches, since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent, and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
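The dualization idea can be sketched on a toy problem: the indicator-sum risk is folded into the stage cost with a multiplier, the primal policy is found by ordinary backward value iteration, and the dual variable is found by root finding (bisection here, for simplicity). The 1-D grid walk, costs, and risk bound below are hypothetical, not the paper's Mars EDL setup.

```python
import numpy as np

# Toy problem: walk from S0 = 3 toward GOAL = 9 on a 1-D grid; state 5 is
# "unsafe".  The constraint E[number of unsafe visits] <= Delta is dualized
# into the stage cost with multiplier lam.
NS, T, UNSAFE, GOAL, S0 = 10, 15, 5, 9, 3
ACTIONS = (-1, 0, 1)
NOISE = {-1: 0.25, 0: 0.5, 1: 0.25}

def step_dist(s, a):
    """Probability distribution over next states for action a in state s."""
    d = np.zeros(NS)
    for w, p in NOISE.items():
        d[np.clip(s + a + w, 0, NS - 1)] += p
    return d

def dp_policy(lam):
    """Backward value iteration on the dualized cost 1 + lam*1{s unsafe}."""
    V = np.abs(np.arange(NS) - GOAL).astype(float)   # terminal cost
    pi = np.zeros((T, NS), dtype=int)
    for t in range(T - 1, -1, -1):
        Vn = np.empty(NS)
        for s in range(NS):
            q = [1.0 + lam * (s == UNSAFE) + step_dist(s, a) @ V
                 for a in ACTIONS]
            k = int(np.argmin(q))
            pi[t, s], Vn[s] = ACTIONS[k], q[k]
        V = Vn
    return pi

def risk(pi):
    """Expected unsafe-state visits under pi (bounds the failure probability)."""
    p = np.zeros(NS); p[S0] = 1.0
    r = 0.0
    for t in range(T):
        r += p[UNSAFE]
        p = sum(p[s] * step_dist(s, pi[t, s]) for s in range(NS))
    return r

# Root finding on the dual variable lam to meet the risk bound Delta.
Delta, lo, hi = 0.05, 0.0, 100.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if risk(dp_policy(mid)) > Delta:
        lo = mid
    else:
        hi = mid
```

The unconstrained policy (lam = 0) cuts straight through the unsafe state and violates the bound; as lam grows, the policy detours and the risk falls below Delta, mirroring the primal/dual split described in the abstract.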
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Buck, Gregory M.; Decaro, Anthony D.
2009-01-01
The analysis of the effects of reaction control system jet plumes on aftbody heating of the Orion entry capsule is presented. The analysis covers the hypersonic continuum portion of the entry trajectory. Aerothermal environments at flight conditions were evaluated using the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) code and the Data Parallel Line Relaxation (DPLR) code. Results show a marked augmentation of aftbody heating due to roll, yaw, and aft pitch thrusters. No significant augmentation is expected due to forward pitch thrusters. Of the conditions surveyed, the maximum heat rate on the aftshell is expected when firing a pair of roll thrusters at a maximum deceleration condition.
Functional Equivalence Acceptance Testing of FUN3D for Entry Descent and Landing Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Wood, William A.; Kleb, William L.; Alter, Stephen J.; Glass, Christopher E.; Padilla, Jose F.; Hammond, Dana P.; White, Jeffery A.
2013-01-01
The functional equivalence of the unstructured grid code FUN3D to the structured grid code LAURA (Langley Aerothermodynamic Upwind Relaxation Algorithm) is documented for applications of interest to the Entry, Descent, and Landing (EDL) community. Examples from an existing suite of regression tests are used to demonstrate the functional equivalence, encompassing various thermochemical models and vehicle configurations. Algorithm modifications required for the node-based unstructured grid code (FUN3D) to reproduce functionality of the cell-centered structured code (LAURA) are also documented. Challenges associated with computation on tetrahedral grids versus computation on structured-grid derived hexahedral systems are discussed.
Test Results for Entry Guidance Methods for Space Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2004-01-01
There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.
Test Results for Entry Guidance Methods for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2003-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.
Joint fMRI analysis and subject clustering using sparse dictionary learning
NASA Astrophysics Data System (ADS)
Kim, Seung-Jun; Dontaraju, Krishna K.
2017-08-01
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provide insight into the proposed technique.
Simultaneous Tensor Decomposition and Completion Using Factor Priors.
Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark
2013-08-27
Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
Indication-Based Ordering: A New Paradigm for Glycemic Control in Hospitalized Inpatients
Lee, Joshua; Clay, Brian; Zelazny, Ziband; Maynard, Gregory
2008-01-01
Background: Inpatient glycemic control is a constant challenge. Institutional insulin management protocols and structured order sets are commonly advocated but poorly studied. Effective and validated methods to integrate algorithmic protocol guidance into the insulin ordering process are needed. Methods: We introduced a basic structured set of computerized insulin orders (Version 1), and later introduced a paper insulin management protocol, to assist users with the order set. Metrics were devised to assess the impact of the protocol on insulin use, glycemic control, and hypoglycemia using pharmacy data and point-of-care glucose tests. When incremental improvement was seen (as described in the results), Version 2 of the insulin orders was created to further streamline the process. Results: The percentage of regimens containing basal insulin improved with Version 1. The percentage of patient days with hypoglycemia improved from 3.68% at baseline to 2.59% with Version 1 plus the paper insulin management protocol, representing a relative risk for a hypoglycemic day of 0.70 [confidence interval (CI) 0.62, 0.80]. The relative risk of an uncontrolled (mean glucose over 180 mg/dl) patient stay was reduced to 0.84 (CI 0.77, 0.91) with Version 1 and was reduced further to 0.73 (CI 0.66, 0.81) with the paper protocol. Version 2 used clinician-entered patient parameters to guide protocol-based insulin ordering and simultaneously improved the flexibility and ease of ordering over Version 1. Conclusion: Patient parameter and protocol-based clinical decision support, added to computerized provider order entry, has a track record of improving glycemic control indices. This justifies the incorporation of these algorithms into online order management. PMID:19885198
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple missing values. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm that is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These were initially prototyped in Perl, and their accuracy was evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms the neighborhood-wise and position-wise algorithms, particularly for datasets with a higher level of missing entries. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former proved superior to the latter. Motivated by these findings, which clearly indicate the added value of DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different initial rough imputation methods.
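The core idea, measuring profile similarity with DTW and filling missing positions from the nearest donor profiles, can be sketched as follows. This is a simplified position-wise variant on hypothetical sine-wave data, not the authors' DTWimpute implementation.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def impute(profile, candidates, k=3):
    """Fill NaN positions with the average of those positions in the k
    DTW-nearest candidate profiles (similarity measured on observed points)."""
    obs = ~np.isnan(profile)
    dists = [dtw(profile[obs], c[obs]) for c in candidates]
    nearest = np.argsort(dists)[:k]
    out = profile.copy()
    out[~obs] = np.mean([candidates[i][~obs] for i in nearest], axis=0)
    return out

# Hypothetical usage: sine profiles with small and large phase shifts.
t = np.linspace(0.0, 2.0 * np.pi, 24)
candidates = [np.sin(t + d) for d in (-0.1, 0.0, 0.1, 1.5, 3.0)]
profile = np.sin(t)
profile[7] = np.nan                    # knock out one entry
filled = impute(profile, candidates)
```

The three small-shift candidates are selected as donors, so the imputed point lands close to the true value, while the badly phase-shifted profiles are ignored, which is the behavior DTW-based candidate selection is designed to give.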
NASA Astrophysics Data System (ADS)
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. 
The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with current available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.
Adaptive laser link reconfiguration using constraint propagation
NASA Technical Reports Server (NTRS)
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space-based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BPs and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BPs based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) Algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of constraint propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. 
The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications networks. Conclusions are presented, including a graphical analysis of results depicting the ordered set of links versus the set of all possible links based on the computed Bit Error Rate (BER). Finally, future research is discussed, including enhancements to the HALO algorithm, network simulation, and the addition of an intelligent routing algorithm for BP.
Mission and Navigation Design for the 2009 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.
2008-01-01
NASA's Mars Science Laboratory mission will launch the next mobile science laboratory to Mars in the fall of 2009, with arrival at Mars occurring in the summer of 2010. A heat shield, parachute, and rocket-powered descent stage, including a sky crane, will be used to land the rover safely on the surface of Mars. The direction of the atmospheric entry vehicle lift vector will be controlled by a hypersonic entry guidance algorithm to compensate for entry trajectory errors and counteract atmospheric and aerodynamic dispersions. The key challenges for mission design are (1) develop a launch/arrival strategy that provides communications coverage during the Entry, Descent, and Landing phase either from an X-band direct-to-Earth link or from an Ultra High Frequency link to the Mars Reconnaissance Orbiter for landing latitudes between 30 deg North and 30 deg South, while satisfying mission constraints on Earth departure energy and Mars atmospheric entry speed, and (2) generate Earth-departure targets for the Atlas V-541 launch vehicle for the specified launch/arrival strategy. The launch/arrival strategy employs a 30-day baseline launch period and a 27-day extended launch period with varying arrival dates at Mars. The key challenges for navigation design are (1) deliver the spacecraft to the atmospheric entry interface point (Mars radius of 3522.2 km) with an inertial entry flight path angle error of +/- 0.20 deg (3 sigma), (2) provide knowledge of the entry state vector accurate to +/- 2.8 km (3 sigma) in position and +/- 2.0 m/s (3 sigma) in velocity for initializing the entry guidance algorithm, and (3) ensure a 99% probability of successful delivery at Mars with respect to available cruise stage propellant. Orbit determination is accomplished via ground processing of multiple complementary radiometric data types: Doppler, range, and Delta-Differential One-way Ranging (a Very Long Baseline Interferometry measurement). 
The navigation strategy makes use of up to five interplanetary trajectory correction maneuvers to achieve entry targeting requirements. The requirements for cruise propellant usage and atmospheric entry targeting and knowledge are met with ample margins.
19 CFR 143.35 - Procedure for electronic entry summary.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 2 2012-04-01 2012-04-01 false Procedure for electronic entry summary. 143.35...; DEPARTMENT OF THE TREASURY (CONTINUED) SPECIAL ENTRY PROCEDURES Electronic Entry Filing § 143.35 Procedure for electronic entry summary. In order to obtain entry summary processing electronically, the filer...
19 CFR 143.35 - Procedure for electronic entry summary.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 2 2011-04-01 2011-04-01 false Procedure for electronic entry summary. 143.35...; DEPARTMENT OF THE TREASURY (CONTINUED) SPECIAL ENTRY PROCEDURES Electronic Entry Filing § 143.35 Procedure for electronic entry summary. In order to obtain entry summary processing electronically, the filer...
19 CFR 143.35 - Procedure for electronic entry summary.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Procedure for electronic entry summary. 143.35...; DEPARTMENT OF THE TREASURY (CONTINUED) SPECIAL ENTRY PROCEDURES Electronic Entry Filing § 143.35 Procedure for electronic entry summary. In order to obtain entry summary processing electronically, the filer...
Automated Re-Entry System using FNPEG
NASA Technical Reports Server (NTRS)
Johnson, Wyatt R.; Lu, Ping; Stachowiak, Susan J.
2017-01-01
This paper discusses the implementation and simulated performance of the FNPEG (Fully Numerical Predictor-corrector Entry Guidance) algorithm in GNC FSW (Guidance, Navigation, and Control Flight Software) for use in an autonomous re-entry vehicle. A few modifications to FNPEG are discussed that result in computational savings: a change to the state propagator and a modification to the cross-range lateral logic. Finally, some Monte Carlo results are presented using a representative vehicle in both a high-fidelity 6-DOF (degree-of-freedom) simulation and a 3-DOF simulation for independent validation.
Post-Flight EDL Entry Guidance Performance of the 2011 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
Mendeck, Gavin F.; McGrew, Lynn Craig
2013-01-01
The 2011 Mars Science Laboratory mission performed the first Mars guided entry, safely delivering the rover to a landing within a touchdown ellipse of 19.1 km x 6.9 km. The Entry Terminal Point Controller guidance algorithm is derived from the final-phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control the range flown. The guided entry performed as designed without any significant exceptions. The Curiosity rover was delivered about 2.2 km from the expected touchdown point. This miss distance is attributed to the limited time available to correct downrange drift after the final bank reversal and to a suspected tailwind during heading alignment. The successful guided entry of the Mars Science Laboratory lays a foundation for future Mars missions to improve upon.
Computational fluid dynamics research
NASA Technical Reports Server (NTRS)
Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott
1992-01-01
The focus of research in the computational fluid dynamics (CFD) area is two fold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.
Impact of Computerized Provider Order Entry on Pharmacist Productivity
Hatfield, Mark D.; Cox, Rodney; Mhatre, Shivani K.; Flowers, W. Perry
2014-01-01
Abstract Purpose: To examine the impact of computerized provider order entry (CPOE) implementation on average time spent on medication order entry and the number of order actions processed. Methods: An observational time and motion study was conducted from March 1 to March 17, 2011. Two similar community hospital pharmacies were compared: one without CPOE implementation and the other with CPOE implementation. Pharmacists in the central pharmacy department of both hospitals were observed in blocks of 1 hour, with 24 hours of observation in each facility. Time spent by pharmacists on distributive, administrative, clinical, and miscellaneous activities associated with order entry were recorded using time and motion instrument documentation. Information on medication order actions and order entry/verifications was obtained using the pharmacy network system. Results: The mean ± SD time spent by pharmacists per hour in the CPOE pharmacy was significantly less than the non-CPOE pharmacy for distributive activities (43.37 ± 7.75 vs 48.07 ± 8.61) and significantly greater than the non-CPOE pharmacy for administrative (8.58 ± 5.59 vs 5.72 ± 6.99) and clinical (7.38 ± 4.27 vs 4.22 ± 3.26) activities. The CPOE pharmacy was associated with a significantly higher number of order actions per hour (191.00 ± 82.52 vs 111.63 ± 25.66) and significantly less time spent (in minutes per hour) on order entry and order verification combined (28.30 ± 9.25 vs 36.56 ± 9.14) than the non-CPOE pharmacy. Conclusion: The implementation of CPOE facilitated pharmacists to allocate more time to clinical and administrative functions and increased the number of order actions processed per hour, thus enhancing workflow efficiency and productivity of the pharmacy department. PMID:24958959
SAKURA-viewer: intelligent order history viewer based on two-viewpoint architecture.
Toyoda, Shuichi; Niki, Noboru; Nishitani, Hiromu
2007-03-01
We propose a new intelligent order history viewer applied to consolidating and visualizing data. SAKURA-viewer is a highly effective tool, as: 1) it visualizes both the semantic viewpoint and the temporal viewpoint of patient records simultaneously; 2) it promotes awareness of contextual information among the daily data; and 3) it implements patient-centric data entry methods. This viewer contributes to decrease the user's workload in an order entry system. This viewer is now incorporated into an order entry system being run on an experimental basis. We describe the evaluation of this system using results of a user satisfaction survey, analysis of information consolidation within the database, and analysis of the frequency of use of data entry methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-25
... Change Amending Rule 7.31(h)(5) To Reduce the Minimum Order Entry Size of a Mid-Point Passive Liquidity... order entry size of a Mid-Point Passive Liquidity Order (``MPL Order'') from 100 shares to one share...
Mars Pathfinder Atmospheric Entry Navigation Operations
NASA Technical Reports Server (NTRS)
Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.
1997-01-01
On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, is also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km in downtrack.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
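The nonlinear weighted least-squares step can be illustrated with a toy Gauss-Newton solve for density and flow incidence angle from multi-port pressure data. The modified-Newtonian pressure model, port angles, and numbers below are hypothetical stand-ins, not the flight algorithm or the actual Mars port layout.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical port layout and true atmospheric state (illustration only).
theta = np.deg2rad([-40.0, -20.0, 0.0, 20.0, 40.0])  # port angles from nose
V = 3000.0                                           # relative speed, m/s
rho_true, alpha_true = 0.01, np.deg2rad(5.0)         # density, flow angle
sigma = 50.0                                         # pressure noise, Pa

def model(rho, alpha):
    """Modified-Newtonian port pressures: q * cos^2(theta - alpha)."""
    return 0.5 * rho * V**2 * np.cos(theta - alpha) ** 2

p_meas = model(rho_true, alpha_true) + rng.normal(0.0, sigma, theta.size)

# Gauss-Newton on x = [rho, alpha] with measurement weights W = 1/sigma^2.
x = np.array([0.005, 0.0])                           # crude prior guess
W = np.eye(theta.size) / sigma**2
for _ in range(10):
    rho, alpha = x
    r = p_meas - model(rho, alpha)                   # pressure residuals
    J = np.column_stack([
        0.5 * V**2 * np.cos(theta - alpha) ** 2,        # d p / d rho
        0.5 * rho * V**2 * np.sin(2 * (theta - alpha)), # d p / d alpha
    ])
    x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
```

Density scales the whole pressure pattern while the flow angle shifts it across the ports, so the two states are separately observable, which is the same structure that makes port placement matter for wind observability in the paper.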
Computer-based physician order entry: the state of the art.
Sittig, D F; Stead, W W
1994-01-01
Direct computer-based physician order entry has been the subject of debate for over 20 years. Many sites have implemented systems successfully. Others have failed outright or flirted with disaster, incurring substantial delays, cost overruns, and threatened work actions. The rationale for physician order entry includes process improvement, support of cost-conscious decision making, clinical decision support, and optimization of physicians' time. Barriers to physician order entry result from the changes required in practice patterns, roles within the care team, teaching patterns, and institutional policies. Key ingredients for successful implementation include: the system must be fast and easy to use, the user interface must behave consistently in all situations, the institution must have broad and committed involvement and direction by clinicians prior to implementation, the top leadership of the organization must be committed to the project, and a group of problem solvers and users must meet regularly to work out procedural issues. This article reviews the peer-reviewed scientific literature to present the current state of the art of computer-based physician order entry. PMID:7719793
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, these methods have several shortcomings, including strong dependence on initial seed values and inaccurate results when the data depart seriously from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values.
In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic algorithm based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We will showcase how these algorithms can be used to process multivariate data from astronomical observations.
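The information-scoring idea above can be illustrated with a toy version: plain Lloyd's k-means plus a BIC score used to choose the number of clusters. This is a simplified stand-in for the genetic EM algorithm the work describes, not its actual method.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means (spherical clusters), for illustration only."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

def bic_score(X, centers, labels):
    """BIC for a spherical Gaussian model with one shared variance:
    -2 log L + (#params) log n. Lower is better."""
    n, d = X.shape
    k = len(centers)
    sse = ((X - centers[labels]) ** 2).sum()
    var = sse / (n * d)
    loglik = -0.5 * n * d * (np.log(2 * np.pi * var) + 1)
    n_params = k * d + 1                 # cluster centers plus shared variance
    return -2 * loglik + n_params * np.log(n)
```

Scoring fits at several values of k and keeping the lowest BIC is the simplest form of information-based selection of the number of mixture components.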
On orbital allotments for geostationary satellites
NASA Technical Reports Server (NTRS)
Gonsalvez, David J. A.; Reilly, Charles H.; Mount-Campbell, Clark A.
1986-01-01
The following satellite synthesis problem is addressed: communication satellites are to be allotted positions on the geostationary arc, with conservative pairwise satellite separations enforced so that interference does not exceed a given acceptable level. A desired location is specified for each satellite, and the objective is to minimize the sum of the deviations between the satellites' prescribed and desired locations. Two mixed-integer programming models for the satellite synthesis problem are presented. Four solution strategies, branch-and-bound, Benders' decomposition, linear programming with restricted basis entry, and a switching heuristic, are used to find solutions to example synthesis problems. Computational results indicate that the switching heuristic yields solutions of good quality in reasonable execution times when compared to the other solution methods. It is demonstrated that the switching heuristic can be applied to synthesis problems with the objective of minimizing the largest deviation between a prescribed location and the corresponding desired location. Furthermore, it is shown that the switching heuristic can use nonconservative, location-dependent satellite separations in order to satisfy interference criteria.
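To show the flavor of the problem, here is a toy greedy placement: process satellites in order of desired longitude and push each just far enough east to honor a uniform pairwise separation, then report the total deviation. This is far simpler than the paper's MIP models or switching heuristic; the function name and the one-sided push are assumptions for illustration.

```python
def place_satellites(desired, min_sep):
    """Toy allotment: sweep satellites in order of desired longitude,
    enforcing a minimum pairwise separation by shifting east only.
    Returns positions and the sum of |position - desired| deviations."""
    order = sorted(range(len(desired)), key=lambda i: desired[i])
    pos = {}
    last = None
    for i in order:
        p = desired[i] if last is None else max(desired[i], last + min_sep)
        pos[i] = p
        last = p
    deviation = sum(abs(pos[i] - desired[i]) for i in pos)
    return pos, deviation
```

A switching heuristic would additionally try exchanging the order of adjacent satellites to see whether total deviation drops; this sketch keeps the original order fixed.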
Considerations for setting up an order entry system for nuclear medicine tests.
Hara, Narihiro; Onoguchi, Masahisa; Nishida, Toshihiko; Honda, Minoru; Houjou, Osamu; Yuhi, Masaru; Takayama, Teruhiko; Ueda, Jun
2007-12-01
Integrating the Healthcare Enterprise-Japan (IHE-J) was established in Japan in 2001 and has been working to standardize health information and make it accessible on the basis of the fundamental Integrating the Healthcare Enterprise (IHE) specifications. However, because specialized operations are used in nuclear medicine tests, online sharing of patient information and test order information from the order entry system, as described by the scheduled workflow (SWF), is difficult, making information inconsistent throughout the facility and uniform management of patient information impossible. Therefore, we examined the basic design (subsystem design) for order entry systems, which are an important aspect of information management for nuclear medicine tests and need to be consistent with the system used throughout the rest of the facility. Many items are required of the subsystem when setting up an order entry system for nuclear medicine tests. Among these, the most important are exclusion settings, which account for differences in the conditions for using radiopharmaceuticals and contrast agents, and appointment frame settings, which account for differences in imaging methods and test items. To establish uniform management of patient information for nuclear medicine tests throughout the facility, it is necessary to develop an order entry system with exclusion settings and appointment frames as standard features. In this way, integration of health information with the Radiology Information System (RIS) or Picture Archiving and Communication System (PACS) based on Digital Imaging and Communications in Medicine (DICOM) standards and real-time health care assistance can be attained, achieving the IHE agenda of improving health care service and efficiently sharing information.
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchange methods for data sharing. To reduce the traffic of broadcast messages, reduction of the amount of redundant information is indispensable. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchange algorithm built on a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and neighborhood information is periodically broadcast depending on this table. The proposed hash-based routing entry management using an extended MAC address can eliminate the overhead of message flooding. An analysis of hash-value collisions determines the minimum required length of the hash values; based on this mathematical analysis, an optimum hash function can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
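The hash-length analysis mentioned above is, in essence, a birthday-bound calculation. A small sketch (the approximation used is the standard one, not taken from the paper):

```python
import math

def collision_probability(n_nodes, hash_bits):
    """Birthday-bound estimate of the chance that at least two of n_nodes
    hashed identifiers (e.g. extended MAC addresses) collide in a
    hash_bits-bit value: p ~ 1 - exp(-n(n-1) / 2^(b+1))."""
    space = 2.0 ** hash_bits
    return 1.0 - math.exp(-n_nodes * (n_nodes - 1) / (2.0 * space))

def min_hash_bits(n_nodes, max_collision_prob):
    """Smallest hash length (in bits) keeping the collision probability
    under the given target for a network of n_nodes."""
    bits = 1
    while collision_probability(n_nodes, bits) > max_collision_prob:
        bits += 1
    return bits
```

For example, keeping the collision probability for a 100-node network below 1% requires 19-bit hash values under this approximation; the trade-off is exactly the overhead-versus-ambiguity balance the abstract describes.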
Event detection for car park entries by video-surveillance
NASA Astrophysics Data System (ADS)
Coquin, Didier; Tailland, Johan; Cintract, Michel
2007-10-01
Intelligent surveillance has become an important research issue due to the high cost and low efficiency of human supervisors, and machine intelligence is required to provide a solution for automated event detection. In this paper we describe a real-time system that has been used for detecting car park entries, using an adaptive background learning algorithm and two indicators representing activity and identity to overcome the difficulty of tracking objects.
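A common form of adaptive background learning is an exponential running average with foreground gating; the paper's exact algorithm is not specified here, so this sketch (class name, learning rate, and threshold are all assumptions) is only indicative.

```python
import numpy as np

class RunningBackground:
    """Exponential running-average background model: pixels far from the
    background estimate are flagged as foreground, and only background
    pixels are blended into the model so moving objects do not wash in."""
    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # learning rate of the running average
        self.threshold = threshold  # intensity gap marking foreground
        self.background = None

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()   # bootstrap from the first frame
        mask = np.abs(frame - self.background) > self.threshold
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask
```

Event detection then operates on the foreground mask (e.g. blob size and position at the car park entry), which is where indicators such as activity and identity would be computed.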
77 FR 60917 - Trinexapac-ethyl; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... ``hog, meat by-products'' in order to correct inadvertent errors in the final rule tolerance table for...'' is revised to ``hog, meat by-products.'' V. Statutory and Executive Order Reviews This final rule... alphabetical order an entry for ``Hog, meat by-products''. 0 iii. Revising the entries for ``Wheat, forage...
Computerized provider order entry in the clinical laboratory
Baron, Jason M.; Dighe, Anand S.
2011-01-01
Clinicians have traditionally ordered laboratory tests using paper-based orders and requisitions. However, paper orders are becoming increasingly incompatible with the complexities, challenges, and resource constraints of our modern healthcare systems and are being replaced by electronic order entry systems. Electronic systems that allow direct provider input of diagnostic testing or medication orders into a computer system are known as Computerized Provider Order Entry (CPOE) systems. Adoption of laboratory CPOE systems may offer institutions many benefits, including reduced test turnaround time, improved test utilization, and better adherence to practice guidelines. In this review, we outline the functionality of various CPOE implementations, review the reported benefits, and discuss strategies for using CPOE to improve the test ordering process. Further, we discuss barriers to the implementation of CPOE systems that have prevented their more widespread adoption. PMID:21886891
Orion Capsule Handling Qualities for Atmospheric Entry
NASA Technical Reports Server (NTRS)
Tigges, Michael A.; Bihari, Brian D.; Stephens, John-Paul; Vos, Gordon A.; Bilimoria, Karl D.; Mueller, Eric R.; Law, Howard G.; Johnson, Wyatt; Bailey, Randall E.; Jackson, Bruce
2011-01-01
Two piloted simulations were conducted at NASA's Johnson Space Center using the Cooper-Harper scale to study the handling qualities of the Orion Command Module capsule during atmospheric entry flight. The simulations were conducted using high-fidelity 6-DOF simulators for Lunar Return Skip Entry and International Space Station Return Direct Entry flight, using bank angle steering commands generated by either the Primary (PredGuid) or Backup (PLM) guidance algorithms. For both evaluations, manual control of bank angle began after descent through Entry Interface into the atmosphere and continued until drogue chute deployment. Pilots were able to use defined bank management and reversal criteria to accurately track the bank angle commands and stay within flight performance metrics for landing accuracy, g-loads, and propellant consumption, suggesting that manual control of Orion is achievable and provides adequate trajectory performance with acceptable levels of pilot effort. Another significant result of these analyses is their applicability to flying a complex entry task under high-speed entry flight conditions relevant to the next-generation Multi-Purpose Crew Vehicle return from Mars and Near Earth Objects.
Douglas, G P; Deula, R A; Connor, S E
2003-01-01
Computer-based order entry is a powerful tool for enhancing patient care. A pilot project in the pediatric department of the Lilongwe Central Hospital (LCH) in Malawi, Africa has demonstrated that computer-based order entry (COE): 1) can be successfully deployed and adopted in resource-poor settings, 2) can be built, deployed and sustained at relatively low cost and with local resources, and 3) has a greater potential to improve patient care in developing than in developed countries. PMID:14728338
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint-length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum-likelihood decoding; for longer constraint-length codes, sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint-length codes, the bit error probability performance was investigated as a function of Eb/N0, parameterized by the fading channel parameters. For longer constraint-length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effect of simple block interleaving in combating the memory of the channel is explored using an analytic approach and digital computer simulation.
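For context, the uncoded baseline on such a channel is easy to simulate. This sketch assumes a flat Rayleigh fading model with coherent BPSK detection, a deliberate simplification of the paper's turbulence-induced fading; all parameters are illustrative.

```python
import numpy as np

def bpsk_fading_ber(ebn0_db, n_bits=200_000, seed=0):
    """Monte Carlo bit error rate of uncoded BPSK on a flat Rayleigh
    fading channel with coherent detection -- the uncoded baseline
    against which coded performance is usually judged."""
    rng = np.random.default_rng(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                                  # BPSK symbols +/-1
    h = rng.rayleigh(scale=np.sqrt(0.5), size=n_bits)     # unit mean-square fade
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    r = h * s + noise
    detected = (r >= 0).astype(int)   # fade amplitude h > 0, so sign suffices
    return float(np.mean(detected != bits))
```

The slow inverse-SNR decay of this baseline (versus the waterfall curves of the coded systems) is what motivates coding plus interleaving on channels with fading memory.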
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. Especially, the reconstruction does not destroy the interdependence relation among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with much fewer nonzero entries to compress recordings. Particularly, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce code execution in CPU in the data compression stage.
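The sensing-matrix construction the abstract highlights (a sparse binary matrix with as few as two nonzero entries per column, so compression reduces to a handful of additions per sample) is straightforward to sketch; the dimensions below are illustrative, not from the paper.

```python
import numpy as np

def sparse_binary_sensing_matrix(m, n, nonzeros_per_col=2, seed=0):
    """Build a sparse binary sensing matrix: each column holds only a few
    ones, so the compression y = Phi @ x needs no multiplications, only
    a few additions per sample -- attractive for low-energy telemonitoring."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nonzeros_per_col, replace=False)
        Phi[rows, j] = 1.0
    return Phi
```

Reconstruction from `y = Phi @ x` is the hard part, which is where the block sparse Bayesian learning framework comes in; this sketch covers only the encoder side.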
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-09
... particular MPID has been made by calculating the ratio between (i) entered orders, weighted by the distance... in part. The fee has been imposed on MPIDs with an ``Order Entry Ratio'' of more than 100. The Order Entry Ratio is calculated, and the Excess Order Fee imposed, on a monthly basis. BX is now proposing to...
Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph
2009-01-01
Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. 
Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
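The community-vote override described above might look like the following sketch; the majority rule and minimum vote count are assumptions for illustration, not PepBank's actual aggregation logic.

```python
from collections import Counter

def aggregate_votes(auto_label, votes, min_votes=3):
    """Community-vote override in the spirit of the Web 2.0 correction loop:
    keep the classifier's label unless enough users agree on another one.
    min_votes guards against a single noisy vote flipping a label."""
    if len(votes) < min_votes:
        return auto_label
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else auto_label
```

Labels changed this way would then feed back into the training set, triggering the database-driven reclassification pass the paper describes.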
Electronic Chemotherapy Order Entry: A Major Cancer Center's Implementation
Sklarin, Nancy T.; Granovsky, Svetlana; O'Reilly, Eileen M.; Zelenetz, Andrew D.
2011-01-01
Implementation of a computerized provider order entry system for complex chemotherapy regimens at a large cancer center required intense effort from a multidisciplinary team of clinical and systems experts with experience in all facets of the chemotherapy process. The online tools had to resemble the paper forms used at the time and parallel the successful established process as well as add new functionality. Close collaboration between the institution and the vendor was necessary. This article summarizes the institutional efforts, challenges, and collaborative processes that facilitated universal chemotherapy computerized electronic order entry across multiple sites during a period of several years. PMID:22043182
Hickman, Thu-Trang T; Quist, Arbor Jessica Lauren; Salazar, Alejandra; Amato, Mary G; Wright, Adam; Volk, Lynn A; Bates, David W; Schiff, Gordon
2018-04-01
Computerised prescriber order entry (CPOE) system users often discontinue medications because the initial order was erroneous. We sought to elucidate error types by querying prescribers about their reasons for discontinuing outpatient medication orders that they had self-identified as erroneous. During a nearly 3-year retrospective data collection period, we identified 57 972 drugs discontinued with the reason 'Error (erroneous entry)'. Because chart reviews revealed limited information about these errors, we prospectively studied consecutive discontinued erroneous orders by querying prescribers in near-real-time to learn more about them. From January 2014 to April 2014, we prospectively emailed prescribers about outpatient drug orders that they had discontinued due to erroneous initial order entry. Of 250 806 medication orders in these 4 months, 1133 (0.45%) were discontinued due to error. From these 1133, we emailed 542 unique prescribers to ask about their reason(s) for discontinuing these medication orders in error. We received 312 responses (58% response rate). We categorised these responses using a previously published taxonomy. The top reasons for these discontinued erroneous orders included: medication ordered for the wrong patient (27.8%, n=60); wrong drug ordered (18.5%, n=40); and duplicate order placed (14.4%, n=31). Other common discontinued erroneous orders related to drug dosage and formulation (eg, extended release versus not). Oxycodone (3%) was the drug most frequently discontinued in error. Drugs are not infrequently discontinued 'in error'. Wrong-patient and wrong-drug errors constitute the leading types of erroneous prescriptions recognised and discontinued by prescribers. Data regarding erroneous medication entries represent an important source of intelligence about how CPOE systems are functioning and malfunctioning, providing important insights regarding areas for designing CPOE more safely in the future.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
An Automated Method to Compute Orbital Re-entry Trajectories with Heating Constraints
NASA Technical Reports Server (NTRS)
Zimmerman, Curtis; Dukeman, Greg; Hanson, John; Fogle, Frank R. (Technical Monitor)
2002-01-01
Determining how to properly manipulate the controls of a re-entering reusable launch vehicle (RLV) so that it can safely return to Earth and land involves the solution of a two-point boundary value problem (TPBVP). This problem, which can be quite difficult, is traditionally solved on the ground prior to flight. If necessary, a nearly unlimited amount of time is available to find the 'best' solution using a variety of trajectory design and optimization tools. The role of entry guidance during flight is to follow the predetermined reference solution while correcting for any errors encountered along the way. This guidance method is both highly reliable and very efficient in terms of onboard computer resources. There is a growing interest in a style of entry guidance that places the responsibility of solving the TPBVP in the actual entry guidance flight software, where computer time is very limited. The powerful but finicky mathematical tools used by trajectory designers on the ground cannot in general be converted to do the job; non-convergence or slow convergence can result in disaster. The challenges of designing such an algorithm are numerous and difficult, yet the payoff (in the form of decreased operational costs and increased safety) can be substantial. This paper presents an algorithm that incorporates features of both types of guidance strategies. It takes an initial RLV orbital re-entry state and finds a trajectory that will safely transport the vehicle to Earth. During actual flight, the computed trajectory is used as the reference to be flown by a more traditional guidance method.
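The onboard TPBVP strategy can be illustrated in miniature with a classical shooting method: integrate the dynamics forward and bisect on one initial control parameter until a terminal condition is met. The point-mass drag model and every constant below are toy values, not the paper's entry dynamics.

```python
import math

def downrange(theta_deg, v0=50.0, k=0.001, g=9.81, dt=1e-3):
    """Integrate a point mass with quadratic drag (Euler steps) until
    impact; return the downrange distance. A toy stand-in for entry
    dynamics, with illustrative constants."""
    th = math.radians(theta_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        x += vx * dt
        y += vy * dt
        vx += -k * v * vx * dt
        vy += (-g - k * v * vy) * dt
    return x

def shoot_for_range(target, lo=5.0, hi=45.0, tol=1e-3):
    """Shooting method: bisect on the launch angle until the terminal
    range matches the target -- the same fixed-initial-state,
    fixed-terminal-condition structure as the entry TPBVP."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if downrange(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

The flight-software challenge the paper describes is precisely that this iteration must converge reliably within a tight, fixed computation budget, unlike a ground tool that may iterate as long as it needs.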
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... closed to settlement, sale, location, or entry under the general land laws, including the United States mining laws, until the Bureau of Land Management completes a planning review. Order By virtue of the... withdrew lands from settlement, sale location, or entry under the general land laws, including the United...
Chu, Chia-Hui; Kuo, Ming-Chuan; Weng, Shu-Hui; Lee, Ting-Ting
2016-01-01
A user-friendly interface can enhance the efficiency of data entry, which is crucial for building a complete database. In this study, two user interfaces (a traditional pull-down menu vs. check boxes) are proposed and evaluated based on medical records with fever medication orders, by measuring the time for data entry, the steps for each data entry record, and the completeness rate of each medical record. The results revealed that the time for data entry is reduced from 22.8 sec/record to 3.2 sec/record. The data entry procedure is also reduced from 9 steps in the traditional interface to 3 steps in the new one. In addition, the completeness of medical records is increased from 20.2% to 98%. All these results indicate that the new user interface provides a more user-friendly and efficient approach for data entry than the traditional interface.
MSL EDL Entry Guidance using the Entry Terminal Point Controller
NASA Technical Reports Server (NTRS)
2006-01-01
The Mars Science Laboratory will be the first Mars mission to attempt a guided entry with the objective of safely delivering the entry vehicle to a survivable parachute deploy state within 10 km of the pre-designated landing site. The Entry Terminal Point Controller guidance algorithm is derived from the final phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control range based on deviations in range, altitude rate, and drag acceleration from a reference trajectory. For application to Mars landers which must make use of the tenuous Martian atmosphere, it is critical to balance the lift of the vehicle to minimize the range while still ensuring a safe deploy altitude. An overview of the process to generate optimized guidance settings is presented, discussing improvements made over the last four years. Performance tradeoffs between ellipse size and deploy altitude will be presented, along with imposed constraints of entry acceleration and heating. Performance sensitivities to the bank reversal deadbands, heading alignment, attitude initialization error, and atmospheric delivery errors are presented. Guidance settings for contingency operations, such as those appropriate for severe dust storm scenarios, are evaluated.
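The bank-modulation idea can be sketched as linear feedback on the three tracked deviations, realized as a bank angle through the vertical component of L/D. The gains and vehicle L/D below are placeholders, not MSL flight values.

```python
import math

def bank_command(dR, dhdot, dD, LoDv_ref, k_r, k_h, k_d, LoD_vehicle=0.24):
    """Terminal-point-controller-style bank command (placeholder gains):
    correct the reference vertical L/D with linear feedback on range,
    altitude-rate, and drag deviations from the reference trajectory,
    then invert LoD * cos(bank) to get a bank angle, saturating at the
    full-lift-up / full-lift-down limits."""
    LoDv_cmd = LoDv_ref + k_r * dR + k_h * dhdot + k_d * dD
    c = max(-1.0, min(1.0, LoDv_cmd / LoD_vehicle))
    return math.degrees(math.acos(c))
```

On the reference trajectory (all deviations zero) the command simply reproduces the reference bank; crossrange control via bank reversals sits on top of this and is not sketched here.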
CNES-NASA Studies of the Mars Sample Return Orbiter Aerocapture Phase
NASA Technical Reports Server (NTRS)
Fraysse, H.; Powell, R.; Rousseau, S.; Striepe, S.
2000-01-01
A Mars Sample Return (MSR) mission has been proposed as a joint CNES (Centre National d'Etudes Spatiales) and NASA effort in the ongoing Mars Exploration Program. The MSR mission is designed to return the first samples of Martian soil to Earth. The primary elements of the mission are a lander, rover, ascent vehicle, orbiter, and an Earth entry vehicle. The Orbiter has been allocated only 2700 kg on the launch phase to perform its part of the mission. This mass restriction has led to the decision to use an aerocapture maneuver at Mars for the orbiter. Aerocapture replaces the initial propulsive capture maneuver with a single atmospheric pass. This atmospheric pass will result in the proper apoapsis, but a periapsis raise maneuver is required at the first apoapsis. The use of aerocapture reduces the total mass requirement by approx. 45% for the same payload. This mission will be the first to use the aerocapture technique. Because the spacecraft is flying through the atmosphere, guidance algorithms must be developed that will autonomously provide the proper commands to reach the desired orbit while not violating any of the design parameters (e.g. maximum deceleration, maximum heating rate, etc.). The guidance algorithm must be robust enough to account for uncertainties in delivery states, atmospheric conditions, mass properties, control system performance, and aerodynamics. To study this very critical phase of the mission, a joint CNES-NASA technical working group has been formed. This group is composed of atmospheric trajectory specialists from CNES, NASA Langley Research Center and NASA Johnson Space Center. This working group is tasked with developing and testing guidance algorithms, as well as cross-validating CNES and NASA flight simulators for the Mars atmospheric entry phase of this mission. The final result will be a recommendation to CNES on the algorithm to use, and an evaluation of the flight risks associated with the algorithm. 
This paper will describe the aerocapture phase of the MSR mission, the main principles of the guidance algorithms that are under development, the atmospheric entry simulators developed for the evaluations, the process for the evaluations, and preliminary results from the evaluations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Servicing book-entry Postal Service securities... POSTAL SERVICE POSTAL SERVICE DEBT OBLIGATIONS; DISBURSEMENT POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.8 Servicing book-entry Postal Service securities; payment of interest, payment at maturity or upon...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Servicing book-entry Postal Service securities... POSTAL SERVICE POSTAL SERVICE DEBT OBLIGATIONS; DISBURSEMENT POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.8 Servicing book-entry Postal Service securities; payment of interest, payment at maturity or upon...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 39 Postal Service 1 2014-07-01 2014-07-01 false Servicing book-entry Postal Service securities... POSTAL SERVICE POSTAL SERVICE DEBT OBLIGATIONS; DISBURSEMENT POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.8 Servicing book-entry Postal Service securities; payment of interest, payment at maturity or upon...
Research on the water-entry attitude of a submersible aircraft.
Xu, BaoWei; Li, YongLi; Feng, JinFu; Hu, JunHua; Qi, Duo; Yang, Jian
2016-01-01
The water entry of a submersible aircraft, which is transient, highly coupled, and nonlinear, is complicated. After analyzing the mechanics of this process, the change rate of every variable is considered. A dynamic model is built and employed to study vehicle attitude and the overturn phenomenon during water entry. Experiments are carried out, and a method to organize the experimental data is proposed. The accuracy of the method is confirmed by comparing the results of simulation of the dynamic model and experiment under the same conditions. Based on the analysis of the experiment and simulation, the initial attack angle and angular velocity largely influence the water entry of the vehicle. Simulations of water entry with different initial and angular velocities are completed, followed by an analysis, and the motion law of the vehicle is obtained. To solve the problem of vehicle stability and control during water entry, an approach is proposed by which the vehicle sails with a zero attack angle after entering the water, achieved by controlling the initial angular velocity. With the dynamic model and an optimization algorithm, the optimal initial water-entry angular velocity is obtained. The simulation results confirm the effectiveness of the proposed approach, in which the initial water-entry angular velocity is controlled.
Ahmad, Asif; Teater, Phyllis; Bentley, Thomas D.; Kuehn, Lynn; Kumar, Rajee R.; Thomas, Andrew; Mekhjian, Hagop S.
2002-01-01
The benefits of computerized physician order entry have been widely recognized, although few institutions have successfully installed these systems. Obstacles to successful implementation are organizational as well as technical. In the spring of 2000, following a 4-year period of planning and customization, a 9-month pilot project, and a 14-month hiatus for year 2000, the Ohio State University Health System extensively implemented physician order entry across inpatient units. Implementation for specialty and community services is targeted for completion in 2002. On implemented units, all orders are processed through the system, with 80 percent being entered by physicians and the rest by nursing or other licensed care providers. The system is deployable across diverse clinical environments, focused on physicians as the primary users, and accepted by clinicians. These are the three criteria by which the authors measured the success of their implementation. They believe that the availability of specialty-specific order sets, the engagement of physician leadership, and a large-scale system implementation were key strategic factors that enabled physician-users to accept a physician order entry system despite significant changes in workflow. PMID:11751800
Algorithms for Port-of-Entry Inspection
2007-05-29
Devdatt Lad, Rutgers University, Center for Advanced Information Processing; Mingyu Li, Rutgers University, Statistics; Francesco Longo, University of...; Devdatt Lad, Rutgers University, Electrical & Computer Engineering, Industrial and Systems Engineering graduate student; Mingyu Li, graduate student
Fast human pose estimation using 3D Zernike descriptors
NASA Astrophysics Data System (ADS)
Berjón, Daniel; Morán, Francisco
2012-03-01
Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Their losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such process is not a fine pose estimation, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
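The on-line lookup described above amounts to a nearest-neighbour search over a precomputed descriptor/pose database. A minimal sketch follows; the three-component descriptors and pose labels are invented placeholders, not actual 3D Zernike moments:

```python
import math

# Hypothetical database of (descriptor, pose) pairs. In the paper these would
# be 3D Zernike moment vectors computed off-line from an articulated model
# configured in many poses.
database = [
    ((0.9, 0.1, 0.0), "standing"),
    ((0.1, 0.8, 0.3), "sitting"),
    ((0.2, 0.2, 0.9), "crouching"),
]

def nearest_poses(query, db, k=2):
    """Return the k pose labels whose descriptors are closest to `query`."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(db, key=lambda entry: dist(entry[0], query))
    return [pose for _, pose in ranked[:k]]
```

A real system would use high-dimensional descriptors and an indexed search structure rather than a linear scan to reach the reported ~10 fps over a million entries.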
Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang
2010-01-01
A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
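Two of the building blocks named above, the z-score transform of retention times and Pearson's correlation coefficient for mass-spectrum similarity, can be sketched as follows (toy values, not real GC×GC/TOF-MS data):

```python
import math

def zscore(values):
    """Z-score transform, as DISCO applies to metabolite retention times."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def pearson(x, y):
    """Pearson correlation coefficient, used to score the similarity of
    fragment-ion spectra when selecting landmark peaks."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```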
Harper, Marvin B; Longhurst, Christopher A; McGuire, Troy L; Tarrago, Rod; Desai, Bimal R; Patterson, Al
2014-03-01
The study aims to develop a core set of pediatric drug-drug interaction (DDI) pairs for which electronic alerts should be presented to prescribers during the ordering process. A clinical decision support working group composed of Children's Hospital Association (CHA) members was developed. CHA Pharmacists and Chief Medical Information Officers participated. Consensus was reached on a core set of 19 DDI pairs that should be presented to pediatric prescribers during the order process. We have provided a core list of 19 high value drug pairs for electronic drug-drug interaction alerts to be recommended for inclusion as high value alerts in prescriber order entry software used with a pediatric patient population. We believe this list represents the most important pediatric drug interactions for practical implementation within computerized prescriber order entry systems.
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
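The second algorithm's idea of extracting regions adaptively from local histograms can be illustrated with a deliberately simplified sketch that uses the window mean as the locally derived threshold (the paper's histogram analysis is more elaborate):

```python
def window_threshold(window):
    """Pick a threshold adaptively from the local pixel distribution (here
    simply its mean), a stand-in for local-histogram analysis."""
    flat = [p for row in window for p in row]
    return sum(flat) / len(flat)

def extract_region(window):
    """Binary region mask for one selected window of the image."""
    t = window_threshold(window)
    return [[1 if p > t else 0 for p in row] for row in window]

mask = extract_region([[10, 200], [20, 210]])
```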
Orion Entry Display Feeder and Interactions with the Entry Monitor System
NASA Technical Reports Server (NTRS)
Baird, Darren; Bernatovich, Mike; Gillespie, Ellen; Kadwa, Binaifer; Matthews, Dave; Penny, Wes; Zak, Tim; Grant, Mike; Bihari, Brian
2010-01-01
The Orion spacecraft is designed to return astronauts to a landing within 10 km of the intended landing target from low Earth orbit, lunar direct-entry, and lunar skip-entry trajectories. Although the landing is nominally controlled autonomously, the crew can fly precision entries manually in the event of an anomaly. The onboard entry displays will be used by the crew to monitor and manually fly the entry, descent, and landing, while the Entry Monitor System (EMS) will be used to monitor the health and status of the onboard guidance and the trajectory. The entry displays are driven by the entry display feeder, part of the EMS. The entry re-targeting module, also part of the EMS, provides all the data required to generate the capability footprint of the vehicle at any point in the trajectory, which is shown on the Primary Flight Display (PFD). It also provides caution and warning data and recommends the safest possible re-designated landing site when the nominal landing site is no longer within the capability of the vehicle. The PFD and the EMS allow the crew to manually fly an entry trajectory profile from entry interface until parachute deploy, with the flexibility to manually steer the vehicle to a selected landing site that best satisfies the priorities of the crew. The entry display feeder provides data from the EMS and other components of the GNC flight software to the displays at the proper rate and in the proper units. It also performs calculations that are specific to the entry displays and which are not made in any other component of the flight software. In some instances, it performs calculations identical to those performed by the onboard primary guidance algorithm to protect against a guidance system failure. These functions and the interactions between the entry display feeder and the other components of the EMS are described.
39 CFR 761.3 - Scope and effect of book-entry procedure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Scope and effect of book-entry procedure. 761.3... POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.3 Scope and effect of book-entry procedure. (a) A Reserve Bank as fiscal agent of the United States acting on behalf of the Postal Service may apply the book...
39 CFR 761.3 - Scope and effect of book-entry procedure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 39 Postal Service 1 2013-07-01 2013-07-01 false Scope and effect of book-entry procedure. 761.3... POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.3 Scope and effect of book-entry procedure. (a) A Reserve Bank as fiscal agent of the United States acting on behalf of the Postal Service may apply the book...
39 CFR 761.3 - Scope and effect of book-entry procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 39 Postal Service 1 2014-07-01 2014-07-01 false Scope and effect of book-entry procedure. 761.3... POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.3 Scope and effect of book-entry procedure. (a) A Reserve Bank as fiscal agent of the United States acting on behalf of the Postal Service may apply the book...
39 CFR 761.3 - Scope and effect of book-entry procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Scope and effect of book-entry procedure. 761.3... POSTAL MONEY ORDERS BOOK-ENTRY PROCEDURES § 761.3 Scope and effect of book-entry procedure. (a) A Reserve Bank as fiscal agent of the United States acting on behalf of the Postal Service may apply the book...
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity, and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts by locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examining their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well-designed algorithms can help domain experts focus on concepts with a high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future, similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
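Of the five error categories, circular relationship assignments are the easiest to illustrate: a concept that is transitively part_of itself. A minimal sketch with hypothetical concept names (not the authors' actual algorithm):

```python
def find_cycles(part_of):
    """Return concepts involved in circular part_of assignments.
    `part_of` maps each concept to the list of concepts it is part_of."""
    cycles = set()

    def visit(node, path):
        for parent in part_of.get(node, []):
            if parent in path:          # reached a concept already on the path
                cycles.add(parent)
            else:
                visit(parent, path | {parent})

    for concept in part_of:
        visit(concept, {concept})
    return cycles

# Hypothetical erroneous loop: lobe -> lung -> thorax -> lobe
graph = {"lobe": ["lung"], "lung": ["thorax"], "thorax": ["lobe"]}
```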
Clark, Alex M; Williams, Antony J; Ekins, Sean
2015-01-01
The current rise in the use of open lab notebook techniques means that there are an increasing number of scientists who make chemical information freely and openly available to the entire community as a series of micropublications that are released shortly after the conclusion of each experiment. We propose that this trend be accompanied by a thorough examination of data sharing priorities. We argue that the most significant immediate beneficiary of open data is in fact chemical algorithms, which are capable of absorbing vast quantities of data and using them to present concise insights to working chemists, on a scale that could not be achieved by traditional publication methods. Making this goal practically achievable will require a paradigm shift in the way individual scientists translate their data into digital form, since most contemporary methods of data entry are designed for presentation to humans rather than consumption by machine learning algorithms. We discuss some of the complex issues involved in fixing current methods, as well as some of the immediate benefits that can be gained when open data is published correctly using unambiguous machine-readable formats. Graphical Abstract: Lab notebook entries must target both visualisation by scientists and use by machine learning algorithms.
An Automated Method to Compute Orbital Re-Entry Trajectories with Heating Constraints
NASA Technical Reports Server (NTRS)
Zimmerman, Curtis; Dukeman, Greg; Hanson, John; Fogle, Frank R. (Technical Monitor)
2002-01-01
Determining how to properly manipulate the controls of a re-entering re-usable launch vehicle (RLV) so that it is able to safely return to Earth and land involves the solution of a two-point boundary value problem (TPBVP). This problem, which can be quite difficult, is traditionally solved on the ground prior to flight. If necessary, a nearly unlimited amount of time is available to find the "best" solution using a variety of trajectory design and optimization tools. The role of entry guidance during flight is to follow the pre-determined reference solution while correcting for any errors encountered along the way. This guidance method is both highly reliable and very efficient in terms of onboard computer resources. There is a growing interest in a style of entry guidance that places the responsibility of solving the TPBVP in the actual entry guidance flight software. Here there is very limited computer time. The powerful, but finicky, mathematical tools used by trajectory designers on the ground cannot in general be made to do the job. Nonconvergence or slow convergence can result in disaster. The challenges of designing such an algorithm are numerous and difficult. Yet the payoff (in the form of decreased operational costs and increased safety) can be substantial. This paper presents an algorithm that incorporates features of both types of guidance strategies. It takes an initial RLV orbital re-entry state and finds a trajectory that will safely transport the vehicle to a Terminal Area Energy Management (TAEM) region. During actual flight, the computed trajectory is used as the reference to be flown by a more traditional guidance method.
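The TPBVP character of the problem can be illustrated with a toy shooting method: guess an unknown initial condition, integrate forward, and adjust the guess until the terminal constraint is met. This sketch uses a one-dimensional system with constant deceleration, not an entry-dynamics model:

```python
def integrate(v0, steps=1000, T=1.0):
    """Euler-integrate y'' = -1 from y(0) = 0, y'(0) = v0; return y(T)."""
    dt = T / steps
    y, v = 0.0, v0
    for _ in range(steps):
        y += v * dt
        v += -1.0 * dt
    return y

def shoot(target, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect on the unknown initial condition until the terminal constraint
    y(T) = target is satisfied -- the essence of solving a TPBVP by shooting."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Analytically y(T) = v0·T - T²/2, so the constraint y(1) = 1 requires v0 = 1.5; the bisection recovers this to within the integrator's discretization error.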
Streaming PCA with many missing entries.
DOT National Transportation Integrated Search
2015-12-01
This paper considers the problem of matrix completion when some number of the columns are : completely and arbitrarily corrupted, potentially by a malicious adversary. It is well-known that standard : algorithms for matrix completion can return arbit...
Computerized N-acetylcysteine physician order entry by template protocol for acetaminophen toxicity.
Thompson, Trevonne M; Lu, Jenny J; Blackwood, Louisa; Leikin, Jerrold B
2011-01-01
Some medication dosing protocols are logistically complex for traditional physician ordering. The use of computerized physician order entry (CPOE) with templates, or order sets, may be useful to reduce medication administration errors. This study evaluated the rate of medication administration errors using CPOE order sets for N-acetylcysteine (NAC) use in treating acetaminophen poisoning. An 18-month retrospective review of computerized inpatient pharmacy records for NAC use was performed. All patients who received NAC for the treatment of acetaminophen poisoning were included. Each record was analyzed to determine the form of NAC given and whether an administration error occurred. In the 82 cases of acetaminophen poisoning in which NAC was given, no medication administration errors were identified. Oral NAC was given in 31 (38%) cases; intravenous NAC was given in 51 (62%) cases. In this retrospective analysis of N-acetylcysteine administration using computerized physician order entry and order sets, no medication administration errors occurred. CPOE is an effective tool in safely executing complicated protocols in an inpatient setting.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
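For intuition, the general idea of completing missing entries via a low-rank factorization can be shown in the simplest possible setting: alternating least squares on a rank-1 matrix. This is a pedagogical sketch, not SiLRTC-TT or TMac-TT:

```python
def rank1_complete(Y, iters=200):
    """Recover a rank-1 matrix u v^T from its known entries (None = missing)
    by alternating least squares: fix v, solve each u[i]; fix u, solve each v[j]."""
    m, n = len(Y), len(Y[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):
            num = sum(Y[i][j] * v[j] for j in range(n) if Y[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(n) if Y[i][j] is not None)
            u[i] = num / den
        for j in range(n):
            num = sum(Y[i][j] * u[i] for i in range(m) if Y[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(m) if Y[i][j] is not None)
            v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# Observed entries of the rank-1 matrix [1,2,3] outer [1,2,3], with gaps.
Y = [[1, 2, None], [2, None, 6], [None, 6, 9]]
```

The tensor-train methods in the paper generalize this idea to higher-order tensors and higher ranks, with nuclear-norm or parallel matrix-factorization formulations.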
Computerized physician order entry from a chief information officer perspective.
Cotter, Carole M
2004-12-01
Designing and implementing a computerized physician order entry system in the critical care units of a large urban hospital system is an enormous undertaking. With their significant potential to improve health care and significantly reduce errors, the time for computerized physician order entry or physician order management systems is past due. Careful integrated planning is the key to success, requiring multidisciplinary teams at all levels of clinical and administrative management to work together. Articulated from the viewpoint of the Chief Information Officer of Lifespan, a not-for-profit hospital system in Rhode Island, the vision and strategy preceding the information technology plan, understanding the system's current state, the gap analysis between current and future state, and finally, building and implementing the information technology plan are described.
A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.
Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei
2015-11-01
A random submatrix method (RSM) is proposed to calculate the low-rank decomposition U_{m×r} V_{n×r}^T (r < m, n) of the matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^{3r}) floating-point operations (flops) are required, comparing favorably with the O(mnr + r^2(m+n)) flops required by the state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr + nr) real values need to be saved. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k × nρ^k or mρ^l × l, where k or l usually takes the value r + 1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value turns into the space associated with any of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. The experimental results on random synthetic matrices with sizes such as 1072 × 1024 and on real data sets such as dinosaur indicate that RSM is 4.30 ∼ 197.95 times faster than the state-of-the-art algorithms, while achieving, or approximating to, the best precision.
Chen, Jeannie; Shabot, M. Michael; LoBue, Mark
2003-01-01
Prior attempts to interface ICU Clinical Information Systems (CIS) to Pharmacy systems have been less than successful. The major problem is that in ICUs, medications frequently have to be administered and charted in the CIS Medication Administration Record (MAR) before pharmacists can enter them into the Pharmacy system. When the Pharmacy system belatedly sends medication orders to the CIS MAR, this may create duplicate entries for medications that ICU nurses have had to enter manually to chart doses actually given. The authors have implemented a real time interface between a Computerized Physician Order Entry (CPOE) system and a CIS operating in ten ICUs that solves this problem. The interface transfers new medication orders including order details and alerts directly to the CIS Medication Administration Record (MAR), where they are immediately available for nurse charting. PMID:14728315
Georgiou, Andrew; Prgomet, Mirela; Paoloni, Richard; Creswick, Nerida; Hordern, Antonia; Walter, Scott; Westbrook, Johanna
2013-06-01
We undertake a systematic review of the quantitative literature related to the effect of computerized provider order entry systems in the emergency department (ED). We searched MEDLINE, EMBASE, Inspec, CINAHL, and CPOE.org for English-language studies published between January 1990 and May 2011. We identified 1,063 articles, of which 22 met our inclusion criteria. Sixteen used a pre/post design; 2 were randomized controlled trials. Twelve studies reported outcomes related to patient flow/clinical work, 7 examined decision support systems, and 6 reported effects on patient safety. There were no studies that measured decision support systems and its effect on patient flow/clinical work. Computerized provider order entry was associated with an increase in time spent on computers (up to 16.2% for nurses and 11.3% for physicians), with no significant change in time spent on patient care. Computerized provider order entry with decision support systems was related to significant decreases in prescribing errors (ranging from 17 to 201 errors per 100 orders), potential adverse drug events (0.9 per 100 orders), and prescribing of excessive dosages (31% decrease for a targeted set of renal disease medications). There are tangible benefits associated with computerized provider order entry/decision support systems in the ED environment. Nevertheless, when considered as part of a framework of technical, clinical, and organizational components of the ED, the evidence base is neither consistent nor comprehensive. Multimethod research approaches (including qualitative research) can contribute to understanding of the multiple dimensions of ED care delivery, not as separate entities but as essential components of a highly integrated system of care. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Discrimination of bullet types using analysis of lead isotopes deposited in gunshot entry wounds.
Wunnapuk, Klintean; Minami, Takeshi; Durongkadech, Piya; Tohno, Setsuko; Ruangyuttikarn, Werawan; Moriwake, Yumi; Vichairat, Karnda; Sribanditmongkol, Pongruk; Tohno, Yoshiyuki
2009-01-01
In order to discriminate bullet types used in firearms, of which the victims died, the authors investigated lead isotope ratios in gunshot entry wounds from nine lead (unjacketed) bullets, 15 semi-jacketed bullets, and 14 full-jacketed bullets by inductively coupled plasma-mass spectrometry. It was found that the lead isotope ratio of 207/206 in gunshot entry wounds was the highest with lead bullets, and it decreased in order from full-jacketed to semi-jacketed bullets. Lead isotope ratios of 208/206 or 208/207 to 207/206 at the gunshot entry wound were able to discriminate semi-jacketed bullets from lead and full-jacketed ones, but it was difficult to discriminate between lead and full-jacketed bullets. However, a combination of element and lead isotope ratio analyses in gunshot entry wounds enabled discrimination between lead, semi-jacketed, and full-jacketed bullets.
Langhoff, R; Stumpe, S; Treitl, M; Schulte, K L
2013-10-01
The management of progressive peripheral artery disease has seen a vast paradigm change over the last decades in favor of minimally invasive therapy as a first-line strategy. With the constant development of new devices, materials, and dedicated access strategies, more complex lesions can be managed, but the key limitation in successfully treating chronic total occlusions (CTOs) remains the challenge of re-entering the true lumen. The aim of this retrospective study was to investigate whether a "wire only" strategy leads to an acceptable success rate in a mixed cohort of CTO lesions and to what extent re-entry devices are used. We retrospectively analyzed patients treated for chronic total occlusion at the Vascular Center Berlin between 2011 and 2013, drawn from a prospectively conducted database (Endovascular MILestones - EMIL), for demographics, risk factors, co-morbidities, technical success rates, lesion characteristics, and use of guidewires as well as re-entry systems. A total of 128 patients with 146 lesions, representing the subgroup of all cases performed in our center that followed a predefined treatment algorithm for CTOs, were analyzed. We achieved technical success in 133 (91.1%) of all cases following a "wire only" strategy. In 7 (53.9%) of the 13 (8.9%) CTOs with technical failure, a re-entry device (Off-Road®) was used, with 100% technical success. In 91.1% of chronic total occlusion lesions, the use of 2 wires only (88.7%) led to a successful recanalization. A "wire only" strategy, followed by the use of a re-entry device as a bail-out strategy, led to successful recanalization of a total of 140 (96%) lesions. In more than 90% of all cases with chronic total occlusion of peripheral lower extremity arteries, endovascular intervention was successful following a "wire only" strategy.
In cases where a proper wire re-entry failed at the reconstitution point and a re-entry device was used, a technical success rate of 100% was achieved. Therefore, following a strict wire algorithm and considering the use of a re-entry system as a bail-out strategy will lead to successful minimally invasive management of chronic total occlusions in nearly 100% of cases with TASC II A - D lesions.
Atmospheric Entry Heating of Micrometeorites Revisited: Higher Temperatures and Potential Biases
NASA Technical Reports Server (NTRS)
Love, S.; Alexander, C. M. O'D.
2001-01-01
The atmospheric entry heating model of Love and Brownlee appears to have overestimated evaporation rates by as much as two orders of magnitude. Here we revisit the issue of atmospheric entry heating, using a revised prescription for evaporation rates. Additional information is contained in the original extended abstract.
Code of Federal Regulations, 2013 CFR
2013-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Procedures § 2521.5 Annual proof. (a) Showing required. (1) In order to test the sincerity and good faith of claimants under the desert... a desert-land entry unless made on account of that particular entry, and expenditures once credited...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Procedures § 2521.5 Annual proof. (a) Showing required. (1) In order to test the sincerity and good faith of claimants under the desert... a desert-land entry unless made on account of that particular entry, and expenditures once credited...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF THE INTERIOR LAND RESOURCE MANAGEMENT (2000) DESERT-LAND ENTRIES Procedures § 2521.5 Annual proof. (a) Showing required. (1) In order to test the sincerity and good faith of claimants under the desert... a desert-land entry unless made on account of that particular entry, and expenditures once credited...
Westbrook, J I; Georgiou, A; Dimos, A; Germanos, T
2006-01-01
Objective To assess the impact of a computerised pathology order entry system on laboratory turnaround times and test ordering within a teaching hospital. Methods A controlled before and after study compared test assays ordered from 11 wards two months before (n = 97 851) and after (n = 113 762) the implementation of a computerised pathology order entry system (Cerner Millennium Powerchart). Comparisons were made of laboratory turnaround times, frequency of tests ordered and specimens taken, proportions of patients having tests, average number per patient, and percentage of gentamicin and vancomycin specimens labelled as random. Results Intervention wards experienced an average decrease in turnaround of 15.5 minutes/test assay (from 73.8 to 58.3 minutes; p<0.001). Reductions were significant for prioritised and non‐prioritised tests, and for those done within and outside business hours. There was no significant change in the average number of tests (p = 0.228), or specimens per patient (p = 0.324), and no change in turnaround time for the control ward (p = 0.218). Use of structured order screens enhanced data provided to laboratories. Removing three test assays from the liver function order set resulted in significantly fewer of these tests being done. Conclusions Computerised order entry systems are an important element in achieving faster test results. These systems can influence test ordering patterns through structured order screens, manipulation of order sets, and analysis of real time data to assess the impact of such changes, not possible with paper based systems. The extent to which improvements translate into improved patient outcomes remains to be determined. A potentially limiting factor is clinicians' capacity to respond to, and make use of, faster test results. PMID:16461564
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Roth, Dan; Wilkins, David C.
2001-01-01
Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, thereby enabling us to analytically derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which again complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. 
Utilizing our Markov chain framework, we show that expected hitting time is a rational function - i.e. a ratio of two polynomials - of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. On application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
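The contrast between the standard multiplicative MPE utility and the additive measure can be made concrete: a single zero CPT entry collapses the product to zero for every explanation that touches it, while the sum still ranks explanations, much as MAXSAT counts satisfied clauses. A minimal sketch:

```python
def multiplicative_utility(probs):
    """Standard MPE utility: the product of the explanation's CPT entries.
    One deterministic zero drives the whole score to zero."""
    u = 1.0
    for p in probs:
        u *= p
    return u

def additive_utility(probs):
    """Additive measure: summing entries keeps explanations that touch a
    deterministic zero comparable, so local search can still rank them."""
    return sum(probs)

a = [0.9, 0.0, 0.8]  # strong explanation except for one deterministic zero
b = [0.1, 0.0, 0.1]  # weak explanation with the same zero
```

Under the multiplicative measure both explanations score 0.0 and are indistinguishable; the additive measure prefers `a`, giving the search gradient information.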
Optimal reentry prediction of space objects from LEO using RSM and GA
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
2012-07-01
The accurate estimation of the orbital lifetime (OLT) of decaying near-Earth objects is of considerable importance for the prediction of risk-object re-entry time and hazard assessment, as well as for mitigation strategies. Recently, the re-entries of a large number of risk objects, which pose a threat to human life and property, have raised great concern in the space science community worldwide. The evolution of objects in Low Earth Orbit (LEO) is determined by a complex interplay of perturbing forces, mainly atmospheric drag and Earth gravity. These orbits are mostly of low eccentricity (eccentricity < 0.2), and their perigee and apogee altitudes vary over a revolution due to the Earth's gravitational perturbations and atmospheric density. It has become necessary to use extremely complex force models to match present operational requirements and observational techniques. Further, the re-entry time of objects in such orbits is sensitive to the initial conditions. In this paper the problem of predicting re-entry time is treated as an optimal estimation problem. It is known that, for observations based on two-line elements (TLEs), the errors are largest in eccentricity. Thus two parameters, initial eccentricity and ballistic coefficient, are chosen for optimal estimation. These two parameters are computed with the response surface method (RSM) using a genetic algorithm (GA) over selected time zones, based on the roughly linear variation of the response parameter, the mean semi-major axis, during orbit evolution. Error minimization between the observed and predicted mean semi-major axis is achieved by applying the GA.
The basic feature of the present approach is that model and measurement errors are accounted for by adjusting the ballistic coefficient and eccentricity. The methodology is tested with the recently re-entered ROSAT and Phobos-Grunt satellites. The study reveals good agreement with the actual re-entry times of these objects, and the absolute percentage error in re-entry prediction time for both objects is found to be very small. Keywords: low eccentricity, response surface method, genetic algorithm, apogee altitude, ballistic coefficient
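The estimation loop described above (decision variables: initial eccentricity and ballistic coefficient; objective: error between observed and predicted mean semi-major axis) can be sketched with a toy genetic algorithm. Everything below is illustrative: the decay model, constants, and "observed" data are invented stand-ins for the paper's orbit propagator and TLE-derived observations.

```python
import random

random.seed(1)

# Hypothetical toy decay model: mean semi-major axis (km) after t days,
# given initial eccentricity e and ballistic coefficient B (m^2/kg).
# A real propagator would integrate drag and gravity; here B drives a
# linear decay term and e a quadratic one, just to make both observable.
def mean_sma(e, B, t):
    a0 = 6878.0  # roughly a 500 km circular orbit
    return a0 - 50.0 * B * t - 0.5 * e * t ** 2

# "Observed" mean SMA history generated from known true parameters.
TRUE_E, TRUE_B = 0.05, 0.02
days = range(0, 30, 3)
observed = [mean_sma(TRUE_E, TRUE_B, t) for t in days]

def cost(ind):
    """Sum of squared errors between predicted and observed mean SMA."""
    e, B = ind
    return sum((mean_sma(e, B, t) - obs) ** 2 for t, obs in zip(days, observed))

def ga(pop_size=40, gens=60):
    pop = [(random.uniform(0, 0.2), random.uniform(0, 0.1)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]            # selection: keep best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            (e1, B1), (e2, B2) = random.sample(elite, 2)
            e = 0.5 * (e1 + e2) + random.gauss(0, 0.005)   # crossover + mutation
            B = 0.5 * (B1 + B2) + random.gauss(0, 0.002)
            children.append((max(e, 0.0), max(B, 0.0)))
        pop = elite + children
    return min(pop, key=cost)

best_e, best_B = ga()
```

With the elitist selection above, the best cost never worsens, and the two parameters are recovered close to their true values.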
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-10
... of the PULSe order entry workstation and to make a technical correction to the numbering of the text in the fees schedule. The PULSe workstation is a front-end order entry system designed for use with...\\ In conjunction with the launch of the PULSe workstation, the Exchange waived various fees. To...
ERIC Educational Resources Information Center
Wang, Liya
2016-01-01
This study examined the association between Computerized Physician Order Entry (CPOE) application and healthcare quality in pediatric patients at hospital level. This was a retrospective study among 1,428 hospitals with pediatric setting in Healthcare Cost and Utilization Project (HCUP) Kid's Inpatient Database (KID) and Health Information and…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
... entry, cancellation of such orders and the calculation and publication of imbalances. In particular... a Mandatory MOC/LOC Imbalance Publication. The rule therefore suggests that members or member... all MOC/LOC orders that would join the same side of a published MOC/LOC imbalance and the entry of MOC...
19 CFR 145.12 - Entry of merchandise.
Code of Federal Regulations, 2011 CFR
2011-04-01
... formal entry, even though they reach Customs at the same time and are covered by a single order or contract in excess of $2,000, unless there was a splitting of shipments in order to avoid the payment of... Postal Service for delivery and collection of duty. If the addressee has arranged to pick up such a...
19 CFR 145.12 - Entry of merchandise.
Code of Federal Regulations, 2010 CFR
2010-04-01
... formal entry, even though they reach Customs at the same time and are covered by a single order or contract in excess of $2,000, unless there was a splitting of shipments in order to avoid the payment of... Postal Service for delivery and collection of duty. If the addressee has arranged to pick up such a...
19 CFR 148.77 - Entry of effects on termination of assignment to extended duty, or on evacuation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... unaccompanied personal and household effects by either a United States Dispatch Agent or a designated... entry if there is a valid reason evident from the owner's travel orders or information at hand why the... of Government employee) Travel orders and information on hand in this office show that the named...
76 FR 38293 - Risk Management Controls for Brokers or Dealers With Market Access
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... securities to give broker- dealers with market access additional time to develop, test, and implement the... that exceed appropriate pre-set credit or capital thresholds,\\5\\ or that appear to be erroneous.\\6\\ The... satisfied on a pre-order entry basis,\\7\\ prevent the entry of orders that the broker- dealers or customer is...
Simulation and analysis of a proposed replacement for the McCook port of entry inspection station
DOT National Transportation Integrated Search
1999-04-01
This report describes a study of a proposed replacement for the McCook Port of Entry inspection station at the entry to South Dakota. In order to assess the potential for a low-speed weigh in motion (WIM) scale within the station to pre-screen trucks...
Systolic array processing of the sequential decoding algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Yao, K.
1989-01-01
A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
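As a software illustration only (the paper's contribution is the hardware systolic priority queue, not this code), the stack algorithm's reliance on repeatedly extracting the best partial path can be sketched with a binary heap standing in for the systolic queue. The toy tree code and matching metric below are invented for the example.

```python
import heapq

# Hypothetical toy tree code: information bit b is transmitted as the
# pair (b, b XOR running_parity). The decoder explores the code tree,
# always extending the partial path with the best metric; heapq plays
# the role of the systolic priority queue (sorted insertion is exactly
# the bottleneck the systolic array removes).

def encode(bits):
    out, parity = [], 0
    for b in bits:
        out += [b, b ^ parity]
        parity ^= b
    return out

def stack_decode(received, depth):
    # Queue entries are (-metric, path); metric = matching channel bits.
    heap = [(0, ())]
    while heap:
        neg_m, path = heapq.heappop(heap)      # best partial path so far
        if len(path) == depth:
            return list(path)                  # best full path wins
        for b in (0, 1):
            new = path + (b,)
            coded = encode(new)
            m = sum(c == r for c, r in zip(coded, received))
            heapq.heappush(heap, (-m, new))

msg = [1, 0, 1, 1]
rx = encode(msg)
rx[3] ^= 1                                     # inject one channel error
assert stack_decode(rx, len(msg)) == msg
```

Here the decoder recovers the message despite the flipped bit because the true path retains the highest match metric in the queue.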
Cognitive analysis of physicians' medication ordering activity.
Pelayo, Sylvia; Leroy, Nicolas; Guerlinger, Sandra; Degoulet, Patrice; Meaux, Jean-Jacques; Beuscart-Zéphir, Marie-Catherine
2005-01-01
Computerized Physician Order Entry (CPOE) addresses critical functions in healthcare systems. As the name clearly indicates, these systems focus on order entry. With regard to medication orders, such systems generally force physicians to enter exhaustively documented orders. But a cognitive analysis of the physician's medication ordering task shows that order entry is the last (and least) important step of the entire cognitive therapeutic decision-making task. We performed a comparative analysis of these complex cognitive tasks in two working environments, computer-based and paper-based. The results showed that information gathering, selection and interpretation are critical cognitive functions supporting therapeutic decision making. Thus the most important requirement from the physician's perspective would be an efficient display of relevant information, provided first as a summarized view of the patient's current treatment, followed by a more detailed, focused display of those items pertinent to the current situation. The CPOE system examined obviously failed to provide the physicians with this critical summarized view. Following these results, consistent with users' complaints, the company decided to engage in a significant re-engineering of its application.
EntrySat: A 3U CubeSat to study the reentry atmospheric environment
NASA Astrophysics Data System (ADS)
Sournac, Anthony; Garcia, Raphael; Mimoun, David; Chaix, Jeremie
2016-04-01
EntrySat's main scientific objective is the study of uncontrolled atmospheric re-entry. The project, developed by ISAE in collaboration with ONERA and the University of Toulouse, is funded by CNES in the overall frame of the QB50 project. This nano-satellite is a 3U CubeSat measuring 34 x 10 x 10 cm, similar to secondary debris produced during the break-up of a spacecraft. EntrySat will collect the external and internal temperatures, pressure, heat flux, attitude variations, and drag force of the satellite between ≈150 and 90 km before its destruction in the atmosphere, and transmit them during the re-entry using the IRIDIUM satellite network. The results will be compared with the computations of MUSIC/FAST, a new 6-degree-of-freedom code developed by ONERA to predict the trajectory of space debris. In order to fulfil the scientific objectives, the satellite will acquire 18 re-entry sensor signals, then convert and compress them with an electronic board developed by ISAE students in cooperation with EREMS. To transmit these data every second during the re-entry phase, the satellite will use an IRIDIUM connection. To maintain a sufficiently stable attitude during this phase, a simple attitude and orbit control system using magnetorquers and an inertial measurement unit (IMU) is being developed by students at ISAE. A commercial GPS board is also integrated into EntrySat to determine its position and velocity, which are necessary during the re-entry phase. This GPS will also be used to synchronize the on-board clock with real-time UTC data. During the orbital phase (≈2 years), EntrySat measurements will be recorded and transmitted through a more classical UHF/VHF connection.
Unstructured Mesh Methods for the Simulation of Hypersonic Flows
NASA Technical Reports Server (NTRS)
Peraire, Jaime; Bibb, K. L. (Technical Monitor)
2001-01-01
This report describes the research work undertaken at the Massachusetts Institute of Technology. The aim of this research is to identify effective algorithms and methodologies for the efficient and routine solution of hypersonic viscous flows about re-entry vehicles. For over ten years we have received support from NASA to develop unstructured mesh methods for Computational Fluid Dynamics. As a result of this effort, a methodology based on the use of unstructured adapted meshes of tetrahedra and finite volume flow solvers has been developed. A number of gridding algorithms, flow solvers, and adaptive strategies have been proposed. The most successful algorithms form the basis of the unstructured mesh system FELISA. The FELISA system has been used extensively for the analysis of transonic and hypersonic flows about complete vehicle configurations. The system is highly automatic and allows for the routine aerodynamic analysis of complex configurations starting from CAD data. The code has been parallelized and utilizes efficient solution algorithms. For hypersonic flows, a version of the code which incorporates real-gas effects has been produced. One of the latest developments before the start of this grant was to extend the system to include viscous effects. This required the development of viscous mesh generators, capable of generating the anisotropic grids required to represent boundary layers, and viscous flow solvers. In Figures 1 and 2, we show some sample hypersonic viscous computations using the developed viscous generators and solvers. Although these initial results were encouraging, it became apparent that in order to develop a fully functional capability for viscous flows, several advances in gridding, solution accuracy, robustness and efficiency were required.
As part of this research we have developed: 1) automatic meshing techniques, with the corresponding computer codes delivered to NASA and implemented into the GridEx system; 2) a finite element algorithm for the solution of the viscous compressible flow equations which can solve flows all the way down to the incompressible limit and can use higher-order (quadratic) approximations leading to highly accurate answers; and 3) iterative algebraic multigrid solution techniques.
Physician Utilization of a Hospital Information System: A Computer Simulation Model
Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.
1988-01-01
The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.
Progress in Guidance and Control Research for Space Access and Hypersonic Vehicles (Preprint)
2006-09-01
affect range capabilities. In 2003 an integrated adaptive guidance control and trajectory re-shaping algorithm was flight demonstrated using in-flight...[21] which tied for the best scores, as well as a Linear Quadratic Regulator [22], Predictor-Corrector [23], and Shuttle-like entry [24] guidance method...Accurate knowledge of mass, center-of-gravity and moments of inertia improves the performance of not only IAG&C algorithms but also model based
Orion Exploration Mission Entry Interface Target Line
NASA Technical Reports Server (NTRS)
Rea, Jeremy R.
2016-01-01
The Orion Multi-Purpose Crew Vehicle is required to return to the continental United States at any time during the month. In addition, it is required to provide a survivable entry from a wide range of trans-lunar abort trajectories. The Entry Interface (EI) state must be targeted to ensure that all requirements are met for all possible return scenarios, even in the event of no communication with the Mission Control Center to provide an updated EI target. The challenge then is to functionalize an EI state constraint manifold that can be used in the on-board targeting algorithm, as well as the ground-based trajectory optimization programs. This paper presents the techniques used to define the EI constraint manifold and to functionalize it as a set of polynomials in several dimensions.
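As an illustration of the functionalization idea only (the sample values and the one-dimensional setting below are hypothetical, not Orion data, and the real EI constraint manifold is multidimensional), a tabulated constraint boundary can be represented as a least-squares polynomial that an on-board algorithm can evaluate cheaply:

```python
# Minimal sketch: fit a polynomial to a tabulated constraint boundary,
# here a hypothetical entry flight-path-angle limit gamma (deg) versus
# inertial velocity v (km/s), via normal equations + Gaussian elimination.

def polyfit(xs, ys, deg):
    """Least-squares polynomial fit; returns [c0, c1, ...] for sum ci*x^i."""
    n = deg + 1
    # Normal equations A c = b with A[i][j] = sum x^(i+j).
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

def polyval(c, x):
    return sum(ci * x ** i for i, ci in enumerate(c))

# Hypothetical tabulated constraint samples (not flight data).
v = [10.6, 10.8, 11.0, 11.2]
gamma = [-5.6, -5.8, -6.1, -6.5]
# Center the abscissa about 11 km/s to keep the normal equations
# well conditioned before fitting a quadratic.
c = polyfit([vi - 11.0 for vi in v], gamma, 2)
assert all(abs(polyval(c, vi - 11.0) - gi) < 1e-6 for vi, gi in zip(v, gamma))
```

Once the coefficients are stored, evaluating the boundary anywhere on the manifold reduces to a polynomial evaluation, which suits both on-board targeting and ground optimization tools.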
Zuckerberg, Gabriel S; Scott, Andrew V; Wasey, Jack O; Wick, Elizabeth C; Pawlik, Timothy M; Ness, Paul M; Patel, Nishant D; Resar, Linda M S; Frank, Steven M
2015-07-01
Two necessary components of a patient blood management program are education regarding evidence-based transfusion guidelines and computerized provider order entry (CPOE) with clinician decision support (CDS). This study examines changes in red blood cell (RBC) utilization associated with each of these two interventions. We reviewed 5 years of blood utilization data (2009-2013) for 70,118 surgical patients from 10 different specialty services at a tertiary care academic medical center. Three distinct periods were compared: 1) before blood management, 2) education alone, and 3) education plus CPOE. Changes in RBC unit utilization were assessed over the three periods stratified by surgical service. Cost savings were estimated based on RBC acquisition costs. For all surgical services combined, RBC utilization decreased by 16.4% with education alone (p = 0.001) and then changed very little (2.5% increase) after subsequent addition of CPOE (p = 0.64). When we compared the period of education plus CPOE to the pre-blood management period, the overall decrease was 14.3% (p = 0.008; 2102 fewer RBC units/year, or a cost avoidance of $462,440/year). Services with the highest massive transfusion rates (≥10 RBC units) exhibited the least reduction in RBC utilization. Adding CPOE with CDS after a successful education effort to promote evidence-based transfusion practice did not further reduce RBC utilization. These findings suggest that education is an important and effective component of a patient blood management program and that CPOE algorithms may serve to maintain compliance with evidence-based transfusion guidelines. © 2015 AABB.
PEPlife: A Repository of the Half-life of Peptides
NASA Astrophysics Data System (ADS)
Mathur, Deepika; Prakash, Satya; Anand, Priya; Kaur, Harpreet; Agrawal, Piyush; Mehta, Ayesha; Kumar, Rajesh; Singh, Sandeep; Raghava, Gajendra P. S.
2016-11-01
Short half-life is one of the key challenges in the field of therapeutic peptides. Various studies have reported enhancement in the stability of peptides using methods like chemical modifications, D-amino acid substitution, cyclization, replacement of labile amino acids, etc. In order to study this scattered data, there is a pressing need for a repository dedicated to the half-life of peptides. To fill this lacuna, we have developed PEPlife (http://crdd.osdd.net/raghava/peplife), a manually curated resource of experimentally determined half-lives of peptides. PEPlife contains 2229 entries covering 1193 unique peptides. Each entry provides detailed information on the peptide, such as its name, sequence, half-life, modifications, the experimental assay for determining half-life, and the biological nature and activity of the peptide. We also maintain SMILES and structures of peptides. We have incorporated web-based modules to offer user-friendly data searching and browsing in the database. PEPlife integrates numerous tools to perform various types of analysis, such as BLAST, the Smith-Waterman algorithm, GGSEARCH, Jalview and MUSTANG. PEPlife would augment the understanding of different factors that affect the half-life of peptides, such as modifications, sequence, length, and route of delivery of the peptide. We anticipate that PEPlife will be useful for researchers working in the area of peptide-based therapeutics.
Mid-Term Assessment of English 10 Students: A Comparison of Methods of Entry into the Course.
ERIC Educational Resources Information Center
Isonio, Steven
In spring 1992, a mid-term assessment of English 10 students was conducted at Golden West College, in California, in order to compare four course placement methods. English 10, "Writing Essentials," is a nontransferrable course which focuses on paragraph writing and grammar review in order to prepare students for entry into English 100.…
Code of Federal Regulations, 2010 CFR
2010-04-01
... Training Administration. (a) The Administrator shall promptly notify the DHS and ETA of the entry of a... part, unless the Administrator notifies the DHS and ETA of the entry of a subsequent order lifting the... the cease and desist order, without having on file with ETA an attestation pursuant to § 655.520 of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... Order 13382 Related to the Islamic Republic of Iran Shipping Lines (IRISL) AGENCY: Office of Foreign... connection to the Islamic Republic of Iran Shipping Lines (IRISL) and is updating the entries on OFAC's list... as property of the Islamic Republic of Iran Shipping Lines (IRISL) and updated the entries on OFAC's...
Code of Federal Regulations, 2010 CFR
2010-04-01
... concludes that, during the period covered by the review, there were no entries, exports, or sales of the... administrative review under this section will cover, as appropriate, entries, exports, or sales during the period... 19 Customs Duties 3 2010-04-01 2010-04-01 false Administrative review of orders and suspension...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-20
... Items I, II and III below, which Items have been prepared by the Exchange. The Commission is publishing... Management Gateway service (``RMG'') would not be charged for order/ quote entry ports if such ports are... for order/quote entry ports that connect to the Exchange via the DMM Gateway.\\7\\ \\5\\ The Exchange...
Aerodynamic and Aerothermal TPS Instrumentation Reference Guide
NASA Technical Reports Server (NTRS)
Woollard, Bryce A.; Braun, Robert D.; Bose, Deepak
2016-01-01
The hypersonic regime of planetary entry combines the most severe environments that an entry vehicle will encounter with the greatest amount of uncertainty as to the events unfolding during that time period. This combination generally leads to conservatism in the design of an entry vehicle, specifically that of the thermal protection system (TPS). Each planetary entry provides a valuable aerodynamic and aerothermal testing opportunity; the utilization of this opportunity is paramount in better understanding how a specific entry vehicle responds to the demands of the hypersonic entry environment. Previous efforts have been made to instrument entry vehicles in order to collect data during the entry period and reconstruct the corresponding vehicle response. The purpose of this paper is to cumulatively document past TPS instrumentation designs for applicable planetary missions, as well as to list pertinent results and any explainable shortcomings.
Methodology Development for the Reconstruction of the ESA Huygens Probe Entry and Descent Trajectory
NASA Astrophysics Data System (ADS)
Kazeminejad, B.
2005-01-01
The European Space Agency's (ESA) Huygens probe performed a successful entry and descent into Titan's atmosphere on January 14, 2005, and landed safely on the satellite's surface. A methodology was developed, implemented, and tested to reconstruct the Huygens probe trajectory from its various science and engineering measurements, which were performed during the probe's entry and descent to the surface of Titan, Saturn's largest moon. The probe trajectory reconstruction is an essential effort that has to be done as early as possible in the post-flight data analysis phase as it guarantees a correct and consistent interpretation of all the experiment data and furthermore provides a reference set of data for "ground-truthing" orbiter remote sensing measurements. The entry trajectory is reconstructed from the measured probe aerodynamic drag force, which also provides a means to derive the upper atmospheric properties like density, pressure, and temperature. The descent phase reconstruction is based upon a combination of various atmospheric measurements such as pressure, temperature, composition, speed of sound, and wind speed. A significant amount of effort was spent to outline and implement a least-squares trajectory estimation algorithm that provides a means to match the entry and descent trajectory portions in case of discontinuity. An extensive test campaign of the algorithm is presented which used the Huygens Synthetic Dataset (HSDS) developed by the Huygens Project Scientist Team at ESA/ESTEC as a test bed. This dataset comprises the simulated sensor output (and the corresponding measurement noise and uncertainty) of all the relevant probe instruments. The test campaign clearly showed that the proposed methodology is capable of utilizing all the relevant probe data, and will provide the best estimate of the probe trajectory once real instrument measurements from the actual probe mission are available. 
As a further test case using actual flight data, the NASA Mars Pathfinder entry and descent trajectory and the spacecraft attitude were reconstructed from the 3-axis accelerometer measurements archived on the Planetary Data System. The results are consistent with previously published reconstruction efforts.
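The "matching" idea behind the least-squares trajectory estimation can be sketched in its simplest form: the entry arc (integrated from measured drag) and the descent arc (from atmospheric measurements) overlap but disagree by a systematic offset, and least squares picks the shift minimizing the discontinuity. The data and the pure-offset model below are synthetic illustrations, far simpler than the thesis's full estimator, and are not Huygens measurements.

```python
# Synthetic overlapping altitude reconstructions at common epochs (km).
entry_alt   = [162.0, 158.5, 155.1, 151.8]   # from entry-phase drag integration
descent_alt = [160.4, 156.9, 153.5, 150.2]   # from descent-phase atmosphere data

# For a pure offset model d_i = e_i + b, setting the derivative of
# sum((d_i - e_i - b)^2) to zero gives b = mean residual.
residuals = [d - e for d, e in zip(descent_alt, entry_alt)]
b = sum(residuals) / len(residuals)

# Shifting the entry arc by b removes the systematic discontinuity.
matched = [e + b for e in entry_alt]
assert abs(b - (-1.6)) < 1e-6
assert max(abs(m - d) for m, d in zip(matched, descent_alt)) < 1e-6
```

A real estimator would solve for state corrections rather than a single bias, but the least-squares matching principle at the entry/descent interface is the same.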
76 FR 47148 - Application(s) for Duty-Free Entry of Scientific Instruments
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
... work in microbiology and pathology, to study biological materials in order to identify bacterial or viral pathogens with clinical significance in veterinary medicine. Justification for Duty-Free Entry: No...
Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification
NASA Astrophysics Data System (ADS)
Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.
2016-10-01
Raman spectroscopy is a well-established spectroscopic method for the detection of condensed-phase chemicals. It is based on scattered light from exposure of a target material to a narrowband laser beam. The information generated enables presumptive identification by measuring correlation with library spectra. While this approach is successful in identifying the chemical content of samples with one component, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications, as hazardous materials may be present as mixtures due to degradation, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on the sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required. Most successful methods are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra. It can be implemented on a hand-held spectrometer in near real time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is called fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component.
This feature is particularly useful in the detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second using a single core of a desktop computer.
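The iterate-select-subtract idea can be sketched as a simplified non-negative matching pursuit. This is not the authors' exact fast NN-OMP (no orthogonalization or backtracking step), and the two-spectrum library below is an invented, deliberately orthogonal example so the greedy selection is exact.

```python
# Simplified non-negative matching pursuit: pick the library spectrum
# best correlated with the residual, assign it a non-negative weight,
# subtract its contribution, and repeat until the residual energy is
# near the noise floor or the expected component count is reached.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nn_matching_pursuit(mixture, library, max_components=3, tol=1e-6):
    residual = list(mixture)
    weights = [0.0] * len(library)
    for _ in range(max_components):
        # Least-squares weight of each spectrum against the residual.
        scores = [dot(residual, s) / dot(s, s) for s in library]
        k = max(range(len(library)), key=lambda i: scores[i])
        w = max(scores[k], 0.0)                # non-negativity constraint
        if w <= 0 or dot(residual, residual) < tol:
            break
        weights[k] += w
        residual = [r - w * s for r, s in zip(residual, library[k])]
    return weights

# Hypothetical orthogonal two-chemical library, for illustration only.
lib = [[1.0, 0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0, 1.0]]
mix = [0.9, 0.1, 0.9, 0.1]                     # a 90/10 mixture
w = nn_matching_pursuit(mix, lib)
assert abs(w[0] - 0.9) < 1e-6 and abs(w[1] - 0.1) < 1e-6
```

Even the 10-percent component is recovered here, which is the regime the backtracking refinement in the full algorithm targets.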
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
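The block-matching step can be illustrated with a minimal exhaustive search using sum-of-absolute-differences (SAD) as the error measure. This is a sequential sketch of the computation the paper parallelizes (the search over candidate displacements is independent per candidate), and the frames below are toy data invented for the example.

```python
# Minimal exhaustive block matching: find the displacement (dy, dx)
# of a block between two frames that minimizes SAD.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def extract(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def best_displacement(prev, cur, y, x, size, search):
    """Displacement minimizing SAD between cur's block and shifted prev."""
    target = extract(cur, y, x, size)
    best = None
    for dy in range(-search, search + 1):      # each candidate is independent,
        for dx in range(-search, search + 1):  # hence amenable to parallel HW
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + size > len(prev) or px + size > len(prev[0]):
                continue
            err = sad(extract(prev, py, px, size), target)
            if best is None or err < best[0]:
                best = (err, (dy, dx))
    return best[1]

# Toy frames: an 8x8 gradient image shifted right by one pixel.
prev = [[x + 10 * y for x in range(8)] for y in range(8)]
cur = [row[-1:] + row[:-1] for row in prev]
assert best_displacement(prev, cur, 2, 2, 3, 2) == (0, -1)
```

The recovered displacement (0, -1) says the block's content came from one pixel to the left in the previous frame, i.e. the image moved right.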
Entry order as a consideration for innovation strategies.
Cohen, Fredric J
2006-04-01
Prior studies have defined an effect of market entry order on commercial success that depends on attributes of the underlying technology, the rate of change in technology improvement, consumer expectations of these attributes and the degree of unmet demand. Analyses of pharmaceutical sales data suggest that the commercial success of drugs is subject to similar forces. These findings have important implications for innovation strategies.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... Specially Designated Nationals and Blocked Persons (``SDN List''). The individual's date of birth has been amended and two addresses and an alternate place of birth have been added to the SDN List entry. The... entry of this individual on the SDN List is effective May 26, 2011. FOR FURTHER INFORMATION CONTACT...
MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data
1983-06-01
segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid..."theodore roosevelt memorial" entry 0; entry 1: Virginia "northwest Washington"... entries for "crossover" for "theodore roosevelt memorial" entry 0
The ConSurf-DB: pre-calculated evolutionary conservation profiles of protein structures.
Goldenberg, Ofir; Erez, Elana; Nimrod, Guy; Ben-Tal, Nir
2009-01-01
ConSurf-DB is a repository for evolutionary conservation analysis of the proteins of known structures in the Protein Data Bank (PDB). Sequence homologues of each of the PDB entries were collected and aligned using standard methods. The evolutionary conservation of each amino acid position in the alignment was calculated using the Rate4Site algorithm, implemented in the ConSurf web server. The algorithm takes into account the phylogenetic relations between the aligned proteins and the stochastic nature of the evolutionary process explicitly. Rate4Site assigns a conservation level for each position in the multiple sequence alignment using an empirical Bayesian inference. Visual inspection of the conservation patterns on the 3D structure often enables the identification of key residues that comprise the functionally important regions of the protein. The repository is updated with the latest PDB entries on a monthly basis and will be rebuilt annually. ConSurf-DB is available online at http://consurfdb.tau.ac.il/
Overview of the Phoenix Entry, Descent and Landing System Architecture
NASA Technical Reports Server (NTRS)
Grover, Myron R., III; Cichy, Benjamin D.; Desai, Prasun N.
2008-01-01
NASA's Phoenix Mars Lander began its journey to Mars from Cape Canaveral, Florida in August 2007, but its journey to the launch pad began many years earlier, in 1997, as NASA's Mars Surveyor Program 2001 Lander. In the intervening years, the entry, descent and landing (EDL) system architecture went through a series of changes, resulting in the system flown to the surface of Mars on May 25th, 2008. Some changes, such as entry velocity and landing site elevation, were the result of differences in mission design. Other changes, including the removal of hypersonic guidance, the reformulation of the parachute deployment algorithm, and the addition of the backshell avoidance maneuver, were driven by constant efforts to augment system robustness. An overview of the Phoenix EDL system architecture is presented along with the rationales driving these architectural changes.
Implementing computerized physician order entry: the importance of special people.
Ash, Joan S; Stavri, P Zoë; Dykstra, Richard; Fournier, Lara
2003-03-01
To articulate important lessons learned during a study to identify success factors for implementing computerized physician order entry (CPOE) in inpatient and outpatient settings. Qualitative study by a multidisciplinary team using data from observation, focus groups, and both formal and informal interviews. Data were analyzed using a grounded approach to develop a taxonomy of patterns and themes from the transcripts and field notes. The theme we call Special People is explored here in detail. A taxonomy of types of Special People includes administrative leaders, clinical leaders (champions, opinion leaders, and curmudgeons), and bridgers or support staff who interface directly with users. The recognition and nurturing of Special People should be among the highest priorities of those implementing computerized physician order entry. Their education and training must be a goal of teaching programs in health administration and medical informatics.
A model to capture and manage tacit knowledge using a multiagent system
NASA Astrophysics Data System (ADS)
Paolino, Lilyam; Paggi, Horacio; Alonso, Fernando; López, Genoveva
2014-10-01
This article presents a model to capture and register business tacit knowledge from different sources, using an expert multiagent system that enables the entry of incidents and captures the tacit knowledge that could resolve them. This knowledge and its sources are evaluated by trust algorithms, and the best of them are registered in the database. Through its intelligent software agents, the system interacts with the administrator, the users, the knowledge sources, and any communities of practice that might exist in the business. Both the sources and the knowledge are evaluated continually, before being registered and afterwards, in order to decide whether their original weighting should be kept or modified. When better knowledge becomes available, the new knowledge is registered in place of the old. This work is part of an ongoing investigation into knowledge management methodologies for managing tacit business knowledge, so as to facilitate business competitiveness and support innovation and learning.
A noniterative greedy algorithm for multiframe point correspondence.
Shafique, Khurram; Shah, Mubarak
2005-01-01
This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.
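The entry/exit behaviour highlighted above can be illustrated compactly. The sketch below is a hypothetical, much-simplified stand-in (frame-to-frame greedy nearest-neighbour linking with a gating threshold), not the paper's noniterative multiframe scheme: detections left unmatched start new tracks (entry) and tracks left unmatched are terminated (exit).

```python
import numpy as np

def greedy_tracks(frames, gate=2.0):
    """Greedy point linker: at each frame the globally closest
    (track, detection) pairs within `gate` are taken first;
    leftover detections enter as new tracks, leftover tracks exit."""
    tracks, active = [], []
    for f, pts in enumerate(frames):
        pts = np.asarray(pts, dtype=float)
        if not active:
            start = len(tracks)
            tracks.extend([[(f, tuple(p))] for p in pts])
            active = list(range(start, len(tracks)))
            continue
        last = np.array([tracks[t][-1][1] for t in active])
        D = np.linalg.norm(last[:, None, :] - pts[None, :, :], axis=2)
        order = np.dstack(np.unravel_index(np.argsort(D, axis=None), D.shape))[0]
        used_t, used_p, survivors = set(), set(), []
        for ti, pi in order:
            ti, pi = int(ti), int(pi)
            if D[ti, pi] > gate:
                break                      # sorted: all later pairs are farther
            if ti in used_t or pi in used_p:
                continue
            used_t.add(ti); used_p.add(pi)
            tracks[active[ti]].append((f, tuple(pts[pi])))
            survivors.append(active[ti])
        for pi in range(len(pts)):         # unmatched detections enter
            if pi not in used_p:
                tracks.append([(f, tuple(pts[pi]))])
                survivors.append(len(tracks) - 1)
        active = survivors                 # unmatched tracks exit
    return tracks
```

The gate value and point format are illustrative assumptions; the paper's algorithm additionally optimizes over multiple frames and handles occlusion and false positives.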
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibility of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems, such as a lower barrier to entry, are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems, and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm.
Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes them incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one-pixel-deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide-and-layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, making it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories.
As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
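The subdivide-and-halo strategy described for the Euclidean distance article above can be shown with a toy sketch. The code below makes two simplifying assumptions stated plainly: it uses the Chebyshev (chessboard) metric rather than Euclidean, so that local relaxation with a one-pixel halo provably converges to the exact distance transform, and it sweeps the tiles serially on one machine instead of on separate cloud nodes.

```python
import numpy as np

def chebyshev_dt_tiled(feature, tile=4):
    """Toy tile-and-halo distance transform (Chebyshev metric).
    Each tile is relaxed together with a one-pixel halo of its
    neighbours' distance values; tiles are swept repeatedly until
    no value changes anywhere (a global fixpoint)."""
    INF = 1 << 30
    d = np.where(feature, 0, INF).astype(np.int64)
    h, w = d.shape
    changed = True
    while changed:
        changed = False
        for i0 in range(0, h, tile):
            for j0 in range(0, w, tile):
                # tile plus a one-pixel halo, clamped at the borders
                i1, j1 = max(i0 - 1, 0), max(j0 - 1, 0)
                i2, j2 = min(i0 + tile + 1, h), min(j0 + tile + 1, w)
                sub = d[i1:i2, j1:j2].copy()
                while True:      # relax inside the padded tile
                    p = np.pad(sub, 1, constant_values=INF)
                    best = sub.copy()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            best = np.minimum(
                                best,
                                p[1 + di:1 + di + sub.shape[0],
                                  1 + dj:1 + dj + sub.shape[1]] + 1)
                    if np.array_equal(best, sub):
                        break
                    sub = best
                if not np.array_equal(sub, d[i1:i2, j1:j2]):
                    d[i1:i2, j1:j2] = sub
                    changed = True
    return d
```

Because relaxation only ever lowers values toward the true distances, the final fixpoint equals the exact Chebyshev distance transform regardless of tile shape, which mirrors the article's observation that the decomposition itself does not change the result.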
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... in U.S. customs territory, and (ii) are re-exported within eighteen (18) months of entry of the LEU... amend the scope of the order and to extend the deadline for the re-exportation of this sole LEU entry... transporter(s) while in U.S. customs territory, and (ii) are re-exported within eighteen (18) months of entry...
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices, which have entries from a ring, Gaussian elimination is not defined and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
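The contrast the abstract draws between floating-point and exact Euclidean elimination can be made concrete with a small sketch (an illustration only, far simpler than the paper's algorithms): a 2x2 polynomial matrix is upper-triangularized by Euclidean elimination over exact rational coefficients, so no rounding error can arise.

```python
from fractions import Fraction

# Polynomials are coefficient lists, lowest degree first, exact Fractions.
def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def padd(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n)])

def pscale(a, c, shift):            # c * x^shift * a(x)
    return trim([Fraction(0)] * shift + [c * v for v in a])

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def pdivmod(a, b):                  # polynomial long division, exact
    a = list(a)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while a and len(a) >= len(b):
        c = a[-1] / b[-1]
        s = len(a) - len(b)
        q[s] = c
        a = padd(a, pscale(b, -c, s))
    return trim(q), trim(a)

def euclidean_triangularize(M):
    """Upper-triangularize a 2x2 polynomial matrix by Euclidean
    elimination: a gcd-style degree-reduction in the first column,
    using row swaps and row operations only."""
    M = [[trim(list(map(Fraction, p))) for p in row] for row in M]
    while M[1][0]:
        if not M[0][0] or len(M[1][0]) < len(M[0][0]):
            M[0], M[1] = M[1], M[0]   # lower-degree entry becomes pivot
            continue
        q, _ = pdivmod(M[1][0], M[0][0])
        for j in range(2):
            M[1][j] = padd(M[1][j], pscale(pmul(q, M[0][j]), Fraction(-1), 0))
    return M
```

For example, eliminating x^2 below a pivot of x subtracts x times the first row, leaving a zero in the (2,1) position with every coefficient exact.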
The VA Computerized Patient Record — A First Look
Anderson, Curtis L.; Meldrum, Kevin C.
1994-01-01
In support of its in-house DHCP Physician Order Entry/Results Reporting application, the VA is developing the first edition of a Computerized Patient Record. The system will feature a physician-oriented interface with real time, expert system-based order checking, a controlled vocabulary, a longitudinal repository of patient data, HL7 messaging support, a clinical reminder and warning system, and full integration with existing VA applications including lab, pharmacy, A/D/T, radiology, dietetics, surgery, vitals, allergy tracking, discharge summary, problem list, progress notes, consults, and online physician order entry. PMID:7949886
Study of advanced atmospheric entry systems for Mars
NASA Technical Reports Server (NTRS)
1978-01-01
Entry system designs are described for various advanced Mars missions including sample return, hard lander, and Mars airplane. The Mars exploration systems for sample return and the hard lander require deceleration from direct-approach entry velocities of about 6 km/s to terminal velocities consistent with surface landing requirements. The Mars airplane entry system is decelerated from orbit at 4.6 km/s to deployment near the surface. Mass performance characteristics are estimated for the major elements of the required entry systems using Viking technology, or logical extensions of that technology, in order to provide a common basis of comparison for the three mission modes. The entry systems, although not optimized, are based on Viking designs and reflect current hardware performance capability and realistic mass relationships.
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.
Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej
2015-09-01
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. Existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
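For contrast with the fully Bayesian method summarized above, the kind of baseline it improves upon can be sketched: CP completion by EM-style alternating least squares with a manually specified rank (exactly the limitation, no automatic rank determination and no predictive distributions, that the paper addresses). All sizes and names below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def cp_als_completion(T, mask, rank, iters=200, seed=0):
    """EM-style CP completion: impute missing entries with the current
    reconstruction (E-step), then one ALS sweep over the three factor
    matrices via the mode-n unfoldings (M-step). Rank is fixed by hand."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    kr = lambda U, V: np.einsum('jr,kr->jkr', U, V).reshape(-1, rank)
    rec = lambda: np.einsum('ir,jr,kr->ijk', A, B, C)
    for _ in range(iters):
        Xf = np.where(mask, T, rec())                  # E-step: impute
        # M-step: least-squares factor updates from each unfolding
        A = Xf.reshape(I, -1) @ np.linalg.pinv(kr(B, C).T)
        B = np.transpose(Xf, (1, 0, 2)).reshape(J, -1) @ np.linalg.pinv(kr(A, C).T)
        C = np.transpose(Xf, (2, 0, 1)).reshape(K, -1) @ np.linalg.pinv(kr(A, B).T)
    return rec()
```

With enough observed entries relative to the number of factor parameters, the imputed reconstruction converges to the missing values; the Bayesian treatment in the paper removes the need to guess the rank and adds uncertainty estimates on top of this kind of scheme.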
Kennihan, Mary; Zohra, Tatheer; Devi, Radha; Srinivasan, Chitra; Diaz, Josefina; Howard, Bradley S; Braithwaite, Susan S
2012-01-01
The objective was to design electronic order sets that would promote safe, effective, and individualized order entry for subcutaneous insulin in the hospital, based on a review of best practices. Saint Francis Hospital in Evanston, Illinois, a community teaching hospital, was selected as the pilot site for 6 hospitals in the Health Care System to introduce an electronic medical record. Articles dealing with management of hospital hyperglycemia, medical order entry systems, and patient safety were reviewed selectively. In the published literature on institutional glycemic management programs and insulin order sets, features were identified that improve safety and effectiveness of subcutaneous insulin therapy. Subcutaneous electronic insulin order sets were created, designated, in short, as "patients eating", "patients not eating", and "patients receiving overnight enteral feedings." Together with an option for free-text entry, menus of administration instructions were designed within each order set that were applicable to specific insulin orders and expressed in standardized language, such as "hold if tube feeds stop" or "do not withhold." Two design features are advocated for electronic order sets for subcutaneous insulin that will both standardize care and protect individualization. First, within the order sets, the glycemic management plan should be matched to the carbohydrate exposure of the patients, with juxtaposition of appropriate orders for both glucose monitoring and insulin. Second, in order to convey precautions of insulin use to pharmacy and nursing staff, the prescriber must be able to attach administration instructions to specific insulin orders.
Bashiri, Fahad A.; Hamad, Muddathir H.; Amer, Yasser S.; Abouelkheir, Manal M.; Mohamed, Sarar; Kentab, Amal Y.; Salih, Mustafa A.; Nasser, Mohammad N. Al; Al-Eyadhy, Ayman A.; Othman, Mohammed A. Al; Al-Ahmadi, Tahani; Iqbal, Shaikh M.; Somily, Ali M.; Wahabi, Hayfaa A.; Hundallah, Khalid J.; Alwadei, Ali H.; Albaradie, Raidah S.; Al-Twaijri, Waleed A.; Jan, Mohammed M.; Al-Otaibi, Faisal; Alnemri, Abdulrahman M.; Al-Ansary, Lubna A.
2017-01-01
Objective: To increase the use of evidence-based approaches in the diagnosis, investigations and treatment of Convulsive Status Epilepticus (CSE) in children in relevant care settings. Method: A Clinical Practice Guideline (CPG) adaptation group was formulated at a university hospital in Riyadh. The group utilized 2 CPG validated tools including the ADAPTE method and the AGREE II instrument. Results: The group adapted 3 main categories of recommendations from one Source CPG. The recommendations cover: (i) first-line treatment of CSE in the community; (ii) treatment of CSE in the hospital; and (iii) refractory CSE. Implementation tools were built to enhance knowledge translation of these recommendations including a clinical algorithm, audit criteria, and a computerized provider order entry. Conclusion: A clinical practice guideline for the Saudi healthcare context was formulated using a guideline adaptation process to support relevant clinicians managing CSE in children. PMID:28416791
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
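A serial reference version of block-matching motion estimation, the technique the second paper parallelizes, can be sketched as follows. It performs an exhaustive minimum-SAD search over a displacement window per block; the paper's contribution is mapping this search onto a simple parallel architecture, which this single-threaded sketch does not attempt.

```python
import numpy as np

def block_match(prev, curr, block=8, radius=4):
    """Exhaustive block matching: for each block of `curr`, find the
    displacement (dy, dx) into `prev` minimizing the sum of absolute
    differences (SAD) within a +/- radius search window."""
    H, W = curr.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = curr[by:by + block, bx:bx + block]
            best = (np.inf, (0, 0))
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - tgt).sum()
                        if sad < best[0]:
                            best = (sad, (dy, dx))
            vectors[(by, bx)] = best[1]
    return vectors
```

Because each block's search is independent, the double loop over blocks is what a parallel implementation distributes across processing elements.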
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.
Wu, Xue; Wu, Changxu
2017-01-01
This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. Error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in Numpad typing, while the '2', '3', and '0' keys produced a higher incidence of errors in main-keyboard digit-line typing. Errors categorized as omission and substitution were more prevalent than transposition and intrusion.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
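The finding above, that positioning and weighting the ray-end integration points by Gaussian quadrature beats equally spaced (trapezoidal) points for the same point count, can be reproduced for a one-dimensional line integral. The smooth kernel and interval below are illustrative assumptions standing in for a detector-response profile along a ray.

```python
import math
import numpy as np

def gauss_line_integral(f, a, b, n):
    """Integrate f over [a, b] with n Gauss-Legendre nodes
    (Gaussian positioning and weighting of the integration points)."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)     # map nodes from [-1, 1]
    return 0.5 * (b - a) * float(np.sum(w * f(t)))

def trapezoid_line_integral(f, a, b, n):
    """Same integral with n equally spaced points (trapezoidal rule)."""
    t = np.linspace(a, b, n)
    v = f(t)
    return float(np.sum((v[1:] + v[:-1]) * 0.5 * (t[1:] - t[:-1])))
```

For a Gaussian kernel exp(-t^2) on [0, 1], the exact value is (sqrt(pi)/2)·erf(1), and five Gauss-Legendre nodes already reach roughly machine-level accuracy while five trapezoid points do not, consistent with the paper's observation that fewer rays suffice under Gaussian integration.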
Message passing with parallel queue traversal
Underwood, Keith D [Albuquerque, NM; Brightwell, Ronald B [Albuquerque, NM; Hemmert, K Scott [Albuquerque, NM
2012-05-01
In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.
NASA Technical Reports Server (NTRS)
Pastor, P. Rick; Bishop, Robert H.; Striepe, Scott A.
2000-01-01
A first-order simulation analysis of the navigation accuracy expected from various Navigation Quick-Look data sets is performed. Here, quick-look navigation data are observations obtained from hypothetical telemetered data transmitted on the fly during a Mars probe's atmospheric entry. In this simulation study, the navigation data consist of 3-axis accelerometer measurements and attitude information. Three entry vehicle guidance types are studied: I. a maneuvering entry vehicle (as with Mars '01 guidance, where angle of attack and bank angle are controlled); II. a zero-angle-of-attack controlled entry vehicle (as with Mars '98); and III. a ballistic, or spin-stabilized, entry vehicle (as with Mars Pathfinder). For each type, sensitivity to progressively undersampled navigation data and to the inclusion of sensor errors is characterized. Attempts to mitigate the reconstructed trajectory errors, including smoothing, interpolation, and changing integrator characteristics, are also studied.
Patel, Vijay M; Rains, Anna W; Clark, Christopher T
2016-01-01
To reduce the rate of inappropriate red blood cell transfusion, a provider education program, followed by alerts in the computerized provider order entry system (CPOE), was established to encourage adherence to AABB transfusion guidelines. Metrics were established for nonemergent inpatient transfusions. Service lines with high order volume were targeted with formal education regarding AABB 2012 transfusion guidelines. Transfusion orders were reviewed in real time with email communications sent to ordering providers falling outside of AABB recommendations. After 12 months of provider education, alerts were activated in CPOE. With provider education alone, the incidence of pretransfusion hemoglobin levels greater than 8 g/dL decreased from 16.64% to 6.36%, posttransfusion hemoglobin levels greater than 10 g/dL from 14.03% to 3.78%, and the number of nonemergent two-unit red blood cell orders from 45.26% to 22.66%. Red blood cell utilization decreased by 13%. No additional significant reduction in nonemergent two-unit orders was observed with CPOE alerts. Provider education, an effective and low-cost method, should be considered as a first-line method for reducing inappropriate red blood cell transfusion rates in stable adult inpatients. Alerts in the computerized order entry system did not significantly lower the percentage of two-unit red blood cell orders but may help to maintain educational efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sameer; Mamidala, Amith R.; Ratterman, Joseph D.
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a bather algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal tomore » the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blocksome, Michael; Kumar, Sameer; Mamidala, Amith R.
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
Entry Guidance for the 2011 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
Mendeck, Gavin F.; Craig, Lynn E.
2011-01-01
The 2011 Mars Science Laboratory will be the first Mars mission to attempt a guided entry to safely deliver the rover to a touchdown ellipse of 25 km x 20 km. The Entry Terminal Point Controller guidance algorithm is derived from the final phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control the range flown. For application to Mars landers which must make use of the tenuous Martian atmosphere, it is critical to balance the lift of the vehicle to minimize the range error while still ensuring a safe deploy altitude. An overview of the process to generate optimized guidance settings is presented, discussing improvements made over the last nine years. Key dispersions driving deploy ellipse and altitude performance are identified. Performance sensitivities including attitude initialization error and the velocity of transition from range control to heading alignment are presented.
On the Use of a Range Trigger for the Mars Science Laboratory Entry Descent and Landing
NASA Technical Reports Server (NTRS)
Way, David W.
2011-01-01
In 2012, during the Entry, Descent, and Landing (EDL) of the Mars Science Laboratory (MSL) entry vehicle, a 21.5 m Viking-heritage, Disk-Gap-Band, supersonic parachute will be deployed at approximately Mach 2. The baseline algorithm for commanding this parachute deployment is a navigated planet-relative velocity trigger. This paper compares the performance of an alternative range-to-go trigger (sometimes referred to as "Smart Chute"), which can significantly reduce the landing footprint size. Numerical Monte Carlo results, predicted by the MSL POST2 End-to-End EDL simulation, are corroborated and explained by applying propagation of uncertainty methods to develop an analytic estimate for the standard deviation of Mach number. A negative correlation is shown to exist between the standard deviations of wind velocity and the planet-relative velocity at parachute deploy, which mitigates the Mach number rise in the case of the range trigger.
Kumar, Sameer; Mamidala, Amith R.; Ratterman, Joseph D.; Blocksome, Michael; Miller, Douglas
2013-09-03
A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table.
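The counter-table mechanism common to the three patent records above (a table of communicator IDs, searched to find the counter designated for the communicator that initiated a collective) can be sketched in miniature. This toy Python class illustrates only the lookup logic; it is not the patented implementation, which targets barrier synchronization on a multi-core message-passing machine.

```python
import threading

class CounterTable:
    """Toy sketch of the patent's lookup: a fixed-size table maps an
    initiating communicator's ID to a designated counter slot, found
    by linear search over the table entries."""
    def __init__(self, max_threads):
        self.table = [None] * max_threads      # communicator IDs
        self.counters = [0] * max_threads      # one counter per slot
        self.lock = threading.Lock()

    def counter_for(self, comm_id):
        """Return the slot for comm_id, claiming a free entry if new."""
        with self.lock:
            for i, entry in enumerate(self.table):
                if entry == comm_id:           # existing entry found
                    return i
            for i, entry in enumerate(self.table):
                if entry is None:              # claim first free entry
                    self.table[i] = comm_id
                    return i
            raise RuntimeError("counter table full")

    def arrive(self, comm_id):
        """Record an arrival at the collective for comm_id."""
        i = self.counter_for(comm_id)
        with self.lock:
            self.counters[i] += 1
            return self.counters[i]
```

In the patented scheme, a thread arriving at a barrier would perform this search to locate its communicator's counter and then wait until the count reaches the number of participants; the waiting logic is omitted here.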
A programmable rules engine to provide clinical decision support using HTML forms.
Heusinkveld, J; Geissbuhler, A; Sheshelidze, D; Miller, R
1999-01-01
The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser.
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
NASA Astrophysics Data System (ADS)
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a general approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while saving 60 times the computation time compared to traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
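The look-up-table idea described above, that every entry of the least-squares normal equations is a sum of basic terms that can be pre-computed, can be sketched with prefix sums: each candidate pivot's two quadratic fits (and their SSE) then cost O(1), so the exhaustive pivot search is cheap. The data and function names below are illustrative; the sketch omits the paper's continuity constraint.

```python
import numpy as np

def best_two_piece_quadratic(x, y):
    """Exhaustive pivot search for a 2-piecewise 2nd-order polynomial
    fit, with prefix-sum tables of x^k, y*x^k, and y^2 so that each
    segment's normal equations and SSE are assembled in O(1)."""
    n = len(x)
    P = [np.concatenate(([0.0], np.cumsum(x ** k))) for k in range(5)]
    Q = [np.concatenate(([0.0], np.cumsum(y * x ** k))) for k in range(3)]
    R = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_sse(lo, hi):
        # 3x3 moment matrix and moment vector from the tables
        A = np.array([[P[k + j][hi] - P[k + j][lo] for j in range(3)]
                      for k in range(3)])
        b = np.array([Q[k][hi] - Q[k][lo] for k in range(3)])
        c = np.linalg.solve(A, b)
        return (R[hi] - R[lo]) - c @ b     # SSE = sum(y^2) - c.b

    best_sse, best_pivot = np.inf, None
    for p in range(3, n - 2):              # at least 3 points per piece
        sse = seg_sse(0, p) + seg_sse(p, n)
        if sse < best_sse:
            best_sse, best_pivot = sse, p
    return best_sse, best_pivot
```

Without the tables, each candidate pivot would require re-summing both segments, turning the search quadratic in the number of points; with them, total cost is one pass to build the tables plus O(1) work per pivot.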
Optimizing radiologist e-prescribing of CT oral contrast agent using a protocoling portal.
Wasser, Elliot J; Galante, Nicholas J; Andriole, Katherine P; Farkas, Cameron; Khorasani, Ramin
2013-12-01
The purpose of this study is to quantify the time expenditure associated with radiologist ordering of CT oral contrast media when using an integrated protocoling portal and to determine radiologists' perceptions of the ordering process. This prospective study was performed at a large academic tertiary care facility. Detailed timing information for CT inpatient oral contrast orders placed via the computerized physician order entry (CPOE) system was gathered over a 14-day period. Analyses evaluated the amount of physician time required for each component of the ordering process. Radiologists' perceptions of the ordering process were assessed by survey. Descriptive statistics and chi-square analysis were performed. A total of 96 oral contrast agent orders were placed by 13 radiologists during the study period. The average time necessary to create a protocol for each case was 40.4 seconds (average range by subject, 20.0-130.0 seconds; SD, 37.1 seconds), and the average total time to create and sign each contrast agent order was 27.2 seconds (range, 10.0-50.0 seconds; SD, 22.4 seconds). Overall, 52.5% (21/40) of survey respondents indicated that radiologist entry of oral contrast agent orders improved patient safety. A minority of respondents (15% [6/40]) indicated that contrast agent order entry was either very or extremely disruptive to workflow. Radiologist e-prescribing of CT oral contrast agents using CPOE can be embedded in a protocol workflow. Integration of health IT tools can help to optimize user acceptance and adoption.
Dictionary Learning Algorithms for Sparse Representation
Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
2012-10-03
ISS033-E-009232 (3 Oct. 2012) --- This still photo taken by the Expedition 33 crew members aboard the International Space Station shows evidence of the fiery plunge through Earth's atmosphere and the destructive re-entry of the European Automated Transfer Vehicle-3 (ATV-3) spacecraft, also known as "Edoardo Amaldi." The end of the ATV took place over a remote swath of the Pacific Ocean, where any surviving debris safely splashed down a short time later, at around 1:30 a.m. (GMT) on Oct. 3, thus concluding the highly successful ATV-3 mission. Aboard the craft during re-entry was the Re-Entry Breakup Recorder (REBR), a spacecraft "black box" designed to gather data on vehicle disintegration during re-entry in order to improve future spacecraft re-entry models.
Minimization of Delay Costs in the Realization of Production Orders in Two-Machine System
NASA Astrophysics Data System (ADS)
Dylewski, Robert; Jardzioch, Andrzej; Dworak, Oliver
2018-03-01
The article presents a new algorithm that determines the optimal scheduling of production orders in a two-machine system based on the minimum cost of order delays. The algorithm uses the branch-and-bound method and generalises an earlier algorithm that determines the sequence of production orders with the minimal sum of delays. To illustrate the proposed algorithm, the article contains examples accompanied by graphical solution trees. Research analysing the utility of the algorithm was conducted, and the results demonstrated its usefulness when applied to order scheduling. The algorithm was implemented in Matlab, and studies for different sets of production orders were carried out.
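The branch-and-bound idea can be illustrated with a small sketch for a two-machine permutation flow shop, where each order has processing times on both machines, a due date, and a delay-cost weight, and the bound is simply the (non-decreasing) delay cost of the scheduled prefix. This is an assumed simplification of the article's algorithm, with invented data structures, not its exact formulation.

```python
def branch_and_bound(jobs):
    """Two-machine permutation flow shop minimizing total weighted delay cost.
    jobs: list of (p1, p2, due_date, weight) tuples. Illustrative sketch."""
    n = len(jobs)
    best = {"cost": float("inf"), "seq": None}

    def recurse(remaining, seq, t1, t2, cost):
        # Bound: delay cost already incurred by the prefix can only grow,
        # so any partial schedule at or above the incumbent is pruned.
        if cost >= best["cost"]:
            return
        if not remaining:
            best["cost"], best["seq"] = cost, seq
            return
        for j in sorted(remaining):          # deterministic branching order
            p1, p2, due, w = jobs[j]
            nt1 = t1 + p1                    # completion on machine 1
            nt2 = max(nt1, t2) + p2          # completion on machine 2
            recurse(remaining - {j}, seq + [j], nt1, nt2,
                    cost + w * max(0, nt2 - due))

    recurse(frozenset(range(n)), [], 0, 0, 0)
    return best["cost"], best["seq"]
```

On a three-order instance the pruning already skips most of the 3! permutations once a cheap incumbent is found.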
Generating Hierarchical Document Indices from Common Denominators in Large Document Collections.
ERIC Educational Resources Information Center
O'Kane, Kevin C.
1996-01-01
Describes an algorithm for computer generation of hierarchical indexes for document collections. The resulting index, when presented with a graphical interface, provides users with a view of a document collection that permits general browsing and informal search activities via an access method that requires no keyboard entry or prior knowledge of…
Generation, annotation and analysis of ESTs from Trichoderma harzianum CECT 2413
Vizcaíno, Juan Antonio; González, Francisco Javier; Suárez, M Belén; Redondo, José; Heinrich, Julian; Delgado-Jarana, Jesús; Hermosa, Rosa; Gutiérrez, Santiago; Monte, Enrique; Llobell, Antonio; Rey, Manuel
2006-01-01
Background The filamentous fungus Trichoderma harzianum is used as biological control agent of several plant-pathogenic fungi. In order to study the genome of this fungus, a functional genomics project called "TrichoEST" was developed to give insights into genes involved in biological control activities using an approach based on the generation of expressed sequence tags (ESTs). Results Eight different cDNA libraries from T. harzianum strain CECT 2413 were constructed. Different growth conditions involving mainly different nutrient conditions and/or stresses were used. We here present the analysis of the 8,710 ESTs generated. A total of 3,478 unique sequences were identified of which 81.4% had sequence similarity with GenBank entries, using the BLASTX algorithm. Using the Gene Ontology hierarchy, we performed the annotation of 51.1% of the unique sequences and compared its distribution among the gene libraries. Additionally, the InterProScan algorithm was used in order to further characterize the sequences. The identification of the putatively secreted proteins was also carried out. Later, based on the EST abundance, we examined the highly expressed genes and a hydrophobin was identified as the gene expressed at the highest level. We compared our collection of ESTs with the previous collections obtained from Trichoderma species and we also compared our sequence set with different complete eukaryotic genomes from several animals, plants and fungi. Accordingly, the presence of similar sequences in different kingdoms was also studied. Conclusion This EST collection and its annotation provide a significant resource for basic and applied research on T. harzianum, a fungus with a high biotechnological interest. PMID:16872539
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
An open experimental database for exploring inorganic materials
Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; ...
2018-04-03
The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials by browsing a web-based user interface and using an application programming interface. This paper also describes the HTE approach to generating materials data and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.
Investigation of rat exploratory behavior via evolving artificial neural networks.
Costa, Ariadne de Andrade; Tinós, Renato
2016-09-01
Neuroevolution comprises the use of evolutionary computation to define the architecture and/or to train artificial neural networks (ANNs). This strategy has been employed to investigate the behavior of rats in the elevated plus-maze, a widely used tool for studying anxiety in mice and rats. Here we propose a neuroevolutionary model in which both the weights and the architecture of artificial neural networks (our virtual rats) are evolved by a genetic algorithm. This model improves on a previous model in which only the weights of the ANN were evolved by the genetic algorithm. To compare both models, we analyzed traditional measures of anxiety behavior, such as the time spent in and the number of entries into the open and closed arms of the maze. When compared to real rat data, our findings suggest that the results from the model introduced here are statistically better than those from other models in the literature; the neuroevolution of architecture is thus clearly important for the development of the virtual rats. Moreover, this technique allowed us to assess the importance of different sensory units and different numbers of hidden neurons (acting as memory) in the ANNs (virtual rats). Copyright © 2016 Elsevier B.V. All rights reserved.
Effect of closed-loop order processing on the time to initial antimicrobial therapy.
Panosh, Nicole; Rew, Richardd; Sharpe, Michelle
2012-08-15
The results of a study comparing the average time to initiation of i.v. antimicrobial therapy with closed- versus open-loop order entry and processing are reported. A retrospective cohort study was performed to compare order-to-administration times for initial doses of i.v. antimicrobials before and after a closed-loop order-processing system including computerized prescriber order entry (CPOE) was implemented at a large medical center. A total of 741 i.v. antimicrobial administrations to adult patients during designated five-month preimplementation and postimplementation study periods were assessed. Drug-use reports generated by the pharmacy database were used to identify order-entry times, and medication administration records were reviewed to determine times of i.v. antimicrobial administration. The mean ± S.D. order-to-administration times before and after the implementation of the CPOE system and closed-loop order processing were 3.18 ± 2.60 and 2.00 ± 1.89 hours, respectively, a reduction of 1.18 hours (p < 0.0001). Closed-loop order processing was associated with significant reductions in the average time to initiation of i.v. therapy in all patient care areas evaluated (cardiology, general medicine, and oncology). The study results suggest that CPOE-based closed-loop order processing can play an important role in achieving compliance with current practice guidelines calling for increased efforts to ensure the prompt initiation of i.v. antimicrobials for severe infections (e.g., sepsis, meningitis). Implementation of a closed-loop order-processing system resulted in a significant decrease in order-to-administration times for i.v. antimicrobial therapy.
Smith, Matthew; Triulzi, Darrell J; Yazer, Mark H; Rollins-Raval, Marian A; Waters, Jonathan H; Raval, Jay S
2014-12-01
Prescriber adherence to institutional blood component ordering guidelines can be low. The goal of this study was to decrease red blood cell (RBC) and plasma orders that did not meet institutional transfusion guidelines by using data within the laboratory information system to trigger alerts in the computerized order entry (CPOE) system at the time of order entry. At 10 hospitals within a regional health care system, discernment rules were created for RBC and plasma orders utilizing transfusion triggers of hemoglobin <8 gm/dl and INR >1.6, respectively, with subsequent alert generation that appears within the CPOE system when a prescriber attempts to order RBCs or plasma on a patient whose antecedent laboratory values do not suggest that a transfusion is indicated. Orders and subsequent alerts were tracked for RBCs and plasma over evaluation periods of 15 and 10 months, respectively, along with the hospital credentials of the ordering health care providers (physician or nurse). The proportion of triggered alerts that were heeded remained steady, averaging 11.3% for RBCs and 19.6% for plasma over the evaluation periods. Overall, nurses and physicians canceled statistically identical percentages of alerted RBC (10.9% vs. 11.5%; p = 0.78) and plasma (21.3% vs. 18.7%; p = 0.22) orders. Implementing a simple evidence-based transfusion alert system at the time of order entry decreased non-evidence based transfusion orders by both nurse and physician providers. Copyright © 2014 Elsevier Ltd. All rights reserved.
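The discernment rule described above reduces to a simple predicate evaluated at order entry. A minimal sketch using the study's thresholds (hemoglobin < 8 gm/dl for RBCs, INR > 1.6 for plasma) follows; the field names and function are invented for illustration, not taken from the study's CPOE system.

```python
def transfusion_alert(component, labs):
    """Return True if an alert should fire, i.e. the most recent lab value
    does not support transfusion. A missing lab also fires the alert,
    since there is no evidence the transfusion is indicated (an assumption)."""
    if component == "RBC":
        # Alert unless hemoglobin is below the 8 gm/dl trigger.
        return not (labs.get("hemoglobin", float("inf")) < 8.0)
    if component == "plasma":
        # Alert unless INR is above the 1.6 trigger.
        return not (labs.get("INR", 0.0) > 1.6)
    return False
```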
1984-06-01
preceding the corresponding pressure group of the surface thermochemistry deck as described below. The temperature entries within each section must be... pressure group the transfer coefficient values will be ordered. Within each transfer coefficient section, ablation rate entries need not be ordered in any...may not exceed 5 (and may be only 1); the number of transfer coefficient values in each pressure group may not exceed 5 but may be only 1. If no
Flowfield computation of entry vehicles
NASA Technical Reports Server (NTRS)
Prabhu, Dinesh K.
1990-01-01
The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
PDB_TM: selection and membrane localization of transmembrane proteins in the protein data bank.
Tusnády, Gábor E; Dosztányi, Zsuzsanna; Simon, István
2005-01-01
PDB_TM is a database for transmembrane proteins with known structures. It aims to collect all transmembrane proteins that are deposited in the protein structure database (PDB) and to determine their membrane-spanning regions. These assignments are based on the TMDET algorithm, which uses only structural information to locate the most likely position of the lipid bilayer and to distinguish between transmembrane and globular proteins. This algorithm was applied to all PDB entries, and the results were collected in the PDB_TM database. By using the TMDET algorithm, the PDB_TM database can be automatically updated every week, keeping it synchronized with the latest PDB updates. The PDB_TM database is available at http://www.enzim.hu/PDB_TM.
Inhibition of Dengue Virus Entry into Target Cells Using Synthetic Antiviral Peptides
Alhoot, Mohammed Abdelfatah; Rathinam, Alwin Kumar; Wang, Seok Mui; Manikam, Rishya; Sekaran, Shamala Devi
2013-01-01
Despite the importance of DENV as a human pathogen, there is no specific treatment or protective vaccine. Successful entry into the host cells is necessary for establishing the infection. Recently, the virus entry step has become an attractive therapeutic strategy because it represents a barrier to suppress the onset of the infection. Four putative antiviral peptides were designed to target domain III of the DENV-2 E protein using the BioMoDroid algorithm. Two peptides showed significant inhibition of DENV when simultaneously incubated with the virus, as shown by plaque formation assay, RT-qPCR, and Western blot analysis. Both DET4 and DET2 showed significant inhibition of virus entry (84.6% and 40.6%, respectively) at micromolar concentrations. Furthermore, the TEM images showed that the inhibitory peptides caused structural abnormalities and alteration of the arrangement of the viral E protein, which interferes with virus binding and entry. Inhibition of DENV entry during the initial stages of infection can potentially reduce the viremia in infected humans, preventing the progression of dengue fever to the severe life-threatening infection, reducing the number of infected vectors, and thus breaking the transmission cycle. Moreover, these peptides, though designed against the conserved region in DENV-2, have the potential to be active against all dengue serotypes and might be considered hits from which to design and develop more potent analogous peptides that could serve as promising therapeutic agents for attenuating dengue infection. PMID:23630436
Chiu, Shih-Hau; Chen, Chien-Chi; Yuan, Gwo-Fang; Lin, Thy-Hou
2006-06-15
The number of sequences compiled in many genome projects is growing exponentially, but most of them have not been characterized experimentally. An automatic annotation scheme is urgently needed to reduce the gap between the amount of new sequences produced and reliable functional annotation. This work proposes rules for automatically classifying fungus genes. The approach involves elucidating the enzyme-classification rules hidden in the UniProt protein knowledgebase and then applying them for classification. The association algorithm Apriori is utilized to mine the relationship between the enzyme class and significant InterPro entries, and the candidate rules are evaluated for their classificatory capacity. Five datasets were collected from Swiss-Prot to establish the annotation rules; these were treated as the training sets, and the TrEMBL entries were treated as the testing set. A correct enzyme classification rate of 70% was obtained for the prokaryote datasets, and a similar rate of about 80% was obtained for the eukaryote datasets. The fungus training dataset, which lacks enzyme class descriptions, was also used to evaluate the fungus candidate rules. A total of 88 out of 5,085 test entries matched the fungus rule set; these were otherwise poorly annotated by their functional descriptions. The feasibility of using the method presented here to classify enzyme classes based on the enzyme domain rules is evident. The rules may also be employed by protein annotators in manual annotation or implemented in an automatic annotation flowchart.
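The rule-mining step can be sketched for the simplest case of single-antecedent rules "InterPro entry → enzyme class" (a full Apriori implementation would also grow multi-entry antecedents). The support and confidence thresholds below are illustrative, not the paper's values.

```python
from collections import Counter

def mine_rules(records, min_support=2, min_conf=0.8):
    """Mine one-antecedent association rules 'InterPro entry -> EC class'
    from (interpro_entries, ec_class) records. Minimal Apriori-style sketch."""
    entry_count = Counter()   # support of each InterPro entry
    pair_count = Counter()    # support of (entry, EC class) pairs
    for entries, ec in records:
        for e in set(entries):            # count each entry once per record
            entry_count[e] += 1
            pair_count[(e, ec)] += 1
    rules = {}
    for (e, ec), n in pair_count.items():
        conf = n / entry_count[e]
        if n >= min_support and conf >= min_conf:
            rules[e] = (ec, conf)         # keep rules meeting both thresholds
    return rules
```

An entry seen three times, always with the same class, yields a confidence-1.0 rule, while entries split between classes fall below the thresholds.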
Code of Federal Regulations, 2011 CFR
2011-01-01
... successive re-delegation, the terms mean, to the extent that authority has been delegated to such official... having changed. Such status terminates upon entry of a final administrative order of exclusion... come into the United States at a port-of-entry, or an alien seeking transit through the United States...
Optimal Use of Available Claims to Identify a Medicare Population Free of Coronary Heart Disease
Kent, Shia T.; Safford, Monika M.; Zhao, Hong; Levitan, Emily B.; Curtis, Jeffrey R.; Kilpatrick, Ryan D.; Kilgore, Meredith L.; Muntner, Paul
2015-01-01
We examined claims-based approaches for identifying a study population free of coronary heart disease (CHD) using data from 8,937 US blacks and whites enrolled during 2003–2007 in a prospective cohort study linked to Medicare claims. Our goal was to minimize the percentage of persons at study entry with self-reported CHD (previous myocardial infarction or coronary revascularization). We assembled 6 cohorts without CHD claims by requiring 6 months, 1 year, or 2 years of continuous Medicare fee-for-service insurance coverage prior to study entry and using either a fixed-window or all-available look-back period. We examined adding CHD-related claims to our “base algorithm,” which included claims for myocardial infarction and coronary revascularization. Using a 6-month fixed-window look-back period, 17.8% of participants without claims in the base algorithm reported having CHD. This was reduced to 3.6% using an all-available look-back period and adding other CHD claims to the base algorithm. Among cohorts using all-available look-back periods, increasing the length of continuous coverage from 6 months to 1 or 2 years reduced the sample size available without lowering the percentage of persons with self-reported CHD. This analysis demonstrates approaches for developing a CHD-free cohort using Medicare claims. PMID:26443420
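The look-back logic evaluated above reduces to a small predicate: a participant is claims-free of CHD if no qualifying claim falls between the start of the look-back period (fixed-window or all-available) and study entry. The sketch below uses invented field names for illustration; it is not the study's actual claims-processing code.

```python
from datetime import date, timedelta

def chd_free(claims, study_entry, lookback_days=None):
    """True if no CHD-related claim occurs in the look-back window before
    study entry. lookback_days=None models an 'all-available' look-back;
    a number models a fixed-window look-back (e.g. 180 for 6 months)."""
    start = (date.min if lookback_days is None
             else study_entry - timedelta(days=lookback_days))
    return not any(start <= c["date"] < study_entry and c["chd"]
                   for c in claims)
```

A claim from years before study entry is caught by the all-available look-back but missed by a 6-month fixed window, which is the effect the study quantifies.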
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
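The "few dictionary entries explain a mixed voxel" idea can be illustrated with a plain l1-regularized least-squares solver (ISTA) as a stand-in for the paper's Bayesian MAP estimator; this is an assumed simplification (a Laplace-like sparsity penalty rather than the paper's priors and hyperpriors), with invented names.

```python
import numpy as np

def sparse_components(D, s, lam=0.1, iters=500):
    """Select dictionary columns contributing to a mixed signal s by solving
    min_x ||D x - s||^2 / 2 + lam * ||x||_1 with ISTA. Illustrative sketch,
    not the paper's MAP algorithm."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - s)              # gradient of the quadratic term
        z = x - g / L                      # gradient step
        # soft-threshold: zeroes out entries that contribute too little
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With an orthonormal dictionary the solver reduces to soft-thresholding the correlations, so the surviving nonzero entries are exactly the dominant components.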
Optimizing the Learning Order of Chinese Characters Using a Novel Topological Sort Algorithm
Wang, Jinzhao
2016-01-01
We present a novel algorithm for optimizing the order in which Chinese characters are learned, one that incorporates the benefits of learning them in order of usage frequency and in order of their hierarchal structural relationships. We show that our work outperforms previously published orders and algorithms. Our algorithm is applicable to any scheduling task where nodes have intrinsic differences in importance and must be visited in topological order. PMID:27706234
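A frequency-prioritized topological sort of this kind can be sketched with Kahn's algorithm driven by a max-heap on usage frequency: structural components always precede the characters that contain them, and among the currently learnable characters the most frequent is scheduled first. This is an illustration of the general scheduling task the abstract describes, not the paper's exact algorithm.

```python
import heapq

def learning_order(freq, parts):
    """Order characters so each appears after its components, preferring
    high usage frequency among currently available characters (sketch)."""
    dependents = {c: [] for c in freq}
    indegree = {c: 0 for c in freq}
    for ch, comps in parts.items():        # edge component -> compound
        for comp in comps:
            dependents[comp].append(ch)
            indegree[ch] += 1
    # max-heap on frequency via negated keys
    heap = [(-freq[c], c) for c in freq if indegree[c] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        _, c = heapq.heappop(heap)
        order.append(c)
        for d in dependents[c]:
            indegree[d] -= 1
            if indegree[d] == 0:
                heapq.heappush(heap, (-freq[d], d))
    return order
```

For example, 休 (person + tree) can only be scheduled after both 人 and 木, even though 休 is more frequent than 木.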
Siebeneck, Laura K; Cova, Thomas J
2012-09-01
Developing effective evacuation and return-entry plans requires understanding the spatial and temporal dimensions of risk perception experienced by evacuees throughout a disaster event. Using data gathered from the 2008 Cedar Rapids, Iowa Flood, this article explores how risk perception and location influence evacuee behavior during the evacuation and return-entry process. Three themes are discussed: (1) the spatial and temporal characteristics of risk perception throughout the evacuation and return-entry process, (2) the relationship between risk perception and household compliance with return-entry orders, and (3) the role social influences have on the timing of the return by households. The results indicate that geographic location and spatial variation of risk influenced household risk perception and compliance with return-entry plans. In addition, sociodemographic characteristics influenced the timing and characteristics of the return groups. The findings of this study advance knowledge of evacuee behavior throughout a disaster and can inform strategies used by emergency managers throughout the evacuation and return-entry process. © 2012 Society for Risk Analysis.
A programmable rules engine to provide clinical decision support using HTML forms.
Heusinkveld, J.; Geissbuhler, A.; Sheshelidze, D.; Miller, R.
1999-01-01
The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser. PMID:10566470
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better-quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
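The filtering idea in the abstract above can be sketched in a few lines: a plain gradient iteration on the quadratic data term, with a smoothing filter blended into each iterate. This is only an illustrative toy (the function name, the moving-average kernel, and the blending weight `alpha` are assumptions for the sketch, not the paper's CSI-specific filter):

```python
import numpy as np

def filtered_gradient_recon(Phi, y, n_iter=300, step=0.05, alpha=0.9):
    """Toy filtered gradient reconstruction: gradient descent on
    ||Phi x - y||^2, blending a smoothed copy into each iterate."""
    x = Phi.T @ y                        # back-projected initial estimate
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)     # gradient of the quadratic term
        x = x - step * grad
        smoothed = np.convolve(x, kernel, mode="same")
        x = alpha * x + (1 - alpha) * smoothed   # filtering step
    return x
```

In the paper the filter is designed around the structure of Φ; the generic moving-average kernel here only stands in for that step.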
Onboard Determination of Vehicle Glide Capability for Shuttle Abort Flight Management (SAFM)
NASA Technical Reports Server (NTRS)
Straube, Timothy; Jackson, Mark; Fill, Thomas; Nemeth, Scott
2002-01-01
When one or more main engines fail during ascent, the flight crew of the Space Shuttle must make several critical decisions and accurately perform a series of abort procedures. One of the most important decisions for many aborts is the selection of a landing site. Several factors influence the ability to reach a landing site, including the spacecraft point of atmospheric entry, the energy state at atmospheric entry, the vehicle glide capability from that energy state, and whether one or more suitable landing sites are within the glide capability. Energy assessment is further complicated by the fact that phugoid oscillations in total energy influence glide capability. Once the glide capability is known, the crew must select the "best" site option based upon glide capability and landing site conditions and facilities. Since most of these factors cannot currently be assessed by the crew in flight, extensive planning is required prior to each mission to script a variety of procedures based upon spacecraft velocity at the point of engine failure (or failures). The results of this preflight planning are expressed in tables and diagrams on mission-specific cockpit checklists. Crew checklist procedures involve leafing through several pages of instructions and navigating a decision tree for site selection and flight procedures - all during a time critical abort situation. With the advent of the Cockpit Avionics Upgrade (CAU), the Shuttle will have increased on-board computational power to help alleviate crew workload during aborts and provide valuable situational awareness during nominal operations. One application baselined for the CAU computers is Shuttle Abort Flight Management (SAFM), whose requirements have been designed and prototyped. The SAFM application includes powered and glided flight algorithms.
This paper describes the glided flight algorithm, which is dispatched by SAFM to determine the vehicle glide capability and make recommendations to the crew for site selection, as well as to monitor glide capability while en route to the selected site. Background is provided on Shuttle entry guidance as well as the various types of Shuttle aborts. SAFM entry requirements and cockpit displays are discussed briefly to provide background for Glided Flight algorithm design considerations. The central principle of the Glided Flight algorithm is the use of energy-over-weight (EOW) curves to determine range and crossrange boundaries. The major challenges of this technique are exo-atmospheric flight and phugoid oscillations in energy. During exo-atmospheric flight, energy is constant, so vehicle EOW is not sufficient to determine glide capability. The paper describes how the exo-atmospheric problem is solved by propagating the vehicle state to an "atmospheric pullout" state defined by Shuttle guidance parameters.
Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture
ERIC Educational Resources Information Center
West, Greg L.; Anderson, Adam A. K.; Pratt, Jay
2009-01-01
Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-23
... Proposed Rule Change To Eliminate the Validated Cross Trade Entry Functionality December 16, 2010. Pursuant... eliminate the Validated Cross Trade Entry Functionality for Exchange-registered Institutional Brokers. The... Brokers (``Institutional Brokers'') by eliminating the ability of an Institutional Broker to execute...
19 CFR 141.68 - Time of entry.
Code of Federal Regulations, 2014 CFR
2014-04-01
... (pursuant to § 24.25 of this chapter) have been successfully received by CBP via the Automated Broker... from warehouse for consumption. The time of entry of merchandise withdrawn from warehouse for... the order of the warehouse proprietor) is when: (1) CBP Form 7501 is executed in proper form and filed...
19 CFR 141.68 - Time of entry.
Code of Federal Regulations, 2010 CFR
2010-04-01
... (pursuant to § 24.25 of this chapter) have been successfully received by CBP via the Automated Broker... from warehouse for consumption. The time of entry of merchandise withdrawn from warehouse for... the order of the warehouse proprietor) is when: (1) CBP Form 7501 is executed in proper form and filed...
19 CFR 141.68 - Time of entry.
Code of Federal Regulations, 2012 CFR
2012-04-01
... (pursuant to § 24.25 of this chapter) have been successfully received by CBP via the Automated Broker... from warehouse for consumption. The time of entry of merchandise withdrawn from warehouse for... the order of the warehouse proprietor) is when: (1) CBP Form 7501 is executed in proper form and filed...
19 CFR 141.68 - Time of entry.
Code of Federal Regulations, 2011 CFR
2011-04-01
... (pursuant to § 24.25 of this chapter) have been successfully received by CBP via the Automated Broker... from warehouse for consumption. The time of entry of merchandise withdrawn from warehouse for... the order of the warehouse proprietor) is when: (1) CBP Form 7501 is executed in proper form and filed...
19 CFR 141.68 - Time of entry.
Code of Federal Regulations, 2013 CFR
2013-04-01
... (pursuant to § 24.25 of this chapter) have been successfully received by CBP via the Automated Broker... from warehouse for consumption. The time of entry of merchandise withdrawn from warehouse for... the order of the warehouse proprietor) is when: (1) CBP Form 7501 is executed in proper form and filed...
An experimental SMI adaptive antenna array simulator for weak interfering signals
NASA Technical Reports Server (NTRS)
Dilsavor, Ronald S.; Gupta, Inder J.
1991-01-01
An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results. The two show good agreement.
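The covariance modification described in this abstract is essentially a one-line change to a standard SMI beamformer. A minimal numpy sketch (the snapshot layout, the `fraction` parameter, and the normalization are illustrative assumptions, not the experimental system's exact processing):

```python
import numpy as np

def modified_smi_weights(X, steering, fraction=0.9):
    """Modified-SMI weights: subtract a fraction of the smallest
    eigenvalue of the sample covariance from its diagonal entries,
    then solve for the usual SMI weight vector R^-1 s.
    X: complex snapshots, shape (num_snapshots, num_elements)."""
    R = (X.conj().T @ X) / X.shape[0]          # sample covariance matrix
    lam_min = np.linalg.eigvalsh(R)[0]         # smallest eigenvalue
    R_mod = R - fraction * lam_min * np.eye(R.shape[0])
    w = np.linalg.solve(R_mod, steering)
    return w / (steering.conj() @ w)           # unit response toward steering
```

Keeping `fraction` below 1 leaves R_mod positive definite, since each eigenvalue λ_i becomes λ_i − fraction·λ_min ≥ (1 − fraction)·λ_min > 0.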
Algorithmic tools for interpreting vital signs.
Rathbun, Melina C; Ruth-Sahd, Lisa A
2009-07-01
Today's complex world of nursing practice challenges nurse educators to develop teaching methods that promote critical thinking skills and foster quick problem solving in the novice nurse. Traditional pedagogies previously used in the classroom and clinical setting are no longer adequate to prepare nursing students for entry into practice. In addition, educators have expressed frustration when encouraging students to apply newly learned theoretical content to direct the care of assigned patients in the clinical setting. This article presents algorithms as an innovative teaching strategy to guide novice student nurses in the interpretation and decision making related to vital sign assessment in an acute care setting.
Computing sparse derivatives and consecutive zeros problem
NASA Astrophysics Data System (ADS)
Chandra, B. V. Ravi; Hossain, Shahadat
2013-02-01
We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor type heuristic algorithms is highly encouraging.
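The column-reordering step can be illustrated with a greedy nearest-neighbor heuristic, the simplest of the TSP-style heuristics the abstract mentions. A small sketch (the 0/1 column representation and Hamming-distance objective are illustrative simplifications of the paper's cost):

```python
def nearest_neighbor_order(columns):
    """Greedy nearest-neighbor ordering of 0/1 column patterns:
    repeatedly append the unvisited column closest (in Hamming
    distance) to the last chosen one, tending to place similar
    columns, and hence their zero entries, consecutively."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    order = [0]                              # start from the first column
    remaining = set(range(1, len(columns)))
    while remaining:
        last = columns[order[-1]]
        nxt = min(remaining, key=lambda j: hamming(columns[j], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Like nearest-neighbor for TSP proper, this gives a good but not optimal tour of the columns in O(n^2) distance evaluations.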
A computational intelligence approach to the Mars Precision Landing problem
NASA Astrophysics Data System (ADS)
Birge, Brian Kent, III
Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars Precision Landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by gravity turn to hover and then thrust vector to desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms.
An improvement over the regular PSO algorithm, allowing tracking of nonstationary error functions is detailed. Continued refinement of PSO in the larger research community comes from attempts to understand human-human social interaction as well as analysis of the emergent behavior. Using PSO and the parafoil scenario, optimized reference trajectories are created for an initial condition set of 76 states, representing the convex hull of 2001 states from an early Monte Carlo analysis. The controls are a set series of bank angles followed by a set series of 3DOF thrust vectoring. The reference trajectories are used to train an Artificial Neural Network Reference Trajectory Generator (ANNTraG), with the (marginal) ability to generalize a trajectory from initial conditions it has never been presented. The controls here allow continuous change in bank angle as well as thrust vector. The optimized reference trajectories represent the best achievable trajectory given the initial condition. Steps toward a closed loop neural controller with online learning updates are examined. The inner loop of the simulation employs the Program to Optimize Simulated Trajectories (POST) as the basic model, containing baseline dynamics and state generation. This is controlled from a MATLAB shell that directs the optimization, learning, and control strategy. Using mainly bank angle guidance coupled with CI strategies, the set of achievable reference trajectories are shown to be 88% under 10 meters, a significant improvement in the state of the art. Further, the automatic real-time generation of realistic reference trajectories in the presence of unknown initial conditions is shown to have promise. The closed loop CI guidance strategy is outlined. An unexpected advance came from the effort to optimize the optimization, where the PSO algorithm was improved with the capability for tracking a changing error environment.
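For readers unfamiliar with PSO, the baseline algorithm underlying the improvements described above is compact. A minimal textbook sketch (the inertia weight and acceleration constants are generic defaults, not the tuned values used for the trajectory work, and the nonstationary-tracking extension is omitted):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Basic Particle Swarm Optimization: each particle is pulled
    toward its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The nonstationary-error extension discussed in the abstract amounts to periodically re-evaluating and resetting the stored bests so the swarm can follow a moving optimum.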
An algorithm to count the number of repeated patient data entries with B tree.
Okada, M; Okada, M
1985-04-01
An algorithm to obtain the number of different values that appear a specified number of times in a given data field of a given data file is presented. Basically, a well-known B-tree structure is employed in this study. Some modifications were made to the basic B-tree algorithm. The first step of the modifications is to allow a data item whose values are not necessarily distinct from one record to another to be used as a primary key. When a key value is inserted, the number of previous appearances is counted. At the end of all the insertions, the number of key values which are unique in the tree, the number of key values which appear twice, three times, and so forth are obtained. This algorithm is especially powerful for large files in disk storage.
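In modern scripting terms the same two-level count is a pair of hash-map passes; a short in-memory sketch (the paper's B-tree is what makes the idea work for disk-resident files too large for memory):

```python
from collections import Counter

def repetition_histogram(values):
    """How many distinct values appear exactly k times, for each k:
    first count appearances per value, then count values per
    appearance count."""
    per_value = Counter(values)            # value -> number of appearances
    return Counter(per_value.values())     # appearances -> number of values
```

For example, `repetition_histogram(["a", "b", "a", "c", "a", "b"])` yields `{3: 1, 2: 1, 1: 1}`: one value appears three times, one twice, and one is unique.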
Fast Inference with Min-Sum Matrix Product.
Felzenszwalb, Pedro F; McAuley, Julian J
2011-12-01
The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
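The operation itself is simple to state; a direct O(n^3) reference implementation for comparison (the paper's contribution is the faster expected-time algorithm, not this definition):

```python
def min_sum_product(A, B):
    """Min-sum ("tropical") matrix product:
    C[i][j] = min over k of A[i][k] + B[k][j].
    Direct triple loop, O(n^3) for n x n inputs."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]
```

Replacing (min, +) by (sum, product) recovers the ordinary matrix product, which is why message-passing steps in MAP inference reduce to this operation.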
Ada Quality and Style: Guidelines for Professional Programmers
1991-01-01
occurred because entry queues are serviced in FIFO order, not by priority. There is another situation referred to as a race condition. A program like the...the value of 'COUNT. A task can be removed from an entry queue due to execution of an abort statement as well as expiration of a timed entry call. The...is not defined by the language and may vary from time sliced to preemptive priority. Some implementations (e.g., VAX Ada) provide several choices
The role of laboratory in ensuring appropriate test requests.
Ferraro, Simona; Panteghini, Mauro
2017-07-01
This review highlights the role of laboratory professionals and the strategies to be promoted, in strict cooperation with clinicians, for auditing, monitoring and improving the appropriateness of test requests. The introduction of local pathways and care maps in agreement with international and national guidelines, as well as the implementation of reflex testing and algorithms, has a central role in guiding test requests and in correcting the overuse/misuse of tests. Furthermore, removing obsolete tests from the laboratory menu and vetting restricted tests is recommended to increase cost-effectiveness. This saves costs and permits the introduction of new biomarkers with increased diagnostic accuracy and a better impact on patient outcome. An additional issue concerns the periodicity of (re)testing, given that only a minority of tests may be ordered as often as necessary. In the majority of cases, a minimum retesting interval should be introduced. The availability of effective computerised order entry systems is relevant in ensuring appropriate test requests and in providing an aid through automated rules that may stop inappropriate requests before they reach the laboratory. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
An Automated Method to Generate e-Learning Quizzes from Online Language Learner Writing
ERIC Educational Resources Information Center
Flanagan, Brendan; Yin, Chengjiu; Hirokawa, Sachio; Hashimoto, Kiyota; Tabata, Yoshiyuki
2013-01-01
In this paper, the entries of Lang-8, which is a Social Networking Site (SNS) site for learning and practicing foreign languages, were analyzed and found to contain similar rates of errors for most error categories reported in previous research. These similarly rated errors were then processed using an algorithm to determine corrections suggested…
ERIC Educational Resources Information Center
Dobbs, David E.
2012-01-01
This note explains how Emil Artin's proof that row rank equals column rank for a matrix with entries in a field leads naturally to the formula for the nullity of a matrix and also to an algorithm for solving any system of linear equations in any number of variables. This material could be used in any course on matrix theory or linear algebra.
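The note's conclusion is the rank-nullity relation: for an m × n matrix, nullity = n − rank, with row rank equal to column rank. A small numerical check (using numpy's rank computation rather than Artin's row-reduction argument):

```python
import numpy as np

def rank_and_nullity(A):
    """Return (rank, nullity) of a matrix; nullity = #columns - rank."""
    A = np.asarray(A, dtype=float)
    rank = np.linalg.matrix_rank(A)
    return rank, A.shape[1] - rank
```

Since rank(A) = rank(A^T), the same call on the transpose confirms that row rank equals column rank for any example matrix.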
Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns
2013-01-01
Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by a factor of up to 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method.
This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide RaligNAtor, a robust and well-documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries.
Joosten, Robbie P; Womack, Thomas; Vriend, Gert; Bricogne, Gérard
2009-02-01
The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
Transactional Database Transformation and Its Application in Prioritizing Human Disease Genes
Xiang, Yang; Payne, Philip R.O.; Huang, Kun
2013-01-01
Binary (0,1) matrices, commonly known as transactional databases, can represent many kinds of application data, including gene-phenotype data where “1” represents a confirmed gene-phenotype relation and “0” represents an unknown relation. It is natural to ask what information is hidden behind these “0”s and “1”s. Unfortunately, recent matrix completion methods, though very effective in many cases, are less likely to infer something interesting from these (0,1)-matrices. To answer this challenge, we propose IndEvi, a very succinct and effective algorithm to perform independent-evidence-based transactional database transformation. Each entry of a (0,1)-matrix is evaluated by “independent evidence” (maximal supporting patterns) extracted from the whole matrix for this entry. The value of an entry, whether 0 or 1, has no effect on its independent evidence. The experiment on a gene-phenotype database shows that our method is highly promising in ranking candidate genes and predicting unknown disease genes. PMID:21422495
Physician Order Entry Clerical Support Improves Physician Satisfaction and Productivity.
Contratto, Erin; Romp, Katherine; Estrada, Carlos A; Agne, April; Willett, Lisa L
2017-05-01
To examine the impact of clerical support personnel for physician order entry on physician satisfaction, productivity, timeliness with electronic health record (EHR) documentation, and physician attitudes. All seven part-time physicians at an academic general internal medicine practice were included in this quasi-experimental (single group, pre- and postintervention) mixed-methods study. One full-time clerical support staff member was trained and hired to enter physician orders in the EHR and conduct previsit planning. Physician satisfaction, productivity, timeliness with EHR documentation, and physician attitudes toward the intervention were measured. Four months after the intervention, physicians reported improvements in overall quality of life (good quality, 71%-100%), personal balance (43%-71%), and burnout (weekly, 43%-14%; callousness, 14%-0%). Matched for quarter, productivity increased: work relative value unit (wRVU) per session increased by 20.5% (before, April-June 2014; after, April-June 2015; range -9.2% to 27.5%). Physicians reported feeling more supported, more focused on patient care, and less stressed and fatigued after the intervention. This study supports the use of physician order entry clerical personnel as a simple, cost-effective intervention to improve the work lives of primary care physicians.
Journal of Human Services Abstracts. Volume 3, Number 3.
ERIC Educational Resources Information Center
Department of Health, Education, and Welfare, Washington, DC. Project Share.
This index, containing 450 abstracts on human services, is published quarterly to make available a broad range of documents to those responsible for the planning, management, and delivery of human services. The entries are arranged alphabetically by title and indexed by subject matter. Each entry includes the title, order number, source, price,…
31 CFR 357.20 - Securities account in Legacy Treasury Direct ®.
Code of Federal Regulations, 2011 CFR
2011-07-01
... number. (c) If a bill is transferred from one Legacy Treasury Direct account to another, the price shown...-ENTRY TREASURY BONDS, NOTES AND BILLS HELD IN TREASURY/RESERVE AUTOMATED DEBT ENTRY SYSTEM (TRADES) AND... securities portfolio associated with an account master record. (c) Account master record. In order for a...
31 CFR 357.20 - Securities account in Legacy Treasury Direct ®.
Code of Federal Regulations, 2014 CFR
2014-07-01
... number. (c) If a bill is transferred from one Legacy Treasury Direct account to another, the price shown... BOOK-ENTRY TREASURY BONDS, NOTES AND BILLS HELD IN TREASURY/RESERVE AUTOMATED DEBT ENTRY SYSTEM (TRADES... the securities portfolio associated with an account master record. (c) Account master record. In order...
31 CFR 357.20 - Securities account in Legacy Treasury Direct ®.
Code of Federal Regulations, 2013 CFR
2013-07-01
... number. (c) If a bill is transferred from one Legacy Treasury Direct account to another, the price shown...-ENTRY TREASURY BONDS, NOTES AND BILLS HELD IN TREASURY/RESERVE AUTOMATED DEBT ENTRY SYSTEM (TRADES) AND... securities portfolio associated with an account master record. (c) Account master record. In order for a...
31 CFR 357.20 - Securities account in Legacy Treasury Direct ®.
Code of Federal Regulations, 2012 CFR
2012-07-01
... number. (c) If a bill is transferred from one Legacy Treasury Direct account to another, the price shown...-ENTRY TREASURY BONDS, NOTES AND BILLS HELD IN TREASURY/RESERVE AUTOMATED DEBT ENTRY SYSTEM (TRADES) AND... securities portfolio associated with an account master record. (c) Account master record. In order for a...
9 CFR 93.424 - Import permits and applications for inspection of ruminants.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the veterinary inspector at the port of entry an application, in writing, for inspection, so that the veterinary inspector and customs representatives may make mutually satisfactory arrangements for the orderly... as required in § 93.427(d) shall be presented to the veterinary inspector at the port of entry when...
9 CFR 93.424 - Import permits and applications for inspection of ruminants.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the veterinary inspector at the port of entry an application, in writing, for inspection, so that the veterinary inspector and customs representatives may make mutually satisfactory arrangements for the orderly... as required in § 93.427(d) shall be presented to the veterinary inspector at the port of entry when...
Advances in Procedural Techniques - Antegrade
Wilson, William; Spratt, James C.
2014-01-01
There have been many technological advances in antegrade CTO PCI, but perhaps most important has been the evolution of the “hybrid” approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited “interventional” collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use and procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
Ablation and Chemical Alteration of Cosmic Dust Particles during Entry into the Earth’s Atmosphere
NASA Astrophysics Data System (ADS)
Rudraswami, N. G.; Shyam Prasad, M.; Dey, S.; Plane, J. M. C.; Feng, W.; Carrillo-Sánchez, J. D.; Fernandes, D.
2016-12-01
Most dust-sized cosmic particles undergo ablation and chemical alteration during atmospheric entry, which alters their original properties. A comprehensive understanding of this process is essential in order to decipher their pre-entry characteristics. The purpose of the study is to illustrate the process of vaporization of different elements for various entry parameters. The numerical results for particles of various sizes and various zenith angles are treated in order to understand the changes in chemical composition that the particles undergo as they enter the atmosphere. Particles with large sizes (greater than a few hundred μm) and high entry velocities (>16 km s^-1) experience less time at peak temperatures compared to those that have lower velocities. Model calculations suggest that particles can survive with an entry velocity of 11 km s^-1 and zenith angles (ZA) of 30°–90°, which accounts for ∼66% of the region where particles retain their identities. Our results suggest that the changes in chemical composition of MgO, SiO2, and FeO are not significant for an entry velocity of 11 km s^-1 and sizes <300 μm, but the changes in these compositions become significant beyond this size, where FeO is lost to a major extent. However, at 16 km s^-1 the changes in MgO, SiO2, and FeO are very intense, which is also reflected in Mg/Si, Fe/Si, Ca/Si, and Al/Si ratios, even for particles with a size of 100 μm. Beyond 400 μm particle sizes at 16 km s^-1, most of the major elements are vaporized, leaving the refractory elements, Al and Ca, suspended in the troposphere.
CPOE in Iran--a viable prospect? Physicians' opinions on using CPOE in an Iranian teaching hospital.
Kazemi, Alireza; Ellenius, Johan; Tofighi, Shahram; Salehi, Aref; Eghbalian, Fatemeh; Fors, Uno G
2009-03-01
In recent years, the theory that on-line clinical decision support systems can improve patients' safety among hospitalised individuals has gained greater acceptance. However, the feasibility of implementing such a system in a middle- or low-income country has rarely been studied. Understanding the current prescription process and a proper needs assessment of prescribers can act as the key to successful implementation. The aim of this study was to explore physicians' opinions on the current prescription process, and the expected benefits and perceived obstacles to employing Computerised Physician Order Entry in an Iranian teaching hospital. Initially, the interview guideline was developed through focus group discussions with eight experts. Then semi-structured interviews were held with 19 prescribers. After verbatim transcription, inductive thematic analysis was performed on the empirical data. Forty hours of onlooker observations were performed in different wards to explore the current prescription process. The current prescription process was identified as a physician-centred, top-down model, in which prescribers were found to rely mostly on their memories and to be overconfident. Some errors may occur during the various paper-based registrations, transcriptions and transfers. Physician opinions on Computerised Physician Order Entry were categorised into expected benefits and perceived obstacles. Confidentiality issues, reduction of medication errors and educational benefits were identified as three themes in the expected benefits category. High cost, social and cultural barriers, data entry time and problems with technical support emerged as four themes in the perceived obstacles category. The current prescription process carries a high risk of medication errors.
Although various barriers confront the implementation and continued use of Computerised Physician Order Entry in Iranian hospitals, physicians are willing to use such systems if they provide significant benefits. A pilot study in a limited setting and a comprehensive analysis of health outcomes and economic indicators should be performed to assess the merits of introducing Computerised Physician Order Entry with decision support capabilities in Iran.
Analytic Guidance for the First Entry in a Skip Atmospheric Entry
NASA Technical Reports Server (NTRS)
Garcia-Llama, Eduardo
2007-01-01
This paper presents an analytic method to generate a reference drag trajectory for the first entry portion of a skip atmospheric entry. The drag reference, expressed as a polynomial function of the velocity, will meet the conditions necessary to fit the requirements of the complete entry phase. The generic method proposed to generate the drag reference profile is further simplified by thinking of the drag and the velocity as density and cumulative distribution functions, respectively. With this notion it will be shown that the reference drag profile can be obtained by solving a linear algebraic system of equations. The resulting drag profile is flown using the feedback linearization method of differential geometric control as the guidance law, with the error dynamics of a second-order homogeneous equation in the form of a damped oscillator. This approach was first proposed as a revisited version of the Space Shuttle Orbiter entry guidance. However, this paper will show that it can also be used to fly the first entry of a skip entry trajectory. In doing so, the gains in the error dynamics will be changed at a certain point along the trajectory to improve the tracking performance.
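The "linear algebraic system" view can be sketched generically: impose boundary conditions on a polynomial D(v) and solve for its coefficients. The cubic form, endpoint drags, and slopes below are illustrative assumptions, not the paper's actual constraints (velocity is in km/s to keep the system well conditioned):

```python
import numpy as np
from numpy.polynomial import Polynomial

def drag_reference(v0, vf, d0, df, s0, sf):
    """Cubic drag-vs-velocity reference D(v) = sum_k a_k v^k fixed by four
    boundary conditions: drag and slope at the entry and exit velocities.
    Each condition is one row of a linear system in the coefficients a_k."""
    rows, rhs = [], []
    for v, d in ((v0, d0), (vf, df)):                        # value conditions D(v) = d
        rows.append([v**k for k in range(4)]); rhs.append(d)
    for v, s in ((v0, s0), (vf, sf)):                        # slope conditions D'(v) = s
        rows.append([0.0] + [k * v**(k - 1) for k in range(1, 4)]); rhs.append(s)
    coeffs = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return Polynomial(coeffs)
```

Higher-degree references with more conditions work the same way, with one row per condition.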
46 CFR Section 1 - What this order does.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 8 2014-10-01 2014-10-01 false What this order does. Section 1 Section 1 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION A-NATIONAL SHIPPING AUTHORITY GENERAL AGENT'S RESPONSIBILITY IN CONNECTION WITH FOREIGN REPAIR CUSTOM'S ENTRIES Section 1 What this order does. This order...
46 CFR Section 1 - What this order does.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 8 2013-10-01 2013-10-01 false What this order does. Section 1 Section 1 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION A-NATIONAL SHIPPING AUTHORITY GENERAL AGENT'S RESPONSIBILITY IN CONNECTION WITH FOREIGN REPAIR CUSTOM'S ENTRIES Section 1 What this order does. This order...
46 CFR Section 1 - What this order does.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 8 2010-10-01 2010-10-01 false What this order does. Section 1 Section 1 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION A-NATIONAL SHIPPING AUTHORITY GENERAL AGENT'S RESPONSIBILITY IN CONNECTION WITH FOREIGN REPAIR CUSTOM'S ENTRIES Section 1 What this order does. This order...
Chiu, Shih-Hau; Chen, Chien-Chi; Yuan, Gwo-Fang; Lin, Thy-Hou
2006-01-01
Background The number of sequences compiled in many genome projects is growing exponentially, but most of them have not been characterized experimentally. An automatic annotation scheme is urgently needed to close the gap between the amount of new sequence produced and reliable functional annotation. This work proposes rules for automatically classifying fungal genes. The approach involves elucidating the enzyme-classification rules hidden in the UniProt protein knowledgebase and then applying them for classification. The association algorithm Apriori is utilized to mine the relationship between enzyme class and significant InterPro entries. The candidate rules are evaluated for their classificatory capacity. Results Five datasets were collected from Swiss-Prot for establishing the annotation rules; these were treated as the training sets, and the TrEMBL entries were treated as the testing set. A correct enzyme classification rate of 70% was obtained for the prokaryote datasets and a rate of about 80% for the eukaryote datasets. The fungus training dataset, which lacks enzyme class descriptions, was also used to evaluate the fungus candidate rules. A total of 88 out of 5085 test entries were matched with the fungus rule set; these were otherwise poorly annotated by their functional descriptions. Conclusion The feasibility of using the method presented here to classify enzyme classes based on enzyme domain rules is evident. The rules may also be employed by protein annotators in manual annotation or implemented in an automatic annotation flowchart. PMID:16776838
Duke, Jon D; Morea, Justin; Mamlin, Burke; Martin, Douglas K; Simonaitis, Linas; Takesue, Blaine Y; Dixon, Brian E; Dexter, Paul R
2014-03-01
Regenstrief Institute developed one of the seminal computerized order entry systems, the Medical Gopher, for implementation at Wishard Hospital nearly three decades ago. Wishard Hospital and Regenstrief remain committed to homegrown software development, and over the past 4 years we have fully rebuilt Gopher with an emphasis on usability, safety, leveraging open source technologies, and the advancement of biomedical informatics research. Our objective in this paper is to summarize the functionality of this new system and highlight its novel features. Applying a user-centered design process, the new Gopher was built upon a rich-internet application framework using an agile development process. The system incorporates order entry, clinical documentation, result viewing, decision support, and clinical workflow. We have customized its use for the outpatient, inpatient, and emergency department settings. The new Gopher is now in use by over 1100 users a day, including an average of 433 physicians caring for over 3600 patients daily. The system includes a wizard-like clinical workflow, dynamic multimedia alerts, and a familiar 'e-commerce'-based interface for order entry. Clinical documentation is enhanced by real-time natural language processing and data review is supported by a rapid chart search feature. As one of the few remaining academically developed order entry systems, the Gopher has been designed both to improve patient care and to support next-generation informatics research. It has achieved rapid adoption within our health system and suggests continued viability for homegrown systems in settings of close collaboration between developers and providers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kovalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes as a channel exhibiting turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with that of the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat the time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE PEANUT PROMOTION, RESEARCH, AND INFORMATION ORDER Peanut Promotion, Research, and Information Order Reports, Books, and Records § 1216.60... following: (1) Number of pounds of peanuts produced or handled; (2) Price paid to producers (entry in value...
Dynamic Oligomerization of Integrase Orchestrates HIV Nuclear Entry.
Borrenberghs, Doortje; Dirix, Lieve; De Wit, Flore; Rocha, Susana; Blokken, Jolien; De Houwer, Stéphanie; Gijsbers, Rik; Christ, Frauke; Hofkens, Johan; Hendrix, Jelle; Debyser, Zeger
2016-11-10
Nuclear entry is a selective, dynamic process granting the HIV-1 pre-integration complex (PIC) access to the chromatin. Classical analysis of nuclear entry of heterogeneous viral particles yields only averaged information. We have now employed single-virus fluorescence methods to follow the fate of single viral pre-integration complexes (PICs) during infection by visualizing HIV-1 integrase (IN). Nuclear entry is associated with a reduction in the number of IN molecules in the complexes, while the interaction with LEDGF/p75 enhances IN oligomerization in the nucleus. Addition of LEDGINs, small-molecule inhibitors of the IN-LEDGF/p75 interaction, during virus production prematurely stabilizes a higher-order IN multimeric state, resulting in stable IN multimers that are resistant to a reduction in IN content and defective for nuclear entry. This suggests that a stringent size restriction determines nuclear pore entry. Taken together, this work demonstrates the power of single-virus imaging, providing crucial insights into HIV replication and enabling mechanism-of-action studies.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in Areas Designated by Order § 261.50 Orders. (a) The Chief, each Regional Forester, each Experiment... issue orders which close or restrict the use of described areas within the area over which he has jurisdiction. An order may close an area to entry or may restrict the use of an area by applying any or all of...
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with error rates less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant, typically 10.0 mmol/liter or an order of magnitude, but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could easily be corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use in a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
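The automated extreme-value check the authors describe can be as simple as a band test at entry time. The thresholds below are the extremes cited in the abstract; the function name and return labels are hypothetical:

```python
def check_bg_entry(value_mmol_per_l, low=2.0, high=15.0):
    """Flag blood-glucose entries outside the plausible band so the user
    must confirm or re-enter them. Out-of-band values are where
    order-of-magnitude entry errors cluster, per the abstract."""
    if value_mmol_per_l < low:
        return "confirm-low"    # possible decimal-point error, e.g. 0.8 entered for 8.0
    if value_mmol_per_l > high:
        return "confirm-high"   # possible tenfold typo, e.g. 80 entered for 8.0
    return "accept"
```

In a protocol, "confirm-low"/"confirm-high" would trigger a forced re-entry rather than an outright rejection, since genuinely extreme readings do occur.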
Wess, Mark L.; Embi, Peter J.; Besier, James L.; Lowry, Chad H.; Anderson, Paul F.; Besier, James C.; Thelen, Geriann; Hegner, Catherine
2007-01-01
Computerized Provider Order Entry (CPOE) has been demonstrated to improve the medication ordering process, but most published studies have been performed at academic hospitals. Little is known about the effects of CPOE at community hospitals. With a pre-post study design, we assessed the effects of a CPOE system on the medication ordering process at both a community and a university hospital. The time from provider ordering to pharmacist verification decreased by two hours with CPOE at the community hospital (p<0.0001) and by one hour at the university hospital (p<0.0001). The rate of medication clarifications requiring signature was 2.80 percent pre-CPOE and 0.40 percent with CPOE (p<0.0001) at the community hospital; at the university hospital it was 2.76 percent pre-CPOE and 0.46 percent with CPOE (p<0.0001). CPOE improved medication order processing at both community and university hospitals. These findings add to the limited literature on CPOE in community hospitals. PMID:18693946
Mars Science Laboratory Navigation Results
NASA Technical Reports Server (NTRS)
Martin-Mur, Tomas J.; Kruizingas, Gerhard L.; Burkhart, P. Daniel; Wong, Mau C.; Abilleira, Fernando
2012-01-01
The Mars Science Laboratory (MSL), carrying the Curiosity rover to Mars, was launched on November 26, 2011, from Cape Canaveral, Florida. The target for MSL was selected to be Gale Crater, near the equator of Mars, with an arrival date in early August 2012. The two main interplanetary navigation tasks for the mission were to deliver the spacecraft to an entry interface point that would allow the rover to safely reach the landing area, and to tell the spacecraft where it would enter the atmosphere of Mars, so it could guide itself accurately to close proximity of the landing target. MSL used entry guidance as it slowed from the entry speed to a speed low enough to allow a successful parachute deployment, and this guidance allowed shrinking the landing ellipse to a 99% conservative estimate of 7 by 20 kilometers. Since there is no global positioning system at Mars, achieving this accuracy was predicated on flying a trajectory that closely matched the reference trajectory used to design the guidance algorithm, and on initializing the guidance system with an accurate Mars-relative entry state that could be used as the starting point for integrating the inertial measurement unit data during entry and descent. The pre-launch entry flight path angle (EFPA) delivery requirement was +/- 0.20 deg, but after launch a smaller threshold of +/- 0.05 deg was used as the criterion for late trajectory correction maneuver (TCM) decisions. The pre-launch requirement for entry state knowledge was 2.8 kilometers in position error and 2 meters per second in velocity error, but smaller thresholds were also defined after launch to evaluate entry state update opportunities. The biggest challenge for the navigation team was to accurately predict the trajectory of the spacecraft, so that the estimates of the entry conditions could be stable, and late trajectory correction maneuvers or entry parameter updates could be waved off.
In fact, the prediction accuracy was such that the last TCM performed was a small burn executed eight days before landing, and the entry state calculated just 36 hours after that TCM, and uploaded to the spacecraft the same day, never needed to be updated. The final EFPA was 0.013 deg shallower than the -15.5 deg target, and the on-board entry state was off by just 200 meters in position and 0.11 meters per second in velocity from the post-landing reconstructed entry state. Overall, the entry delivery and knowledge requirements were fulfilled with a margin of more than 90% with respect to the pre-launch thresholds. This excellent accuracy contributed to a very accurate entry, descent, and landing and a very successful surface mission.
Reconstruction of finite-valued sparse signals
NASA Astrophysics Data System (ADS)
Keiper, Sandra; Kutyniok, Gitta; Lee, Dae Gwan; Pfander, Götz
2017-08-01
The need to reconstruct discrete-valued sparse signals from few measurements, that is, to solve an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as in massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is given by wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-valued prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We will introduce an equivalent null space characterization and show that the phase transition takes place earlier than with the classical basis pursuit approach. We will further discuss robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of the zero in the alphabet - i.e., whether it is a boundary element or not - is crucial.
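For the nonnegative alphabet {0, 1}, folding the discrete prior into basis pursuit amounts to adding box constraints, which turns the recovery problem into an ordinary linear program. A sketch under that reading (not necessarily the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import linprog

def boxed_basis_pursuit(A, b):
    """min sum(x) s.t. Ax = b, 0 <= x <= 1 -- basis pursuit with the
    {0, 1}-alphabet prior relaxed to the box [0, 1]. Because x is
    nonnegative, sum(x) equals the l1 norm, so this is a plain LP."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x
```

When the relaxation is tight, the LP solution lands on a vertex with {0, 1} entries, which is the phase-transition behavior the talk analyzes.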
Weighted re-randomization tests for minimization with unbalanced allocation.
Han, Baoguang; Yu, Menggang; McEntegart, Damian
2013-01-01
The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, fixed-entry-order re-randomization is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and thus compromised in power. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.
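A fixed-entry-order re-randomization test can be sketched as follows. A biased coin stands in for the minimization algorithm here, and the unweighted proportion this sketch computes is exactly the kind of quantity whose bias under unequal allocation motivates the authors' weighted version:

```python
import numpy as np

def rerandomization_pvalue(y, treat, p_treat=2/3, n_rerand=2000, seed=0):
    """Fixed-entry-order re-randomization test: keep outcomes in their
    observed entry order, re-run the allocation rule many times, and
    locate the observed mean difference in the re-randomization
    distribution. A 2:1 biased coin stands in for minimization."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    treat = np.asarray(treat, bool)
    def stat(t):
        return y[t].mean() - y[~t].mean()
    obs = stat(treat)
    null = []
    for _ in range(n_rerand):
        t = rng.random(y.size) < p_treat       # re-allocate in fixed entry order
        if 0 < t.sum() < y.size:               # need both arms represented
            null.append(stat(t))
    null = np.asarray(null)
    return (np.abs(null) >= abs(obs) - 1e-12).mean()
```

Replacing the coin with the trial's actual minimization rule, and weighting each re-randomization by its re-allocation probability, gives the corrected test the paper proposes.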
19 CFR Appendix to 19 Cfr Part 0 - Treasury Department Order No. 100-16
Code of Federal Regulations, 2010 CFR
2010-04-01
... completion of entry or substance of entry summary including duty assessment and collection, classification... the Committee on Ways and Means and the Chairman and Ranking Member of the Committee on Finance every... Ranking Member of the Committee on Finance every six months. The Secretary of the Treasury shall list any...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-17
..., location, and entry under the general land laws, including the United States mining laws, for a period of... Training Facility. This withdrawal also transfers administrative jurisdiction of the lands to the... entry under the general land laws, including the United States mining laws, but not from leasing under...
NASA Technical Reports Server (NTRS)
Hamilton, M.
1973-01-01
The entry guidance software functional requirements (requirements design phase), its architectural requirements (specifications design phase), and the verified entry guidance code are discussed. It was found that proper integration of designs at both the requirements and specifications levels is a high-priority consideration.
Chinese-English 2,000 Selected Chinese Common Sayings (Yale Romanization).
ERIC Educational Resources Information Center
Wu, C.K.; Wu, K.S.
Compiled here for the first time in Yale romanization are 2,000 common Chinese sayings, idioms, proverbs, and other figures of speech. The entries are arranged in two series: once in alphabetic order according to the Yale romanization and then again by the stroke-count of the Chinese characters. The romanized entries are accompanied by several…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... Standard Pipe and Tube From Turkey: Intent To Rescind Countervailing Duty Administrative Review, in Part... certain welded carbon steel pipe and tube from Turkey. See Antidumping or Countervailing Duty Order... Certain Welded Carbon Steel Standard Pipe from Turkey,'' (October 27, 2011). A Type 3 entry is an entry of...
A survey of an introduction to fault diagnosis algorithms
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
This report surveys the field of fault diagnosis and introduces some of the key algorithms and heuristics currently in use. Fault diagnosis is an important and rapidly growing discipline. It is important in the design of self-repairable computers because the present diagnosis resolution of the fault-tolerant computer is limited to a functional unit or processor. Better resolution is necessary before failed units can become partially reusable. The approach that holds the greatest promise is that of resident microdiagnostics; however, that presupposes a microprogrammable architecture for the computer being self-diagnosed. The presentation is tutorial and contains examples. An extensive bibliography of some 220 entries is included.
1998-01-24
the Apparel Manufacturing Architecture (AMA), a generic architecture for an apparel enterprise. ARN-AIMS consists of three modules: Order Processing, Order Tracking, and Shipping & Invoicing. The Order Processing Module is designed to facilitate the entry of customer orders for stock and special
NASA Astrophysics Data System (ADS)
Liu, WenXiang; Mou, WeiHua; Wang, FeiXue
2012-03-01
With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. References indicate that the triple-frequency second-order ionosphere correction is worse than the dual-frequency first-order ionosphere correction because of its larger noise amplification factor. On the assumption that the variances of the three frequencies' pseudoranges are equal, other references presented the triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters, and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is found that conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
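The variance-minimizing first-order combination is a small equality-constrained quadratic program: minimize the combined pseudorange variance subject to preserving geometry and cancelling the first-order ionospheric term. A sketch via the KKT linear system, using GPS L1/L2/L5 frequencies for illustration rather than the paper's COMPASS signals:

```python
import numpy as np

def iono_free_weights(freqs_hz, sigmas):
    """Weights w minimizing sum_i w_i^2 sigma_i^2 subject to
    sum_i w_i = 1 (geometry preserved) and sum_i w_i / f_i^2 = 0
    (first-order ionosphere cancelled), solved via the KKT system.
    The iono constraint is scaled by f_1^2 for numerical conditioning."""
    f = np.asarray(freqs_hz, float)
    S = np.diag(2.0 * np.asarray(sigmas, float) ** 2)
    C = np.vstack([np.ones_like(f), (f[0] / f) ** 2])        # constraint rows
    K = np.block([[S, C.T], [C, np.zeros((2, 2))]])          # KKT matrix
    rhs = np.r_[np.zeros(f.size), 1.0, 0.0]
    return np.linalg.solve(K, rhs)[: f.size]
```

When one pseudorange is made much noisier than the others, the optimal weights collapse toward the familiar dual-frequency ionosphere-free combination of the two remaining signals, illustrating the "special cases" claim in the abstract.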
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
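High-order weights on a given stencil follow from matching Taylor-series terms. The generic Vandermonde solve below illustrates the idea for spatial derivative weights; it is a standard construction, not the paper's exact-propagator scheme:

```python
import numpy as np
from math import factorial

def fd_weights(stencil, m):
    """Weights c_j with sum_j c_j f(x_j) approximating f^(m)(0).
    Expanding each f(x_j) in a Taylor series and matching terms gives
    the linear system V c = m! e_m with V[k, j] = x_j ** k."""
    x = np.asarray(stencil, float)
    n = x.size
    V = np.vander(x, n, increasing=True).T   # V[k, j] = x_j ** k
    rhs = np.zeros(n)
    rhs[m] = factorial(m)
    return np.linalg.solve(V, rhs)
```

On a symmetric stencil of n points the resulting approximation to the m-th derivative is accurate to order n - m, which is the mechanism behind the high orders quoted in the abstract.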
Impact of providing fee data on laboratory test ordering: a controlled clinical trial.
Feldman, Leonard S; Shihab, Hasan M; Thiemann, David; Yeh, Hsin-Chieh; Ardolino, Margaret; Mandell, Steven; Brotman, Daniel J
2013-05-27
Inpatient care providers often order laboratory tests without any appreciation for the costs of the tests. To determine whether we could decrease the number of laboratory tests ordered by presenting providers with test fees at the time of order entry in a tertiary care hospital, without adding extra steps to the ordering process. Controlled clinical trial. Tertiary care hospital. All providers, including physicians and nonphysicians, who ordered laboratory tests through the computerized provider order entry system at The Johns Hopkins Hospital. We randomly assigned 61 diagnostic laboratory tests to an "active" arm (fee displayed) or to a control arm (fee not displayed). During a 6-month baseline period (November 10, 2008, through May 9, 2009), we did not display any fee data. During a 6-month intervention period 1 year later (November 10, 2009, through May 9, 2010), we displayed fees, based on the Medicare allowable fee, for active tests only. We examined changes in the total number of orders placed, the frequency of ordered tests (per patient-day), and total charges associated with the orders according to the time period (baseline vs intervention period) and by study group (active test vs control). For the active arm tests, rates of test ordering were reduced from 3.72 tests per patient-day in the baseline period to 3.40 tests per patient-day in the intervention period (8.59% decrease; 95% CI, -8.99% to -8.19%). For control arm tests, ordering increased from 1.15 to 1.22 tests per patient-day from the baseline period to the intervention period (5.64% increase; 95% CI, 4.90% to 6.39%) (P < .001 for difference over time between active and control tests). Presenting fee data to providers at the time of order entry resulted in a modest decrease in test ordering. Adoption of this intervention may reduce the number of inappropriately ordered diagnostic tests.
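The headline effect sizes are simple relative changes in ordering rates, which can be checked from the raw figures; the paper's reported percentages and confidence intervals come from its statistical model, so they differ slightly from these raw ratios:

```python
def pct_change(baseline, intervention):
    """Relative change (in percent) between two tests-per-patient-day rates."""
    return 100.0 * (intervention - baseline) / baseline

active = pct_change(3.72, 3.40)    # fee displayed: roughly an 8.6% decrease
control = pct_change(1.15, 1.22)   # fee not displayed: roughly a 6.1% increase
```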
NASA Technical Reports Server (NTRS)
Pflaum, Christoph
1996-01-01
A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete equation system can be solved in an efficient way. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
Low temperature simulation of subliming boundary layer flow in Jupiter atmosphere
NASA Technical Reports Server (NTRS)
Chen, C. J.
1976-01-01
A low-temperature approximate simulation for the sublimation of a graphite heat shield under Jovian entry conditions is studied. A set of algebraic equations is derived to approximate the governing equation and boundary conditions, based on order-of-magnitude analysis. Characteristic quantities such as the wall temperature and the subliming velocity are predicted. Similarity parameters that are needed to simulate the most dominant phenomena of the Jovian entry flow are also given. An approximate simulation of the sublimation of the graphite heat shield is performed with an air-dry-ice model, which may be carried out experimentally at a lower temperature of 3000 to 6000 K instead of the entry temperature of 14,000 K. The rate of graphite sublimation predicted by the present algebraic approximation agrees to within an order of magnitude with extrapolated data. The limitations of the simulation method and its utility are discussed.
NASA Technical Reports Server (NTRS)
Stackpoole, Margaret M.; Ellerby, Donald T.; Gasch, Matt; Ventkatapathy, Ethiraj; Beerman, Adam; Boghozian, Tane; Gonzales, Gregory; Feldman, Jay; Peterson, Keith; Prabhu, Dinesh
2014-01-01
NASA's future robotic missions to Venus and the outer planets, namely Saturn, Uranus, and Neptune, result in extremely high entry conditions that exceed the capabilities of current mid-density ablators (PICA or Avcoat). Therefore mission planners assume the use of a fully dense carbon phenolic (CP) heatshield similar to what was flown on Pioneer Venus and Galileo. Carbon phenolic is a robust TPS; however, its high density and thermal conductivity constrain mission planners to steep entries with high heat fluxes, high pressures, and short entry durations in order for CP to be feasible from a mass perspective. These high entry conditions pose certification challenges in existing ground-based test facilities. In 2012 the Game Changing Development Program in NASA's Space Technology Mission Directorate funded NASA ARC to investigate the feasibility of a Woven Thermal Protection System (WTPS) to meet the needs of NASA's most challenging entry missions. This presentation will summarize the maturation of the WTPS project.
Human Mars Lander Design for NASA's Evolvable Mars Campaign
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Chapman, Jack; Sutherlin, Steve; Taylor, Brian; Fabisinski, Leo; Collins, Tim; Cianciolo Dwyer, Alicia; Samareh, Jamshid; Robertson, Ed; Studak, Bill;
2016-01-01
Landing humans on Mars will require entry, descent, and landing capability beyond the current state of the art. Nearly twenty times more delivered payload and an order of magnitude improvement in precision landing capability will be necessary. To better assess entry, descent, and landing technology options and sensitivities to future human mission design variations, a series of design studies on human-class Mars landers has been initiated. This paper describes the results of the first design study in the series of studies to be completed in 2016 and includes configuration, trajectory and subsystem design details for a lander with Hypersonic Inflatable Aerodynamic Decelerator (HIAD) entry technology. Future design activities in this series will focus on other entry technology options.
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joosten, Robbie P.; Womack, Thomas; Vriend, Gert, E-mail: vriend@cmbi.ru.nl
2009-02-01
An evaluation of validation and real-space intervention possibilities for improving existing automated (re-)refinement methods. The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
Schadow, Gunther
2005-01-01
Prescribing errors are an important cause of adverse events, and lack of knowledge of the drug is a root cause for prescribing errors. The FDA is issuing new regulations that will make the drug labels much more useful not only to physicians, but also to computerized order entry systems that support physicians to practice safe prescribing. For this purpose, FDA works with HL7 to create the Structured Product Label (SPL) standard that includes a document format as well as a drug knowledge representation, this poster introduces the basic concepts of SPL.
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Liechty, Derek S.
2008-01-01
The influence of cavities (for attachment bolts) on the heat-shield of the proposed Mars Science Laboratory entry vehicle has been investigated experimentally and computationally in order to develop a criterion for assessing whether the boundary layer becomes turbulent downstream of the cavity. Wind tunnel tests were conducted on the 70-deg sphere-cone vehicle geometry with various cavity sizes and locations in order to assess their influence on convective heating and boundary layer transition. Heat-transfer coefficients and boundary-layer states (laminar, transitional, or turbulent) were determined using global phosphor thermography.
Clinicians' views on displaying cost information to increase clinician cost-consciousness.
Kruger, Jenna F; Chen, Alice Hm; Rybkin, Alex; Leeds, Kiren; Frosch, Dominick L; Goldman, Elizabeth
2014-01-01
To evaluate 1) clinician attitudes towards incorporating cost information into decision making when ordering imaging studies; and 2) clinician reactions to the display of Medicare reimbursement information for imaging studies at clinician electronic order entry. Focus group study with inductive thematic analysis. We conducted focus groups of primary care clinicians and subspecialty physicians (nephrology, pulmonary, and neurology) (N = 50) who deliver outpatient care in 12 hospital-based clinics and community health centers in an urban safety net health system. We analyzed focus group transcripts using an inductive framework to identify emergent themes and illustrative quotations. Clinicians believed that their knowledge of healthcare costs was low and wanted access to relevant cost information for reference. However, many clinicians believed it was inappropriate and unethical to consider costs in individual patient care decisions. Among clinicians' negative reactions toward displaying costs at order entry, 4 underlying themes emerged: 1) belief that ordering is already limited to clinically necessary tests; 2) importance of prioritizing responsibility to patients above that to the healthcare system; 3) concern about worsening healthcare disparities; and 4) perceived lack of accountability for healthcare costs in the system. Although clinicians want relevant cost information, many voiced concerns about displaying cost information at clinician order entry in safety net health systems. Alternative approaches to increasing cost-consciousness may be more acceptable to clinicians.
Families of Graph Algorithms: SSSP Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-08-28
Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra's algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms achieve better performance than the existing distributed-memory parallel algorithms, especially at higher scales.
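The family described above starts from Dijkstra's algorithm, whose strict priority ordering the other variants relax. As a point of reference only (not the paper's AGM formulation), a minimal priority-queue Dijkstra sketch in Python:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Small example graph
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

Δ-stepping, in contrast, replaces the single heap with buckets of width Δ processed in batches, trading strict ordering for parallelism; KLA-SSSP relaxes ordering by asynchrony level.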
Mominah, Maher; Yunus, Faisel; Househ, Mowafa S
2013-01-01
Computerized provider order entry (CPOE) is a health informatics system that helps health care providers create and manage orders for medications and other health care services. Through the automation of the ordering process, CPOE has improved the overall efficiency of hospital processes and workflow. In Saudi Arabia, CPOE has been used for years, with only a few studies evaluating the impacts of CPOE on clinical workflow. In this paper, we discuss the experience of a local hospital with the use of CPOE and its impacts on clinical workflow. Results show that there are many issues related to the implementation and use of CPOE within Saudi Arabia that must be addressed, including design, training, medication errors, alert fatigue, and system dependability. Recommendations for improving CPOE use within Saudi Arabia are also discussed.
Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kohen, Hamid
1997-01-01
This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
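The abstract does not specify the GA's operators, so the following is only a generic real-coded GA sketch (tournament selection, blend crossover, Gaussian mutation) applied to a hypothetical one-dimensional stand-in for a flight-path parameter; the actual work optimized open-loop entry trajectories against atmospheric and entry-state perturbations:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=60, mut=0.05, seed=0):
    """Toy real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # binary tournament: keep the fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            # blend crossover plus Gaussian mutation, clipped to bounds
            child = 0.5 * (p1 + p2) + rng.gauss(0, mut * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = children
    return min(pop, key=fitness)

# Hypothetical 1-D objective standing in for landing error: minimize (x - 2.5)^2
best = genetic_minimize(lambda x: (x - 2.5) ** 2, bounds=(0.0, 10.0))
print(best)  # converges near the optimum at 2.5
```

The appeal noted in the abstract carries over even to this toy: the GA needs only fitness evaluations, so the same loop works when the "fitness" is an RMS landing error computed by a Monte Carlo trajectory simulation.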
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2010-01-01
Challenges to computational aerothermodynamic (CA) simulation and validation of hypersonic flow over planetary entry vehicles are discussed. Entry, descent, and landing (EDL) of high mass to Mars is a significant driver of new simulation requirements. These requirements include simulation of large deployable, flexible structures and interactions with reaction control system (RCS) and retro-thruster jets. Simulation of radiation and ablation coupled to the flow solver continues to be a high priority for planetary entry analyses, especially for return to Earth and outer planet missions. Three research areas addressing these challenges are emphasized. The first addresses the need to obtain accurate heating on unstructured tetrahedral grid systems to take advantage of flexibility in grid generation and grid adaptation. A multi-dimensional inviscid flux reconstruction algorithm is defined that is oriented with local flow topology as opposed to grid. The second addresses coupling of radiation and ablation to the hypersonic flow solver - flight- and ground-based data are used to provide limited validation of these multi-physics simulations. The third addresses the challenges of retro-propulsion simulation and the criticality of grid adaptation in this application. The evolution of CA to become a tool for innovation of EDL systems requires a successful resolution of these challenges.
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
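The direct-versus-iterative distinction the abstract draws can be illustrated outside Trilinos with a small NumPy sketch: a factorization-based solve (the Amesos2 category) against a hand-written conjugate gradient iteration (the Krylov-method category Belos provides). This is only a dense toy on a 2x2 system, not the Trilinos C++ API:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Iterative conjugate gradient for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_direct = np.linalg.solve(A, b)      # direct factorization-based solve
x_iter = conjugate_gradient(A, b)     # iterative Krylov solve
print(np.allclose(x_direct, x_iter))  # True
```

The trade-off mirrors the two packages: the direct solve pays for a factorization but is robust; the iterative solve touches A only through matrix-vector products, which is what lets a library like Belos stay decoupled from the matrix implementation.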
Description and performance analysis of a generalized optimal algorithm for aerobraking guidance
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Dukeman, Greg A.
1993-01-01
A practical real-time guidance algorithm has been developed for aerobraking vehicles which nearly minimizes the maximum heating rate, the maximum structural loads, and the post-aeropass delta-V requirement for orbit insertion. The algorithm is general and reusable in the sense that a minimum of assumptions is made, thus greatly reducing the number of parameters that must be determined prior to a given mission. A particularly interesting feature is that in-plane guidance performance is tuned by adjusting one mission-dependent parameter, the bank margin; similarly, out-of-plane guidance performance is tuned by adjusting a plane-controller time constant. Other features of the algorithm are simplicity, efficiency, and ease of use. The algorithm assumes a trimmed vehicle with bank angle modulation as the method of trajectory control. Performance of this guidance algorithm is examined by its use in an aerobraking testbed program. The performance inquiry extends to a wide range of entry speeds covering a number of potential mission applications. Favorable results have been obtained with a minimum of development effort, and directions for improvement of performance are indicated.
Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B
2017-08-01
Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer-assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position, limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e., spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and a pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low- and high-quality experimental solution NMR and solid-state NMR peak lists to assess the performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration with uniform match tolerances is able to recover only 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations.
To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.
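As a toy illustration of the uniform-tolerance grouping that the abstract reports breaks down (the shift values below are hypothetical, and the actual tools derive dimension-specific tolerances iteratively rather than using one fixed value):

```python
def group_peaks(shifts, tol):
    """Group 1-D peak positions whose neighbor-to-neighbor gaps are within a uniform tolerance."""
    groups = []
    for s in sorted(shifts):
        if groups and s - groups[-1][-1] <= tol:
            groups[-1].append(s)   # close enough: same group
        else:
            groups.append([s])     # gap too large: start a new group
    return groups

# Hypothetical 1H and 15N chemical shifts (ppm)
peaks = [8.10, 8.12, 8.45, 120.3, 120.31]
print(group_peaks(peaks, tol=0.05))
# [[8.1, 8.12], [8.45], [120.3, 120.31]]
```

The failure mode the paper addresses is visible even here: a single `tol` must serve dimensions whose positional variances differ by orders of magnitude, which is why an iterative, variance-aware regrouping recovers more spin systems.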
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm has been developed. The approach provides an effective tool both for theoretical research on ordered information systems and for their applications in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
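A sketch of the dominance-matrix idea, under the common convention that object i dominates object j when it scores at least as well on every criterion; the paper's exact definition for inconsistent OISs may differ, so this is only an illustrative assumption:

```python
def dominance_matrix(table):
    """Entry (i, j) is 1 if object i is >= object j on every attribute, else 0."""
    n = len(table)
    return [[int(all(a >= b for a, b in zip(table[i], table[j]))) for j in range(n)]
            for i in range(n)]

# Three objects scored on two criteria
objs = [[3, 2], [1, 2], [2, 1]]
print(dominance_matrix(objs))
# [[1, 1, 1], [0, 1, 0], [0, 0, 1]]
```

Reduction algorithms of the kind described then operate on this 0/1 matrix, checking which attributes can be dropped while preserving the dominance (and hence distribution) structure.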
Kilgore, Mark R; McIlwain, Carrie A; Schmidt, Rodney A; Norquist, Barbara M; Swisher, Elizabeth M; Garcia, Rochelle L; Rendi, Mara H
2016-01-01
Endometrial carcinoma (EC) is the most common extracolonic malignant neoplasm associated with Lynch syndrome (LS). LS is caused by autosomal dominant germline mutations in DNA mismatch repair (MMR) genes. Screening for LS in EC is often evaluated by loss of immunohistochemical (IHC) expression of DNA MMR enzymes MLH1, MSH2, MSH6, and PMS2 (MMR IHC). In July 2013, our clinicians asked that we screen all EC in patients ≤60 for loss of MMR IHC expression. Despite this policy, several cases were not screened or screening was delayed. We implemented an informatics-based approach to ensure that all women who met criteria would have timely screening. Reports are created in PowerPath (Sunquest Information Systems, Tucson, AZ) with custom synoptic templates. We implemented an algorithm on March 6, 2014 requiring pathologists to address MMR IHC in patients ≤60 with EC before sign out (S/O). Pathologists must answer these questions: is patient ≤60 (yes/no), if yes, follow-up questions (IHC done previously, ordered with addendum to follow, results included in report, N/A, or not ordered), if not ordered, one must explain. We analyzed cases from July 18, 2013 to August 31, 2016 preimplementation (PreImp) and postimplementation (PostImp) that met criteria. Data analysis was performed using the standard data package included with GraphPad Prism ® 7.00 (GraphPad Software, Inc., La Jolla, CA, USA). There were 147 patients who met criteria (29 PreImp and 118 PostImp). IHC was ordered in a more complete and timely fashion PostImp than PreImp. PreImp, 4/29 (13.8%) cases did not get any IHC, but PostImp, only 4/118 (3.39%) were missed ( P = 0.0448). Of cases with IHC ordered, 60.0% (15/25) were ordered before or at S/O PreImp versus 91.2% (104/114) PostImp ( P = 0.0004). Relative to day of S/O, the mean days of order delay were longer and more variable PreImp versus PostImp (12.9 ± 40.7 vs. -0.660 ± 1.15; P = 0.0227), with the average being before S/O PostImp. 
This algorithm ensures MMR IHC ordering in women ≤60 with EC and can be applied to similar scenarios. Ancillary tests for management are increasing, especially genetic and molecular-based methods. The burden of managing orders and results remains with the pathologist and relying on human intervention alone is ineffective. Ordering IHC before or at S/O prevents oversight and the additional work of retrospective ordering and reporting.
Schreiber, Richard; Sittig, Dean F; Ash, Joan; Wright, Adam
2017-09-01
In this report, we describe 2 instances in which expert use of an electronic health record (EHR) system interfaced to an external clinical laboratory information system led to unintended consequences wherein 2 patients failed to have laboratory tests drawn in a timely manner. In both events, user actions combined with the lack of an acknowledgment message describing the order cancellation from the external clinical system were the root causes. In 1 case, rapid, near-simultaneous order entry was the culprit; in the second, astute order management by a clinician, unaware of the lack of proper 2-way interface messaging from the external clinical system, led to the confusion. Although testing had shown that the laboratory system would cancel duplicate laboratory orders, it was thought that duplicate alerting in the new order entry system would prevent such events.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from the L1 constraint to the Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
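Not the Motzkin-Straus relaxation itself, but a brute-force baseline clarifies what "maximal-edge biclique" means on a tiny binary matrix: find the row and column subsets whose all-ones submatrix contains the most entries (only feasible at toy scale; the paper's algorithm scales as O(|E|)):

```python
from itertools import combinations

def max_edge_biclique(matrix):
    """Brute-force maximum-edge biclique of a small binary rows-by-columns matrix."""
    rows, cols = len(matrix), len(matrix[0])
    best = (0, (), ())
    for r in range(1, rows + 1):
        for rset in combinations(range(rows), r):
            # columns where every chosen row has a 1
            cset = tuple(c for c in range(cols) if all(matrix[i][c] for i in rset))
            edges = len(rset) * len(cset)
            if edges > best[0]:
                best = (edges, rset, cset)
    return best

m = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(max_edge_biclique(m))  # (4, (0, 1), (0, 1))
```

The row/column trade-off the abstract mentions is visible here too: `m` also contains a biclique of rows (1, 2) and columns (1, 2) with the same 4 edges, and the Lp parameter is what lets the published method prefer one shape over the other.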
Deducing chemical structure from crystallographically determined atomic coordinates
Bruno, Ian J.; Shields, Gregory P.; Taylor, Robin
2011-01-01
An improved algorithm has been developed for assigning chemical structures to incoming entries to the Cambridge Structural Database, using only the information available in the deposited CIF. Steps in the algorithm include detection of bonds, selection of polymer unit, resolution of disorder, and assignment of bond types and formal charges. The chief difficulty is posed by the large number of metallo-organic crystal structures that must be processed, given our aspiration that assigned chemical structures should accurately reflect properties such as the oxidation states of metals and redox-active ligands, metal coordination numbers and hapticities, and the aromaticity or otherwise of metal ligands. Other complications arise from disorder, especially when it is symmetry imposed or modelled with the SQUEEZE algorithm. Each assigned structure is accompanied by an estimate of reliability and, where necessary, diagnostic information indicating probable points of error. Although the algorithm was written to aid building of the Cambridge Structural Database, it has the potential to develop into a general-purpose tool for adding chemical information to newly determined crystal structures. PMID:21775812
NASA Technical Reports Server (NTRS)
Pfister, Robin; McMahon, Joe
2006-01-01
Power User Interface 5.0 (PUI) is a system of middleware written for expert users in the Earth-science community. PUI enables expedited ordering of data granules on the basis of specific granule-identifying information that the users already know or can assemble. PUI also enables expert users to perform quick searches for orderable-granule information for use in preparing orders. PUI 5.0 is available in two versions (note: PUI 6.0 has a command-line mode only): a Web-based application program and a UNIX command-line-mode client program. Both versions include modules that perform data-granule-ordering functions in conjunction with external systems. The Web-based version works with the Earth Observing System Clearing House (ECHO) metadata catalog and order-entry services and with an open-source order-service broker server component, called the Mercury Shopping Cart, that is provided separately by Oak Ridge National Laboratory through the Department of Energy. The command-line version works with the ECHO metadata and order-entry process service. Both versions of PUI ultimately use ECHO to process an order to be sent to a data provider. Ordered data are provided through means outside the PUI software system.
Entry, Descent and Landing Systems Analysis: Exploration Class Simulation Overview and Results
NASA Technical Reports Server (NTRS)
DwyerCianciolo, Alicia M.; Davis, Jody L.; Shidner, Jeremy D.; Powell, Richard W.
2010-01-01
NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic and exploration or human-scale missions. The year one exploration class mission activity considered technologies capable of delivering a 40-mt payload. This paper provides an overview of the exploration class mission study, including technologies considered, models developed and initial simulation results from the EDL-SA year one effort.
On Graph Isomorphism and the PageRank Algorithm
2008-09-01
The PageRank matrix specifies the probability of visiting each node from any other node. The perturbed matrix satisfies the conditions of the Perron-Frobenius theorem, which establishes that the matrix must yield the dominant eigenvalue, one. The associated dominant eigenvector is unique and is constructed such that none of its entries equal zero. An arbitrary PageRank matrix, S, is irreducible and satisfies the Perron-Frobenius conditions.
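The dominant eigenvector guaranteed by the Perron-Frobenius argument is what power iteration computes. A minimal sketch follows; the 0.85 damping factor is an assumption (the excerpt does not state one), and dangling nodes are handled here by spreading their rank evenly:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power iteration on the damped transition matrix; adj[i] lists out-links of node i."""
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n    # teleportation term keeps the matrix irreducible
        for i, outs in enumerate(adj):
            share = damping * rank[i] / (len(outs) or n)
            targets = outs if outs else range(n)  # dangling node spreads rank evenly
            for j in targets:
                new[j] += share
        rank = new
    return rank

r = pagerank([[1], [2], [0]])  # 3-cycle: symmetric, so all ranks are equal
print([round(v, 3) for v in r])  # [0.333, 0.333, 0.333]
```

Because the damped matrix is irreducible with strictly positive entries, the iteration converges to a unique eigenvector with eigenvalue one and no zero components, which is exactly the property the graph-isomorphism application above relies on.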
Lin, Yuh-Feng; Sheng, Li-Huei; Wu, Mei-Yi; Zheng, Cai-Mei; Chang, Tian-Jong; Li, Yu-Chuan; Huang, Yu-Hui; Lu, Hsi-Peng
2014-12-01
No evidence exists from randomized trials to support using cloud-based manometers integrated with available physician order entry systems for tracking patient blood pressure (BP) to assist in the control of renal function deterioration. We investigated how integrating cloud-based manometers with physician order entry systems benefits our outpatient chronic kidney disease patients compared with typical BP tracking systems. We randomly assigned 36 chronic kidney disease patients to use cloud-based manometers integrated with physician order entry systems or typical BP recording sheets, and followed the patients for 6 months. The composite outcome was that the patients saw improvement both in BP and renal function. We compared the systolic and diastolic BP (SBP and DBP), and renal function of our patients at 0 months, 3 months, and 6 months after using the integrated manometers and typical BP monitoring sheets. Nighttime SBP and DBP were significantly lower in the study group compared with the control group. Serum creatinine level in the study group improved significantly compared with the control group after the end of Month 6 (2.83 ± 2.0 vs. 4.38 ± 3.0, p = 0.018). Proteinuria improved nonsignificantly in Month 6 in the study group compared with the control group (1.05 ± 0.9 vs. 1.90 ± 1.3, p = 0.09). Both SBP and DBP during the nighttime hours improved significantly in the study group compared with the baseline. In pre-end-stage renal disease patients, regularly monitoring BP by integrating cloud-based manometers appears to result in a significant decrease in creatinine and improvement in nighttime BP control. Estimated glomerular filtration rate and proteinuria were found to be improved nonsignificantly, and thus, larger population and longer follow-up studies may be needed.
Claxton, Karl; Palmer, Stephen; Longworth, Louise; Bojke, Laura; Griffin, Susan; Soares, Marta; Spackman, Eldon; Rothery, Claire
The value of evidence about the performance of a technology and the value of access to a technology are central to policy decisions regarding coverage with, without, or only in research and managed entry (or risk-sharing) agreements. We aim to outline the key principles of what assessments are needed to inform "only in research" (OIR) or "approval with research" (AWR) recommendations, in addition to approval or rejection. We developed a comprehensive algorithm to inform the sequence of assessments and judgments that lead to different types of guidance: OIR, AWR, Approve, or Reject. This algorithm identifies the order in which assessments might be made, how similar guidance might be arrived at through different combinations of considerations, and when guidance might change. The key principles are whether the technology is expected to be cost-effective; whether the technology has significant irrecoverable costs; whether additional research is needed; whether research is possible with approval and whether there are opportunity costs that once committed by approval cannot be recovered; and whether there are effective price reductions. Determining expected cost-effectiveness is only a first step. In addition to AWR for technologies expected to be cost-effective and OIR for those not expected to be cost-effective, there are other important circumstances when OIR should be considered. These principles demonstrate that cost-effectiveness is a necessary but not sufficient condition for approval. Even when research is possible with approval, OIR may be appropriate when a technology is expected to be cost-effective due to significant irrecoverable costs.
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real-time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the algorithm engine demonstration, is described.
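The analytical-redundancy idea, reduced to its simplest form: compare each sensor against a model-based prediction and flag residuals that exceed a threshold. This is only a schematic sketch with made-up readings, not the F100 algorithm's detection and accommodation logic:

```python
def detect_failure(measured, predicted, threshold):
    """Analytical redundancy: flag each sensor whose residual exceeds a threshold."""
    return [abs(m - p) > threshold for m, p in zip(measured, predicted)]

# Hypothetical engine sensor readings vs. analytical model predictions (arbitrary units)
measured = [101.0, 54.9, 250.0]
predicted = [100.0, 55.0, 210.0]
print(detect_failure(measured, predicted, threshold=5.0))  # [False, False, True]
```

Accommodation then follows by substituting the model estimate for the flagged sensor, so the multivariable control law keeps running on consistent inputs.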
Multiscale high-order/low-order (HOLO) algorithms and applications
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.
2017-02-01
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Application of a Fully Numerical Guidance to Mars Aerocapture
NASA Technical Reports Server (NTRS)
Matz, Daniel A.; Lu, Ping; Mendeck, Gavin F.; Sostaric, Ronald R.
2017-01-01
An advanced guidance algorithm, Fully Numerical Predictor-corrector Aerocapture Guidance (FNPAG), has been developed to perform aerocapture maneuvers in an optimal manner. It is a model-based, numerical guidance that benefits from requiring few adjustments across a variety of different hypersonic vehicle lift-to-drag ratios, ballistic coefficients, and atmospheric entry conditions. In this paper, FNPAG is first applied to the Mars Rigid Vehicle (MRV) mid lift-to-drag ratio concept. Then the study is generalized to a design map of potential Mars aerocapture missions and vehicles, ranging from the scale and requirements of recent robotic missions to those of potential human and precursor missions. The design map results show the versatility of FNPAG and provide insight for the design of Mars aerocapture vehicles and atmospheric entry conditions to achieve desired performance.
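The predictor-corrector principle behind such a guidance can be sketched with a toy problem (this is an illustrative stand-in, not the FNPAG formulation; the dynamics, constants, and the secant corrector are all assumptions): numerically propagate the dynamics under a candidate control parameter, measure the terminal miss, and iterate the parameter until the miss vanishes.

```python
import math

def predict_range(angle, v0=80.0, k=0.001, dt=0.01):
    # Predictor: integrate a point mass with quadratic drag until impact,
    # returning the downrange distance. 'angle' is the launch angle (rad),
    # the stand-in for the guidance control parameter.
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -k * v * vx, -9.81 - k * v * vy
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
    return x

def correct(target, a0=0.3, a1=0.6, tol=1.0, max_iter=50):
    # Corrector: secant iteration on the control parameter, driving the
    # predicted terminal miss to zero.
    f0, f1 = predict_range(a0) - target, predict_range(a1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return a1
        a0, a1 = a1, a1 - f1 * (a1 - a0) / (f1 - f0)
        f0, f1 = f1, predict_range(a1) - target
    return a1
```

Each corrector step requires a full numerical prediction, which is why such algorithms are called fully numerical: no closed-form reference trajectory is assumed.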
Ordered Backward XPath Axis Processing against XML Streams
NASA Astrophysics Data System (ADS)
Nizar M., Abdul; Kumar, P. Sreenivasa
Processing of backward XPath axes against XML streams is challenging for two reasons: (i) data is not cached for future access, and (ii) the query contains steps specifying navigation to data that has already passed by. While there are some attempts to process the parent and ancestor axes, there are very few proposals to process the ordered backward axes, namely preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of the ordering constraints imposed by these axes. In this paper, we show how ordered backward axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains a remarkable speed-up over the existing algorithm without compromising on buffer-space requirements.
Beckwith, Helen K.
1970-01-01
A study was made of the serial holding statements in PHILSOM over a six-month period, in order to determine the desirability of printing the complete serial holding statements monthly. Attention was given to the frequency of internal and update changes in both active and dead entries. The results indicate that while sufficient activity is observed in active serial entries to warrant their monthly updating, dead serial entries remain constant over this period. This indicates that a large group of PHILSOM entries can be easily identified and isolated, facilitating division and independent updating of the resultant lists. The desirability of such a division, however, must also take into consideration the user's ease in handling such a segmented listing. PMID:5439902
Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A
2015-10-26
Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
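As an illustration of the sort-based idea (a heavily simplified sketch, not the RDKit algorithm itself; the chirality invariants and the tie-breaking needed to fully canonicalize symmetric graphs are omitted), canonical atom ranks can be computed by iteratively refining atom invariants with a stable sort:

```python
def canonical_ranks(atoms, bonds):
    # atoms: list of element symbols; bonds: list of (i, j) index pairs.
    n = len(atoms)
    nbrs = [[] for _ in range(n)]
    for i, j in bonds:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # Initial invariant: (element symbol, degree).
    ranks = _ranks_from_keys([(atoms[i], len(nbrs[i])) for i in range(n)])
    while True:
        # Refine each atom's key by the sorted ranks of its neighbours.
        keys = [(ranks[i], tuple(sorted(ranks[j] for j in nbrs[i])))
                for i in range(n)]
        new = _ranks_from_keys(keys)
        if new == ranks:
            return ranks
        ranks = new

def _ranks_from_keys(keys):
    # Dense ranks: equal keys receive equal ranks (Python's sort is stable).
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    ranks = [0] * len(keys)
    r = 0
    for pos, i in enumerate(order):
        if pos and keys[i] != keys[order[pos - 1]]:
            r = pos
        ranks[i] = r
    return ranks
```

Because the ranks depend only on graph structure, the induced canonical ordering is invariant under renumbering of the input atoms, which is the property a unique identifier needs.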
Method for data compression by associating complex numbers with files of data values
Feo, J.T.; Hanks, D.C.; Kraay, T.A.
1998-02-10
A method for compressing data for storage or transmission is disclosed. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file. 4 figs.
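The root-generation direction can be sketched as follows. This is a hedged illustration of the mechanism described above, with a made-up polynomial and value map rather than anything taken from the patent itself.

```python
def newton_root(z, roots, iters=100):
    # p(z) = prod (z - r); Newton step z -= p(z) / p'(z), accumulating p
    # and p' together via the product rule.
    for _ in range(iters):
        p = 1.0 + 0j
        dp = 0.0 + 0j
        for r in roots:
            dp = dp * (z - r) + p
            p *= (z - r)
        if abs(p) < 1e-12:
            break
        z -= p / dp
    # Snap the converged iterate to the nearest root.
    return min(roots, key=lambda r: abs(z - r))

def generate_file(points, roots, value_map):
    # Root-generated data file: each entry is a point in the complex plane;
    # the value tied to the root it converges to becomes the entry's value.
    return [value_map[newton_root(z, roots)] for z in points]
```

Compression then amounts to searching for a polynomial and value map whose generated file approximates the target, so only the roots, values, and a small error file need be stored.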
Guidance, Navigation, and Control Techniques and Technologies for Active Satellite Removal
NASA Astrophysics Data System (ADS)
Ortega Hernando, Guillermo; Erb, Sven; Cropp, Alexander; Voirin, Thomas; Dubois-Matra, Olivier; Rinalducci, Antonio; Visentin, Gianfranco; Innocenti, Luisa; Raposo, Ana
2013-09-01
This paper presents an internal feasibility analysis, by the Technical Directorate of the European Space Agency (ESA), of de-orbiting a large non-functional satellite. The paper focuses specifically on the design of the techniques and technologies for the Guidance, Navigation, and Control (GNC) system of the spacecraft mission that will capture the satellite and ultimately de-orbit it in a controlled re-entry. The paper explains the guidance strategies for launch, rendezvous, close approach, and capture of the target satellite. The guidance strategy uses chaser manoeuvres, hold points, and collision avoidance trajectories to ensure a safe capture. It also details the guidance profile for de-orbiting in a controlled re-entry. The paper continues with an analysis of the required sensing suite and the navigation algorithms that allow the homing, fly-around, and capture of the target satellite. The emphasis is placed on the design of a system allowing rendezvous with an un-cooperative target, including the autonomous acquisition of both the orbital elements and the attitude of the target satellite. Analysing the capture phase, the paper provides a trade-off between two selected capture systems, the net and the tentacles, both studied from the point of view of the GNC system. The paper also analyses the advanced algorithms proposed to control the final compound after capture, which will allow the controlled de-orbiting of the assembly to a safe location on Earth. The paper ends by proposing to continue this work with an analysis of the destruction process of the compound in consecutive segments, from the entry gate to rupture and break-up.
Influence of local capillary trapping on containment system effectiveness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Steven
2014-03-31
Immobilization of CO2 injected into deep subsurface storage reservoirs is a critical component of risk assessment for geologic CO2 storage (GCS). Local capillary trapping (LCT) is a recently established mode of immobilization that arises when CO2 migrates due to buoyancy through heterogeneous storage reservoirs. This project sought to assess the amount and extent of LCT expected in storage formations under a range of injection conditions, and to confirm the persistence of LCT if the seal overlying the reservoir were to lose its integrity. Numerical simulation using commercial reservoir simulation software was conducted to assess the influence of injection. Laboratory experiments, modeling and numerical simulation were conducted to assess the effect of compromised seal integrity. Bench-scale (0.6 m by 0.6 m by 0.03 m) experiments with surrogate fluids provided the first empirical confirmation of the key concepts underlying LCT: accumulation of buoyant nonwetting phase at above-residual saturations beneath capillary barriers in a variety of structures, which remains immobile under normal capillary pressure gradients. Immobilization of above-residual saturations is a critical distinction between LCT and the more familiar "residual saturation trapping." To estimate the possible extent of LCT in a storage reservoir, an algorithm was developed to identify all potential local traps, given the spatial distribution of capillary entry pressure in the reservoir. The algorithm assumes that the driving force for CO2 migration can be represented as a single value of "critical capillary entry pressure" P_c,entry^crit, such that cells with capillary entry pressure greater/less than P_c,entry^crit act as barriers/potential traps during CO2 migration. At intermediate values of P_c,entry^crit, the barrier regions become more laterally extensive in the reservoir, approaching a percolation threshold while non-barrier regions remain numerous.
The maximum possible extent of LCT thus occurs at P_c,entry^crit near this threshold. Testing predictions of this simple algorithm against full-physics simulations of buoyancy-driven CO2 migration supports the concept of critical capillary entry pressure. However, further research is needed to determine whether a single value of critical capillary entry pressure always applies and how that value can be determined a priori. Simulations of injection into high-resolution (cells 0.3 m on a side) 2D and 3D heterogeneous domains show two characteristic behaviors. At small gravity numbers (vertical flow velocity much less than horizontal flow velocity) the CO2 fills local traps as well as regions that would act as local barriers if CO2 were moving only due to buoyancy. When injection ceases, the CO2 migrates vertically to establish large saturations within local traps and residual saturation elsewhere. At large gravity numbers, the CO2 invades a smaller portion of the perforated interval. Within this smaller swept zone the local barriers are not invaded, but local traps are filled to large saturation during injection and remain so during post-injection gravity-driven migration. The small gravity number behavior is expected in the region within 100 m of a vertical injection well at anticipated rates of injection for commercial GCS. Simulations of leakage scenarios (a through-going region of large permeability imposed in the overlying seal) indicate that LCT persists (i.e. CO2 remains held in a large fraction of the local traps) and the persistence is independent of injection rate during storage. Simulations of leakage for the limiting case of CO2 migrating vertically from an areally extensive emplacement in the lower portion of a reservoir showed similar strong persistence of LCT. This research has two broad implications for GCS.
The first is that LCT can retain a significant fraction of the CO2 stored in a reservoir, above and beyond the residual saturation, if the overlying seal were to fail. Thus frameworks for risk assessment should be extended to account for LCT. The second implication is that compared to pressure-driven flow in reservoirs, CO2 migration and trapping behave in a qualitatively different manner in heterogeneous reservoirs when buoyancy is the dominant driving force for flow. Thus simulations of GCS that neglect capillary heterogeneity will fail to capture important features of the CO2 plume. While commercial reservoir simulation software can account for fine-scale capillary heterogeneity, it has not been designed to work efficiently with such domains, and no simulators can handle fine-scale resolution throughout the reservoir. A possible way to upscale the migration and trapping is to apply an "effective residual saturation" to coarse-scale grids. While the extent of overall immobilization can be correlated in this way, all coarser grids failed to capture the distance traveled by the migrating CO2 at large gravity number. Thus it remains unclear how best to account for LCT in the routine simulation work-flow that will be needed for large-scale GCS. Alternatives meriting investigation include streamline methods, reduced-physics proxies (e.g. particle tracking), and biased invasion percolation algorithms, which are based on precisely the capillary heterogeneity essential for LCT.
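The trap-identification idea can be sketched crudely (this is a simplified stand-in, not the project's algorithm): threshold the capillary entry pressure field into barrier and non-barrier cells, then flag non-barrier cells with no barrier-free path to the top of the domain as potential local traps.

```python
import numpy as np
from collections import deque

def local_trap_cells(pc, threshold):
    # Cells with capillary entry pressure >= threshold act as barriers.
    # Row 0 is the top of the domain; a non-barrier cell that cannot reach
    # the top through non-barrier cells is a potential local trap.
    barrier = pc >= threshold
    reached = np.zeros_like(barrier, dtype=bool)
    q = deque((0, j) for j in range(pc.shape[1]) if not barrier[0, j])
    for cell in q:
        reached[cell] = True
    while q:  # flood fill of the barrier-free region connected to the top
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < pc.shape[0] and 0 <= nj < pc.shape[1]
                    and not barrier[ni, nj] and not reached[ni, nj]):
                reached[ni, nj] = True
                q.append((ni, nj))
    return ~barrier & ~reached
```

Sweeping the threshold in such a sketch reproduces the qualitative behavior described above: near the percolation threshold of the barrier cells, the trapped fraction is maximized.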
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests.
We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization, REAL, performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
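The effect a localization pass aims for, grouping correlated entries into a local neighborhood, can be shown with a deliberately trivial stand-in for the paper's graph-based procedure: reorder rows and columns by decreasing sum, so that a planted heavy bicluster collects in one corner of the matrix.

```python
import numpy as np

def localize(m):
    # Toy localization pass (far simpler than the paper's method): reorder
    # rows and columns by decreasing sum so heavy, correlated entries
    # gather in the top-left corner, where extraction can find them.
    rows = np.argsort(-m.sum(axis=1), kind="stable")
    cols = np.argsort(-m.sum(axis=0), kind="stable")
    return m[np.ix_(rows, cols)], rows, cols
```

A REAL-style extractor would then sample submatrices from the reordered matrix and keep those with high similarity scores; after even this crude reordering, the planted structure is contiguous and easy to hit.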
Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.
Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben
2017-08-02
It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries ($l_0$ norm)/the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high-order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond state-of-the-art approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudraswami, N. G.; Prasad, M. Shyam; Dey, S.
Most dust-sized cosmic particles undergo ablation and chemical alteration during atmospheric entry, which alters their original properties. A comprehensive understanding of this process is essential in order to decipher their pre-entry characteristics. The purpose of the study is to illustrate the process of vaporization of different elements for various entry parameters. The numerical results for particles of various sizes and various zenith angles are treated in order to understand the changes in chemical composition that the particles undergo as they enter the atmosphere. Particles with large sizes (> a few hundred μm) and high entry velocities (>16 km/s) experience less time at peak temperatures compared to those that have lower velocities. Model calculations suggest that particles can survive with an entry velocity of 11 km/s and zenith angles (ZA) of 30°–90°, which accounts for ∼66% of the region where particles retain their identities. Our results suggest that the changes in chemical composition of MgO, SiO2, and FeO are not significant for an entry velocity of 11 km/s and sizes <300 μm, but the changes in these compositions become significant beyond this size, where FeO is lost to a major extent. However, at 16 km/s the changes in MgO, SiO2, and FeO are very intense, which is also reflected in the Mg/Si, Fe/Si, Ca/Si, and Al/Si ratios, even for particles with a size of 100 μm. Beyond 400 μm particle sizes at 16 km/s, most of the major elements are vaporized, leaving the refractory elements, Al and Ca, suspended in the troposphere.
17 CFR 201.510 - Temporary cease-and-desist orders: Application process.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Temporary cease-and-desist orders: Application process. 201.510 Section 201.510 Commodity and Securities Exchanges SECURITIES AND... § 201.510 Temporary cease-and-desist orders: Application process. (a) Procedure. A request for entry of...
17 CFR 201.510 - Temporary cease-and-desist orders: Application process.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Temporary cease-and-desist orders: Application process. 201.510 Section 201.510 Commodity and Securities Exchanges SECURITIES AND... § 201.510 Temporary cease-and-desist orders: Application process. (a) Procedure. A request for entry of...
17 CFR 201.510 - Temporary cease-and-desist orders: Application process.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Temporary cease-and-desist orders: Application process. 201.510 Section 201.510 Commodity and Securities Exchanges SECURITIES AND... § 201.510 Temporary cease-and-desist orders: Application process. (a) Procedure. A request for entry of...
Odukoya, Jonathan A; Popoola, Segun I; Atayero, Aderemi A; Omole, David O; Badejo, Joke A; John, Temitope M; Olowo, Olalekan O
2018-04-01
In Nigerian universities, enrolment into any engineering undergraduate program requires that the minimum entry criteria established by the National Universities Commission (NUC) be satisfied. Candidates seeking admission to study an engineering discipline must have reached a predetermined entry age and met the cut-off marks set for the Senior School Certificate Examination (SSCE), the Unified Tertiary Matriculation Examination (UTME), and the post-UTME screening. However, limited effort has been made to show that these entry requirements eventually guarantee successful academic performance in engineering programs, because the data required for such validation are not readily available. In this data article, a comprehensive dataset for empirical evaluation of entry requirements into engineering undergraduate programs in a Nigerian university is presented and carefully analyzed. A total sample of 1445 undergraduates admitted between 2005 and 2009 to study Chemical Engineering (CHE), Civil Engineering (CVE), Computer Engineering (CEN), Electrical and Electronics Engineering (EEE), Information and Communication Engineering (ICE), Mechanical Engineering (MEE), and Petroleum Engineering (PET) at Covenant University, Nigeria were randomly selected. Entry age, SSCE aggregate, UTME score, Covenant University Scholastic Aptitude Screening (CUSAS) score, and the Cumulative Grade Point Average (CGPA) of the undergraduates were obtained from the Student Records and Academic Affairs unit. In order to facilitate evidence-based evaluation, the robust dataset is made publicly available in a Microsoft Excel spreadsheet file. On a yearly basis, first-order descriptive statistics of the dataset are presented in tables. Box plot representations, frequency distribution plots, and scatter plots of the dataset are provided to enrich its value.
Furthermore, correlation and linear regression analyses are performed to understand the relationship between the entry requirements and the corresponding academic performance in engineering programs. The data provided in this article will help Nigerian universities, the NUC, engineering regulatory bodies, and relevant stakeholders to objectively evaluate and subsequently improve the quality of engineering education in the country.
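The kind of analysis the dataset is meant to support can be sketched with synthetic numbers (none of these values or coefficients come from the article; they are made up for illustration): fit a linear model of CGPA against entry scores by least squares and check the strength of the association.

```python
import numpy as np

# Synthetic stand-ins for the article's variables: hypothetical UTME
# scores, SSCE aggregates, and CGPAs generated from an assumed linear
# relationship plus noise.
rng = np.random.default_rng(1)
n = 200
utme = rng.uniform(180, 320, n)
ssce = rng.uniform(50, 90, n)
cgpa = 0.5 + 0.008 * utme + 0.02 * ssce + rng.normal(0.0, 0.3, n)

# Ordinary least squares: CGPA ~ intercept + UTME + SSCE.
X = np.column_stack([np.ones(n), utme, ssce])
beta, *_ = np.linalg.lstsq(X, cgpa, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, cgpa)[0, 1]  # multiple correlation coefficient
```

With real records, a weak `r` here would be evidence that the entry requirements are poor predictors of later performance, which is exactly the validation question the article raises.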
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional-order Fourier transform (STFRFT) for the identification of targets with complicated movements. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate the STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.
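The baseline the STFRFT generalizes can be sketched with a plain short-time Fourier transform on a chirp (this sketch uses the ordinary FFT; the paper's method replaces it with a fractional-order transform whose order is tuned to each component's chirp rate, which this code does not implement):

```python
import numpy as np

def stft(x, win, hop):
    # Plain short-time Fourier transform magnitude: Hann-windowed frames,
    # FFT per frame. The STFRFT would apply a fractional-order transform
    # here instead of the FFT.
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# A linear-frequency-modulated test signal: instantaneous frequency
# sweeps 50 -> 150 Hz over one second at fs = 1 kHz.
fs = 1000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))
S = stft(x, win=128, hop=32)
```

On such a signal the STFT peak drifts upward frame by frame; matching a fractional-transform order to that drift is what concentrates each micro-Doppler component and improves the curve-fitting accuracy described above.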
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... types and indications that are eligible for entry to and accepted by the Matching System. The Exchange... Exchange with the ability to determine on an order type by order type basis which orders and indications... Rule 43.2 relating to the types of orders handled on the CBOE's Screen Based Trading System (``SBT...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
.... Description of the Proposal The purpose of the proposal is to amend two subsections of Exchange Rule 1080 to allow entry of day limit orders for the proprietary accounts of SQTs and RSQTs. Current Rule 1080 (Phlx....\\4\\ Rule 1080 states that it governs the orders, execution reports and administrative order messages...
A Cross-site Qualitative Study of Physician Order Entry
Ash, Joan S.; Gorman, Paul N.; Lavelle, Mary; Payne, Thomas H.; Massaro, Thomas A.; Frantz, Gerri L.; Lyman, Jason A.
2003-01-01
Objective: To describe the perceptions of diverse professionals involved in computerized physician order entry (POE) at sites where POE has been successfully implemented and to identify differences between teaching and nonteaching hospitals. Design: A multidisciplinary team used observation, focus groups, and interviews with clinical, administrative, and information technology staff to gather data at three sites. Field notes and transcripts were coded using an inductive approach to identify patterns and themes in the data. Measurements: Patterns and themes concerning perceptions of POE were identified. Results: Four high-level themes were identified: (1) organizational issues such as collaboration, pride, culture, power, politics, and control; (2) clinical and professional issues involving adaptation to local practices, preferences, and policies; (3) technical/implementation issues, including usability, time, training and support; and (4) issues related to the organization of information and knowledge, such as system rigidity and integration. Relevant differences between teaching and nonteaching hospitals include extent of collaboration, staff longevity, and organizational missions. Conclusion: An organizational culture characterized by collaboration and trust and an ongoing process that includes active clinician engagement in adaptation of the technology were important elements in successful implementation of physician order entry at the institutions that we studied. PMID:12595408
System Level Aerothermal Testing for the Adaptive Deployable Entry and Placement Technology (ADEPT)
NASA Technical Reports Server (NTRS)
Cassell, Alan; Gorbunov, Sergey; Yount, Bryan; Prabhu, Dinesh; de Jong, Maxim; Boghozian, Tane; Hui, Frank; Chen, Y.-K.; Kruger, Carl; Poteet, Carl;
2016-01-01
The Adaptive Deployable Entry and Placement Technology (ADEPT), a mechanically deployable entry vehicle technology, has been under development at NASA since 2011. As part of the technical maturation of ADEPT, designs capable of delivering small payloads (10 kg) are being considered to rapidly mature sub-1 m deployed diameter designs. The unique capability of ADEPT for small payloads comes from its ability to stow within a slender volume and deploy to achieve a mass-efficient drag surface with a high heat rate capability. The low ballistic coefficient results in entry heating and mechanical loads that can be met by a revolutionary three-dimensionally woven carbon fabric supported by a deployable skeleton structure. This carbon fabric has test-proven capability as both primary structure and payload thermal protection system. In order to rapidly advance ADEPT's technical maturation, the project is developing test methods that enable thermostructural design requirement verification of ADEPT designs at the system level using ground test facilities. Results from these tests are also relevant to larger-class missions and help us define areas of focused component-level testing in order to mature material and thermal response design codes. The ability to ground test sub-1 m diameter ADEPT configurations at or near full scale provides significant value to the rapid maturation of this class of deployable entry vehicles. This paper will summarize arc jet test results, highlight design challenges, provide a summary of lessons learned, and discuss future test approaches based upon this methodology.
Ares I-X Best Estimated Trajectory Analysis and Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
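The core of such a filter can be sketched in a few lines. This is a generic linear Kalman filter for a constant-velocity model with position measurements, the linear kernel that extended/iterated variants repeatedly linearize around, and not the Ares I-X implementation; all constants are illustrative.

```python
import numpy as np

def kalman_1d_track(zs, dt, q=0.01, r=4.0):
    # Constant-velocity state [position, velocity]; scalar position
    # measurements with variance r; white-noise acceleration with
    # spectral density q.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2) * 100.0  # large initial uncertainty
    out = []
    for z in zs:
        # Predict: propagate state and covariance through the dynamics.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

An iterated extended variant re-linearizes the measurement model within each update and can also smooth backward over the whole data set, which is what makes a post-flight best estimated trajectory tighter than the real-time estimate.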
Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.
2011-01-01
The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
Virtual Reality Modelling Simulation of the Re-entry Motion of an Axialsymmetric Vehicle
NASA Astrophysics Data System (ADS)
Guidi, A.; Chu, Q.. P.; Mulder, J. A.
This work originated in the stability analysis of the Delft Aerospace Re-entry Test demonstrator (DART), a small axisymmetric ballistic re-entry vehicle. The dynamic stability evaluation of an axisymmetric re-entry vehicle is chiefly concerned with the behaviour of its angle of attack during the flight through the atmosphere. The variation in the angle of attack is essential for predicting the trajectory of the vehicle and the heating requirements of its structure. The concepts of the total angle of attack and the windward meridian plane are introduced. The position of the centre of pressure can be a crucial factor in the stability of the vehicle. Despite the simplicity of an axisymmetric shape, the re-entry of such a vehicle is characterised by several complex phenomena, which were analysed with the aid of a flight simulator and a 3D virtual reality modelling simulator. Simulations were performed with a 25° angle-of-attack initial condition in order to simulate the response of the vehicle to a disturbance that may occur during the flight, causing a variation in attitude from its trim condition. Certain aspects of re-entry vehicle motion are conveniently described in terms of Euler angles. Using the Euler angles it is possible to generate a three-dimensional animation of the output of the flight simulator. This three-dimensional analysis is of great importance for understanding the complex motions mentioned above. Furthermore, with growing computer power it is possible to generate online visualisation of the simulations. The output of the flight simulator was fed into software written in the Virtual Reality Modelling Language (VRML), which made visualisation of the re-entry motion of the vehicle possible. The animation can run online alongside the flight simulator, and can also easily be published on the internet or sent to other users as very small files.
(The VRML simulation of the re-entry can be seen at the official DART internet site: www.dart-project.com)
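The total angle of attack introduced in the abstract follows, for an axisymmetric body, from the standard relation cos(α_T) = cos(α) cos(β) between the body-axis angle of attack and sideslip. A minimal sketch:

```python
import math

# Total angle of attack alpha_T from body-axis angle of attack (alpha) and
# sideslip (beta), via cos(alpha_T) = cos(alpha) * cos(beta).
def total_angle_of_attack(alpha_deg, beta_deg):
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    return math.degrees(math.acos(math.cos(a) * math.cos(b)))

# The 25 deg initial condition used in the simulations, with zero sideslip
alpha_t = total_angle_of_attack(25.0, 0.0)
```

With zero sideslip the total angle of attack equals the body-axis angle of attack, as expected.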
Teussink, Michel M.; Cense, Barry; van Grinsven, Mark J.J.P.; Klevering, B. Jeroen; Hoyng, Carel B.; Theelen, Thomas
2015-01-01
A growing body of evidence suggests that phototransduction can be studied in the human eye in vivo by imaging of fast intrinsic optical signals (IOS). There is consensus concerning the limiting influence of motion-associated imaging noise on the reproducibility of IOS-measurements, especially in those employing spectral-domain optical coherence tomography (SD-OCT). However, no study to date has conducted a comprehensive analysis of this noise in the context of IOS-imaging. In this study, we discuss biophysical correlates of IOS, and we address motion-associated imaging noise by providing correctional post-processing methods. In order to avoid cross-talk of adjacent IOS of opposite signal polarity, cellular resolution and stability of imaging to the level of individual cones is likely needed. The optical Stiles-Crawford effect can be a source of significant IOS-imaging noise if alignment with the peak of the Stiles-Crawford function cannot be maintained. Therefore, complete head stabilization by implementation of a bite-bar may be critical to maintain a constant pupil entry position of the OCT beam. Due to depth-dependent sensitivity fall-off, heartbeat and breathing associated axial movements can cause tissue reflectivity to vary by 29% over time, although known methods can be implemented to null these effects. Substantial variations in reflectivity can be caused by variable illumination due to changes in the beam pupil entry position and angle, which can be reduced by an adaptive algorithm based on slope-fitting of optical attenuation in the choriocapillary lamina. PMID:26137369
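The slope-fitting idea mentioned at the end can be sketched as a straight-line fit to log-reflectivity versus depth. This is a hedged illustration on synthetic data, not the adaptive algorithm referred to in the abstract:

```python
import numpy as np

# Estimate an optical attenuation slope by fitting a line to
# log(reflectivity) as a function of depth; such a slope can be used to
# renormalise scans acquired at different pupil entry positions.
def attenuation_slope(depth_um, reflectivity):
    slope, _intercept = np.polyfit(depth_um, np.log(reflectivity), 1)
    return slope  # per micrometre; more negative = stronger attenuation

depth = np.linspace(0.0, 200.0, 50)
signal = np.exp(-0.01 * depth)           # synthetic exponential decay
mu = attenuation_slope(depth, signal)    # recovers about -0.01 per um
```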
Li, Dongdong; Chu, Chi Meng; Ng, Wei Chern; Leong, Wai
2014-11-01
This study examines the risk factors of re-entry for 1,750 child protection cases in Singapore using a cumulative ecological-transactional risk model. Using administrative data, the present study found that the overall percentage of Child Protection Service (CPS) re-entry in Singapore is 10.5% based on 1,750 cases, with a range from 3.9% (within 1 year) to 16.5% (within 8 years after case closure). One quarter of the re-entry cases were observed to occur within 9 months from case closure. Seventeen risk factors, as identified from the extant literature, were tested for their utility to predict CPS re-entry in this study using a series of Cox regression analyses. A final list of seven risk factors (i.e., children's age at entry, case type, case closure result, duration of case, household income, family size, and mother's employment status) was used to create a cumulative risk score. The results supported the cumulative risk model in that higher risk score is related to higher risk of CPS re-entry. Understanding the prevalence of CPS re-entry and the risk factors associated with re-entry is the key to informing practice and policy in a culturally relevant way. The results from this study could then be used to facilitate critical case management decisions in order to enhance positive outcomes of families and children in Singapore's care system. Copyright © 2014 Elsevier Ltd. All rights reserved.
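The cumulative risk score described above can be illustrated as a simple count of risk factors present for a case. The factor names and dichotomizations below paraphrase the seven factors listed and are assumptions for illustration, not the study's coding:

```python
# Seven risk factors from the final model; each contributes one point when
# present. How each factor is dichotomised here is an assumption.
RISK_FACTORS = ["young_age_at_entry", "high_risk_case_type",
                "adverse_closure_result", "long_case_duration",
                "low_household_income", "large_family_size",
                "mother_unemployed"]

def cumulative_risk_score(case):
    """Count how many of the dichotomised risk factors a case exhibits."""
    return sum(1 for factor in RISK_FACTORS if case.get(factor, False))

score = cumulative_risk_score({"young_age_at_entry": True,
                               "low_household_income": True})
```

In the study's model, a higher score of this kind was associated with a higher hazard of CPS re-entry in the Cox regression.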
Entry, Descent, and Landing technological barriers and crewed MARS vehicle performance analysis
NASA Astrophysics Data System (ADS)
Subrahmanyam, Prabhakar; Rasky, Daniel
2017-05-01
Mars has historically been explored only by robotic craft, but a crewed mission encompasses several new engineering challenges: high ballistic coefficient entry, hypersonic decelerators, guided entry for reaching intended destinations within acceptable margins for error in the landing ellipse, and payload mass are all critical factors for evaluation. A comprehensive EDL parametric analysis has been conducted in support of a high-mass landing architecture by evaluating three types of vehicles, a 70° sphere-cone, an Ellipsled, and the SpaceX hybrid architecture called Red Dragon, as potential candidate options for crewed entry vehicles. Aerocapture into a Martian orbit of about 400 km and subsequent entry-from-orbit scenarios were investigated at velocities of 6.75 km/s and 4 km/s, respectively. A study of the aerocapture corridor over a range of entry velocities (6-9 km/s) suggests that a hypersonic L/D of 0.3 is sufficient for Martian aerocapture. Parametric studies conducted by varying aeroshell diameters from 10 m to 15 m for several entry masses up to 150 mt are summarized, and the results reveal that vehicles with entry masses in the range of about 40-80 mt are capable of delivering cargo with a mass on the order of 5-20 mt. For vehicles with an entry mass of 20 mt to 80 mt, probabilistic Monte Carlo analyses of 5000 cases for each vehicle were run to determine the final landing ellipse and to quantify the statistical uncertainties associated with the trajectory and attitude conditions during atmospheric entry. Strategies and current technological challenges for a human-rated Entry, Descent, and Landing to the Martian surface are presented in this study.
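A key driver named above, the ballistic coefficient β = m/(C_D·A), is straightforward to sketch. The drag coefficient used below is a typical hypersonic value for a 70° sphere-cone aeroshell, assumed here for illustration rather than taken from the study:

```python
import math

# Ballistic coefficient beta = m / (Cd * A) in kg/m^2 for a circular
# aeroshell; higher beta means the vehicle decelerates deeper in the
# thin Martian atmosphere, squeezing the EDL timeline.
def ballistic_coefficient(mass_kg, cd, diameter_m):
    area_m2 = math.pi * diameter_m ** 2 / 4.0
    return mass_kg / (cd * area_m2)

# 80 mt entry mass on a 15 m aeroshell with an assumed Cd of 1.65
beta = ballistic_coefficient(80_000.0, 1.65, 15.0)
```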
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.
2013-03-01
This paper is concerned with spectral Galerkin algorithms for solving high even-order two point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with some other methods are made.
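The basis functions in question satisfy the shared Chebyshev recurrence p_{k+1}(x) = 2x p_k(x) - p_{k-1}(x), differing only in p_1: V_1 = 2x - 1 for the third kind and W_1 = 2x + 1 for the fourth kind. A sketch generating their coefficient arrays (the compact Galerkin combinations constructed in the paper are not reproduced here):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficient arrays (lowest order first) of Chebyshev polynomials of the
# third (V_k) and fourth (W_k) kinds up to degree n, via the shared
# recurrence p_{k+1} = 2x * p_k - p_{k-1}.
def cheb_third_fourth(n):
    V = [np.array([1.0]), np.array([-1.0, 2.0])]  # V0 = 1, V1 = 2x - 1
    W = [np.array([1.0]), np.array([1.0, 2.0])]   # W0 = 1, W1 = 2x + 1
    two_x = np.array([0.0, 2.0])
    for k in range(1, n):
        V.append(P.polysub(P.polymul(two_x, V[k]), V[k - 1]))
        W.append(P.polysub(P.polymul(two_x, W[k]), W[k - 1]))
    return V, W

V, W = cheb_third_fourth(3)  # e.g. V2 = 4x^2 - 2x - 1, W2 = 4x^2 + 2x - 1
```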
Potent D-peptide inhibitors of HIV-1 entry
Welch, Brett D.; VanDemark, Andrew P.; Heroux, Annie; Hill, Christopher P.; Kay, Michael S.
2007-01-01
During HIV-1 entry, the highly conserved gp41 N-trimer pocket region becomes transiently exposed and vulnerable to inhibition. Using mirror-image phage display and structure-assisted design, we have discovered protease-resistant D-amino acid peptides (D-peptides) that bind the N-trimer pocket with high affinity and potently inhibit viral entry. We also report high-resolution crystal structures of two of these D-peptides in complex with a pocket mimic that suggest sources of their high potency. A trimeric version of one of these peptides is the most potent pocket-specific entry inhibitor yet reported by three orders of magnitude (IC50 = 250 pM). These results are the first demonstration that D-peptides can form specific and high-affinity interactions with natural protein targets and strengthen their promise as therapeutic agents. The D-peptides described here address limitations associated with current L-peptide entry inhibitors and are promising leads for the prevention and treatment of HIV/AIDS. PMID:17942675
Tünnermann, Jan; Petersen, Anders; Scharlau, Ingrid
2015-03-02
Selective visual attention improves performance in many tasks. Among others, it leads to "prior entry": earlier perception of an attended compared to an unattended stimulus. Whether this phenomenon is purely based on an increase in the processing rate of the attended stimulus, or whether a decrease in the processing rate of the unattended stimulus also contributes to the effect, has up to now been unanswered. Here we describe a novel approach to this question based on Bundesen's Theory of Visual Attention, which we use to overcome the limitations of earlier prior-entry assessment with temporal order judgments (TOJs), which only allow relative statements regarding the processing speed of attended and unattended stimuli. Prevalent models of prior entry in TOJs either indirectly predict a pure acceleration or cannot model the difference between acceleration and deceleration. In a paradigm that combines a letter-identification task with TOJs, we show that acceleration of the attended and deceleration of the unattended stimuli indeed conjointly cause prior entry. © 2015 ARVO.
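To make the rate question concrete, a toy race model with exponential encoding times (TVA-style rate parameters; all values assumed, not the authors' fits) shows how processing rates map onto "perceived first" probabilities:

```python
import random

# Monte-Carlo estimate of P(attended stimulus is perceived first) when both
# stimuli race with exponentially distributed encoding times at rates
# v_att and v_un (items/ms). soa_ms > 0 gives the attended stimulus a
# head start of soa_ms before the unattended one appears.
def p_attended_first(v_att, v_un, soa_ms=0.0, n=100_000, seed=7):
    rng = random.Random(seed)
    wins = sum(rng.expovariate(v_att) < soa_ms + rng.expovariate(v_un)
               for _ in range(n))
    return wins / n

# At SOA = 0 the analytic value is v_att / (v_att + v_un)
p = p_attended_first(v_att=0.06, v_un=0.03)
```

Raising v_att (acceleration) and lowering v_un (deceleration) shift this probability in the same direction, which is why TOJs alone cannot separate the two; the letter-identification task in the study pins down the absolute rates.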
NASA Technical Reports Server (NTRS)
Stackpoole, Mairead
2014-01-01
NASA's future robotic missions to Venus and the outer planets, namely Saturn, Uranus, and Neptune, involve extremely high entry conditions that exceed the capabilities of current mid-density ablators (PICA or Avcoat). Therefore mission planners assume the use of a fully dense carbon phenolic heat shield similar to what was flown on Pioneer Venus and Galileo. Carbon phenolic (CP) is a robust Thermal Protection System (TPS); however, its high density and thermal conductivity constrain mission planners to steep entries, high heat fluxes and pressures, and short entry durations in order for CP to be feasible from a mass perspective. These high entry conditions pose certification challenges in existing ground-based test facilities. In 2012 the Game Changing Development Program in NASA's Space Technology Mission Directorate funded NASA ARC to investigate the feasibility of a Woven Thermal Protection System (WTPS) to meet the needs of NASA's most challenging entry missions. This presentation will summarize the maturation of the WTPS project.
Equivalent Indels – Ambiguous Functional Classes and Redundancy in Databases
Assmus, Jens; Kleffe, Jürgen; Schmitt, Armin O.; Brockmann, Gudrun A.
2013-01-01
There is considerable interest in studying sequence variations. However, while the positions of substitutions are uniquely identifiable by sequence alignment, the location of insertions and deletions still poses problems. Each insertion and deletion causes a change of sequence. Yet, due to low-complexity or repetitive sequence structures, the same indel can sometimes be annotated in different ways. Two indels which differ in allele sequence and position can be one and the same, i.e. the alternative sequence of the whole chromosome is identical in both cases, and therefore the two indels are biologically equivalent. In such a case, it is impossible to identify the exact position of an indel merely based on sequence alignment. Thus, variation entries in a mutation database are not necessarily uniquely defined. We prove the existence of a contiguous region around an indel in which all deletions of the same length are biologically identical. Databases often show only one of several possible locations for a given variation. Furthermore, different database entries can represent equivalent variation events. We identified 1,045,590 such problematic entries of insertions and deletions out of 5,860,408 indel entries in the current human database of Ensembl. Equivalent indels are found in sequence regions of different functions such as exons, introns, or 5' and 3' UTRs. One and the same variation can be assigned to several different functional classifications, of which only one is correct. We implemented an algorithm that determines for each indel database entry its complete set of equivalent indels, which is uniquely characterized by the indel itself and a given interval of the reference sequence. PMID:23658777
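The core equivalence can be checked brute-force: two same-length deletions are biologically identical exactly when the resulting alternative sequences are equal. An illustrative sketch (the paper's algorithm works on an interval around the indel rather than rebuilding the whole sequence):

```python
# All start positions p such that deleting `length` bases at p from `ref`
# yields the same alternative sequence as deleting them at `pos`.
def equivalent_deletions(ref, pos, length):
    alt = ref[:pos] + ref[pos + length:]
    return [p for p in range(len(ref) - length + 1)
            if ref[:p] + ref[p + length:] == alt]

# In a short tandem repeat, many placements of one deletion coincide
positions = equivalent_deletions("ACACACG", 0, 2)  # deleting "AC" at 0
```

On this tandem repeat the equivalent start positions form a contiguous block, matching the contiguous-region result stated in the abstract.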
Post-Flight EDL Entry Guidance Performance of the 2011 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
Mendeck, Gavin F.; McGrew, Lynn Craig
2012-01-01
The 2011 Mars Science Laboratory was the first Mars mission to successfully attempt a guided entry, which safely delivered the rover to a final position approximately 2 km from its target within a touchdown ellipse of 19.1 km x 6.9 km. The Entry Terminal Point Controller guidance algorithm is derived from the final-phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control the range flown. For application to Mars landers, which must make use of the tenuous Martian atmosphere, it is critical to balance the lift of the vehicle to minimize the range error while still ensuring a safe deploy altitude. An overview of the process to generate optimized guidance settings is presented, discussing improvements made over the last nine years. Key dispersions driving deploy ellipse and altitude performance are identified. Performance sensitivities, including attitude initialization error and the velocity of transition from range control to heading alignment, are presented. Just prior to the entry and landing of MSL in August 2012, the EDL team examined fine tuning of the reference trajectory for the selected landing site, analyzed whether adjustment of the bank-reversal deadbands was necessary, verified that the heading alignment velocity trigger was consistent with other parameters to balance the EDL risks, and reviewed the vertical L/D command limits. This paper details a preliminary post-flight assessment of the telemetry and trajectory reconstruction that is being performed, and updates the information presented in the earlier paper, Entry Guidance for the 2011 Mars Science Laboratory Mission (AIAA Atmospheric Flight Mechanics Conference; 8-11 Aug. 2011; Portland, OR; United States).
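Bank-angle modulation splits the fixed vehicle L/D between a vertical component (range and altitude control) and crossrange. A hedged sketch of the basic relation cos(σ) = (L/D)_vertical / (L/D)_vehicle; the 0.24 vehicle L/D below is an assumed, MSL-like value:

```python
import math

# Bank angle sigma that realises a commanded vertical lift-to-drag ratio,
# cos(sigma) = (L/D)_vertical / (L/D)_vehicle, clamped to what the
# vehicle can achieve.
def bank_angle_deg(ld_vertical_cmd, ld_vehicle=0.24):
    ratio = max(-1.0, min(1.0, ld_vertical_cmd / ld_vehicle))
    return math.degrees(math.acos(ratio))

full_lift_up = bank_angle_deg(0.24)   # 0 deg: all lift vertical
neutral = bank_angle_deg(0.0)         # 90 deg: no vertical lift component
```

The "vertical L/D command limits" mentioned in the abstract bound the commanded ratio so the guidance always retains some margin in both directions.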
Analytic Development of a Reference Profile for the First Entry in a Skip Atmospheric Entry
NASA Technical Reports Server (NTRS)
Garcia-Llama, Eduardo
2010-01-01
This note shows that a feasible reference drag profile for the first entry portion of a skip entry can be generated as a polynomial expression of the velocity. The coefficients of that polynomial are found through the resolution of a system composed of m + 1 equations, where m is the degree of the drag polynomial. It has been shown that a minimum of five equations (m = 4) are required to establish the range and the initial and final conditions on velocity and flight path angle. It has been shown that at least one constraint on the trajectory can be imposed through the addition of one extra equation in the system, which must be accompanied by the increase in the degree of the drag polynomial. In order to simplify the resolution of the system of equations, the drag was considered as being a probability density function of the velocity, with the velocity as a distribution function of the drag. Combining this notion with the introduction of empirically derived constants, it has been shown that the system of equations required to generate the drag profile can be successfully reduced to a system of linear algebraic equations. For completeness, the resulting drag profiles have been flown using the feedback linearization method of differential geometric control as a guidance law with the error dynamics of a second order homogeneous equation in the form of a damped oscillator. Satisfactory results were achieved when the gains in the error dynamics were changed at a certain point along the trajectory that is dependent on the velocity and the curvature of the drag as a function of the velocity. Future work should study the capacity to update the drag profile in flight when dispersions are introduced. Also, future studies should attempt to link the first entry, as presented and controlled in this note, with a more standard control concept for the second entry, such as the Apollo entry guidance, to try to assess the overall skip entry performance. 
A guidance law that includes an integral feedback term, as is the case in the actual Space Shuttle entry guidance and as is proposed in Ref. 29, could be tried in future studies to assess whether its use results in an improvement of the tracking performance, and to evaluate the design needs when determining the control gains.
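The construction of the drag polynomial from m + 1 linear conditions can be sketched numerically. All boundary values below are assumed for illustration, and the note's actual fifth condition (the range constraint, linearized via the probability-density device) is replaced here by a generic integral condition so the system stays linear:

```python
import numpy as np

# Degree-4 drag polynomial D(V) = c0 + c1*V + ... + c4*V^4 pinned down by
# five linear conditions: endpoint values, endpoint slopes, and one
# integral (range-like) condition. Solve the 5x5 linear system A c = b.
m = 4
V0, Vf = 11.0, 7.6          # entry / exit velocities, km/s (assumed)
D0, Df = 4.0, 1.5           # drag at the endpoints (assumed units)
dD0, dDf = 1.2, 0.8         # endpoint slopes dD/dV (assumed)
I_target = 25.0             # target value of the integral condition (assumed)

value_row = lambda V: [V ** k for k in range(m + 1)]
slope_row = lambda V: [k * V ** (k - 1) for k in range(m + 1)]
integral_row = [(V0 ** (k + 1) - Vf ** (k + 1)) / (k + 1) for k in range(m + 1)]

A = np.array([value_row(V0), value_row(Vf),
              slope_row(V0), slope_row(Vf), integral_row])
b = np.array([D0, Df, dD0, dDf, I_target])
c = np.linalg.solve(A, b)               # polynomial coefficients, low to high
D = np.polynomial.Polynomial(c)         # the reference drag profile D(V)
```

Adding a trajectory constraint, as described above, would append one more row to A (and raise m by one) while keeping the system linear.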
Multiscale high-order/low-order (HOLO) algorithms and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; Chen, Guangye; Knoll, Dana Alan
2016-11-11
Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Anderson, I M; Bezdek, J C
1984-01-01
This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary-coordinate or chain-coded formats; its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. Further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection, and we compare its performance to several well-established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
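The central quantity is easy to compute: for 2x2 scatter matrices the commutator is antisymmetric, leaving a single independent off-diagonal entry. A sketch on assumed toy arcs, not the paper's data:

```python
import numpy as np

# Scatter matrix (sample covariance, up to normalisation) of a planar arc.
def scatter(points):
    X = np.asarray(points, dtype=float)
    X = X - X.mean(axis=0)
    return X.T @ X / len(X)

# The commutator [S1, S2] = S1 S2 - S2 S1 of two 2x2 symmetric matrices is
# antisymmetric; its off-diagonal entry reflects the angle between the
# eigenspaces of the two scatter matrices.
def commutator_entry(arc1, arc2):
    S1, S2 = scatter(arc1), scatter(arc2)
    C = S1 @ S2 - S2 @ S1
    return C[0, 1]

arc_x = [(i, 0.0) for i in range(5)]           # arc along the x-axis
arc_diag = [(i, float(i)) for i in range(5)]   # arc along the 45-degree line
entry = commutator_entry(arc_x, arc_diag)      # nonzero: directions differ
```

Two arcs with the same principal direction give a zero entry, which is why a large cumulative value flags a vertex in the CVD algorithm.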
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
... Change The Exchange proposes to clarify the definition of ``Directed Order'' in Rule 1080(l)(i)(A) by... ``Order Flow Provider'' is proposed to be made in Rule 1080(l)(i)(B). Second, amendments to Rule 1080(b)(i... opening orders to the list of eligible orders in Rule 1080(b)(i), as order types eligible for entry into...
Froňka, A; Jílek, K; Moučka, L; Brabec, M
2011-05-01
Two new single-family houses, identified as having insufficient radon barrier efficiency, were selected for further examination. A complex set of radon diagnosis procedures was applied in order to localise and quantify radon entry pathways into the indoor environment. Independent assessment of the radon entry rate and the air exchange rate was carried out using continuous indoor radon measurement and a specific tracer gas application. Simultaneous assessment of these key determining factors has turned out to be crucial in the context of identifying the major causes of elevated indoor radon concentration.
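The two independently measured quantities are linked by the standard single-zone mass balance; a hedged sketch with assumed numbers, not the study's data:

```python
# Single-zone radon mass balance dC/dt = S/V - (n + lambda_Rn) * C.
# At steady state the entry rate S follows from the measured indoor
# concentration C_ss and the air-exchange rate n.
LAMBDA_RN = 7.56e-3  # radon-222 decay constant, 1/h

def radon_entry_rate(c_ss_bq_m3, n_per_h, volume_m3):
    """Radon entry rate in Bq/h from steady-state concentration (Bq/m^3),
    air-exchange rate (1/h), and zone volume (m^3)."""
    return c_ss_bq_m3 * (n_per_h + LAMBDA_RN) * volume_m3

S = radon_entry_rate(c_ss_bq_m3=300.0, n_per_h=0.3, volume_m3=250.0)
```

This is why measuring the air-exchange rate independently (e.g. with a tracer gas) matters: the same indoor concentration is consistent with very different entry rates.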
Entry, Descent and Landing Systems Analysis Study: Phase 1 Report
NASA Technical Reports Server (NTRS)
DwyerCianciolo, Alicia M.; Davis, Jody L.; Komar, David R.; Munk, Michelle M.; Samareh, Jamshid A.; Powell, Richard W.; Shidner, Jeremy D.; Stanley, Douglas O.; Wilhite, Alan W.; Kinney, David J.;
2010-01-01
NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic and human-scale missions. This paper summarizes the motivation, approach and top-level results from Year 1 of the study, which focused on landing 10-50 mt on Mars, but also included a trade study of the best advanced parachute design for increasing the landed payloads within the EDL architecture of the Mars Science Laboratory (MSL) mission.
A capacitated vehicle routing problem with order available time in e-commerce industry
NASA Astrophysics Data System (ADS)
Liu, Ling; Li, Kunpeng; Liu, Zhixue
2017-03-01
In this article, a variant of the well-known capacitated vehicle routing problem (CVRP) called the capacitated vehicle routing problem with order available time (CVRPOAT) is considered, which is observed in the operations of the current e-commerce industry. In this problem, the orders are not available for delivery at the beginning of the planning period. CVRPOAT takes all the assumptions of CVRP, except the order available time, which is determined by the precedent order picking and packing stage in the warehouse of the online grocer. The objective is to minimize the sum of vehicle completion times. An efficient tabu search algorithm is presented to tackle the problem. Moreover, a Lagrangian relaxation algorithm is developed to obtain the lower bounds of reasonably sized problems. Based on the test instances derived from benchmark data, the proposed tabu search algorithm is compared with a published related genetic algorithm, as well as the derived lower bounds. Also, the tabu search algorithm is compared with the current operation strategy of the online grocer. Computational results indicate that the gap between the lower bounds and the results of the tabu search algorithm is small and the tabu search algorithm is superior to the genetic algorithm. Moreover, the CVRPOAT formulation together with the tabu search algorithm performs much better than the current operation strategy of the online grocer.
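The objective described above, the sum of vehicle completion times under order available times, can be sketched directly. The data below form an assumed toy instance, not one from the article:

```python
# Completion time of one route in CVRPOAT: the vehicle cannot leave the
# depot before the latest available (picked-and-packed) time of the orders
# it carries; its completion time is when the last customer is served.
def route_completion_time(route, available, travel):
    t = max(available[o] for o in route)   # departure time from the depot
    prev = 0                               # node 0 is the depot
    for o in route:
        t += travel[prev][o]
        prev = o
    return t

def total_completion_time(routes, available, travel):
    return sum(route_completion_time(r, available, travel) for r in routes)

# Toy instance: 2 vehicles, 4 orders; travel is a symmetric time matrix
# with the depot at index 0 and orders at indices 1-4.
travel = [[0, 4, 6, 5, 8],
          [4, 0, 3, 7, 9],
          [6, 3, 0, 4, 7],
          [5, 7, 4, 0, 3],
          [8, 9, 7, 3, 0]]
available = {1: 2, 2: 0, 3: 5, 4: 1}
obj = total_completion_time([[1, 2], [3, 4]], available, travel)
```

The coupling to the upstream picking stage is visible in the `max(available...)` term: batching a late order onto a route delays every customer on that route, which is what the tabu search trades off.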
Reduction in chemotherapy order errors with computerized physician order entry.
Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J
2014-01-01
To measure the number and type of errors associated with chemotherapy order composition under three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets, those requiring significant rework for clarification, was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) and to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) and to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially, first by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
Memory-Scalable GPU Spatial Hierarchy Construction.
Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D
2011-04-01
Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
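The PBFS order can be sketched on the CPU: expand like BFS, but cap the active frontier and defer overflow children to a later iteration, bounding peak memory. This is an illustrative sketch, not the paper's GPU implementation with its node-splitting and allocation strategies:

```python
from collections import deque

# Visit a tree in partial-BFS (PBFS) order: like BFS, but keep at most
# `max_frontier` nodes active; children beyond the cap are deferred and
# become the frontier of the next iteration, bounding peak memory.
def pbfs_order(children, root, max_frontier):
    order, frontier, deferred = [], deque([root]), deque()
    while frontier or deferred:
        if not frontier:                  # start the next iteration
            frontier, deferred = deferred, deque()
        node = frontier.popleft()
        order.append(node)
        for c in children.get(node, ()):
            target = frontier if len(frontier) < max_frontier else deferred
            target.append(c)
    return order

tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
order = pbfs_order(tree, 0, max_frontier=1)  # visits all 7 nodes
```

With `max_frontier` set very large this degenerates to plain BFS; shrinking it trades parallel breadth (on a GPU, occupancy) for a smaller peak working set, which is the knob the paper tunes.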
Aeronautical engineering, a special bibliography, September 1971 (supplement 10)
NASA Technical Reports Server (NTRS)
1971-01-01
This supplement to Aeronautical Engineering-A Special Bibliography (NASA SP-7037) lists 413 reports, journal articles, and other documents originally announced in September 1971 in Scientific and Technical Aerospace Reports (STAR) or in International Aerospace Abstracts (IAA). The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles. Each entry in the bibliography consists of a standard bibliographic citation accompanied by an abstract. The listing of the entries is arranged in two major sections, IAA Entries and STAR Entries in that order. The citations and abstracts are reproduced exactly as they appeared originally in IAA or STAR, including the original accession numbers from the respective announcement journals.
Automated quantitative assessment of proteins' biological function in protein knowledge bases.
Mayr, Gabriele; Lepperdinger, Günter; Lackner, Peter
2008-01-01
Primary protein sequence data are archived in databases together with information regarding corresponding biological functions. In this respect, UniProt/Swiss-Prot is currently the most comprehensive collection, and it is routinely cross-examined when trying to unravel the biological role of hypothetical proteins. Bioscientists frequently extract single entries and further evaluate those on a subjective basis. In the absence of a standardized procedure for scoring the existing knowledge regarding individual proteins, we here report on a computer-assisted method, which we applied to score the present knowledge about any given Swiss-Prot entry. Applying this quantitative score allows the comparison of proteins not only with respect to their sequence but also with respect to the extent of their functional characterization. pfs analysis may also be applied for quality control of individual entries or for database management in order to rank entry listings.
Aiello, Lloyd Paul; Beck, Roy W; Bressler, Neil M.; Browning, David J.; Chalam, KV; Davis, Matthew; Ferris, Frederick L; Glassman, Adam; Maturi, Raj; Stockdale, Cynthia R.; Topping, Trexler
2011-01-01
Objective: To describe the underlying principles used to develop a web-based algorithm that incorporated intravitreal anti-vascular endothelial growth factor (anti-VEGF) treatment for diabetic macular edema (DME) in a Diabetic Retinopathy Clinical Research Network (DRCR.net) randomized clinical trial. Design: Discussion of a treatment protocol for DME. Participants: Subjects with vision loss from DME involving the center of the macula. Methods: The DRCR.net created an algorithm incorporating anti-VEGF injections in a comparative effectiveness randomized clinical trial evaluating intravitreal ranibizumab with prompt or deferred (≥24 weeks) focal/grid laser in eyes with vision loss from center-involved DME. Results confirmed that intravitreal ranibizumab with prompt or deferred laser provides superior visual acuity outcomes compared with prompt laser alone through at least 2 years. Duplication of this algorithm may not be practical for clinical practice. In order to share their opinion on how ophthalmologists might emulate the study protocol, participating DRCR.net investigators developed guidelines based on the algorithm's underlying rationale. Main Outcome Measures: Clinical guidelines based on a DRCR.net protocol. Results: The treatment protocol required real-time feedback from a web-based data entry system for intravitreal injections, focal/grid laser, and follow-up intervals. Guidance from this system indicated whether treatment was required or given at investigator discretion and when follow-up should be scheduled. Clinical treatment guidelines, based on the underlying clinical rationale of the DRCR.net protocol, include repeating treatment monthly as long as there is improvement in edema compared with the previous month, or until the retina is no longer thickened. If thickening recurs or worsens after discontinuing treatment, treatment is resumed.
Conclusions Duplication of the approach used in the DRCR.net randomized clinical trial to treat DME involving the center of the macula with intravitreal ranibizumab may not be practical in clinical practice, but likely can be emulated based on an understanding of the underlying rationale for the study protocol. Inherent differences between a web-based treatment algorithm and a clinical approach may lead to differences in outcomes that are impossible to predict. The closer the clinical approach is to the algorithm used in the study, the more likely the outcomes will be similar to those published. PMID:22136692
Vandersickel, Nele; Bossu, Alexandre; De Neve, Jan; Dunnink, Albert; Meijborg, Veronique M F; van der Heyden, Marcel A G; Beekman, Jet D M; De Bakker, Jacques M T; Vos, Marc A; Panfilov, Alexander V
2017-12-26
This study investigated the arrhythmogenic mechanisms responsible for torsade de pointes (TdP) in the chronic atrioventricular block dog model, known for its high susceptibility to TdP. The mechanism of TdP arrhythmias has been under debate for many years. Focal activity as well as re-entry have both been mentioned in the initiation and the perpetuation of TdP. In 5 TdP-sensitive chronic atrioventricular block dogs, 56 needle electrodes were evenly distributed transmurally to record 240 unipolar local electrograms simultaneously. Nonterminating (NT) episodes were defibrillated after 10 s. Software was developed to automatically detect activation times and to create 3-dimensional visualizations of the arrhythmia. For each episode of ectopic activity (ranging from 2 beats to NT episodes), a novel methodology was created to construct directed graphs of the wave propagation and detect re-entry loops by using an iterative depth-first-search algorithm. Depending on the TdP definition (number of consecutive ectopic beats), we analyzed 29 to 54 TdP: 29 were longer than 5 beats. In the total group, 9 were NT and 45 were self-terminating. Initiation and termination were always based on focal activity. Re-entry becomes more important in the longer-lasting episodes (>14 beats), whereas in all NT TdP, re-entry was the last active mechanism. During re-entry, excitation fronts were constantly present in the heart, while during focal TdP there was always a silent interval between 2 consecutive waves (142 ms) during which excitation fronts were absent. Interbeat intervals were significantly smaller for re-entry episodes (220 ms) than for focal episodes (310 ms). Electrograms recorded in particular areas during NT TdP episodes had significantly smaller amplitudes (0.38) than during focal episodes (0.59). TdP can be driven by focal activity as well as by re-entry, depending on the duration of the episode. 
NT episodes are always maintained by re-entry, which can be identified in local unipolar electrograms by shorter interbeat intervals and smaller deflection amplitude. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
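The re-entry loop detection described above, building a directed graph of wave propagation and searching it with an iterative depth-first search, can be sketched as follows. This is a generic iterative cycle finder over an assumed input format (a plain successor dictionary), not the authors' implementation; constructing the graph from the 240 unipolar electrograms is not shown.

```python
def find_cycle(graph):
    """graph: {node: [successor, ...]}. Returns one directed cycle (a re-entry
    loop) as a list of nodes, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {v: WHITE for v in graph}
    for root in graph:
        if color[root] != WHITE:
            continue
        stack = [(root, iter(graph.get(root, ())))]   # explicit DFS stack
        path = [root]                                  # current path of GRAY nodes
        color[root] = GRAY
        while stack:
            node, successors = stack[-1]
            advanced = False
            for succ in successors:
                if color.get(succ, WHITE) == GRAY:     # back edge: cycle found
                    return path[path.index(succ):]
                if color.get(succ, WHITE) == WHITE:    # descend into new node
                    color[succ] = GRAY
                    path.append(succ)
                    stack.append((succ, iter(graph.get(succ, ()))))
                    advanced = True
                    break
            if not advanced:                           # all successors explored
                color[node] = BLACK
                path.pop()
                stack.pop()
    return None
```

For example, a propagation graph in which activation "c" re-excites "a" yields the loop a, b, c, while a purely focal (acyclic) pattern yields None.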
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and a Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
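The classical quadratic-loss penalty scheme the abstract builds on can be sketched as below. This is an illustrative first-order version (Newton on each penalized subproblem with warm starts as the penalty parameter shrinks), not the paper's high-order extrapolation algorithm; the test problem (f(x) = x1^2 + x2^2 subject to x1 + x2 = 1, with solution (0.5, 0.5)) is an assumption for the demo.

```python
import numpy as np

# Penalized objective: P(x; mu) = f(x) + c(x)^2 / (2*mu), with
# f(x) = x1^2 + x2^2 and the equality constraint c(x) = x1 + x2 - 1.

def grad_penalty(x, mu):
    # gradient of P(x; mu)
    c = x[0] + x[1] - 1.0
    return 2.0 * x + (c / mu) * np.array([1.0, 1.0])

def hess_penalty(x, mu):
    # Hessian of P(x; mu); constant for this quadratic example
    return 2.0 * np.eye(2) + np.outer([1.0, 1.0], [1.0, 1.0]) / mu

def penalty_newton(x0, mus=(1.0, 0.1, 0.01, 1e-4), newton_steps=20):
    """Solve a sequence of penalized subproblems, warm-starting each one
    from the previous solution as mu -> 0."""
    x = np.asarray(x0, dtype=float)
    for mu in mus:
        for _ in range(newton_steps):          # Newton correction steps
            g = grad_penalty(x, mu)
            if np.linalg.norm(g) < 1e-12:
                break
            x = x - np.linalg.solve(hess_penalty(x, mu), g)
    return x
```

The penalized minimizers trace the differentiable trajectory x(mu) toward the constrained solution; the paper's contribution is extrapolating along that trajectory to higher order rather than merely warm-starting.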
Optimization of entry-vehicle shapes during conceptual design
NASA Astrophysics Data System (ADS)
Dirkx, D.; Mooij, E.
2014-01-01
During the conceptual design of a re-entry vehicle, the vehicle shape and geometry can be varied and its impact on performance can be evaluated. In this study, the shape optimization of two classes of vehicles has been studied: a capsule and a winged vehicle. Their aerodynamic characteristics were analyzed using local-inclination methods, automatically selected per vehicle segment. Entry trajectories down to Mach 3 were calculated assuming trimmed conditions. For the winged vehicle, which has both a body flap and elevons, a guidance algorithm to track a reference heat-rate was used. Multi-objective particle swarm optimization was used to optimize the shape using objectives related to mass, volume and range. The optimizations show a large variation in vehicle performance over the explored parameter space. Areas of very strong non-linearity are observed in the direct neighborhood of the two-dimensional Pareto fronts. This indicates the need for robust exploration of the influence of vehicle shapes on system performance during engineering trade-offs, which are performed during conceptual design. A number of important aspects of the influence of vehicle behavior on the Pareto fronts are observed and discussed. There is a nearly complete convergence to narrow-wing solutions for the winged vehicle. Also, it is found that imposing pitch-stability for the winged vehicle at all angles of attack results in vehicle shapes which require upward control surface deflections during the majority of the entry.
Orion Entry, Descent, and Landing Simulation
NASA Technical Reports Server (NTRS)
Hoelscher, Brian R.
2007-01-01
The Orion Entry, Descent, and Landing simulation was created over the past two years to serve as the primary Crew Exploration Vehicle guidance, navigation, and control (GN&C) design and analysis tool at the National Aeronautics and Space Administration (NASA). The Advanced NASA Technology Architecture for Exploration Studies (ANTARES) simulation is a six degree-of-freedom tool with a unique design architecture which has a high level of flexibility. This paper describes the decision history and motivations that guided the creation of this simulation tool. The capabilities of the models within ANTARES are presented in detail. Special attention is given to features of the highly flexible GN&C architecture and the details of the implemented GN&C algorithms. ANTARES provides a foundation simulation for the Orion Project that has already been successfully used for requirements analysis, system definition analysis, and preliminary GN&C design analysis. ANTARES will find useful application in engineering analysis, mission operations, crew training, avionics-in-the-loop testing, etc. This paper focuses on the entry simulation aspect of ANTARES, which is part of a bigger simulation package supporting the entire mission profile of the Orion vehicle. The unique aspects of entry GN&C design are covered, including how the simulation is being used for Monte Carlo dispersion analysis and for support of linear stability analysis. Sample simulation output from ANTARES is presented in an appendix.
Ross, Jonathan; Hanna, David B; Felsen, Uriel R; Cunningham, Chinazo O; Patel, Viraj V
2017-12-01
Little is known about how HIV affects undocumented immigrants despite social and structural factors that may place them at risk of poor HIV outcomes. Our understanding of the clinical epidemiology of HIV-infected undocumented immigrants is limited by the challenges of determining undocumented immigration status in large data sets. We developed an algorithm to predict undocumented status using social security number (SSN) and insurance data. We retrospectively applied this algorithm to a cohort of HIV-infected adults receiving care at a large urban healthcare system who attended at least one HIV-related outpatient visit from 1997 to 2013, classifying patients as "screened undocumented" or "documented". We then reviewed the medical records of screened undocumented patients, classifying those whose records contained evidence of undocumented status as "undocumented per medical chart" (charted undocumented). Bivariate measures of association were used to identify demographic and clinical characteristics associated with undocumented immigrant status. Of 7593 patients, 205 (2.7%) were classified as undocumented by the algorithm. Compared to documented patients, undocumented patients were younger at entry to care (mean 38.5 years vs. 40.6 years, p < 0.05), less likely to be female (33.2% vs. 43.1%, p < 0.01), less likely to report injection drug use as their primary HIV risk factor (3.4% vs. 18.0%, p < 0.001), and had a lower median CD4 count at entry to care (288 vs. 339 cells/mm3, p < 0.01). After medical record review, we re-classified 104 patients (50.7%) as charted undocumented. Demographic and clinical characteristics of charted undocumented did not differ substantially from screened undocumented. Our algorithm allowed us to identify and clinically characterize undocumented immigrants within an HIV-infected population, though it overestimated the prevalence of patients who were undocumented.
Ink dating part II: Interpretation of results in a legal perspective.
Koenig, Agnès; Weyermann, Céline
2018-01-01
The development of an ink dating method requires an important investment of resources in order to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. This article aimed at developing and evaluating the potential of three interpretation models to date ink entries in a legal perspective: (1) the threshold model, comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry; (2) the trend tests, focusing on the "ageing status" of an ink entry; and (3) the likelihood ratio calculation, comparing the probabilities of observing the results under at least two alternative hypotheses. This is the first report showing ink dating interpretation results on a ballpoint pen ink reference population. In the first part of this paper, three ageing parameters were selected as promising from the population of 25 ink entries aged from 4 to 304 days: the quantity of phenoxyethanol (PE), the difference between the PE quantities contained in a naturally aged sample and an artificially aged sample (R_NORM), and the solvent loss ratio (R%). In the current part, each model was tested using the three selected ageing parameters. Results showed that threshold definition remains a simple model easily applicable in practice, but that the risk of false positives cannot be completely avoided without significantly reducing the feasibility of the ink dating approaches. The trend tests from the literature showed unreliable results and an alternative had to be developed, yielding encouraging results. The likelihood ratio calculation introduced a degree of certainty to the ink dating conclusion in comparison to the threshold approach. The proposed model remains quite simple to apply in practice, but should be further developed in order to yield reliable results in practice. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-17
... system protocol, to the NASDAQ matching engine or to the NASDAQ router as needed to complete the... Order application to the order entry gateway of NASDAQ's matching engine, but the amount of time gained... proposal represents another example of the blurring borders between exchanges and broker-dealers, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-06
... order upon entry and in certain cases again re-prices and re-displays an order at a more aggressive... extent it achieves a more aggressive price. However, the Exchange proposes to re-rank an order at the same price as the displayed price (i.e., a less aggressive price) in the event such order's displayed...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
...] Public Land Order No. 7790; Withdrawal of Public Lands for the Parting of the Ways National Historic Site; Wyoming AGENCY: Bureau of Land Management, Interior. ACTION: Public land order. SUMMARY: This order withdraws 40 acres of public land from settlement, sale, location, and entry under the general land laws...
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
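The Soft-Impute iteration described above (replace missing entries with the current estimate, then apply a soft-thresholded SVD) can be sketched in a few lines of NumPy. This dense small-scale version omits the paper's warm-started regularization path and its linear-complexity low-rank SVD; parameter names are illustrative.

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=200, tol=1e-6):
    """Soft-Impute sketch for matrix completion.
    X: matrix holding the observed values (entries where mask is False are ignored).
    mask: boolean array, True where X is observed.
    lam: nuclear-norm regularization parameter (soft-threshold on singular values)."""
    Z = np.zeros_like(X, dtype=float)            # current low-rank estimate
    for _ in range(n_iters):
        filled = np.where(mask, X, Z)            # observed entries + current guesses
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-thresholded SVD
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z
```

On a small rank-1 matrix with one entry removed, the iteration recovers the missing value up to the slight shrinkage introduced by the nuclear-norm penalty.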
Song, Jia; Zheng, Sisi; Nguyen, Nhung; Wang, Youjun; Zhou, Yubin; Lin, Kui
2017-10-03
Because phylogenetic inference is an important basis for answering many evolutionary problems, a large number of algorithms have been developed. Some of these algorithms have been improved by integrating gene evolution models with the expectation of accommodating the hierarchy of evolutionary processes. To the best of our knowledge, however, there is still no single unifying model or algorithm that can take all evolutionary processes into account through a stepwise or simultaneous method. On the basis of three existing phylogenetic inference algorithms, we built an integrated pipeline for inferring the evolutionary history of a given gene family; this pipeline can model gene sequence evolution, gene duplication-loss, gene transfer and multispecies coalescent processes. As a case study, we applied this pipeline to the STIMATE (TMEM110) gene family, which has recently been reported to play an important role in store-operated Ca2+ entry (SOCE) mediated by ORAI and STIM proteins. We inferred their phylogenetic trees in 69 sequenced chordate genomes. By integrating three tree reconstruction algorithms with diverse evolutionary models, a pipeline for inferring the evolutionary history of a gene family was developed, and its application was demonstrated.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
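The expected-loss comparison described above can be made concrete with a minimal sketch. Each algorithm is summarized here by just its false-negative and false-positive rates, weighted by the prior probability of a threat and by assumed consequence costs; the paper's actual loss model is richer, and all numbers below are illustrative.

```python
def risk(p_threat, p_fn, p_fp, cost_fn, cost_fp):
    """Expected loss (Bayes risk) of a detection algorithm:
    P(threat) * P(miss | threat) * cost of a miss
    + P(no threat) * P(alarm | no threat) * cost of a false alarm."""
    return p_threat * p_fn * cost_fn + (1.0 - p_threat) * p_fp * cost_fp

def pick_algorithm(algorithms, p_threat, cost_fn, cost_fp):
    """algorithms: {name: (p_fn, p_fp)}. Returns (best name, its risk)."""
    scored = {name: risk(p_threat, fn, fp, cost_fn, cost_fp)
              for name, (fn, fp) in algorithms.items()}
    best = min(scored, key=scored.get)
    return best, scored[best]
```

Note how the preferred algorithm flips as the assumed consequence of a miss grows: with threats rare and misses moderately costly, the low-false-alarm algorithm wins; if a miss is catastrophic, the low-miss-rate algorithm wins instead.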
Crater Identification Algorithm for the Lost in Low Lunar Orbit Scenario
NASA Technical Reports Server (NTRS)
Hanak, Chad; Crain, Timothy
2010-01-01
Recent emphasis by NASA on returning astronauts to the Moon has placed attention on the subject of lunar surface feature tracking. Although many algorithms have been proposed for lunar surface feature tracking navigation, much less attention has been paid to the issue of navigational state initialization from lunar craters in a lost in low lunar orbit (LLO) scenario. That is, a scenario in which lunar surface feature tracking must begin, but current navigation state knowledge is either unavailable or too poor to initiate a tracking algorithm. The situation is analogous to the lost in space scenario for star trackers. A new crater identification algorithm is developed herein that allows for navigation state initialization from as few as one image of the lunar surface with no a priori state knowledge. The algorithm takes as inputs the locations and diameters of craters that have been detected in an image, and uses the information to match the craters to entries in the USGS lunar crater catalog via non-dimensional crater triangle parameters. Due to the large number of uncataloged craters that exist on the lunar surface, a probability-based check was developed to reject false identifications. The algorithm was tested on craters detected in four revolutions of Apollo 16 LLO images, and shown to perform well.
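The matching step above relies on non-dimensional crater triangle parameters, quantities computed from three detected craters that are invariant to image translation, rotation, and scale, so they can be compared directly against the same quantities from the USGS catalog. The construction below (sorted side-length ratios plus diameter-to-perimeter ratios) is one plausible choice for illustration, not necessarily the paper's exact parameter set.

```python
import numpy as np

def crater_triangle_params(centers, diameters):
    """centers: (3, 2) array of crater centers in image coordinates;
    diameters: length-3 array of crater diameters in the same units.
    Returns a non-dimensional feature vector invariant to translation,
    rotation, and uniform scaling of the image."""
    p = np.asarray(centers, dtype=float)
    d = np.asarray(diameters, dtype=float)
    sides = np.sort([np.linalg.norm(p[i] - p[j])
                     for i, j in ((0, 1), (1, 2), (2, 0))])
    perimeter = sides.sum()
    # two independent side-length ratios + three scale-free diameter ratios
    return np.concatenate([sides[:2] / sides[2], np.sort(d) / perimeter])
```

Because the features survive any similarity transform of the image, a lost-in-orbit matcher can hash or search catalog triangles by these values without knowing the camera's position or attitude.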
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Way, David W.
2017-01-01
Mars 2020, the next planned U.S. rover mission to land on Mars, is based on the design of the successful 2012 Mars Science Laboratory (MSL) mission. Mars 2020 retains most of the entry, descent, and landing (EDL) sequence of MSL, including the closed-loop entry guidance scheme based on the Apollo guidance algorithm. However, unlike MSL, Mars 2020 will trigger the parachute deployment and descent sequence on a range trigger rather than the previously used velocity trigger. This difference will greatly reduce the landing ellipse size. Additionally, the relative contribution of each model to the total ellipse size has changed greatly due to the switch to the range trigger. This paper considers the effect on trajectory dispersions of changing the trigger scheme and the contributions of the various models to trajectory and EDL performance.
NASA Astrophysics Data System (ADS)
Uddin, M. Maruf; Fuad, Muzaddid-E.-Zaman; Rahaman, Md. Mashiur; Islam, M. Rabiul
2017-12-01
With the rapid decrease in the cost of computational infrastructure and more efficient algorithms for solving non-linear problems, Reynolds-averaged Navier-Stokes (RANS) based Computational Fluid Dynamics (CFD) is now widely used. As a preliminary evaluation tool, CFD is used to calculate the hydrodynamic loads on offshore installations, ships, and other structures in the ocean at initial design stages. Traditionally, wedges have been studied more than circular cylinders because a cylinder section has zero deadrise angle at the instant of water impact, which increases with increasing submergence. In the present study, the RANS-based commercial code ANSYS Fluent is used to simulate the water entry of a circular section at constant velocity. The present computational results are compared with experiments and other numerical methods.
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
Orion Entry Performance-Based Center-of-Gravity Box
NASA Technical Reports Server (NTRS)
Rea, Jeremy R.
2010-01-01
The Orion capsule is designed both for Low Earth Orbit missions to the ISS and for missions to the Moon. For ISS-class missions, the capsule will use an Apollo-style direct entry. For lunar return missions, depending on the timing of the mission, the capsule could perform a direct entry or a skip entry of up to 4800 n.mi. in order to land in the coastal waters of California. The physics of atmospheric re-entry determine the capability of the Orion vehicle. For a given vehicle mass and shape, physics tells us that the driving parameters for an entry vehicle are the hypersonic lift-to-drag ratio (L/D) and the flight path angle at entry interface (gamma(sub EI)). The design of the Orion atmospheric re-entry must meet constraints during both nominal and dispersed flight conditions on landing accuracy, heating rate, total heat load, sensed acceleration, and proper disposal of the Service Module. These constraints define an entry corridor in the space of L/D-gamma(sub EI); if the vehicle falls within this corridor, then all constraints are met. The gamma(sub EI) dimension of the corridor can be further constrained by the g-loads experienced during emergency entries. Thus, the entry performance for the Orion vehicle can be described completely by the L/D. Bounds on the hypersonic L/D necessary to achieve all the mission requirements can be defined for the given entry corridor. Landing accuracy performance drives the lower limit on L/D. In order to achieve the desired landing accuracy, a minimum L/D must be ensured. The design of the Thermal Protection System (TPS) drives the upper limit on L/D. A higher L/D can drive mass into the design of the TPS. Conversely, once the TPS is designed, the L/D must be ensured to stay below a certain limit in order for the TPS to stay within its design envelope. The L/D must stay within its upper and lower bounds during dispersed flight conditions. L/D is a function of both the aerodynamics and the center-of-gravity (CG) of the vehicle. 
The aerodynamics of the vehicle are determined by Computational Fluid Dynamics (CFD) and wind tunnel tests. However, the aerodynamics are not known precisely. Instead, an aerodynamic database has been developed in which the aerodynamic coefficients are known to fall within a probabilistic band defined by upper and lower bounds. It is expected that the probabilistic band will shrink after the first missions are flown and real-world data are collected. Until that time, the Orion must be designed to the current aerodynamic database. Thus, for a given aerodynamic database with given uncertainties, the allowable range in L/D can be mapped to an allowable box for the CG location. The CG box is used to set requirements on the dispersions allowed for vehicle packaging and cargo storage. As the aerodynamic uncertainties decrease, the size of the CG box can increase. This paper discusses the technique used to map the minimum and maximum L/D bounds set by the entry performance requirements to the allowable dispersions in CG while accounting for aerodynamic uncertainties. The L/D is defined as the ratio of the lift force to the drag force. It is equivalent to the ratio of lift coefficient (C(sub L)) over drag coefficient (C(sub D)). C(sub L) and C(sub D) are functions of Mach number (M) and angle of attack (alpha). A Mach number of 25 is used as the measuring point for the hypersonic L/D. Variations in C(sub L), C(sub D) and alpha cause variations in L/D. Equation (1) shows the three contributions to the variation in L/D.
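The abstract cites Equation (1) without reproducing it. One plausible reconstruction, obtained by differentiating L/D = C_L/C_D with both coefficients also varying through the trim angle of attack, gives the three contributions as:

```latex
\delta\!\left(\frac{L}{D}\right) \;\approx\;
\underbrace{\frac{1}{C_D}\,\delta C_L}_{\text{lift-coefficient dispersion}}
\;-\;
\underbrace{\frac{C_L}{C_D^{2}}\,\delta C_D}_{\text{drag-coefficient dispersion}}
\;+\;
\underbrace{\left(\frac{1}{C_D}\frac{\partial C_L}{\partial \alpha}
 - \frac{C_L}{C_D^{2}}\frac{\partial C_D}{\partial \alpha}\right)\delta\alpha}_{\text{trim-angle dispersion}}
```

This is a hedged sketch of the cited equation, not a quotation from the paper: the signs follow directly from the quotient rule, with the first two terms capturing coefficient dispersions at fixed alpha and the third the effect of trim-angle variation.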
Communications Blackout Predictions for Atmospheric Entry of Mars Science Laboratory
NASA Technical Reports Server (NTRS)
Morabito, David D.; Edquist, Karl
2005-01-01
The Mars Science Laboratory (MSL) is expected to be a long-range, long-duration science laboratory rover on the Martian surface. MSL will provide a significant milestone that paves the way for future landed missions to Mars. NASA is studying options to launch MSL as early as 2009. MSL will be the first mission to demonstrate the new technology of 'smart landers', which include precision landing and hazard avoidance in order to land at scientifically interesting sites that would otherwise be unreachable. There are three elements to the spacecraft: carrier (cruise stage), entry vehicle, and rover. The rover will have an X-band direct-to-Earth (DTE) link as well as a UHF proximity link. There is also a possibility of an X-band proximity link. Given the importance of collecting critical event telemetry data during atmospheric entry, it is important to understand the ability of a signal link to be maintained, especially during the period near peak convective heating. The received telemetry during entry (or played back later) will allow the performance of the Entry-Descent-Landing technologies to be assessed. These technologies include guided entry for precision landing, hazard avoidance, a new sky-crane landing system and powered descent. MSL will undergo an entry profile that may result in a potential communications blackout caused by ionized plasma for short periods near peak heating. The vehicle will use UHF and possibly X-band during the entry phase. The purpose of this report is to quantify or bound the likelihood of any such blackout at UHF frequencies (401 MHz) and X-band frequencies (8.4 GHz). Two entry trajectory scenarios were evaluated: a stressful entry trajectory to quantify an upper bound for any possible blackout period, and a nominal likely trajectory to quantify the likelihood of blackout for such cases.
1983-03-01
System are: Order processing coordinators Order processing management Credit and collections Accounts receivable Support management Admin management...or sales secretary, then by order processing (OP). Phone-in orders go directly to OP. The information is next transcribed onto an order entry... ORDER PROCESSING: The central systems validate the order items and codes by processing them against the customer file, the product or parts file, and
Cartmill, Randi S; Walker, James M; Blosky, Mary Ann; Brown, Roger L; Djurkovic, Svetolik; Dunham, Deborah B; Gardill, Debra; Haupt, Marilyn T; Parry, Dean; Wetterneck, Tosha B; Wood, Kenneth E; Carayon, Pascale
2012-11-01
To examine the effect of implementing electronic order management on the timely administration of antibiotics to critical-care patients. We used a prospective pre-post design, collecting data on first-dose IV antibiotic orders before and after the implementation of an integrated electronic medication-management system, which included computerized provider order entry (CPOE), pharmacy order processing and an electronic medication administration record (eMAR). The research was performed in a 24-bed adult medical/surgical ICU in a large, rural, tertiary medical center. Data on the time of ordering, pharmacy processing and administration were prospectively collected and time intervals for each stage and the overall process were calculated. The overall turnaround time from ordering to administration significantly decreased from a median of 100 min before order management implementation to a median of 64 min after implementation. The first part of the medication use process, i.e., from order entry to pharmacy processing, improved significantly whereas no change was observed in the phase from pharmacy processing to medication administration. The implementation of an electronic order-management system improved the timeliness of antibiotic administration to critical-care patients. Additional system changes are required to further decrease the turnaround time. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
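The turnaround-time computation described above (median intervals from order entry to pharmacy processing to administration) can be sketched as follows. The tuple layout of the timestamps is an assumption for illustration, not the study's data schema.

```python
from datetime import datetime, timedelta
from statistics import median

def median_turnaround_minutes(orders):
    """orders: iterable of (entered, processed, administered) datetimes for
    first-dose antibiotic orders. Returns the median minutes for the
    entry-to-pharmacy stage, the pharmacy-to-administration stage, and the
    overall entry-to-administration process."""
    stage1, stage2, overall = [], [], []
    for entered, processed, administered in orders:
        stage1.append((processed - entered).total_seconds() / 60.0)
        stage2.append((administered - processed).total_seconds() / 60.0)
        overall.append((administered - entered).total_seconds() / 60.0)
    return median(stage1), median(stage2), median(overall)
```

Splitting the overall interval into per-stage medians is what let the study attribute the improvement to the order-entry-to-pharmacy phase rather than to medication administration.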
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data is represented as a 4th-order tensor, and a sparse representation for classification (SRC) method is proposed for LiDAR tensor classification. SRC requires only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes. Each index is called a mode: three spatial modes in the directions X, Y, Z and one feature mode. The sparsity algorithm finds the sparse linear combination of training samples from a dictionary that best represents the test sample. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices can be considered the principal components in each mode. The entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary along each mode is built. The overall structure dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by the tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. 
It is expected that original tensor should be well recovered by sub-dictionary associated with relevant class, while entries in the sparse tensor associated with other classed should be nearly zero. Therefore, SRC use the reconstruction error associated with each class to do data classification. A section of airborne LiDAR points of Vienna city is used and classified into 6classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into one same class(ground). The classification accuracy for ground is 94.60%, roof is 95.47%, vegetation is 85.55%, wall is 76.17%, other object is 20.39%.
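The classification step can be sketched in the simpler vector (non-tensor) setting: build a dictionary whose columns are training feature vectors grouped by class, solve a sparse code with a greedy OMP, and classify by class-wise reconstruction residual. This is an illustrative sketch; the helper names (`omp`, `src_classify`) and the plain-vector setting are ours, not the paper's tensor formulation.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy Orthogonal Matching Pursuit: sparse code x with D @ x ~ y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # re-fit coefficients on the selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D, labels, y, n_nonzero=2):
    """Classify y by the smallest class-wise reconstruction residual."""
    x = omp(D, y, n_nonzero)
    best, best_err = None, np.inf
    for c in set(labels):
        mask = np.array(labels) == c
        xc = np.where(mask, x, 0.0)   # keep only this class's coefficients
        err = np.linalg.norm(y - D @ xc)
        if err < best_err:
            best, best_err = c, err
    return best
```

With a dictionary of normalized training columns and a class label per column, the predicted class is the one whose sub-dictionary reconstructs the test vector with the least error, mirroring the residual rule the abstract describes.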
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-11
... recycling magnesium-based scrap into magnesium metal. The magnesium covered by the order includes blends of... deposits at the rates in effect at the time of entry for all imports of subject merchandise. The effective...
Medication Waste Reduction in Pediatric Pharmacy Batch Processes
Veltri, Michael A.; Hamrock, Eric; Mollenkopf, Nicole L.; Holt, Kristen; Levin, Scott
2014-01-01
OBJECTIVES: To inform pediatric cart-fill batch scheduling for reductions in pharmaceutical waste using a case study and simulation analysis. METHODS: A pre- and post-intervention and simulation analysis was conducted during 3 months at a 205-bed children's center. An algorithm was developed to detect wasted medication based on time-stamped computerized provider order entry information. The algorithm was used to quantify pharmaceutical waste and associated costs for both the preintervention (1 batch per day) and postintervention (3 batches per day) schedules. Further, simulation was used to systematically test 108 batch schedules, outlining general characteristics that have an impact on the likelihood of waste. RESULTS: Switching from a 1-batch-per-day to a 3-batch-per-day schedule resulted in a 31.3% decrease in pharmaceutical waste (28.7% to 19.7%) and annual cost savings of $183,380. Simulation results demonstrate how increasing batch frequency facilitates a more just-in-time process that reduces waste. The most substantial gains are realized by shifting from a schedule of 1 batch per day to at least 2 batches per day. The simulation exhibits how waste reduction is also achievable by avoiding batch preparation during daily time periods when medication administration or medication discontinuations are frequent. Lastly, the simulation was used to show how reducing preparation time per batch provides some, albeit minimal, opportunity to decrease waste. CONCLUSIONS: The case study and simulation analysis demonstrate characteristics of batch scheduling that may support pediatric pharmacy managers in redesign toward minimizing pharmaceutical waste. PMID:25024671
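The waste-detection idea (a dose prepared in a batch but discontinued before administration counts as waste) can be illustrated with a toy model. This is a simplified sketch under assumed rules (a fixed preparation lead time, each dose prepared at the latest feasible batch run), not the published algorithm.

```python
from datetime import datetime, timedelta

def wasted_doses(orders, batch_times, lead_time=timedelta(hours=2)):
    """Count doses prepared in a batch but discontinued before use.

    orders: dicts with 'due' (scheduled administration time) and
    'discontinued' (datetime or None). A dose is assumed prepared at the
    last batch run at least `lead_time` before it is due; if the order is
    discontinued between preparation and the due time, it is wasted.
    """
    waste = 0
    for o in orders:
        candidates = [b for b in batch_times if b <= o["due"] - lead_time]
        if not candidates:
            continue  # prepared on demand, not in a batch
        prep = max(candidates)            # latest feasible batch run
        d = o["discontinued"]
        if d is not None and prep <= d < o["due"]:
            waste += 1
    return waste
```

With one batch at 06:00, a dose due at 20:00 but discontinued at 10:00 is wasted; with batches at 06:00, 12:00, and 18:00 the same dose is prepared at 18:00, after the discontinuation, and is not wasted, which is the just-in-time effect the abstract reports.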
Efficient Numerical Diagonalization of Hermitian 3 × 3 Matrices
NASA Astrophysics Data System (ADS)
Kopp, Joachim
A very common problem in science is the numerical diagonalization of symmetric or Hermitian 3 × 3 matrices. Since standard "black box" packages may be too inefficient if the number of matrices is large, we study several alternatives. We consider optimized implementations of the Jacobi, QL, and Cuppen algorithms and compare them with an analytical method relying on Cardano's formula for the eigenvalues and on vector cross products for the eigenvectors. Jacobi is the most accurate, but also the slowest method, while QL and Cuppen are good general purpose algorithms. The analytical algorithm outperforms the others by more than a factor of 2, but becomes inaccurate or may even fail completely if the matrix entries differ greatly in magnitude. This can mostly be circumvented by using a hybrid method, which falls back to QL if conditions are such that the analytical calculation might become too inaccurate. For all algorithms, we give an overview of the underlying mathematical ideas, and present detailed benchmark results. C and Fortran implementations of our code are available for download from .
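A minimal sketch of the analytical route for the real symmetric case: eigenvalues from the characteristic polynomial via the trigonometric form of Cardano's formula, with a clamp on the acos argument guarding against the round-off sensitivity the abstract mentions. The function name and layout are ours; the authors' optimized C/Fortran versions differ.

```python
import math

def eigvals_sym3(a):
    """Eigenvalues of a real symmetric 3x3 matrix via Cardano's
    (trigonometric) formula, returned in descending order."""
    a11, a12, a13 = a[0]
    _,   a22, a23 = a[1]
    a33 = a[2][2]
    p1 = a12*a12 + a13*a13 + a23*a23
    q = (a11 + a22 + a33) / 3.0                      # trace / 3
    p2 = (a11-q)**2 + (a22-q)**2 + (a33-q)**2 + 2.0*p1
    p = math.sqrt(p2 / 6.0)
    if p == 0.0:                                     # matrix is q * I
        return [q, q, q]
    # r = det((A - q I) / p) / 2, expanded for the symmetric case
    b11, b22, b33 = (a11-q)/p, (a22-q)/p, (a33-q)/p
    b12, b13, b23 = a12/p, a13/p, a23/p
    r = (b11*(b22*b33 - b23*b23)
         - b12*(b12*b33 - b23*b13)
         + b13*(b12*b23 - b22*b13)) / 2.0
    r = max(-1.0, min(1.0, r))                       # clamp against round-off
    phi = math.acos(r) / 3.0
    e1 = q + 2.0*p*math.cos(phi)
    e3 = q + 2.0*p*math.cos(phi + 2.0*math.pi/3.0)
    e2 = 3.0*q - e1 - e3                             # trace invariance
    return [e1, e2, e3]
```

This is the noniterative path whose speed advantage the abstract reports; the failure mode (entries of very different magnitude) shows up as cancellation in `p2` and `r`, which is why the hybrid method falls back to QL.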
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
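The premise that coupling between well-separated boxes is numerically low rank, so only a small portion of matrix entries is needed, can be checked directly. A small numpy experiment with a 1/r kernel (a stand-in for the actual Green's function; all sizes and the separation distance are illustrative) shows the rapid singular-value decay that the interpolative decomposition exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 3-D point clusters (unit boxes, centers ~10 units apart).
src = rng.random((200, 3))
obs = rng.random((150, 3)) + np.array([10.0, 0.0, 0.0])

# Dense coupling matrix for a 1/r kernel between the two clusters.
r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
A = 1.0 / r

# Singular values decay fast: a low-rank skeleton captures the coupling,
# so most of A never needs to be formed.
s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s / s[0] > 1e-8))
print(rank, A.shape)   # numerical rank far below min(150, 200)
```

The ID goes further than this SVD check: it picks actual rows/columns (a skeleton) so that the submatrices can be filled directly from kernel evaluations without ever computing the complete coupling matrix, which is the source of the memory and matrix-filling savings claimed above.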
New multirate sampled-data control law structure and synthesis algorithm
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng
1992-01-01
A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
Impact of a Commercially Available Clinical Decision Support Program on Provider Ordering Habits.
Huber, Timothy C; Krishnaraj, Arun; Patrie, James; Gaskin, Cree M
2018-05-18
Clinical decision support (CDS) software designed around the ACR Appropriateness Criteria assists health care providers in choosing appropriate imaging studies at the time of order entry. The goal of this study was to determine the impact of commercially available CDS on the ordering habits of inpatient and emergency providers. In 2014, ACR Select was integrated into our electronic health record, initially running in a "silent" mode for 6 months in which appropriateness scores were not displayed. Then, feedback regarding examination appropriateness was "turned on" at order entry for adult patients in the emergency and inpatient settings for 24 months. We retrospectively compared the appropriateness scores of imaging tests before and after displaying feedback at order entry and evaluated these data by modality and attending versus trainee status. The commercially available CDS generated scores for 34% and 20.4% of pre- and postintervention studies, respectively. After feedback, the relative frequency of low utility studies decreased to 5.4% from 11%, and the relative frequency of indicated studies increased to 82% from 64.5%. This was most pronounced in trainees, for whom the percentage of low utility studies decreased from 10.8% (95% confidence interval [CI]: 10.0%, 11.7%) to 4.8% (95% CI: 4.4%, 5.2%) and the percentage of indicated studies increased from 65.6% (95% CI: 64.3%, 66.9%) to 83.7% (95% CI: 83.0%, 84.3%). After implementation of a commercially available decision support tool integrated into the electronic health record, there was a significant improvement in imaging study appropriateness scores, more pronounced in studies ordered by trainees. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Improving Memory for Optimization and Learning in Dynamic Environments
2011-07-01
algorithm uses simple, incremental clustering to separate solutions into memory entries. The cluster centers are used as the models in the memory. This is...entire days of traffic with realistic traffic demands and turning ratios on a 32 intersection network modeled on downtown Pittsburgh, Pennsylvania...early/tardy problem. Management Science, 35(2):177-191, 1989. [78] Daniel Parrott and Xiaodong Li. A particle swarm model for tracking multiple peaks in
Human-Robot Teams Informed by Human Performance Moderator Functions
2012-08-29
seem to converge probably because situation is bad enough that any algorithm would perform just as well. Figure 29 shows the set commonality graph...burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing...for writing the report, performing the research, or credited with the content of the report. The form of entry is the last name, first name, middle
Evolutionary Algorithm Based Automated Reverse Engineering and Defect Discovery
2007-09-21
a previous application of a GP as a data mining function to evolve fuzzy decision trees symbolically [3-5], the terminal set consisted of fuzzy...of input and output information is required. In the case of fuzzy decision trees, the database represented a collection of scenarios about which the...fuzzy decision tree to be evolved would make decisions. The database also had entries created by experts representing decisions about the scenarios
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.
Characterization of soluble glycoprotein D-mediated herpes simplex virus type 1 infection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsvitov, Marianna; Frampton, Arthur R.; Shah, Waris A.
2007-04-10
Herpes simplex virus type 1 (HSV-1) entry into permissive cells involves attachment to cell-surface glycosaminoglycans (GAGs) and fusion of the virus envelope with the cell membrane triggered by the binding of glycoprotein D (gD) to cognate receptors. In this study, we characterized the observation that soluble forms of the gD ectodomain (sgD) can mediate entry of gD-deficient HSV-1. We examined the efficiency and receptor specificity of this activity and used sequential incubation protocols to determine the order and stability of the initial interactions required for entry. Surprisingly, virus binding to GAGs did not increase the efficiency of sgD-mediated entry and gD-deficient virus was capable of attaching to GAG-deficient cells in the absence of sgD. These observations suggested a novel binding interaction that may play a role in normal HSV infection.
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO; this characteristic exponentially increases the number of candidate solutions evaluated in each generation. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
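As an illustration of the underlying formulation (not of QPPSO itself), a classical particle swarm can estimate a system parameter by minimizing the mismatch between an observed trajectory and a simulated one. The sketch below fits the parameter of a simple logistic map; all function names, constants, and the search interval are illustrative choices.

```python
import numpy as np

def logistic_traj(r, x0=0.2, n=30):
    """Trajectory of the logistic map x_{k+1} = r x_k (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def pso_estimate(observed, n_particles=30, iters=200, seed=1):
    """Classical PSO estimating the logistic-map parameter r in [2.5, 4]."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(2.5, 4.0, n_particles)
    vel = np.zeros(n_particles)
    cost = lambda r: np.sum((logistic_traj(r) - observed) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    g = pbest[np.argmin(pbest_cost)]          # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, 2.5, 4.0)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        g = pbest[np.argmin(pbest_cost)]
    return g
```

The fractional-order case in the abstract replaces the map with a fractional-order system simulation and adds the derivative orders to the search space, but the estimate-by-trajectory-mismatch structure is the same.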
Effects of computerized prescriber order entry on pharmacy order-processing time.
Wietholter, Jon; Sitterson, Susan; Allison, Steven
2009-08-01
The effect of computerized prescriber order entry (CPOE) on the efficiency of medication-order-processing time was evaluated. This study was conducted at a 761-bed, tertiary care hospital. A total of 2988 medication orders were collected and analyzed before (n = 1488) and after CPOE implementation (n = 1500). Data analyzed included the time the prescriber ordered the medication, the time the pharmacy received the order, and the time the order was completed by a pharmacist. The mean order-processing time before CPOE implementation was 115 minutes from prescriber composition to pharmacist verification. After CPOE implementation, the mean order-processing time was reduced to 3 minutes (p < 0.0001). The time that an order was received by the pharmacy to the time it was verified by a pharmacist was reduced from 31 minutes before CPOE implementation to 3 minutes after CPOE implementation (p < 0.0001). The implementation of CPOE reduced the order-processing time (from order composition to verification) by 97%. Additionally, pharmacy-specific order-processing time (from order receipt in the pharmacy to pharmacist verification) was reduced by 90%. This reduction in order-processing time improves patient care by shortening the interval between physician prescribing and medication availability and may allow pharmacists to explore opportunities for enhanced clinical activities that will further positively impact patient care. CPOE implementation reduced the mean pharmacy order-processing time from composition to verification by 97%. After CPOE implementation, a new medication order was verified as appropriate by a pharmacist in three minutes, on average.
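The reported 97% and 90% reductions follow directly from the median times; a quick check:

```python
def pct_reduction(before, after):
    """Percentage reduction from a before-value to an after-value."""
    return 100.0 * (before - after) / before

print(round(pct_reduction(115, 3)))  # overall: composition -> verification
print(round(pct_reduction(31, 3)))   # pharmacy-specific: receipt -> verification
```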
An algorithm to compute the sequency ordered Walsh transform
NASA Technical Reports Server (NTRS)
Larsen, H.
1976-01-01
A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray-code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
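For reference, the basic in-place fast Walsh-Hadamard butterfly that these variants modify can be sketched as below. This version produces coefficients in Hadamard (natural) order; the sequency-ordered algorithms discussed above differ precisely in how they permute or restructure these butterflies. It is a generic sketch, not the paper's algorithm.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform, Hadamard (natural) order.

    Length must be a power of two. Applying the transform twice and
    dividing by n recovers the input: like the sequency-ordered variants
    in the abstract, it is its own inverse up to scaling.
    """
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired entries
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```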
Heffner, John E; Brower, Kathleen; Ellis, Rosemary; Brown, Shirley
2004-07-01
The high cost of computerized physician order entry (CPOE) and physician resistance to standardized care have delayed implementation. An intranet-based order set system can provide some of CPOE's benefits and offer opportunities to acculturate physicians toward standardized care. INTRANET CLINICIAN ORDER FORMS (COF): The COF system at the Medical University of South Carolina (MUSC) allows caregivers to enter and print orders through the intranet at points of care and to access decision support resources. Work on COF began in March 2000 with transfer of 25 MUSC paper-based order set forms to an intranet site. Physician groups developed additional order sets, which number more than 200. Web traffic increased progressively during a 24-month period, peaking at more than 6,400 hits per month to COF. Decision support tools improved compliance with Centers for Medicare & Medicaid Services core indicators. Clinicians demonstrated a willingness to develop and use order sets and decision support tools posted on the COF site. COF provides a low-cost method for preparing caregivers and institutions to adopt CPOE and standardization of care. The educational resources, relevant links to external resources, and communication alerts will all link to CPOE, thereby providing a head start in CPOE implementation.
Rotary wing aircraft and technical publications of NASA, 1970 - 1982
NASA Technical Reports Server (NTRS)
Hiemstra, J. D. (Compiler)
1982-01-01
This bibliography cites 933 documents in the NASA RECON data base which pertain to rotary wing aircraft. The entries are arranged in descending order by publication date, except for the NASA-supported documents, which are arranged in descending order by accession date.
Thermal Testing of Woven TPS Materials in Extreme Entry Environments
NASA Technical Reports Server (NTRS)
Gonzales, G.; Stackpoole, M.
2014-01-01
NASA's future robotic missions to Venus and the outer planets, namely Saturn, Uranus, and Neptune, result in extremely high entry conditions that exceed the capabilities of current mid-density ablators (PICA or Avcoat). Therefore, mission planners assume the use of a fully dense carbon phenolic heatshield similar to what was flown on Pioneer Venus and Galileo. Carbon phenolic (CP) is a robust TPS; however, its high density and thermal conductivity constrain mission planners to steep entries, high heat fluxes, high pressures, and short entry durations in order for CP to be feasible from a mass perspective. In 2012, the Game Changing Development Program in NASA's Space Technology Mission Directorate funded NASA ARC to investigate the feasibility of a Woven Thermal Protection System to meet the needs of NASA's most challenging entry missions. The high entry conditions pose certification challenges in existing ground-based test facilities. Recent updates to NASA's IHF and AEDC's H3 high-temperature arcjet test facilities enable higher heat flux (2000 W/cm2) and higher pressure (5 atm) testing of TPS. Some recent thermal tests of woven TPS are discussed in this paper. These upgrades have provided a way to test the higher entry conditions of potential outer planet and Venus missions and provided a baseline comparison against carbon phenolic material. The results of these tests have given preliminary insight into sample configuration and physical recession profile characteristics.
Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M
2002-01-01
Computerized prescription of drugs is expected to reduce the number of preventable drug-ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE system, which also included patient-related drug-laboratory, drug-disease, and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.
Development Of A Combined Sensor System For Atmospheric Entry Missions
NASA Astrophysics Data System (ADS)
Preci, A.; Eswein, N.; Herdrich, G.; Fasoulas, S.; Roser, H.-P.; Auweter-Kurtz, M.
2011-05-01
The payload COMPARE is developed at the Institute of Space Systems for various entry scenarios. It was previously laid out for a Mars entry mission and afterwards redesigned for the German Aerospace Centre suborbital re-entry mission SHEFEX II, which had its successful roll-out in July 2010 and is due to be launched in September 2011. The sensor system aims to simultaneously measure the temperature of the thermal protection shield, the radiation from the plasma, and the pressure. The most recent development of COMPARE is a combined sensor system for ablative thermal protection systems enabling a separation of the radiative heat flux from the total heat flux. Furthermore, it also enables the detection of specific species in the plasma by measuring the radiative heat flux in a defined wavelength range. In the frame of an ESA-funded project, a breadboard has been built and tested in a plasma wind tunnel in order to prove the feasibility of such a sensor system for upcoming entry missions. Results of these measurements are presented in this work.
Trees, bialgebras and intrinsic numerical algorithms
NASA Technical Reports Server (NTRS)
Crouch, Peter; Grossman, Robert; Larson, Richard
1990-01-01
Preliminary work on intrinsic numerical integrators evolving on groups is described. Fix a finite-dimensional Lie group G; let g denote its Lie algebra, and let Y_1, ..., Y_N denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: ẋ(t) = F(x(t)), x(0) = p, where p is an element of G. The algorithms depend upon constants c_i and c_ij, for i = 1, ..., k and j < i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, if G is the abelian group R^N, then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c_i and c_ij must satisfy in order for the algorithm to yield an rth-order numerical integrator, and to analyze the resulting algorithms.
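The abelian special case mentioned above is just the classical Runge-Kutta method; a minimal sketch on G = R (where the group operation is ordinary addition), with the standard fourth-order coefficients:

```python
def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step -- the form the group
    algorithms reduce to when G is the abelian group R^N."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# dx/dt = x from x(0) = 1 over [0, 1]: the result approximates e
# with fourth-order accuracy (per-step error ~ dt^5).
x, dt = 1.0, 0.01
for _ in range(100):
    x = rk4_step(lambda v: v, x, dt)
```

On a non-abelian group the update instead composes exponentials of Lie-algebra elements so that the iterate stays on G, which is the "remains on the group" property claimed in the abstract.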
Power selective optical filter devices and optical systems using same
Koplow, Jeffrey P
2014-10-07
In an embodiment, a power selective optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure, positioned to receive the input signal, that includes at least one substantially zero-order, zero-wave plate. The zero-order, zero-wave plate is configured to alter the polarization state of the input signal passing through it in a manner that depends on the power of the input signal. The zero-order, zero-wave plate includes an entry and an exit wave plate, each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive the signal output from the wave-plate structure and selectively transmits the signal based on its polarization state.
Marian, Anil A; Dexter, Franklin; Tucker, Peter; Todd, Michael M
2012-05-29
Anesthesia information management system (AIMS) records should be designed and configured to facilitate the accurate and prompt recording of multiple drugs administered coincidentally or in rapid succession. We proposed two touch-screen display formats for use with our department's new EPIC touch-screen AIMS. In one format, medication "buttons" were arranged in alphabetical order (i.e., A-C, D-H, etc.). In the other, buttons were arranged in categories (Common, Fluids, Cardiovascular, Coagulation, etc.). Both formats were modeled on an iPad screen to resemble the AIMS interface. Anesthesia residents, anesthesiologists, and Certified Registered Nurse Anesthetists (n = 60) were then asked to find and touch the correct buttons for a series of medications whose names were displayed to the side of the entry screen. The number of entries made within 2 minutes was recorded. This was done 3 times for each format, with the first format chosen randomly. Data were analyzed from the third trial with each format to minimize differences in learning. The categorical format yielded a mean of 5.6 more drugs entered in two minutes than the alphabetical format (95% confidence interval [CI] 4.5 to 6.8, P < 0.0001). The findings were the same regardless of the order of testing (i.e., alphabetical-categorical vs. categorical-alphabetical) and participants' years of clinical experience. Most anesthesia providers made no (0) errors in most trials (N = 96/120 trials, lower 95% limit 73%, P < 0.0001). There was no difference in error rates between the two formats (P = 0.53). The use of touch-screen user interfaces in healthcare is increasingly common. Arrangement of drug names in a categorical display format on the medication order-entry touch screen of an AIMS can result in faster data entry than an alphabetical arrangement.
Results of this quality improvement project were used in our department's design of our final intraoperative electronic anesthesia record. This testing approach, using cognitive and usability engineering methods, can be used to objectively design and evaluate many aspects of the clinician-computer interaction in electronic health records.
2012-01-01
Background Anesthesia information management system (AIMS) records should be designed and configured to facilitate the accurate and prompt recording of multiple drugs administered coincidentally or in rapid succession. Methods We proposed two touch-screen display formats for use with our department’s new EPIC touch-screen AIMS. In one format, medication “buttons” were arranged in alphabetical order (i.e. A-C, D-H etc.). In the other, buttons were arranged in categories (Common, Fluids, Cardiovascular, Coagulation etc.). Both formats were modeled on an iPad screen to resemble the AIMS interface. Anesthesia residents, anesthesiologists, and Certified Registered Nurse Anesthetists (n = 60) were then asked to find and touch the correct buttons for a series of medications whose names were displayed to the side of the entry screen. The number of entries made within 2 minutes was recorded. This was done 3 times for each format, with the 1st format chosen randomly. Data were analyzed from the third trials with each format to minimize differences in learning. Results The categorical format had a mean of 5.6 more drugs entered using the categorical method in two minutes than the alphabetical format (95% confidence interval [CI] 4.5 to 6.8, P < 0.0001). The findings were the same regardless of the order of testing (i.e. alphabetical-categorical vs. categorical - alphabetical) and participants’ years of clinical experience. Most anesthesia providers made no (0) errors for most trials (N = 96/120 trials, lower 95% limit 73%, P < 0.0001). There was no difference in error rates between the two formats (P = 0.53). Conclusions The use of touch-screen user interfaces in healthcare is increasingly common. Arrangement of drugs names in a categorical display format in the medication order-entry touch screen of an AIMS can result in faster data entry compared to an alphabetical arrangement of drugs. 
Results of this quality improvement project were used in our department’s design of our final intraoperative electronic anesthesia record. This testing approach using cognitive and usability engineering methods can be used to objectively design and evaluate many aspects of the clinician-computer interaction in electronic health records. PMID:22643058
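As a hedged illustration of the statistic reported above (a paired mean difference with a 95% confidence interval), the following sketch computes both from hypothetical entry counts. The `t_crit=2.0` critical value and all data values are assumptions for illustration, not the study's.

```python
import math

def paired_mean_diff_ci(categorical, alphabetical, t_crit=2.0):
    """Mean paired difference and an approximate 95% CI.

    t_crit=2.0 is a rough critical value for moderately large samples;
    an exact value would come from the t-distribution for n - 1 df.
    """
    diffs = [c - a for c, a in zip(categorical, alphabetical)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    half_width = t_crit * math.sqrt(var / n)
    return mean, (mean - half_width, mean + half_width)

# Hypothetical entry counts (drugs entered in 2 minutes) for 5 providers:
cat = [18, 20, 17, 22, 19]
alp = [13, 14, 12, 16, 13]
mean, (lo, hi) = paired_mean_diff_ci(cat, alp)
```

Each provider serves as their own control, which is why the difference is computed within pairs before averaging.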
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The algorithm is evaluated through security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
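The abstract does not give the cipher's internals; the following is a minimal sketch of the general chaotic-keystream idea only, with a logistic map standing in for the fractional-order hyperchaotic Lorenz system and a plain XOR standing in for the paper's diffusion stage. All names and parameters are assumptions.

```python
def logistic_keystream(n, x0=0.37, r=3.99):
    """Generate n key bytes by iterating a logistic map (a simple
    stand-in here for the fractional-order hyperchaotic Lorenz system).
    The seed x0 plays the role of the secret key."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_cipher(data, key_seed=0.37):
    """Encrypt/decrypt by XOR-ing each byte with the keystream.
    Applying it twice with the same seed recovers the original."""
    ks = logistic_keystream(len(data), x0=key_seed)
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes(range(16))      # toy "image" of 16 pixel bytes
enc = xor_cipher(pixels)
dec = xor_cipher(enc)
```

Key sensitivity in such schemes comes from the chaotic map: a tiny change in the seed produces a completely different keystream.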
Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Jianyuan; Qin, Hong; Liu, Jian
2015-11-01
Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms on a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems: nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.
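A minimal illustration of why structure-preserving splitting matters for long-term simulation, using the leapfrog method on a harmonic oscillator (chosen here for illustration only; it is not the paper's PIC scheme): the energy error stays bounded over many periods instead of drifting.

```python
def leapfrog(q, p, dt, steps, omega=1.0):
    """Symplectic (Strang-split) integration of q' = p, p' = -omega^2 q.
    Each half-kick and drift is exactly soluble, and the composition
    preserves a symplectic structure, so energy error stays bounded."""
    for _ in range(steps):
        p -= 0.5 * dt * omega**2 * q   # half kick
        q += dt * p                    # drift
        p -= 0.5 * dt * omega**2 * q   # half kick
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * omega**2 * q * q

q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, dt=0.1, steps=10_000)   # integrate to t = 1000
drift = abs(energy(q, p) - energy(q0, p0))      # stays small
```

A non-symplectic scheme such as explicit Euler would instead show secular energy growth over the same interval.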
Ruef, M; Mendel, P; Scott, W R
1998-02-01
To draw together insights from three perspectives (health economics, organizational ecology, and institutional theory) in order to clarify the factors that influence entries of providers into healthcare markets. A model centered on the concept of an organizational field is advanced as the level of analysis best suited to examining the assortment and interdependence of organizational populations and the institutional forces that shape this co-evolution. In particular, the model argues that: (1) different populations of healthcare providers partition fiscal, geographic, and demographic resource environments in order to ameliorate competition and introduce service complementarities; and (2) competitive barriers to entry within populations of providers vary systematically with regulatory regimens. County-level entries of hospitals and home health agencies in the San Francisco Bay Area using data from the American Hospital Association (1945-1991) and California's Office of Statewide Health Planning and Development (1976-1991). Characteristics of the resource environment are derived from the Area Resource File (ARF) and selected government censuses. A comparative design is applied to contrast influences on hospital and home health agency entries during the post-World War II period. Empirical estimates are obtained using Poisson and negative binomial regression models. Hospital and HHA markets are partitioned primarily by the age and education of consumers and, to a lesser extent, by urbanization levels and public funding expenditures. Such resource partitioning allows independent HHAs to exist comfortably in concentrated hospital markets. For both hospitals and HHAs, the barriers to entry once generated by oligopolistic concentration have declined noticeably with the market-oriented reforms of the past 15 years. 
A field-level perspective demonstrates that characteristics of local resource environments interact with interdependencies of provider populations and broader regulatory regimes to affect significantly the types of provider organizations likely to enter a given healthcare market.
Ruef, M; Mendel, P; Scott, W R
1998-01-01
OBJECTIVE: To draw together insights from three perspectives (health economics, organizational ecology, and institutional theory) in order to clarify the factors that influence entries of providers into healthcare markets. A model centered on the concept of an organizational field is advanced as the level of analysis best suited to examining the assortment and interdependence of organizational populations and the institutional forces that shape this co-evolution. In particular, the model argues that: (1) different populations of healthcare providers partition fiscal, geographic, and demographic resource environments in order to ameliorate competition and introduce service complementarities; and (2) competitive barriers to entry within populations of providers vary systematically with regulatory regimens. DATA SOURCES: County-level entries of hospitals and home health agencies in the San Francisco Bay Area using data from the American Hospital Association (1945-1991) and California's Office of Statewide Health Planning and Development (1976-1991). Characteristics of the resource environment are derived from the Area Resource File (ARF) and selected government censuses. METHODS OF ANALYSIS: A comparative design is applied to contrast influences on hospital and home health agency entries during the post-World War II period. Empirical estimates are obtained using Poisson and negative binomial regression models. RESULTS: Hospital and HHA markets are partitioned primarily by the age and education of consumers and, to a lesser extent, by urbanization levels and public funding expenditures. Such resource partitioning allows independent HHAs to exist comfortably in concentrated hospital markets. For both hospitals and HHAs, the barriers to entry once generated by oligopolistic concentration have declined noticeably with the market-oriented reforms of the past 15 years. 
CONCLUSION: A field-level perspective demonstrates that characteristics of local resource environments interact with interdependencies of provider populations and broader regulatory regimes to affect significantly the types of provider organizations likely to enter a given healthcare market. PMID:9460486
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
..., as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms June 17, 2010. On... Hybrid System. Each rule currently provides allocation algorithms the Exchange can utilize when executing incoming electronic orders, including the Ultimate Matching Algorithm (``UMA''), and price-time and pro...
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a lower order approximation that closely reflects the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
A fourth-order Cartesian grid embedded boundary method for Poisson’s equation
Devendran, Dharshi; Graves, Daniel; Johansen, Hans; ...
2017-05-08
In this paper, we present a fourth-order algorithm to solve Poisson's equation in two and three dimensions. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We use a weighted least squares algorithm to solve for our stencils. We use convergence tests to demonstrate accuracy and we show the eigenvalues of the operator to demonstrate stability. We compare accuracy and performance with an established second-order algorithm. We also discuss in depth strategies for retaining higher-order accuracy in the presence of nonsmooth geometries.
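A hedged sketch of the kind of convergence test mentioned above, using the standard fourth-order five-point stencil for a second derivative (not the paper's embedded-boundary operator): halving h should reduce the error by roughly a factor of 16.

```python
import math

def d2_fourth_order(f, x, h):
    """Fourth-order central stencil for f''(x):
    (-f(x-2h) + 16 f(x-h) - 30 f(x) + 16 f(x+h) - f(x+2h)) / (12 h^2)."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h * h)

f, x = math.sin, 0.7
exact = -math.sin(x)                 # (sin x)'' = -sin x
e1 = abs(d2_fourth_order(f, x, 0.1) - exact)
e2 = abs(d2_fourth_order(f, x, 0.05) - exact)
order = math.log(e1 / e2, 2)         # observed convergence order, close to 4
```

This is the same Richardson-style check used in convergence studies: the slope of log(error) versus log(h) reveals the order of accuracy.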
SDIA: A dynamic situation driven information fusion algorithm for cloud environment
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Tong; Wang, Jian
2017-09-01
Information fusion is an important issue in the information integration domain. In order to form an extensive information fusion technology under complex and diverse situations, a new information fusion algorithm is proposed. First, a fuzzy evaluation model of tag utility is proposed that can be used to compute the tag entropy. Second, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Third, similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, in order to reduce the time complexity of the tree-compatible matching algorithm, a fast ordered tree matching algorithm based on node entropy is proposed, which is used to support information fusion by ubiquitous situation. Since the algorithm evolves from graph-theoretic unordered tree matching, it can improve the recall rate and precision rate of information fusion in the situation. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, which demonstrates the novelty and applicability of the algorithm.
Parameterized Complexity of k-Anonymity: Hardness and Tractability
NASA Astrophysics Data System (ADS)
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri
The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has recently been proposed is k-anonymity, where the rows of a table are partitioned into clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
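A minimal sketch of the suppression step of k-anonymity for one fixed cluster; the hard part of the problem, choosing the partition that minimizes total suppressions, is deliberately not attempted here.

```python
def suppress(cluster):
    """Make all rows in a cluster identical by replacing every
    column on which the rows disagree with '*'.
    Returns the anonymized rows and the number of suppressed entries."""
    cols = list(zip(*cluster))            # column-wise view of the cluster
    out_cols, suppressed = [], 0
    for col in cols:
        if len(set(col)) == 1:            # all rows already agree
            out_cols.append(col)
        else:                             # disagree: suppress whole column
            out_cols.append(('*',) * len(col))
            suppressed += len(col)
    return [tuple(r) for r in zip(*out_cols)], suppressed

# Toy table over a binary alphabet, one cluster of size k = 3:
cluster = [(0, 1, 1), (0, 1, 0), (0, 0, 1)]
rows, cost = suppress(cluster)
# All three rows become the identical tuple (0, '*', '*'); cost = 6
```

The optimization problem studied in the paper asks which grouping of the table's rows into clusters of size at least k makes this total cost smallest.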
Water retention curve for hydrate-bearing sediments
NASA Astrophysics Data System (ADS)
Dai, Sheng; Santamarina, J. Carlos
2013-11-01
The water retention curve plays a central role in numerical algorithms that model hydrate dissociation in sediments. The determination of the water retention curve for hydrate-bearing sediments faces experimental difficulties, and most studies assume constant water retention curves regardless of hydrate saturation. This study employs network model simulation to investigate the water retention curve for hydrate-bearing sediments. Results show that (1) hydrate in pores shifts the curve to higher capillary pressures, and the air entry pressure increases as a power function of hydrate saturation; (2) the air entry pressure is lower in sediments with patchy rather than distributed hydrate, with higher pore size variation and pore connectivity, or with lower specimen slenderness along the flow direction; and (3) smaller specimens render higher variance in computed water retention curves, especially at high water saturation (Sw > 0.7). Results are relevant to other sediment pore processes such as bioclogging and mineral precipitation.
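A hedged sketch of finding (1): a Brooks-Corey-type retention curve in which the air entry pressure grows as a power function of hydrate saturation. The functional form and every parameter value below are assumptions for illustration, not fits from the study.

```python
def capillary_pressure(Sw, Sh, P0=1.0, m=1.5, lam=0.4, Swr=0.1):
    """Brooks-Corey-type water retention curve (illustrative form).

    Air entry pressure grows as a power function of hydrate saturation:
        P_entry = P0 / (1 - Sh)**m          (hypothetical exponent m)
    Pc = P_entry * Se**(-1/lam), where Se is the effective water
    saturation and lam a pore-size distribution index.
    """
    Se = (Sw - Swr) / (1.0 - Swr)          # effective water saturation
    P_entry = P0 / (1.0 - Sh) ** m         # rises with hydrate saturation
    return P_entry * Se ** (-1.0 / lam)

# Hydrate in pores shifts the curve to higher capillary pressures:
pc_no_hydrate = capillary_pressure(Sw=0.5, Sh=0.0)
pc_hydrate = capillary_pressure(Sw=0.5, Sh=0.4)
```

At equal water saturation, the hydrate-bearing case sits above the hydrate-free case, reproducing the qualitative shift described in the abstract.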
NASA automatic subject analysis technique for extracting retrievable multi-terms (NASA TERM) system
NASA Technical Reports Server (NTRS)
Kirschbaum, J.; Williamson, R. E.
1978-01-01
Current methods for information processing and retrieval used at the NASA Scientific and Technical Information Facility are reviewed. A more cost effective computer aided indexing system is proposed which automatically generates print terms (phrases) from the natural text. Satisfactory print terms can be generated in a primarily automatic manner to produce a thesaurus (NASA TERMS) which extends all the mappings presently applied by indexers, specifies the worth of each posting term in the thesaurus, and indicates the areas of use of the thesaurus entry phrase. These print terms enable the computer to determine which of several terms in a hierarchy is desirable and to differentiate ambiguous terms. Steps in the NASA TERMS algorithm are discussed and the processing of surrogate entry phrases is demonstrated using four previously manually indexed STAR abstracts for comparison. The simulation shows phrase isolation, text phrase reduction, NASA terms selection, and RECON display.
Computational Aerothermodynamics in Aeroassist Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2001-01-01
Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.
Later-borns Don't Give Up: The Temporary Effects of Birth Order on European Earnings.
Bertoni, Marco; Brunello, Giorgio
2016-04-01
The existing empirical evidence on the effects of birth order on wages does not distinguish between temporary and permanent effects. Using data from 11 European countries for males born between 1935 and 1956, we show that firstborns enjoy on average a 13.7% premium in their entry wage compared with later-borns. This advantage, however, is short-lived and disappears 10 years after labor market entry. Although firstborns start with a better job, partially because of their higher education, later-borns quickly catch up by switching earlier and more frequently to better-paying jobs. We argue that a key factor driving our findings is that later-borns have lower risk aversion than firstborns.
75 FR 57061 - Public Land Order No. 7749; Extension of Public Land Order Nos. 6801 and 6812; Arizona
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-17
... National Forest System lands from location or entry under the United States mining laws (30 U.S.C. chapter... Service Coronado National Forest Office, Federal Building, 300 West Congress Street, Tucson, Arizona 85701.... Public Land Order No. 6801 (55 FR 38550, (1990)) that withdrew 61.356 acres of National Forest System...
ERIC Educational Resources Information Center
Komljenovic, Janja
2017-01-01
This paper focuses on market-making in the higher education sector and particularly on the role of the market ordering processes. The entry point to examine relations between market ordering and market-making is a private company called ICEF GmbH from Germany. ICEF is engaged in selling particular kinds of education services, delivered by…
NASA Technical Reports Server (NTRS)
Xu, Lu T.; Jaffe, Richard L.; Schwenke, David W.; Panesi, Marco
2017-01-01
Vibrationally excited CO2, formed by the two-body recombination of CO(¹Σ⁺) and O(³P) in the wake behind spacecraft entering the Martian atmosphere, is potentially responsible for the higher-than-anticipated radiative heating of the backshell compared to pre-flight predictions. This process involves a spin-forbidden transition of the transient triplet CO2 molecule to the longer-lived singlet. To accurately predict the singlet-triplet transition probability and estimate the thermal rate coefficient of the recombination reaction, ab initio methods were used to compute the first singlet and three lowest triplet CO2 potential energy surfaces and the spin-orbit coupling matrix elements between these states. Analytical fits to these four potential energy surfaces were generated for surface hopping trajectory calculations using Tully's fewest-switches surface hopping algorithm. Preliminary results for the trajectory calculations are presented. The calculated probability of a CO(¹Σ⁺) and O(³P) collision leading to singlet CO2 formation is on the order of 10⁻⁴. The predicted flowfield conditions for various Mars entry scenarios give temperatures in the range of 1000-4000 K and pressures in the range of 300-2500 Pa at the shoulder and in the wake, which is consistent with a heavy-particle collision frequency of 10⁶ to 10⁷ per second. Owing to this low collision frequency, it is likely that singlet CO2 molecules formed by this mechanism will mostly be frozen in a highly nonequilibrium rovibrational energy state until they relax by photoemission.
Carreras, Francisco Javier; Medina, Javier; Ruiz-Lozano, Mariola; Carreras, Ignacio; Castro, Juan Luis
2014-04-17
As part of a larger project on virtual tissue engineering of the optic pathways, we describe the conditions that guide axons extending from the retina to the optic nerve head and formulate algorithms that meet such conditions. To find the entrance site on the optic nerve head of each axon, we challenge the fibers to comply with current models of axonal pathfinding. First, we build a retinal map using a single type of retinal ganglion cell (RGC) using density functions from the literature. Dendritic arbors are equated to receptive fields. Shape and size of retinal surface and optic nerve head (ONH) are defined. A computer model relates each soma to the corresponding entry point of its axon into the optic disc. Weights are given to the heuristics that guide the preference entry order in the nerve. Retinal ganglion cells from the area centralis saturate the temporal section of the disc. Retinal ganglion cells temporal to the area centralis curve their paths surrounding the fovea; some of these cells enter the disc centrally rather than peripherally. Nasal regions of the disc receive mixed axons from the far periphery of the temporal hemiretina, together with axons from the nasal half. The model plots the course of the axon using Bezier curves and compares them with clinical data, for a coincidence level of 86% or higher. Our model is able to simulate basic data of the early optic pathways including certain singularities and to mimic mechanisms operating during development, such as timing and fasciculation. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Matrix computations in MACSYMA
NASA Technical Reports Server (NTRS)
Wang, P. S.
1977-01-01
Facilities built into MACSYMA for manipulating matrices with numeric or symbolic entries are described. Computations will be done exactly, keeping symbols as symbols. Topics discussed include how to form a matrix and create other matrices by transforming existing matrices within MACSYMA; arithmetic and other computation with matrices; and user control of computational processes through the use of optional variables. Two algorithms designed for sparse matrices are given. The computing times of several different ways to compute the determinant of a matrix are compared.
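A sketch in the same spirit of exact computation (not MACSYMA's actual code): the fraction-free Bareiss algorithm computes an integer-matrix determinant exactly, because every intermediate division is guaranteed to be exact.

```python
def bareiss_det(M):
    """Exact determinant of an integer matrix via fraction-free
    Bareiss elimination; all intermediate divisions are exact, so
    no rounding ever occurs."""
    A = [row[:] for row in M]             # work on a copy
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                  # find a nonzero pivot below
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                  # singular column => det is 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The division by the previous pivot is always exact.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

det = bareiss_det([[2, 0, 1], [1, 3, 2], [0, 1, 4]])   # exact integer result
```

Unlike naive fraction arithmetic, Bareiss keeps intermediate entries small, which is one reason exact-arithmetic systems favor it for dense integer matrices.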
FIM Avionics Operations Manual
NASA Technical Reports Server (NTRS)
Alves, Erin E.
2017-01-01
This document describes the operation and use of the Flight Interval Management (FIM) Application installed on an electronic flight bag (EFB). Specifically, this document includes: 1) screen layouts for each page of the interface; 2) step-by-step instructions for data entry, data verification, and input error correction; 3) algorithm state messages and error condition alerting messages; 4) aircraft speed guidance and deviation indications; and 5) graphical display of the spatial relationships between the Ownship aircraft and the Target aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
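A hedged sketch of the idea for the linear(ized) case: for x' = Ax + u with constant input u, the steady state is x* = -A⁻¹u, so an entry of the influence matrix is the sign of the corresponding entry of -A⁻¹; sampling parameters and checking whether each sign is invariant probes structural determinacy. The two-variable feedback model and sampling ranges below are assumptions, not from the paper.

```python
import numpy as np

def influence_signs(A, tol=1e-9):
    """Sign pattern of steady-state responses for x' = A x + u:
    x* = -A^{-1} u, so entry (i, j) is the sign of (-A^{-1})[i, j].
    Magnitudes below tol are treated as structural zeros."""
    M = -np.linalg.inv(A)
    return np.where(np.abs(M) < tol, 0.0, np.sign(M))

def structural_signs(build_A, samples=200, seed=0):
    """Sample random parameters; an entry is structurally determinate
    only if its sign agrees across every sampled parameter set.
    Indeterminate entries are marked NaN."""
    rng = np.random.default_rng(seed)
    mats = [influence_signs(build_A(rng)) for _ in range(samples)]
    first = mats[0]
    determinate = np.all(np.stack(mats) == first, axis=0)
    return np.where(determinate, first, np.nan)

# Toy network: x1 represses x2; both degrade (rates sampled randomly).
def build_A(rng):
    d1, d2, k = rng.uniform(0.5, 2.0, size=3)
    return np.array([[-d1, 0.0], [-k, -d2]])

S = structural_signs(build_A)
# Input on x1 raises x1 and lowers x2; input on x2 only raises x2,
# regardless of the sampled rates: every entry is determinate.
```

This finite sampling only suggests determinacy; the paper's contribution is showing that for a broad model class a finite number of tests actually suffices.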
High order multi-grid methods to solve the Poisson equation
NASA Technical Reports Server (NTRS)
Schaffer, S.
1981-01-01
High order multigrid methods based on finite difference discretization of the model problem are examined. The following are described: (1) a fixed high order FMG-FAS multigrid algorithm and (2) the high order methods themselves. Results are then presented for four problems, using each method with the same underlying fixed FMG-FAS algorithm.
Missing value imputation in DNA microarrays based on conjugate gradient method.
Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh
2012-02-01
Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm that is based on conjugate gradient (CG) method is proposed to estimate missing values. k-nearest neighbors of the missed entry are first selected based on absolute values of their Pearson correlation coefficient. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. CG algorithm with this subset as its input is then used to estimate the missing values. Our proposed CG based algorithm (CGimpute) is evaluated on different data sets. The results are compared with sequential local least squares (SLLSimpute), Bayesian principle component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average of normalized root mean squares error (NRMSE) and relative NRMSE in different data sets with various missing rates shows CGimpute outperforms other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
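A simplified sketch of the neighbor-selection and estimation steps described above: rank candidate genes by the absolute Pearson correlation over the observed positions, then estimate the missing entry by a least-squares fit on the best neighbors. Here an ordinary least-squares solve stands in for the paper's conjugate gradient solver, and the toy data are assumptions.

```python
import numpy as np

def impute_missing(X, row, col, k=2):
    """Estimate X[row, col] (assumed missing) from the k rows most
    correlated with `row` (by |Pearson r| over the other columns),
    via a least-squares fit (a stand-in for the paper's CG solver)."""
    obs = [j for j in range(X.shape[1]) if j != col]   # observed columns
    target = X[row, obs]
    scores = []
    for i in range(X.shape[0]):
        if i == row:
            continue
        r = np.corrcoef(target, X[i, obs])[0, 1]
        scores.append((abs(r), i))
    neighbors = [i for _, i in sorted(scores, reverse=True)[:k]]
    A = X[neighbors][:, obs].T            # observed columns of neighbors
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return float(w @ X[neighbors, col])   # combine neighbors' known values

X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [1.0, 1.0, 2.0, 2.0]])
est = impute_missing(X, row=0, col=3)     # true value would be 4.0
```

The iterative CG solver matters in the real setting because the neighbor systems are large; for this toy size a direct least-squares solve is equivalent.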
Net2Align: An Algorithm For Pairwise Global Alignment of Biological Networks
Wadhwa, Gulshan; Upadhyaya, K. C.
2016-01-01
The amount of data on molecular interactions is growing at an enormous pace, whereas the progress of methods for analysing these data is lagging behind. This is a particular problem in the area of comparative analysis of biological networks, where one wishes to explore the similarity between two biological networks. Given that functionality primarily emerges at the network level, robust comparison methods are needed. In this paper, we describe Net2Align, an algorithm for pairwise global alignment that takes node-to-node as well as edge-to-edge correspondences into consideration. The uniqueness of our algorithm lies in the fact that it can also detect the type of interaction, which is essential in the case of directed graphs; existing algorithms are only able to identify common nodes, not common edges. Another striking feature of the algorithm is that it removes duplicate entries when variable datasets are aligned. This is achieved through the creation of a local database that excludes duplicate links. In a comprehensive computational study on gene regulatory networks, we establish that our algorithm surpasses its counterparts. Net2Align has been implemented in Java 7 and the source code is available as supplementary files. PMID:28356678
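A minimal sketch of the comparison Net2Align is described as performing: common nodes, common directed edges with the interaction type respected, and duplicate removal (set semantics standing in for the paper's local database). The triple representation of a network is an assumption for illustration.

```python
def align(net1, net2):
    """Pairwise global comparison of two directed networks given as
    lists of (source, interaction, target) triples. Using sets removes
    duplicate entries, standing in for Net2Align's local database."""
    e1, e2 = set(net1), set(net2)
    nodes = lambda es: {n for s, _, t in es for n in (s, t)}
    return {
        "common_nodes": nodes(e1) & nodes(e2),
        "common_edges": e1 & e2,   # same direction AND same interaction type
    }

net1 = [("geneA", "activates", "geneB"),
        ("geneB", "represses", "geneC"),
        ("geneA", "activates", "geneB")]    # duplicate entry, removed
net2 = [("geneA", "activates", "geneB"),
        ("geneC", "represses", "geneB")]    # reversed direction: no match
res = align(net1, net2)
```

Because edges carry both a direction and a type, the reversed edge in `net2` correctly fails to match, which is the distinction the abstract emphasizes over node-only methods.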
Creation of a Book Order Management System Using a Microcomputer and a DBMS.
ERIC Educational Resources Information Center
Neill, Charlotte; And Others
1985-01-01
Describes management decisions and resultant technology-based system that allowed a medical library to meet increasing workloads without accompanying increases in resources available. Discussion covers system analysis; capabilities of book-order management system, "BOOKDIRT;" software and training; hardware; data files; data entry;…
78 FR 8220 - Actions Taken Pursuant to Executive Order 13382
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-05
... DEPARTMENT OF THE TREASURY Office of Foreign Assets Control Actions Taken Pursuant to Executive Order 13382 ACTION: Notice. SUMMARY: The Treasury Department's Office of Foreign Assets Control (``OFAC'') is announcing an update to the entry of an entity on OFAC's SDN List by adding an alias to the entity...
Warehouse stocking optimization based on dynamic ant colony genetic algorithm
NASA Astrophysics Data System (ADS)
Xiao, Xiaoxu
2018-04-01
In view of the various orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the enterprise's warehousing units, thereby optimizing warehouse logistics and improving the speed of order processing. In addition, relevant intelligent algorithms for optimizing the stocking-route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem is taken as an example to demonstrate the effectiveness of the parameter optimization.
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data from an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on several benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
Departure Energies, Trip Times and Entry Speeds for Human Mars Missions
NASA Technical Reports Server (NTRS)
Munk, Michelle M.
1999-01-01
The study examines how the mission design variables departure energy, entry speed, and trip time vary for round-trip conjunction-class Mars missions. These three parameters must be balanced in order to produce a mission that is acceptable in terms of mass, cost, and risk. For the analysis, a simple, massless-planet trajectory program was employed. The premise of this work is that if the trans-Mars and trans-Earth injection stages are designed for the most stringent opportunity in the energy cycle, then there is extra energy capability in the "easier" opportunities which can be used to decrease the planetary entry speed, or shorten the trip time. Both of these effects are desirable for a human exploration program.
Micromechanical Characterization and Testing of Carbon Based Woven Thermal Protection Materials
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Pham, John T.; Arnold, James O.; Peterson, Keith; Venkatapathy, Ethiraj
2013-01-01
Woven thermal protection system (TPS) materials are one of the enabling technologies for mechanically deployable hypersonic decelerator systems. These materials can be simultaneously used for thermal protection and as structural load-bearing members during the entry, descent and landing operations. In order to ensure successful thermal and structural performance during atmospheric entry, it is important to characterize the properties of these materials once they have been subjected to entry-like conditions. The present paper focuses on the mechanical characteristics of pre- and post-arc-jet-tested woven TPS samples at different scales. It also presents observations from scanning electron microscope and computed tomography images, and explains the changes in microstructure after exposure to combined thermal-mechanical loading environments.
Predicting Intentional Communication in Preverbal Preschoolers with Autism Spectrum Disorder.
Sandbank, Micheal; Woynaroski, Tiffany; Watson, Linda R; Gardner, Elizabeth; Keçeli Kaysili, Bahar; Yoder, Paul
2017-06-01
Intentional communication has previously been identified as a value-added predictor of expressive language in preverbal preschoolers with autism spectrum disorder. In the present study, we sought to identify value-added predictors of intentional communication. Of five theoretically-motivated putative predictors of intentional communication measured early in the study (at study entry and 4 months after), three had significant zero-order correlations with later intentional communication (12 months after study entry) and were thus added to a linear model that predicted later intentional communication scores controlling for initial intentional communication scores at study entry. After controlling for initial intentional communication, early motor imitation was the only predictor that accounted for a significant amount of variance in children's later intentional communication.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... compete with the algorithms that member firms and other market participants currently use to achieve VWAP... orders generated by market participants that may choose to use a competing algorithm. IV. Procedure... offer trading algorithms that would compete with other market participants would impose an undue burden...
An Automated Energy Detection Algorithm Based on Consecutive Mean Excision
2018-01-01
Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistics. (Only fragments of the report front matter were captured; they indicate an automated algorithm that sets an energy-detection threshold for signals present in the RF spectrum using consecutive mean excision and a rank-order filter.)
Recursive least-squares learning algorithms for neural networks
NASA Astrophysics Data System (ADS)
Lewis, Paul S.; Hwang, Jenq N.
1990-11-01
This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least-squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least-squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule. BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation.
The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
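The per-pattern recursion described above is the standard RLS update. The sketch below runs it on a plain linear model so the recursion itself can be checked; for the MLP case in the paper, the regressor for each pattern would instead be the backpropagated gradient of the network output with respect to the weights (the function name and parameters here are illustrative, not from the paper):

```python
import numpy as np

def rls_fit(X, d, lam=0.99, delta=100.0):
    """Standard recursive least-squares recursion.

    lam   -- forgetting factor
    delta -- initial scaling of P, the running inverse-Hessian estimate
    """
    n = X.shape[1]
    w = np.zeros(n)
    P = np.eye(n) * delta              # inverse-Hessian estimate (N x N)
    for x, target in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)        # gain vector
        e = target - w @ x             # a-priori prediction error
        w = w + k * e                  # parameter update
        P = (P - np.outer(k, Px)) / lam
    return w

# Noiseless linear data: RLS recovers the true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
w_hat = rls_fit(X, X @ w_true)
print(np.allclose(w_hat, w_true, atol=1e-3))  # True
```

The O(N^2) cost per update mentioned in the abstract is visible in the rank-one update of the N x N matrix P.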
Fast algorithm for automatically computing Strahler stream order
Lanfear, Kenneth J.
1990-01-01
An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
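The ordering rule the algorithm implements is standard: a headwater segment has order 1, and at a junction the downstream segment takes the maximum upstream order, incremented by one when that maximum is shared by two or more tributaries. A minimal sketch on a toy network (the dictionary representation is hypothetical; the paper's GIS implementation differs and also handles braided streams and multiple outlets):

```python
def strahler_order(upstream):
    """Compute Strahler order for each stream segment.

    upstream maps each segment id to the list of segment ids that
    flow directly into it (empty list for headwater segments).
    """
    order = {}

    def visit(seg):
        if seg in order:
            return order[seg]
        ups = [visit(u) for u in upstream[seg]]
        if not ups:
            order[seg] = 1  # headwater segment
        else:
            top = max(ups)
            # increment only when the maximum order is shared
            order[seg] = top + 1 if ups.count(top) >= 2 else top
        return order[seg]

    for seg in upstream:
        visit(seg)
    return order

# Two order-1 headwaters joining give an order-2 segment:
net = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
print(strahler_order(net)["c"])  # 2
```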
Mission Sizing and Trade Studies for Low Ballistic Coefficient Entry Systems to Venus
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Smith, Brandon; Prabhu, Dinesh; Venkatapathy, Ethiraj
2012-01-01
The U.S. and the U.S.S.R. have sent seventeen successful atmospheric entry missions to Venus. Past missions to Venus have utilized rigid aeroshell systems for entry. This rigid aeroshell paradigm sets performance limitations, since the size of the entry vehicle is constrained by the fairing diameter of the launch vehicle. This has limited ballistic coefficients (beta) to well above 100 kg/m2 for the entry vehicles. In order to maximize the science payload and minimize the Thermal Protection System (TPS) mass, these missions have entered at very steep entry flight path angles (gamma). Due to Venus' thick atmosphere and the steep-gamma, high-beta conditions, these entry vehicles have been exposed to very high heat fluxes, very high pressures, and extreme decelerations (upwards of 100 g's). Deployable aeroshells avoid the launch vehicle fairing diameter constraint by expanding to a larger diameter after launch. Due to the potentially larger wetted area, deployable aeroshells achieve lower ballistic coefficients (well below 100 kg/m2), and if they are flown at shallower flight path angles, the entry vehicle can access trajectories with far lower decelerations (50-60 g's), peak heat fluxes (400 W/cm2), and peak pressures. The structural and TPS mass of the shallow-gamma, low-beta deployables is lower than that of their steep-gamma, high-beta rigid aeroshell counterparts at larger diameters, contributing to lower areal densities and potentially higher payload mass fractions. For example, at large diameters, deployables may attain aeroshell areal densities of 10 kg/m2, as opposed to 50 kg/m2 for rigid aeroshells. However, the low-beta, shallow-gamma paradigm also raises issues, such as the possibility of skip-out during entry. The shallow-gamma could also increase the landing footprint of the vehicle. Furthermore, the deployable entry systems may be flexible, so there could be fluid-structure interaction, especially in the high-altitude, low-density regimes.
The need for precision in guidance, navigation and control during entry also has to be better understood. This paper investigates some of the challenges facing the design of a shallow-gamma, low-beta entry system.
Improving serum calcium test ordering according to a decision algorithm.
Faria, Daniel K; Taniguchi, Leandro U; Fonseca, Luiz A M; Ferreira-Junior, Mario; Aguiar, Francisco J B; Lichtenstein, Arnaldo; Sumita, Nairo M; Duarte, Alberto J S; Sales, Maria M
2018-05-18
To detect differences in the pattern of serum calcium test ordering before and after the implementation of a decision algorithm. We studied patients admitted to an internal medicine ward of a university hospital in April 2013 and April 2016. Patients were classified as critical or non-critical on the day when each test was performed. Adequacy of ordering was defined according to adherence to a decision algorithm implemented in 2014. Total and ionised calcium tests per patient-day of hospitalisation significantly decreased after the algorithm implementation, and duplication of tests (total and ionised calcium measured in the same blood sample) was reduced by 49%. Overall adequacy of ionised calcium determinations increased by 23% (P=0.0001) due to the increase in the adequacy of ionised calcium ordering in non-critical conditions. A decision algorithm can be a useful educational tool to improve adequacy of the process of ordering serum calcium tests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability, and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly, the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2003-07-01
Solving systems of reaction-diffusion equations in three space dimensions can be prohibitively expensive both in terms of storage and CPU time. Herein, I present a new incomplete assembly procedure that is designed to reduce storage requirements. Incomplete assembly is analogous to incomplete factorization in that only a fixed number of nonzero entries are stored per row and a drop tolerance is used to discard small values. The algorithm is incorporated in a finite element method-of-lines code and tested on a set of reaction-diffusion systems. The effect of incomplete assembly on CPU time and storage and on the performance of the temporal integrator DASPK, algebraic solver GMRES and preconditioner ILUT is studied.
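As a rough illustration of the dual-threshold idea described above (the paper's actual dropping rule may differ), one row of an assembled sparse matrix can be pruned by a drop tolerance and a per-row fill limit, in the spirit of ILUT-style dropping; the function and its parameters are hypothetical:

```python
import numpy as np

def incomplete_row(values, cols, p=3, tol=1e-3):
    """Keep at most p largest-magnitude entries of one assembled row,
    after discarding entries below tol times the row's largest entry
    (illustrative dual-threshold dropping, as in ILUT-type methods)."""
    values = np.asarray(values, float)
    cols = np.asarray(cols)
    big = np.abs(values).max()
    keep = np.abs(values) >= tol * big        # drop-tolerance pass
    values, cols = values[keep], cols[keep]
    idx = np.argsort(-np.abs(values))[:p]     # fill-limit pass
    idx.sort()                                # restore column order
    return values[idx], cols[idx]

# Tiny entry (col 1) dropped by tolerance; smallest survivor (col 3)
# dropped by the fill limit p=3:
vals, cols = incomplete_row([4.0, -0.001, 2.5, 0.5, -3.0],
                            [0, 1, 2, 3, 4], p=3, tol=1e-3)
print(cols.tolist())  # [0, 2, 4]
```

Storage per row is then bounded by p regardless of how much fill the exact assembly would produce, which is the storage saving the abstract describes.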
Simulator for heterogeneous dataflow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
1993-01-01
A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a homogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment, which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability, and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and, later, on solid rockets. Slowly, the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.
Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F
2011-03-01
This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series on the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
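The surface-based approach rests on converting volume integrals to surface integrals over the mesh. For the zeroth-order moment this reduces to summing signed tetrahedron volumes over the triangles, via the divergence theorem; the sketch below shows only that base case (the paper's exact algorithm generalizes the idea to arbitrary moment order):

```python
import numpy as np

def mesh_volume(verts, tris):
    """Zeroth-order geometric moment (volume) of a closed,
    outward-oriented triangle mesh: each triangle contributes the
    signed volume of the tetrahedron it forms with the origin."""
    v = np.asarray(verts, float)
    total = 0.0
    for a, b, c in tris:
        total += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return total

# Unit right tetrahedron with outward-facing triangles (volume 1/6):
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(round(mesh_volume(verts, tris), 6))  # 0.166667
```

The restriction noted in the abstract is visible here: the signed-tetrahedron trick only makes sense for a homogeneous solid bounded by the mesh.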
Aerothermodynamics of Blunt Body Entry Vehicles. Chapter 3
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Borrelli, Salvatore
2011-01-01
In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of Computational Fluid Dynamics (CFD) code predictions.
Aerothermodynamics of blunt body entry vehicles
NASA Astrophysics Data System (ADS)
Hollis, Brian R.; Borrelli, Salvatore
2012-01-01
In this chapter, the aerothermodynamic phenomena of blunt body entry vehicles are discussed. Four topics will be considered that present challenges to current computational modeling techniques for blunt body environments: turbulent flow, non-equilibrium flow, rarefied flow, and radiation transport. Examples of comparisons between computational tools to ground and flight-test data will be presented in order to illustrate the challenges existing in the numerical modeling of each of these phenomena and to provide test cases for evaluation of computational fluid dynamics (CFD) code predictions.
Potential applications of skip SMV with thrust engine
NASA Astrophysics Data System (ADS)
Wang, Weilin; Savvaris, Al
2016-11-01
This paper investigates the potential applications of Space Maneuver Vehicles (SMV) with skip trajectories. With the rapid growth of space operations over the past decades, the risk posed by space debris has increased considerably, including collision risks to space assets, property on the ground, and even aviation. Many active debris removal methods have been investigated, and in this paper a debris remediation method based on a skip SMV is first proposed. The key point is to perform a controlled re-entry. These vehicles are expected to achieve a trans-atmospheric maneuver with a thrust engine. If debris is released at an altitude below 80 km, the debris can be captured by atmospheric drag, and re-entry interface prediction accuracy is improved. Moreover, if the debris is released in a cargo at a much lower altitude, this technique protects high-value space assets from breakup in the atmosphere and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; (2) descent to a predetermined point (altitude, longitude, and latitude). The evolutionary collocation method is adopted for skip trajectory optimization due to its global optimality and high accuracy. This method is a two-step optimization approach based on a heuristic algorithm and the collocation method. The optimal-control problem is transformed into a nonlinear programming problem (NLP), which can be efficiently and accurately solved by a sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values. To reduce this sensitivity, a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values.
By comparing the simulation data from different scenarios, it is found that the skip SMV is feasible for active debris removal and that the evolutionary collocation method gives a realistic re-entry trajectory that satisfies the path and boundary constraints.
Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A
2001-05-01
The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.
El Fadly, AbdenNaji; Rance, Bastien; Lucas, Noël; Mead, Charles; Chatellier, Gilles; Lastic, Pierre-Yves; Jaulent, Marie-Christine; Daniel, Christel
2011-12-01
There are different approaches for repurposing clinical data collected in the Electronic Healthcare Record (EHR) for use in clinical research. Semantic integration of "siloed" applications across domain boundaries is the raison d'être of the standards-based profiles developed by the Integrating the Healthcare Enterprise (IHE) initiative - an initiative by healthcare professionals and industry promoting the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. In particular, the combination of two IHE profiles - the integration profile "Retrieve Form for Data Capture" (RFD), and the IHE content profile "Clinical Research Document" (CRD) - offers a straightforward approach to repurposing EHR data by enabling the pre-population of the case report forms (eCRF) used for clinical research data capture by Clinical Data Management Systems (CDMS) with previously collected EHR data. Implement an alternative solution of the RFD-CRD integration profile centered around two approaches: (i) Use of the EHR as the single-source data-entry and persistence point in order to ensure that all the clinical data for a given patient could be found in a single source irrespective of the data collection context, i.e. patient care or clinical research; and (ii) Maximize the automatic pre-population process through the use of a semantic interoperability services that identify duplicate or semantically-equivalent eCRF/EHR data elements as they were collected in the EHR context. The RE-USE architecture and associated profiles are focused on defining a set of scalable, standards-based, IHE-compliant profiles that can enable single-source data collection/entry and cross-system data reuse through semantic integration. 
Specifically, data reuse is realized through the semantic mapping of data collection fields in electronic Case Report Forms (eCRFs) to data elements previously defined as part of patient care-centric templates in the EHR context. The approach was evaluated in the context of a multi-center clinical trial conducted in a large, multi-disciplinary hospital with an installed EHR. Data elements of seven eCRFs used in a multi-center clinical trial were mapped to data elements of patient care-centric templates in use in the EHR at the Georges Pompidou hospital. 13.4% of the data elements of the eCRFs were found to be represented in EHR templates and were therefore candidates for pre-population. During the execution phase of the clinical study, the semantic mapping architecture enabled data persisted in the EHR context as part of clinical care to be used to pre-populate eCRFs without secondary data entry. To ensure that the pre-populated data are viable for use in the clinical research context, all pre-populated eCRF data need to be approved by a trial investigator before being persisted in a research data store within a CDMS. Single-source data entry in the clinical care context for use in the clinical research context - a process enabled through the use of the EHR as the single point of data entry - can, if demonstrated to be a viable strategy, not only significantly reduce data collection efforts while simultaneously increasing data collection accuracy (secondary to the elimination of transcription or double-entry errors between the two contexts), but also ensure that all the clinical data for a given patient, irrespective of the data collection context, are available in the EHR for decision support and treatment planning. The RE-USE approach used mapping algorithms to identify semantic coherence between clinical care and clinical research data elements and pre-populate eCRFs.
The RE-USE project utilized SNOMED International v.3.5 as its "pivot reference terminology" to support EHR-to-eCRF mapping, a decision that likely enhanced the "recall" of the mapping algorithms. The RE-USE results demonstrate the difficult challenges involved in semantic integration between the clinical care and clinical research contexts. Copyright © 2011 Elsevier Inc. All rights reserved.
Development of Thermal Protection Materials for Future Mars Entry, Descent and Landing Systems
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Beck, Robin A. S.; Arnold, James O.; Hwang, Helen; Wright, Michael J.; Szalai, Christine E.; Blosser, Max; Poteet, Carl C.
2010-01-01
Entry Systems will play a crucial role as NASA develops the technologies required for Human Mars Exploration. The Exploration Technology Development Program Office established the Entry, Descent and Landing (EDL) Technology Development Project to develop Thermal Protection System (TPS) materials for insertion into future Mars Entry Systems. An assessment of current entry system technologies identified significant opportunity to improve the current state of the art in thermal protection materials in order to enable landing of heavy-mass (40 mT) payloads. To accomplish this goal, the EDL Project has outlined a framework to define, develop and model the thermal protection system material concepts required to allow for the human exploration of Mars via aerocapture followed by entry. Two primary classes of ablative materials are being developed: rigid and flexible. The rigid ablatives will be applied to the acreage of a 10x30 m rigid mid-L/D aeroshell to endure the dual-pulse heating (peak approx. 500 W/sq cm). Likewise, flexible ablative materials are being developed for 20-30 m diameter deployable aerodynamic decelerator entry systems that could endure dual-pulse heating (peak approx. 120 W/sq cm). A technology roadmap is presented that will be used for facilitating the maturation of both the rigid and flexible ablative materials through application of decision metrics (requirements, key performance parameters, TRL definitions, and evaluation criteria) used to assess and advance the various candidate TPS material technologies.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
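The "essentially zero sidelobes" claim above rests on the defining identity of a Golay complementary pair: the aperiodic autocorrelations of the two sequences sum to a delta (all nonzero-lag terms cancel). A quick numerical check with a standard length-4 pair:

```python
import numpy as np

def autocorr(x):
    """Aperiodic (positive-lag) autocorrelation of a +/-1 sequence."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

# A length-4 Golay complementary pair:
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

# Individual autocorrelations have sidelobes, but they cancel in the sum:
s = autocorr(a) + autocorr(b)
print(s.tolist())  # [8, 0, 0, 0]
```

The Doppler sensitivity discussed in the abstract arises because this cancellation only holds exactly for static (zero-Doppler) returns; phase rotation between the two pulses breaks it, which is what the transmission-ordering and weighting schemes are designed to mitigate.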
Wolf, Matthew; Miller, Suzanne; DeJong, Doug; House, John A; Dirks, Carl; Beasley, Brent
2016-09-01
To establish a process for the development of a prioritization tool for a clinical decision support build within a computerized provider order entry system and concurrently to prioritize alerts for Saint Luke's Health System. The process of prioritizing clinical decision support alerts included (a) consensus sessions to establish a prioritization process and identify clinical decision support alerts through a modified Delphi process and (b) a clinical decision support survey to validate the results. All members of our health system's physician quality organization, Saint Luke's Care as well as clinicians, administrators, and pharmacy staff throughout Saint Luke's Health System, were invited to participate in this confidential survey. The consensus sessions yielded a prioritization process through alert contextualization and associated Likert-type scales. Utilizing this process, the clinical decision support survey polled the opinions of 850 clinicians with a 64.7 percent response rate. Three of the top rated alerts were approved for the pre-implementation build at Saint Luke's Health System: Acute Myocardial Infarction Core Measure Sets, Deep Vein Thrombosis Prophylaxis within 4 h, and Criteria for Sepsis. This study establishes a process for developing a prioritization tool for a clinical decision support build within a computerized provider order entry system that may be applicable to similar institutions. © The Author(s) 2015.
Efficiently Sorting Zoo-Mesh Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, R; Max, N; Silva, C
The authors describe the SXMPVO algorithm for computing a visibility ordering of zoo-mesh polyhedra. In practice, the algorithm runs in linear time, and the visibility ordering it produces is exact.
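The ordering step can be pictured as a topological sort: once pairwise "A must be drawn before B" relations between cells are known, any linear extension is a valid back-to-front ordering. The sketch below is a generic illustration using Kahn's algorithm on a hand-built relation, not SXMPVO itself (which derives the relations from mesh geometry).

```python
# Illustrative sketch: given pairwise "draw a before b" relations between
# cells, a visibility ordering is a topological sort of that DAG.
# (The relation below is a hypothetical hand-built example.)
from collections import deque

def visibility_order(num_cells, behind):
    """behind: list of (a, b) pairs meaning cell a must be drawn before b.
    Returns one back-to-front order via Kahn's algorithm."""
    succ = {c: [] for c in range(num_cells)}
    indeg = {c: 0 for c in range(num_cells)}
    for a, b in behind:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(c for c in range(num_cells) if indeg[c] == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for n in succ[c]:
            indeg[n] -= 1
            if indeg[n] == 0:
                ready.append(n)
    return order  # shorter than num_cells would indicate a visibility cycle

print(visibility_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # -> [0, 1, 2, 3]
```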
Code of Federal Regulations, 2010 CFR
2010-07-01
... 5 U.S.C. 557, to review any final order of an Administrative Law Judge in accordance with the... request for administrative review within ten (10) days of the date of entry of the Administrative Law... Administrative Hearing Officer may review an Administrative Law Judge's final order on his or her own initiative...
Algorithms For Integrating Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with existing integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
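For contrast with the asymptotic integrators the brief describes, a classical fourth-order Runge-Kutta pass over a predator/prey system of the kind mentioned above looks like this (a generic sketch; the parameter values are illustrative, not from the article):

```python
# Classical RK4 applied to a Lotka-Volterra predator/prey system
# (illustrative baseline; not the article's asymptotic algorithms).

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def lotka_volterra(t, y, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    prey, pred = y
    return [alpha*prey - beta*prey*pred,
            delta*prey*pred - gamma*pred]

y, t, h = [10.0, 5.0], 0.0, 0.01
for _ in range(2000):               # integrate to t = 20
    y = rk4_step(lotka_volterra, t, y, h)
    t += h
print(all(v > 0 for v in y))        # populations remain positive -> True
```

Explicit schemes like this one lose stability as the step grows, which is the limitation the asymptotically correct algorithms address.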
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida
2015-05-01
Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Due to the requirement of prompt and accurate diagnosis of malaria, the current study has proposed an unsupervised pixel segmentation based on a clustering algorithm in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. To obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on a clustering algorithm is applied to the intensity component of the malaria image in order to segment the infected cells from the blood-cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms have been proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image and remove small unwanted background regions. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear on the image and, due to their size, cannot be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms has been analyzed qualitatively and quantitatively by comparing the cascaded clustering algorithm with the MKM and FCM clustering algorithms alone. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.
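The clustering core of such a pipeline can be sketched with a plain 1-D k-means on pixel intensities (a simplified stand-in for the cascaded MKM/FCM algorithms; the intensity values below are synthetic, not real blood-smear data):

```python
# Simplified 1-D k-means on pixel intensities, standing in for the
# cascaded MKM/FCM clustering step (synthetic data, illustration only).

def kmeans_1d(values, k, iters=50):
    # Spread initial centers across the observed intensity range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Synthetic intensities: dark stained-parasite pixels vs bright background.
pixels = [38, 40, 42, 41, 39] * 4 + [198, 200, 202, 201, 199] * 16
centers, clusters = kmeans_1d(pixels, k=2)
# The darker cluster center corresponds to candidate parasite regions.
print(round(min(centers)), round(max(centers)))   # -> 40 200
```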
Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.
Yu, Zhang; Yang, Xiaomei
2013-01-01
By analyzing the characteristics of the whole-set orders problem and combining it with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into the algorithm. Furthermore, experimental results prove its feasibility and efficiency.
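A minimal glowworm swarm optimizer on a one-dimensional objective conveys the basic luciferin-update and movement loop (a generic sketch only; the paper's scheduling variant adds the hybrid encoding and the dynamically changing step, and all parameter values below are assumptions):

```python
# Minimal glowworm swarm optimization on a 1-D objective (illustrative;
# not the paper's whole-set-orders scheduling algorithm).
import random

def gso(J, lo, hi, n=30, iters=100, step=0.05, rho=0.4, gamma=0.6,
        radius=1.0, seed=1):
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]  # even spread
    luc = [5.0] * n                                        # luciferin levels
    best_x = max(xs, key=J)
    for _ in range(iters):
        # Luciferin decays and is replenished in proportion to fitness.
        luc = [(1 - rho) * l + gamma * J(x) for l, x in zip(luc, xs)]
        new_xs = []
        for i, x in enumerate(xs):
            nbrs = [j for j in range(n)
                    if abs(xs[j] - x) < radius and luc[j] > luc[i]]
            if nbrs:
                d = xs[rng.choice(nbrs)] - x   # move toward a brighter peer
                if d:
                    x += step if d > 0 else -step
            new_xs.append(min(hi, max(lo, x)))
        xs = new_xs
        best_x = max(xs + [best_x], key=J)     # track best-so-far
    return best_x

best = gso(lambda x: -(x - 3.0) ** 2, 0.0, 6.0)
print(abs(best - 3.0) < 0.2)   # an evenly spread glowworm starts near 3 -> True
```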
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures
Benson, Austin R.; Gleich, David F.; Leskovec, Jure
2016-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms. PMID:27812399
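The higher-order structure at the center of the TSC discussion, the directed 3-cycle, is easy to enumerate directly on small graphs; each such triple corresponds to a nonzero entry of the tensor being partitioned. A hand-rolled counter (illustrative only, not the authors' code):

```python
# Illustrative helper: enumerating the directed 3-cycles that a TSC-style
# tensor would encode (each cycle i->j->k->i is one higher-order structure).
from itertools import combinations

def directed_3cycles(edges):
    """Count unordered node triples {i, j, k} forming a directed 3-cycle."""
    adj = set(edges)
    nodes = sorted({v for e in edges for v in e})
    count = 0
    for i, j, k in combinations(nodes, 3):
        if ((i, j) in adj and (j, k) in adj and (k, i) in adj) or \
           ((i, k) in adj and (k, j) in adj and (j, i) in adj):
            count += 1
    return count

# Feedback loop 0->1->2->0 plus a dangling edge 2->3: one directed 3-cycle.
print(directed_3cycles([(0, 1), (1, 2), (2, 0), (2, 3)]))  # -> 1
```

A partition that cuts few such triples keeps these feedback loops intact, which is the objective the TSC algorithm optimizes.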
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures terminal constraints for position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to hold promise for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
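The Bézier approximation at the heart of the method rests on De Casteljau's evaluation scheme, sketched generically below (the control points are made up for illustration, not a real entry trajectory):

```python
# De Casteljau evaluation of a Bezier curve, the approximation primitive
# the guidance law builds on (generic sketch, not the paper's code).

def bezier(points, t):
    """Evaluate a Bezier curve at parameter t via repeated interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic arc: starts at (0,0), is pulled up toward the control point
# (1,2), and returns to altitude zero at (2,0).
print(bezier([(0, 0), (1, 2), (2, 0)], 0.5))   # -> (1.0, 1.0)
```

Adjusting the control points shifts the whole curve smoothly, which is what makes the terminal-velocity correction step tractable.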
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent, and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate that the algorithms can operate in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
Style-independent document labeling: design and performance evaluation
NASA Astrophysics Data System (ADS)
Mao, Song; Kim, Jong Woo; Thoma, George R.
2003-12-01
The Medical Article Records System or MARS has been developed at the U.S. National Library of Medicine (NLM) for automated data entry of bibliographical information from medical journals into MEDLINE, the premier bibliographic citation database at NLM. Currently, a rule-based algorithm (called ZoneCzar) is used for labeling important bibliographical fields (title, author, affiliation, and abstract) on medical journal article page images. While rules have been created for medical journals with regular layout types, new rules have to be manually created for any input journals with arbitrary or new layout types. Therefore, it is of interest to label any journal articles independent of their layout styles. In this paper, we first describe a system (called ZoneMatch) for automated generation of crucial geometric and non-geometric features of important bibliographical fields based on string-matching and clustering techniques. The rule-based algorithm is then modified to use these features to perform style-independent labeling. We then describe a performance evaluation method for quantitatively evaluating our algorithm and characterizing its error distributions. Experimental results show that the labeling performance of the rule-based algorithm is significantly improved when the generated features are used.
The Mars Science Laboratory Entry, Descent, and Landing Flight Software
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2013-01-01
This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.
1989-01-01
is represented by a number, called a Hounsfield Unit (HU), which represents the attenuation within the volume relative to the attenuation of the same...volume of water. Hounsfield Unit values range from -1000 to +3000, with a value of zero assigned to the attenuation of water. A HU value of -1000...represented by a 3D array. Each array element represents a single voxel, and the value of the array entry is the corresponding scaled Hounsfield Unit value
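The HU rescaling this snippet describes follows the standard formula, shown here with made-up attenuation values:

```python
# Standard Hounsfield Unit rescaling: attenuation relative to water,
# scaled so water -> 0 HU and air -> -1000 HU. The mu values below are
# sample numbers for illustration, not calibrated coefficients.

def hounsfield(mu, mu_water=0.19, mu_air=0.0):
    """HU = 1000 * (mu - mu_water) / (mu_water - mu_air)."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

print(hounsfield(0.19))   # water -> 0.0
print(hounsfield(0.0))    # air   -> -1000.0
```

Each voxel in the 3D array described above stores one such scaled value.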
Ares I-X Mission Management Office (MMO) Integrated Master Schedule (IMS)
NASA Technical Reports Server (NTRS)
Heintzman, Keith; Askins, Bruce
2010-01-01
Objectives: Demonstrate control of a dynamically similar, integrated Ares I/Orion, using Ares I-relevant ascent control algorithms. Perform an in-flight separation/staging event between an Ares I-similar First Stage and a representative Upper Stage. Demonstrate assembly and recovery of a new Ares I-like First Stage element at KSC. Demonstrate First Stage separation sequencing, and quantify First Stage atmospheric entry dynamics and parachute performance. Characterize the magnitude of the integrated vehicle roll torque throughout First Stage flight.
Human Mars Entry, Descent and Landing Architectures Study Overview
NASA Technical Reports Server (NTRS)
Polsgrove, Tara T.; Dwyer Cianciolo, Alicia
2016-01-01
Landing humans on Mars will require entry, descent and landing (EDL) capability beyond the current state of the art. Nearly twenty times more delivered payload and an order of magnitude improvement in precision landing capability will be necessary. Several EDL technologies capable of meeting the human-class payload delivery requirements are being considered, including low lift-to-drag vehicles like Hypersonic Inflatable Aerodynamic Decelerators (HIAD), Adaptable Deployable Entry and Placement Technology (ADEPT), and mid-range lift-to-drag vehicles like rigid aeroshell configurations. To better assess EDL technology options and sensitivities to future human mission design variations, a series of design studies has been conducted. The design studies incorporate EDL technologies with conceptual payload arrangements defined by the Evolvable Mars Campaign to evaluate the integrated system with higher fidelity than has been achieved to date. This paper describes the results of the design studies for a lander design using the HIAD, ADEPT and rigid shell entry technologies and includes system and subsystem design details, including mass and power estimates. This paper will review the point designs for three entry configurations capable of delivering a 20 t human-class payload to the surface of Mars.
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher-order polynomial NUC methods.
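The per-pixel cubic correction can be sketched as a small interpolation problem: fit a third-order polynomial mapping measured counts to uniform-source reference levels, then apply it frame by frame. This is a generic illustration rather than the study's algorithm, and all calibration numbers below are synthetic (counts normalized to full scale):

```python
# Sketch of a third-order polynomial NUC step for one pixel: fit a cubic
# mapping measured counts to flat-field reference levels, then apply it.
# Synthetic, normalized data; illustration only.

def solve4(A, b):
    """Gauss-Jordan elimination for the 4x4 calibration system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(4):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    return [m[i][4] / m[i][i] for i in range(4)]

def fit_cubic(measured, reference):
    """Cubic through four (measured, reference) calibration pairs."""
    A = [[x**3, x**2, x, 1.0] for x in measured]
    return solve4(A, reference)          # coefficients c3, c2, c1, c0

def correct(c, x):
    return ((c[0]*x + c[1])*x + c[2])*x + c[3]   # Horner evaluation

# Pretend this pixel's true correction is t = 0.05*m^3 + 0.9*m + 0.02.
true = lambda m: 0.05*m**3 + 0.9*m + 0.02
cal_measured = [0.1, 0.3, 0.6, 0.9]              # flat-field exposure levels
coeffs = fit_cubic(cal_measured, [true(m) for m in cal_measured])
print(abs(correct(coeffs, 0.45) - true(0.45)) < 1e-9)   # -> True
```

A real imager stores four coefficients per pixel, which is the storage saving the abstract contrasts against piecewise linear tables.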
NASA Video Catalog. Supplement 15
NASA Technical Reports Server (NTRS)
2005-01-01
This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.
NASA Video Catalog. Supplement 13
NASA Technical Reports Server (NTRS)
2003-01-01
This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.
NASA Video Catalog. Supplement 14
NASA Technical Reports Server (NTRS)
2004-01-01
This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.
NASA Technical Reports Server (NTRS)
2006-01-01
This issue of the NASA Video Catalog cites video productions listed in the NASA STI database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.
77 FR 38006 - Approval and Promulgation of Implementation Plans; State of Iowa: Regional Haze
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-26
... Class I Areas'' contained one numerical error. Iowa's 2002 contribution to Voyagers should read 2.16... environmental effects, using practicable and legally permissible methods, under Executive Order 12898 (59 FR... a new entry (39) in numerical order to read as follows: Sec. 52.820 Identification of plan...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-26
... PHLX, Inc. Relating to Order Re-Entry July 20, 2010. Pursuant to section 19(b)(1) of the Securities... a system enhancement that automatically re-enters unexecuted contracts when, after trading at the..., upon the written instruction of the member that initially submitted the order, re-submit unexecuted...
40 CFR 52.1190 - Original Identification of plan section.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Plan for the General Motors Corporation Buick Motor Division in the form of an Alteration of... is in the form of a Stipulation for Entry of Consent Order and Final Order (No. 23-1984). The Consent... suspended particulates (TSP). The revision, in the form of Air Pollution Control Act (APCA) No. 65, revises...
40 CFR 52.1190 - Original Identification of plan section.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Plan for the General Motors Corporation Buick Motor Division in the form of an Alteration of... is in the form of a Stipulation for Entry of Consent Order and Final Order (No. 23-1984). The Consent... suspended particulates (TSP). The revision, in the form of Air Pollution Control Act (APCA) No. 65, revises...
40 CFR 52.1190 - Original Identification of plan section.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Plan for the General Motors Corporation Buick Motor Division in the form of an Alteration of... is in the form of a Stipulation for Entry of Consent Order and Final Order (No. 23-1984). The Consent... suspended particulates (TSP). The revision, in the form of Air Pollution Control Act (APCA) No. 65, revises...
40 CFR 52.1190 - Original Identification of plan section.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Plan for the General Motors Corporation Buick Motor Division in the form of an Alteration of... is in the form of a Stipulation for Entry of Consent Order and Final Order (No. 23-1984). The Consent... suspended particulates (TSP). The revision, in the form of Air Pollution Control Act (APCA) No. 65, revises...
40 CFR 52.1190 - Original Identification of plan section.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Plan for the General Motors Corporation Buick Motor Division in the form of an Alteration of... is in the form of a Stipulation for Entry of Consent Order and Final Order (No. 23-1984). The Consent... suspended particulates (TSP). The revision, in the form of Air Pollution Control Act (APCA) No. 65, revises...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-11
... Quote Traders and Remote Streaming Quote Traders Entering Certain Option Day Limit Orders August 5, 2011... allow entry of day limit orders for the proprietary accounts of Streaming Quote Traders and Remote... proprietary accounts of Streaming Quote Traders (SQTs'') and Remote Streaming Quote Traders (``RSQTs''). The...
The Approximation of Two-Mode Proximity Matrices by Sums of Order-Constrained Matrices.
ERIC Educational Resources Information Center
Hubert, Lawrence; Arabie, Phipps
1995-01-01
A least-squares strategy is proposed for representing a two-mode proximity matrix as an approximate sum of a small number of matrices that satisfy certain simple order constraints on their entries. The primary class of constraints considered defines Q-forms for particular conditions in a two-mode matrix. (SLD)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
...-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate Effectiveness of... is hereby given that on May 22, 2012, The NASDAQ Stock Market LLC (``NASDAQ'' or ``Exchange'') filed... Order Fee,\\3\\ aimed at reducing inefficient order entry practices of certain market participants that...
19 CFR 141.111 - Carrier's release order.
Code of Federal Regulations, 2013 CFR
2013-04-01
... given to release the articles covered by this certified duplicate bill of lading or air waybill to: (c) Blanket release order. Merchandise may be released to the person named in the bill of lading or air...)(4); or (4) If a certified duplicate bill of lading or air waybill is used for entry purposes in...
19 CFR 141.111 - Carrier's release order.
Code of Federal Regulations, 2014 CFR
2014-04-01
... given to release the articles covered by this certified duplicate bill of lading or air waybill to: (c) Blanket release order. Merchandise may be released to the person named in the bill of lading or air...)(4); or (4) If a certified duplicate bill of lading or air waybill is used for entry purposes in...
19 CFR 141.111 - Carrier's release order.
Code of Federal Regulations, 2010 CFR
2010-04-01
... given to release the articles covered by this certified duplicate bill of lading or air waybill to: (c) Blanket release order. Merchandise may be released to the person named in the bill of lading or air...)(4); or (4) If a certified duplicate bill of lading or air waybill is used for entry purposes in...
19 CFR 141.111 - Carrier's release order.
Code of Federal Regulations, 2012 CFR
2012-04-01
... given to release the articles covered by this certified duplicate bill of lading or air waybill to: (c) Blanket release order. Merchandise may be released to the person named in the bill of lading or air...)(4); or (4) If a certified duplicate bill of lading or air waybill is used for entry purposes in...
19 CFR 141.111 - Carrier's release order.
Code of Federal Regulations, 2011 CFR
2011-04-01
... given to release the articles covered by this certified duplicate bill of lading or air waybill to: (c) Blanket release order. Merchandise may be released to the person named in the bill of lading or air...)(4); or (4) If a certified duplicate bill of lading or air waybill is used for entry purposes in...
NASA Astrophysics Data System (ADS)
Poulter, Benjamin; Goodall, Jonathan L.; Halpin, Patrick N.
2008-08-01
The vulnerability of coastal landscapes to sea level rise is compounded by the existence of extensive artificial drainage networks initially built to lower water tables for agriculture, forestry, and human settlements. These drainage networks are found in landscapes with little topographic relief where channel flow is characterized by bi-directional movement across multiple time-scales and related to precipitation, wind, and tidal patterns. The current configuration of many artificial drainage networks exacerbates impacts associated with sea level rise such as salt-intrusion and increased flooding. This suggests that in the short-term, drainage networks might be managed to mitigate sea level rise related impacts. The challenge, however, is that hydrologic processes in regions where channel flow direction is weakly related to slope and topography require extensive parameterization for numerical models, which is limited where network size is on the order of a hundred or more kilometers in total length. Here we present an application of graph theoretic algorithms to efficiently investigate network properties relevant to the management of a large artificial drainage system in coastal North Carolina, USA. We created a digital network model representing the observation network topology and four types of drainage features (canal, collector and field ditches, and streams). We applied betweenness-centrality concepts (using Dijkstra's shortest path algorithm) to determine major hydrologic flowpaths based on hydraulic resistance. Following this, we identified sub-networks that could be managed independently using a community structure and modularity approach. Lastly, a betweenness-centrality algorithm was applied to identify major shoreline entry points to the network that disproportionately control water movement in and out of the network.
We demonstrate that graph theory can be applied to solving management and monitoring problems associated with sea level rise for poorly understood drainage networks in advance of numerical methods.
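The shortest-flowpath step can be sketched with a textbook Dijkstra over resistance-weighted channel segments (the four-node network below is hypothetical, not the North Carolina data):

```python
# Minimal version of the shortest-path step described above: Dijkstra over
# a channel graph weighted by hydraulic resistance. The tiny network is a
# hypothetical example, not the study's drainage data.
import heapq

def dijkstra(graph, src):
    """graph: {node: [(neighbor, resistance), ...]}. Returns cost map."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# outlet <-> canal <-> collector <-> ditch, plus a high-resistance shortcut.
channels = {
    "outlet":    [("canal", 1.0), ("ditch", 9.0)],
    "canal":     [("outlet", 1.0), ("collector", 2.0)],
    "collector": [("canal", 2.0), ("ditch", 1.5)],
    "ditch":     [("collector", 1.5), ("outlet", 9.0)],
}
print(dijkstra(channels, "outlet")["ditch"])   # -> 4.5 (via canal, collector)
```

Running this from every shoreline entry point and tallying how often each segment lies on a least-resistance path gives the betweenness-style flowpath ranking the study describes.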