Sample records for summarization algorithms plier

  1. One set of pliers for more tasks in installation work: the effects on (dis)comfort and productivity.

    PubMed

    Groenesteijn, Liesbeth; Eikhout, Sandra M; Vink, Peter

    2004-09-01

    In installation work, the physical workload is high. Awkward postures, heavy lifting and repetitive movements are often seen. To improve aspects of the work situation, frequently used pliers were redesigned to make them suitable for more cutting tasks. In this study these multitask pliers are evaluated in comparison to the originally used pliers in a field study and a laboratory study. For the field study, 26 subjects participated, divided into two groups according to their type of work. Ten subjects participated in the laboratory study. The multitask plier appeared to result in more comfort during working, more relaxed working and more satisfaction. No differences in productivity were found. In conclusion, the multitask pliers can replace the originally used pliers and are suitable for more tasks than the original pliers. The installation workers have to carry fewer pliers by using the multitask pliers.

  2. Tubing crimping pliers

    DOEpatents

    Lindholm, G.T.

    1981-02-27

    The disclosure relates to pliers and more particularly to pliers for crimping two or more pieces of copper tubing together prior to their being permanently joined by brazing, soldering or the like. A die containing spring-loaded pins rotates within a cammed ring in the head of the pliers. As the die rotates, the pins force a crimp on tubing held within the pliers.

  3. An ergonomic evaluation of manual Cleco plier designs: effects of rubber grip, spring recoil, and worksurface angle.

    PubMed

    You, Heecheon; Kumar, Anil; Young, Ronda; Veluswamy, Prabaharan; Malzahn, Don E

    2005-09-01

    The present study evaluated two design modifications (rubber grip and torsion spring) to the conventional manual Cleco pliers by electromyography (EMG), hand discomfort, and design satisfaction. This study also surveyed workers' satisfaction with selected design features of the pliers for ergonomic improvement. A two-way (plier design x worksurface angle) within-subject (nested within gender and hand size) design was employed. Eleven workers simulated the plier task in an adjustable workstation for different plier designs and worksurface angles (0 degrees, 60 degrees, and 90 degrees). Lower EMG values were obtained for the pliers with rubber grip and at a 60-degree worksurface angle. EMG values varied significantly between the participants, but showed low correlations (Spearman's rank correlation = -0.27 to -0.58) with their work experience with the pliers. The hand discomfort and design satisfaction evaluations identified that the grip span (max = 14.0 cm) and grip force requirement (peak = 220.5 N) of the current pliers need ergonomic modification. The present study shows the need for both ergonomic design of hand tools and training in proper work methods to control work-related musculoskeletal disorders in the workplace.

  4. When pliers become fingers in the monkey motor system

    PubMed Central

    Umiltà, M. A.; Escola, L.; Intskirveli, I.; Grammont, F.; Rochat, M.; Caruana, F.; Jezzini, A.; Gallese, V.; Rizzolatti, G.

    2008-01-01

    The capacity to use tools is a fundamental evolutionary achievement. Its essence stands in the capacity to transfer a proximal goal (grasp a tool) to a distal goal (e.g., grasp food). Where and how does this goal transfer occur? Here, we show that, in monkeys trained to use tools, cortical motor neurons, active during hand grasping, also become active during grasping with pliers, as if the pliers were now the hand fingers. This motor embodiment occurs both for normal pliers and for “reverse pliers,” an implement that requires finger opening, instead of their closing, to grasp an object. We conclude that the capacity to use tools is based on an inherently goal-centered functional organization of primate cortical motor areas. PMID:18238904

  5. The effects of ion implantation on the beaks of orthodontic pliers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizrahi, E.; Cleaton-Jones, P.E.; Luyckz, S.

    1991-06-01

    The surface of stainless steel may be hardened by bombarding the material with a stream of nitrogen ions generated by a nuclear accelerator. In the present study this technique was used to determine the hardening effect of ion implantation on the beaks of stainless steel orthodontic pliers. Ten orthodontic pliers (Dentarum 003 094) were divided into two equal groups, designated control and experimental. The beaks of the experimental pliers were subjected to ion implantation, after which the tips of the beaks of all the pliers were stressed in an apparatus attached to an Instron testing machine. A cyclical load of 500 N was applied to the handles of the pliers, while a 0.9 mm (0.036 inch) round, stainless steel wire was held between the tips of the beaks. The effect of the stress was assessed by measurement with a traveling microscope of the gap produced between the tips of the beaks. Measurements were taken before loading and after 20, 40, 60, and 80 cycles. Statistical analysis of variance and the two-sample t tests indicated that there was a significant increase in the size of the gap as the pliers were stressed from 0 to 80 cycles (p < 0.001). Furthermore, the mean gap was significantly greater in the control group than in the experimental group (p < 0.001). This study suggests that ion implantation increases the hardness of the tips of the beaks of orthodontic pliers.

  6. A review of sterilization, packaging and storage considerations for orthodontic pliers.

    PubMed

    Papaioannou, Angeliki

    2013-01-01

    Wrapping dental instruments along with a chemical indicator is considered an essential step of a reliable infection control protocol. Hinged instruments, such as orthodontic pliers, are particular because they must be sterilized in an open position. Different methods to sterilize, package and store orthodontic pliers are reviewed and discussed.

  7. Nanomechanical DNA origami pH sensors.

    PubMed

    Kuzuya, Akinori; Watanabe, Ryosuke; Yamanaka, Yusei; Tamaki, Takuya; Kaino, Masafumi; Ohya, Yuichi

    2014-10-16

    Single-molecule pH sensors have been developed by utilizing molecular imaging of the pH-responsive shape transition of nanomechanical DNA origami devices with atomic force microscopy (AFM). Short DNA fragments that can form i-motifs were introduced into nanomechanical DNA origami devices with a pliers-like shape (DNA Origami Pliers), which consist of two levers, 170 nm long and 20 nm wide, connected at a Holliday-junction fulcrum. DNA Origami Pliers can be observed in three distinct forms (cross, antiparallel, and parallel), and the cross form is the dominant species when no additional interaction is introduced to DNA Origami Pliers. Introduction of nine pairs of a 12-mer sequence (5'-AACCCCAACCCC-3'), which dimerize into i-motif quadruplexes upon protonation of cytosine, drives the transition of DNA Origami Pliers from the open cross form into the closed parallel form under acidic conditions. Such pH-dependent transition was clearly imaged on mica at molecular resolution by AFM, showing the potential application of the system as a single-molecule pH sensor.

  8. [The clinical effect and disquisition of anterior cervical approach surgery with posterior longitudinal ligament hook pliers and posterior longitudinal ligament nip pliers].

    PubMed

    Kuang, Ling-hao; Xu, Dong; Sun, Ya-wei; Cong, Jie; Tian, Ji-wei; Wang, Lei

    2010-09-21

    To study the clinical effect of anterior cervical approach surgery for removal of the posterior longitudinal ligament (PLL) with posterior longitudinal ligament hook pliers and posterior longitudinal ligament nip pliers. Seventy-three patients with cervical spondylotic myelopathy treated by anterior cervical approach surgery were retrospectively analyzed. In all patients the PLL was removed with the self-made instruments, and the effect of the operation was evaluated according to the JOA score. Removal of the PLL was successful in all patients, the contour of the dura was restored, and JOA scores increased, (12.8 ± 3.2) vs (8.3 ± 1.9). Removing the PLL with the self-made instruments improves the effectiveness of thorough decompression in anterior cervical approach surgery, makes the operation safer and more efficient, and reduces complications.

  9. Objective forensic analysis of striated, quasi-striated and impressed toolmarks

    NASA Astrophysics Data System (ADS)

    Spotts, Ryan E.

    Following the 1993 Daubert v. Merrell Dow Pharmaceuticals, Inc. court case and continuing to the 2010 National Academy of Sciences report, comparative forensic toolmark examination has received many challenges to its admissibility in court cases and its scientific foundations. Many of these challenges deal with the subjective nature in determining whether toolmarks are identifiable. This questioning of current identification methods has created a demand for objective methods of identification - "objective" implying known error rates and statistical reliability. The demand for objective methods has resulted in research that created a statistical algorithm capable of comparing toolmarks to determine their statistical similarity, and thus the ability to separate matching and nonmatching toolmarks. This was expanded to the creation of virtual toolmarking (characterization of a tool to predict the toolmark it will create). The statistical algorithm, originally designed for two-dimensional striated toolmarks, had been successfully applied to striated screwdriver and quasi-striated plier toolmarks. Following this success, a blind study was conducted to validate the virtual toolmarking capability using striated screwdriver marks created at various angles of incidence. Work was also performed to optimize the statistical algorithm by implementing means to ensure the algorithm operations were constrained to logical comparison regions (e.g., the opposite ends of two toolmarks do not need to be compared because they do not coincide with each other). This work was performed on quasi-striated shear cut marks made with pliers - a previously tested, more difficult application of the statistical algorithm that could demonstrate the difference in results due to optimization. The final research conducted was performed with pseudostriated impression toolmarks made with chisels. Impression marks, which are more complex than striated marks, were analyzed using the algorithm to separate matching and nonmatching toolmarks. Results of the conducted research are presented, as well as evidence for the primary assumption of forensic toolmark examination: that all tools can create identifiably unique toolmarks.
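    The statistical algorithm described above operates on digitized surface profiles of toolmarks. As a rough, illustrative sketch of the general idea (scoring the similarity of two 1-D profiles), the following Python snippet maximizes normalized cross-correlation over a range of lags. It is not the specific published algorithm referred to in this record; the function name and parameters are hypothetical.

    # Illustrative sketch only: score the similarity of two 1-D toolmark profiles
    # by maximizing normalized cross-correlation over a range of lags. Not the
    # published statistic referenced in the abstract.
    import numpy as np

    def profile_similarity(profile_a, profile_b, max_lag=200):
        """Return the best normalized cross-correlation between two profiles."""
        a = np.asarray(profile_a, dtype=float)
        b = np.asarray(profile_b, dtype=float)
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            # Extract the overlapping segments for this lag
            if lag >= 0:
                seg_a, seg_b = a[lag:], b[:len(a) - lag]
            else:
                seg_a, seg_b = a[:len(a) + lag], b[-lag:]
            n = min(len(seg_a), len(seg_b))
            if n < 10:
                continue
            seg_a = seg_a[:n] - seg_a[:n].mean()
            seg_b = seg_b[:n] - seg_b[:n].mean()
            denom = np.sqrt((seg_a ** 2).sum() * (seg_b ** 2).sum())
            if denom > 0:
                best = max(best, float((seg_a * seg_b).sum() / denom))
        return best

    A high score for the best lag suggests the two profiles share surface detail; matching and nonmatching pairs can then be separated by thresholding or by a statistical test on such scores.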

  10. 21 CFR 876.5540 - Blood access device and accessories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... is part of an artificial kidney system for the treatment of patients with renal failure or toxemic... dilator, disconnect forceps, shunt guard, crimp plier, tube plier, crimp ring, joint ring, fistula adaptor...

  11. 21 CFR 876.5540 - Blood access device and accessories.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... is part of an artificial kidney system for the treatment of patients with renal failure or toxemic... dilator, disconnect forceps, shunt guard, crimp plier, tube plier, crimp ring, joint ring, fistula adaptor...

  12. 21 CFR 876.5540 - Blood access device and accessories.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... is part of an artificial kidney system for the treatment of patients with renal failure or toxemic... dilator, disconnect forceps, shunt guard, crimp plier, tube plier, crimp ring, joint ring, fistula adaptor...

  13. 21 CFR 876.5540 - Blood access device and accessories.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... is part of an artificial kidney system for the treatment of patients with renal failure or toxemic... dilator, disconnect forceps, shunt guard, crimp plier, tube plier, crimp ring, joint ring, fistula adaptor...

  14. 21 CFR 876.5540 - Blood access device and accessories.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... is part of an artificial kidney system for the treatment of patients with renal failure or toxemic... dilator, disconnect forceps, shunt guard, crimp plier, tube plier, crimp ring, joint ring, fistula adaptor...

  15. A concept for universal pliers

    NASA Technical Reports Server (NTRS)

    Neal, E. T.

    1972-01-01

    By modifying an existing design, pliers can be made to have one pair of handles that will accept a number of different jaws. The concept is useful for light to medium duty service. A complete set of jaws may be made to suit specific hobbies or applications.

  16. Special pliers connect hose containing liquid under pressure

    NASA Technical Reports Server (NTRS)

    Blaydes, R. A.

    1964-01-01

    For speed and safety in handling disconnect fittings on a hose carrying liquid under pressure, special pliers have been constructed. A gear and rack mechanism is combined with two or more wide-opening U-shaped jaws which are placed over the quick-disconnect fittings.

  17. Objective analysis of impressed chisel toolmarks

    DOE PAGES

    Spotts, Ryan; Chumbley, L. Scott

    2015-08-06

    Historical and recent challenges to the practice of comparative forensic examination have created a driving force for the formation of objective methods for toolmark identification. In this study, fifty sequentially manufactured chisels were used to create impression toolmarks in lead (500 toolmarks total). An algorithm previously used to statistically separate known matching and nonmatching striated screwdriver marks and quasi-striated plier marks was used to evaluate the chisel marks. Impression toolmarks, a more complex form of toolmark, pose a more difficult test for the algorithm that was originally designed for striated toolmarks. Lastly, results show in this instance that the algorithm can separate matching and nonmatching impression marks, providing further validation of the assumption that toolmarks are identifiably unique.

  18. Inline Electrical Connector Mate/Demate Pliers

    NASA Technical Reports Server (NTRS)

    Yutko, Brian; Dininny, Michael; Moscoso, Gerand; Dokos, Adam

    2010-01-01

    Military and aerospace industries use Mil-Spec type electrical connections on bulkhead panels that require inline access for mate and demate operations. These connectors are usually in tight proximity to other connectors, or recessed within panels. The pliers described here have been designed to work in such tight spaces, and consist of a mirrored set of parallel handles, two cross links, two return springs, and replaceable polyurethane-coated end effectors. The polyurethane eliminates metal-to-metal contact and provides a high-friction surface between the jaw and the connector. Operationally, the user would slide the pliers over the connector shell until the molded polyurethane lip makes contact with the connector shell edge. Then, by squeezing the handles, the end effector jaws grip the connector shell, allowing the connector to be easily disconnected by rotating the pliers. Mating the connector occurs by reversing the prescribed procedure, except the connector shell is placed into the jaws by hand. The molded lip within the jaw allows the user to apply additional force for difficult-to-mate connectors. Handle design has been carefully examined to maximize comfort, limit weight, incorporate tether locations, and improve ergonomics. They have been designed with an off-axis offset for wiring harness clearance, while placing the connector axis of rotation close to the user's axis of wrist rotation. This was done to eliminate fatigue during multiple connector panel servicing. To limit handle opening width, with user ergonomics in mind, the pliers were designed using a parallel jaw mechanism. A cross-link mechanism was used to complete this task, while ensuring smooth operation.

  19. Comparison of the Debonding Characteristics of Conventional and New Debonding Instrument used for Ceramic, Composite and Metallic Brackets - An Invitro Study.

    PubMed

    Choudhary, Garima; Gill, Vikas; Reddy, Y N N; Sanadhya, Sudhanshu; Aapaliya, Pankaj; Sharma, Nidhi

    2014-07-01

    Debonding procedure is time consuming and damaging to the enamel if performed with improper technique. Various debonding methods include: the conventional methods that use pliers or wrenches, an ultrasonic method, electrothermal devices, air pressure impulse devices, diamond burs to grind the brackets off the tooth surface, and lasers. Among all these methods, using debonding pliers is the most convenient and effective method but has been reported to cause damage to the teeth. Recently, a New Debonding Instrument designed specifically for ceramic and composite brackets has been introduced. As this is a new instrument, little information is available on its efficacy. The purpose of this study was to evaluate the debonding characteristics of both "the conventional debonding Pliers" and "the New debonding instrument" when removing ceramic, composite and metallic brackets. One hundred thirty-eight extracted maxillary premolar teeth were collected and divided into two groups, Group A and Group B (n = 69 each). They were further divided into 3 subgroups (n = 23 each) according to the type of brackets to be bonded. In subgroups A1 and B1 (stainless steel), A2 and B2 (ceramic), and A3 and B3 (composite), adhesive precoated maxillary premolar brackets were used. Among them, the ceramic and composite adhesive pre-coated maxillary premolar brackets were bonded. All the teeth were etched using 37% phosphoric acid for 15 seconds and the brackets were bonded using Transbond XT primer. Brackets were debonded using the Conventional Debonding Plier (Group A) and the New Debonding Instrument (Group B). After debonding, the enamel surface of each tooth was examined under a stereo microscope (10X magnification). A modified adhesive remnant index (ARI) was used to quantify the amount of remaining adhesive on each tooth. The observations demonstrate that the results of the New Debonding Instrument for debonding of metal, ceramic and composite brackets were statistically significantly different (p = 0.04) from, and superior to, the results of the conventional debonding Pliers. The debonding efficiency of the New Debonding Instrument is better than the debonding efficiency of the Conventional Debonding Pliers for metal, ceramic and composite brackets.

  20. Gemini-Titan - Prelaunch

    NASA Image and Video Library

    1966-07-18

    S66-42738 (18 July 1966) --- Astronaut John W. Young, Gemini-10 command pilot, holds a pair of king-size pliers presented to him by the crew at Pad 19 for in-flight first-echelon maintenance of a spacecraft utility power cord Young earlier had difficulty in connecting. Gunther Wendt (right center background), Pad 19 leader, jokes with Young about the pliers. At right is Dr. Donald K. Slayton, MSC Director of Flight Crew Operations. At left is astronaut Michael Collins, Gemini-10 pilot. Photo credit: NASA

  1. Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.

    PubMed

    Kim, Dae-Min; Kong, Yong-Ku

    2016-12-01

    A total of 25 males participated in this study to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm, and 80 mm, respectively. Thus, the 50-mm grip span might be recommended for pliers, as it provided maximum exertion in the gripping task as well as lower cutting-force-to-maximum-strength ratios in the cutting tasks.

  2. Responses of mirror neurons in area F5 to hand and tool grasping observation

    PubMed Central

    Rochat, Magali J.; Caruana, Fausto; Jezzini, Ahmad; Escola, Ludovic; Intskirveli, Irakli; Grammont, Franck; Gallese, Vittorio; Rizzolatti, Giacomo

    2010-01-01

    Mirror neurons are a distinct class of neurons that discharge both during the execution of a motor act and during observation of the same or similar motor act performed by another individual. However, the extent to which mirror neurons coding a motor act with a specific goal (e.g., grasping) might also respond to the observation of a motor act having the same goal, but achieved with artificial effectors, is not yet established. In the present study, we addressed this issue by recording mirror neurons from the ventral premotor cortex (area F5) of two monkeys trained to grasp objects with pliers. Neuron activity was recorded during the observation and execution of grasping performed with the hand, with pliers, and during observation of an experimenter spearing food with a stick. The results showed that virtually all neurons responding to the observation of hand grasping also responded to the observation of grasping with pliers, and many of them to the observation of spearing with a stick. However, the intensity and pattern of the response differed among conditions. Hand grasping observation determined the earliest and the strongest discharge, while pliers grasping and spearing observation triggered weaker responses at longer latencies. We conclude that F5 grasping mirror neurons respond to the observation of a family of stimuli leading to the same goal. However, the response pattern depends upon the similarity between the observed motor act and the one executed by the hand, the natural motor template. PMID:20577726

  3. Comparison of the Debonding Characteristics of Conventional and New Debonding Instrument used for Ceramic, Composite and Metallic Brackets – An Invitro Study

    PubMed Central

    Gill, Vikas; Reddy, Y. N. N.; Sanadhya, Sudhanshu; Aapaliya, Pankaj; Sharma, Nidhi

    2014-01-01

    Background: Debonding procedure is time consuming and damaging to the enamel if performed with improper technique. Various debonding methods include: the conventional methods that use pliers or wrenches, an ultrasonic method, electrothermal devices, air pressure impulse devices, diamond burs to grind the brackets off the tooth surface, and lasers. Among all these methods, using debonding pliers is the most convenient and effective method but has been reported to cause damage to the teeth. Recently, a New Debonding Instrument designed specifically for ceramic and composite brackets has been introduced. As this is a new instrument, little information is available on its efficacy. The purpose of this study was to evaluate the debonding characteristics of both “the conventional debonding Pliers” and “the New debonding instrument” when removing ceramic, composite and metallic brackets. Materials and Methods: One hundred thirty-eight extracted maxillary premolar teeth were collected and divided into two groups, Group A and Group B (n = 69 each). They were further divided into 3 subgroups (n = 23 each) according to the type of brackets to be bonded. In subgroups A1 and B1 (stainless steel), A2 and B2 (ceramic), and A3 and B3 (composite), adhesive precoated maxillary premolar brackets were used. Among them, the ceramic and composite adhesive pre-coated maxillary premolar brackets were bonded. All the teeth were etched using 37% phosphoric acid for 15 seconds and the brackets were bonded using Transbond XT primer. Brackets were debonded using the Conventional Debonding Plier (Group A) and the New Debonding Instrument (Group B). After debonding, the enamel surface of each tooth was examined under a stereo microscope (10X magnification). A modified adhesive remnant index (ARI) was used to quantify the amount of remaining adhesive on each tooth. Results: The observations demonstrate that the results of the New Debonding Instrument for debonding of metal, ceramic and composite brackets were statistically significantly different (p = 0.04) from, and superior to, the results of the conventional debonding Pliers. Conclusion: The debonding efficiency of the New Debonding Instrument is better than the debonding efficiency of the Conventional Debonding Pliers for metal, ceramic and composite brackets. PMID:25177639

  4. Fishhook removal

    MedlinePlus

    ... such as redness, swelling, pain, or drainage. Wire cutting method: First, wash your hands with soap and ... is casting. Keep electrician's pliers with a wire-cutting blade and disinfecting solution in your tackle box. ...

  5. Objective analysis of toolmarks in forensics

    NASA Astrophysics Data System (ADS)

    Grieve, Taylor N.

    Since the 1993 court case of Daubert v. Merrell Dow Pharmaceuticals, Inc. the subjective nature of toolmark comparison has been questioned by attorneys and law enforcement agencies alike. This has led to an increased drive to establish objective comparison techniques with known error rates, much like those that DNA analysis is able to provide. This push has created research in which the 3-D surface profile of two different marks are characterized and the marks' cross-sections are run through a comparative statistical algorithm to acquire a value that is intended to indicate the likelihood of a match between the marks. The aforementioned algorithm has been developed and extensively tested through comparison of evenly striated marks made by screwdrivers. However, this algorithm has yet to be applied to quasi-striated marks such as those made by the shear edge of slip-joint pliers. The results of this algorithm's application to the surface of copper wire will be presented. Objective mark comparison also extends to comparison of toolmarks made by firearms. In an effort to create objective comparisons, microstamping of firing pins and breech faces has been introduced. This process involves placing unique alphanumeric identifiers surrounded by a radial code on the surface of firing pins, which transfer to the cartridge's primer upon firing. Three different guns equipped with microstamped firing pins were used to fire 3000 cartridges. These cartridges are evaluated based on the clarity of their alphanumeric transfers and the clarity of the radial code surrounding the alphanumerics.

  6. Automatic Text Summarization for Indonesian Language Using TextTeaser

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Pasaribu, A.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    Text summarization is one of the solutions for information overload. Reducing text without losing the meaning not only saves time to read, but also maintains the reader’s understanding. One of many algorithms to summarize text is TextTeaser. Originally, this algorithm was intended to be used for text in English. However, because the TextTeaser algorithm does not consider the meaning of the text, we implement this algorithm for text in the Indonesian language. The algorithm calculates four features: the title feature, sentence length, sentence position and keyword frequency. We utilize TextRank, an unsupervised and language-independent text summarization algorithm, to evaluate the summarized text yielded by TextTeaser. The result shows that the TextTeaser algorithm needs more improvement to obtain better accuracy.
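    As an illustration of the kind of feature-based scoring described above (not the actual TextTeaser implementation), the following Python sketch scores each sentence on four features (title overlap, length relative to an assumed ideal length, position, and keyword frequency) and keeps the top-scoring sentences. The tokenization, equal feature weights, and ideal-length constant are simplifying assumptions.

    # Illustrative TextTeaser-style sentence scoring; names and weights are assumptions.
    from collections import Counter

    IDEAL_LENGTH = 20  # assumed "ideal" sentence length in words

    def score_sentences(title, sentences, top_k=3):
        title_words = set(title.lower().split())
        all_words = [w for s in sentences for w in s.lower().split()]
        top_keywords = {w for w, _ in Counter(all_words).most_common(10)}

        scored = []
        n = len(sentences)
        for i, sent in enumerate(sentences):
            words = sent.lower().split()
            if not words:
                continue
            title_feature = len(title_words & set(words)) / max(len(title_words), 1)
            length_feature = max(1.0 - abs(len(words) - IDEAL_LENGTH) / IDEAL_LENGTH, 0.0)
            position_feature = 1.0 - i / n          # earlier sentences score higher
            keyword_feature = sum(1 for w in words if w in top_keywords) / len(words)
            score = (title_feature + length_feature + position_feature + keyword_feature) / 4.0
            scored.append((score, i, sent))
        # Keep the top-k sentences, restored to document order
        best = sorted(sorted(scored, reverse=True)[:top_k], key=lambda t: t[1])
        return [s for _, _, s in best]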

  7. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    PubMed Central

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries and these tasks are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers to conduct this task more effectively. Design/Results This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm for the automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  8. Tool For Installation Of Seal In Tube Fitting

    NASA Technical Reports Server (NTRS)

    Trevathan, Joseph R.

    1993-01-01

    Pliers-like tool helps secure repair seal in fitting. Tool crimps repair seal into tube fitting, ensuring tight fit every time. It is a modified pair of snap-ring pliers to which knife-edge jaws have been added, with a spring added between the handles. Also includes a separate, accompanying support ring.

  9. Software Product Liability

    DTIC Science & Technology

    1993-08-01

    disclaimers should be a top priority. Contract law involves the Uniform Commercial Code (UCC). This is an agreement between all the states (except...to contract law than this, the basic issue with software is that the supplier is generally an expert on an arcane and sophisticated technology and

  10. The Use of Physarum for Testing of Toxicity/Mutagenicity

    DTIC Science & Technology

    1984-04-19

    grade and suppliers were as follows: ethanol, U.S. Industrial Co.; hydrazine dihydrochloride, Fisher Chemical Co.; hydrocarbons, Alltech Co. and Theta...procedure had its own particular advantages and limitations. The microplasmodial growth inhibition system (Becker et al., 1963) was convenient because it

  11. Objective analysis of toolmarks in forensics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grieve, Taylor N.

    2013-01-01

    Since the 1993 court case of Daubert v. Merrell Dow Pharmaceuticals, Inc. the subjective nature of toolmark comparison has been questioned by attorneys and law enforcement agencies alike. This has led to an increased drive to establish objective comparison techniques with known error rates, much like those that DNA analysis is able to provide. This push has created research in which the 3-D surface profile of two different marks are characterized and the marks’ cross-sections are run through a comparative statistical algorithm to acquire a value that is intended to indicate the likelihood of a match between the marks. The aforementioned algorithm has been developed and extensively tested through comparison of evenly striated marks made by screwdrivers. However, this algorithm has yet to be applied to quasi-striated marks such as those made by the shear edge of slip-joint pliers. The results of this algorithm’s application to the surface of copper wire will be presented. Objective mark comparison also extends to comparison of toolmarks made by firearms. In an effort to create objective comparisons, microstamping of firing pins and breech faces has been introduced. This process involves placing unique alphanumeric identifiers surrounded by a radial code on the surface of firing pins, which transfer to the cartridge’s primer upon firing. Three different guns equipped with microstamped firing pins were used to fire 3000 cartridges. These cartridges are evaluated based on the clarity of their alphanumeric transfers and the clarity of the radial code surrounding the alphanumerics.

  12. Essentials for the Teacher's Toolbox

    ERIC Educational Resources Information Center

    Uhler, Jennifer

    2012-01-01

    Every profession has a set of essential tools for carrying out its work. Airplane mechanics cannot repair engines without sophisticated diagnostics, wrenches, and pliers. Surgeons cannot operate without scalpels and clamps. In contrast, teaching has often been perceived as a profession requiring only students, chalk, and a blackboard in order for…

  13. A Simple Boyle's Law Experiment.

    ERIC Educational Resources Information Center

    Lewis, Don L.

    1997-01-01

    Describes an experiment to demonstrate Boyle's law that provides pressure measurements in a familiar unit (psi) and makes no assumptions concerning atmospheric pressure. Items needed include bathroom scales and a 60-ml syringe, castor oil, disposable 3-ml syringe and needle, modeling clay, pliers, and a wooden block. Commercial devices use a…

  14. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  15. Compare Human-Made Objects with Natural Objects. Grades 3-5.

    ERIC Educational Resources Information Center

    Rushton, Erik; Ryan, Emily; Swift, Charles

    In this activity, students experiment and observe the similarities and differences between human-made objects and nature in small groups. Students compare the function and structure of hollow bones with drinking straws, bird beaks and tool pliers, and bat wings and airplane wings. A classroom discussion can be held to discuss similarities and…

  16. Visual-haptic integration with pliers and tongs: signal “weights” take account of changes in haptic sensitivity caused by different tools

    PubMed Central

    Takahashi, Chie; Watt, Simon J.

    2014-01-01

    When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
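    For readers unfamiliar with the cue-combination rule referenced above, the following minimal Python sketch shows the standard reliability-weighted (inverse-variance) combination of a visual and a haptic size estimate, in which each cue's weight is its reliability normalized by the summed reliabilities. The numbers in the example are invented for illustration and are not data from the study.

    # Minimal sketch of reliability-weighted cue combination; example values are made up.
    def combine_cues(visual_size, visual_sd, haptic_size, haptic_sd):
        r_v = 1.0 / visual_sd ** 2          # reliability of the visual estimate
        r_h = 1.0 / haptic_sd ** 2          # reliability of the haptic estimate
        w_v = r_v / (r_v + r_h)             # weight given to vision
        w_h = 1.0 - w_v                     # weight given to haptics
        combined = w_v * visual_size + w_h * haptic_size
        combined_sd = (1.0 / (r_v + r_h)) ** 0.5   # combined estimate is more precise than either cue
        return combined, combined_sd

    # Example: if a high-gain tool degrades haptic precision, vision receives more weight.
    print(combine_cues(visual_size=50.0, visual_sd=2.0, haptic_size=54.0, haptic_sd=4.0))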

  17. The ARIA guidelines in specialist practice: a nationwide survey.

    PubMed

    Van Hoecke, H; Van Cauwenberge, P; Thas, O; Watelet, J B

    2010-03-01

    In 2001, the ARIA guidelines were published to assist healthcare practitioners in managing allergic rhinitis (AR) according to the best evidence. Very limited information, however, is available on the impact of these guidelines on clinical practice. All Belgian Otorhinolaryngologists were invited to complete a questionnaire, covering demographic and professional characteristics, knowledge, use and perception of the ARIA guidelines and 4 clinical case scenarios of AR. Of the 258 (44%) Belgian Otorhinolaryngologists who participated, almost 90% had ever heard about ARIA and 64% had followed a lecture specifically dedicated to the ARIA guidelines. Furthermore, 62% stated that they always or mostly follow the ARIA treatment algorithms in the daily management of AR patients. In the clinical case section, adherence to the ARIA guidelines rose with increased self-reported knowledge and use of the ARIA guidelines and among participants who considered the guidelines more user-friendly. Of the respondents, 51% were considered good compliers. Younger age was a significant predictor for good compliance. More efforts are required to improve the translation of scientific knowledge into clinical practice and to further identify which factors may influence guideline compliance.

  18. Text Summarization Model based on Maximum Coverage Problem and its Variant

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We discuss text summarization in terms of the maximum coverage problem and its variant. To solve the optimization problem, we applied several decoding algorithms, including ones never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. We conduct comparative experiments. On the basis of the experimental results, we also augment the summarization model so that it takes into account the relevance to the document cluster. Through experiments, we show that the augmented model is at least comparable to the best-performing method of DUC'04.
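    The greedy decoding strategy mentioned above can be illustrated with a minimal sketch. Under the simplifying assumptions that "concepts" are just content words and a sentence's cost is its word count, the classical greedy heuristic for budgeted maximum coverage looks like the following Python snippet; this shows the problem formulation only, not the authors' exact model or decoders.

    # Greedy budgeted maximum-coverage summarization sketch; the concept definition
    # and budget are illustrative assumptions.
    def greedy_coverage_summary(sentences, budget_words=60):
        def concepts(sentence):
            return {w for w in sentence.lower().split() if len(w) > 3}

        covered, summary, used = set(), [], 0
        remaining = list(sentences)
        while True:
            candidates = [s for s in remaining if used + len(s.split()) <= budget_words]
            if not candidates:
                break
            # Pick the sentence covering the most new concepts per word
            best = max(candidates,
                       key=lambda s: len(concepts(s) - covered) / max(len(s.split()), 1))
            if not concepts(best) - covered:
                break  # no candidate adds any new concept
            summary.append(best)
            covered |= concepts(best)
            used += len(best.split())
            remaining.remove(best)
        return summary

    This greedy rule is the one that carries a constant-factor performance guarantee for (budgeted) maximum coverage, which is why it is a natural baseline decoder for this formulation.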

  19. Agents Overcoming Resource Independent Scaling Threats (AORIST)

    DTIC Science & Technology

    2004-10-01

    Table 8: Tilted Consumer Preferences Experiment (m=8, N=61, G=2, C=60, Mean over 13 experiments...probabilities. Non-uniform consumer preferences create a new potential for sub-optimal system performance and thus require an additional adaptive...distribution of the capacities across the supplier population must match the non-uniform consumer preferences. The second plot in Table 8

  20. Understanding the Enemy: The Enduring Value of Technical and Forensic Exploitation

    DTIC Science & Technology

    2014-01-01

    designers, builders, emplacers, triggermen, financiers, component suppliers, trainers, planners, and operational leaders who made up the web of actors...help to isolate insurgents from the populace and undermine their propaganda. In terms of joint functions, TECHINT and WTI support command and...measurable biological and behavioral characteristics to uniquely identify people. The Air Force is the EA for Digital and Multimedia Forensics

  1. Effect of adhesive remnant removal on enamel topography after bracket debonding

    PubMed Central

    Cardoso, Larissa Adrian Meira; Valdrighi, Heloísa Cristina; Vedovello, Mario; Correr, Américo Bortolazzo

    2014-01-01

    INTRODUCTION: At orthodontic treatment completion, knowledge about the effects of adhesive remnant removal on enamel is paramount. OBJECTIVE: This study aimed at assessing the effect of different adhesive remnant removal methods on enamel topography (ESI) and surface roughness (Ra) after bracket debonding and polishing. METHODS: A total of 50 human premolars were selected and divided into five groups according to the method used for adhesive remnant removal: high speed tungsten carbide bur (TCB), Sof-Lex discs (SL), adhesive removing plier (PL), ultrasound (US) and fiberglass burs (FB). Metal brackets were bonded with Transbond XT, stored at 37°C for 24 hours before debonding with an adhesive removing plier. Subsequently, removal methods were carried out followed by polishing with pumice paste. Qualitative and quantitative analyses were conducted with pre-bonding, post-debonding and post-polishing analyses. Results were submitted to statistical analysis with F test (ANOVA) and Tukey's (Ra) as well as with Kruskal-Wallis and Bonferroni tests (ESI) (P < 0.05). RESULTS: US Ra and ESI were significantly greater than TCB, SL, PL and FB. Polishing minimized Ra and ESI in the SL and FB groups. CONCLUSION: Adhesive remnant removal with SL and FB associated with polishing is recommended, as these methods cause little damage to the enamel. PMID:25628087

  2. Five Bit, Five Gigasample TED Analog-to-Digital Converter Development.

    DTIC Science & Technology

    1981-06-01

    pliers. TRW uses two sources at present: materials grown by the horizontal Bridgman technique from Crystal Specialties, and Czochralski from MRI. The...the circuit modelling and circuit design tasks. A number of design iterations were required to arrive at a satisfactory design. In order to make...made by modeling the TELD as a voltage-controlled current generator with a built-in time delay between impressed voltage and output current. Based on

  3. Findings from Existing Data on the Department of Defense Industrial Base

    DTIC Science & Technology

    2014-01-01

    the federal government. Leading purchasing textbooks recommend that buyers purchase no more than 30 percent of any supplier's entire capacity, with...percentage of average total revenue FSRS and FPDS data can offer information on supplier dependency (Chart 22). Leading purchasing textbooks (e.g...more contracts become subject to FSRS, may show a different level of supplier dependency.

  4. Capturing User Reading Behaviors for Personalized Document Summarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Jiang, Hao; Lau, Francis

    2011-01-01

    We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce superior personalized document summaries compared with all the other methods, in that the summaries generated by our algorithm can better satisfy a user's personal preferences.

  5. Chinese Text Summarization Algorithm Based on Word2vec

    NASA Astrophysics Data System (ADS)

    Chengzhang, Xu; Dan, Liu

    2018-02-01

    In order to extract sentences that can cover the topic of a Chinese article, a Chinese text summarization algorithm based on Word2vec is used in this paper. Words in an article are represented as vectors trained by Word2vec; the weight of each word, the sentence vectors, and the weight of each sentence are calculated by combining word-sentence relationships with a graph-based ranking model. Finally, the summary is generated on the basis of the final sentence vectors and the final weights of the sentences. The experimental results on real datasets show that the proposed algorithm has better summarization quality compared with TF-IDF and TextRank.
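    As a hedged illustration of the vector-based scoring idea (not the paper's exact algorithm, which combines word weights with a graph-based ranking model), the following Python sketch builds sentence vectors as the mean of pretrained Word2vec word vectors and ranks sentences by cosine similarity to the document centroid. The word_vectors dict (word to numpy array) is an assumed input.

    # Illustrative Word2vec-based extractive summarization sketch; simplified scoring.
    import numpy as np

    def summarize(sentences, word_vectors, top_k=3):
        def sent_vec(sentence):
            vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
            return np.mean(vecs, axis=0) if vecs else None

        sent_vecs = [(i, s, sent_vec(s)) for i, s in enumerate(sentences)]
        sent_vecs = [(i, s, v) for i, s, v in sent_vecs if v is not None]
        if not sent_vecs:
            return []
        centroid = np.mean([v for _, _, v in sent_vecs], axis=0)

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        ranked = sorted(sent_vecs, key=lambda t: cosine(t[2], centroid), reverse=True)
        chosen = sorted(ranked[:top_k], key=lambda t: t[0])   # restore document order
        return [s for _, s, _ in chosen]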

  6. Determination of Atomic and Molecular Excited-State Lifetimes Using an Opto-electronic Cross-Correlation Method.

    DTIC Science & Technology

    1981-10-07

    new instrument (cf. Fig. 1) is simply a four-quadrant ring-diode multiplier (Fig. 2). The reference frequency (RF) and local oscillator (LO) inputs...movement, and scan speed of the corner-cube. Other Components. A rotating-sector chopper modulates the laser pulse train at a frequency of approximately 50...the cross-correlation experiment. In this application, the detection bandpass is simply displaced from DC to the chopper frequency; problems arising

  7. Effects of Human Self-Assessment Responding on Learning

    DTIC Science & Technology

    1980-08-01

    suppose that the location of the feedback loop (extrinsic, intrinsic, or central) as well as the locus of the standard may depend upon the hierarchical...execute--or have already executed but have not yet received any extrinsic feedback or knowledge of results about its correctness. Method. Subjects...these orders were recycled through until the subjects had learned the names. If the names of the pliers had not been learned by the end of the 40th

  8. The British in Kenya (1952-1960): Analysis of a Successful Counterinsurgency Campaign

    DTIC Science & Technology

    2005-06-01

    East Africa to seek their riches in cattle, coffee, mining, and selling safaris to tourists. While the other colonial powers set their sights on...violence on a large scale by early 1952. Cattle on white settlements were being destroyed and standing crops and haystacks set on fire, particularly...Guards used pliers to castrate Mau Mau prisoners. Whatever the method and level of brutality, by the latter half of the 1950s most Kikuyu had turned

  9. A Survey on Sentiment Classification in Face Recognition

    NASA Astrophysics Data System (ADS)

    Qian, Jingyu

    2018-01-01

    Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, the autoencoder, and the convolutional neural network, each representing a design idea for face recognition methods, are three popular algorithms to deal with face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper focuses on one specific face recognition problem: sentiment classification from images. Three different algorithms for sentiment classification problems will be summarized, including k-means clustering, autoencoder, and convolutional neural network. An experiment applying these algorithms to a specific dataset of human faces will be conducted to illustrate how these algorithms are applied and their accuracy. Finally, the three algorithms are compared based on the accuracy results.

  10. An overview of smart grid routing algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of routing algorithms are analyzed, namely clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages, and applicability of each kind of typical routing algorithm are discussed.

  11. Development of a mobile toolmark characterization/comparison system [Development of a mobile, automated toolmark characterization/comparison system]

    DOE PAGES

    Chumbley, Scott; Zhang, Song; Morris, Max; ...

    2016-11-16

    Since the development of the striagraph, various attempts have been made to enhance forensic investigation through the use of measuring and imaging equipment. This study describes the development of a prototype system employing an easy-to-use software interface designed to provide forensic examiners with the ability to measure topography of a toolmarked surface and then conduct various comparisons using a statistical algorithm. Acquisition of the data is carried out using a portable 3D optical profilometer, and comparison of the resulting data files is made using software named “MANTIS” (Mark and Tool Inspection Suite). The system has been tested on laboratory-produced markings that include fully striated marks (e.g., screwdriver markings), quasistriated markings produced by shear-cut pliers, impression marks left by chisels, rifling marks on bullets, and cut marks produced by knives. Using the system, an examiner has the potential to (i) visually compare two toolmarked surfaces in a manner similar to a comparison microscope and (ii) use the quantitative information embedded within the acquired data to obtain an objective statistical comparison of the data files. Finally, this study shows that, based on the results from laboratory samples, the system has great potential for aiding examiners in conducting comparisons of toolmarks.

  12. Development of a mobile toolmark characterization/comparison system [Development of a mobile, automated toolmark characterization/comparison system]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chumbley, Scott; Zhang, Song; Morris, Max

    Since the development of the striagraph, various attempts have been made to enhance forensic investigation through the use of measuring and imaging equipment. This study describes the development of a prototype system employing an easy-to-use software interface designed to provide forensic examiners with the ability to measure topography of a toolmarked surface and then conduct various comparisons using a statistical algorithm. Acquisition of the data is carried out using a portable 3D optical profilometer, and comparison of the resulting data files is made using software named “MANTIS” (Mark and Tool Inspection Suite). The system has been tested on laboratory-produced markings that include fully striated marks (e.g., screwdriver markings), quasistriated markings produced by shear-cut pliers, impression marks left by chisels, rifling marks on bullets, and cut marks produced by knives. Using the system, an examiner has the potential to (i) visually compare two toolmarked surfaces in a manner similar to a comparison microscope and (ii) use the quantitative information embedded within the acquired data to obtain an objective statistical comparison of the data files. Finally, this study shows that, based on the results from laboratory samples, the system has great potential for aiding examiners in conducting comparisons of toolmarks.

  13. Performance Objectives

    DTIC Science & Technology

    1978-12-01

    students (Olson, 1971; Yelon & Schmidt, 1971; Stedman, 1970) adds nothing to our knowledge; these studies, too, are plagued...in a behavioral objective for a mathematics class would be '... using only a calculator ...' or '... using only the protractor ...' The second are the...pliers, screwdriver and hammer ... 3. To write x & y from memory ... 4. To lever press either x or y within two seconds ... 5. To point

  14. Survey of PRT Vehicle Management Algorithms

    DOT National Transportation Integrated Search

    1974-01-01

    The document summarizes the results of a literature survey of state of the art vehicle management algorithms applicable to Personal Rapid Transit Systems(PRT). The surveyed vehicle management algorithms are organized into a set of five major componen...

  15. STS-57 inflight maintenance (IFM) tool tray at Boeing FEPF bench review

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-57 inflight maintenance (IFM) tool tray is displayed on a table top during the bench review at Boeing's Flight Equipment Processing Facility (FEPF) located near JSC. The tool tray will be located on Endeavour's, Orbiter Vehicle (OV) 105's, middeck in forward locker MF57K and includes pinch bar, deadblow hammer, punch, inspection mirror, speed handle assembly, robbins wrench, adjustable wrench, vise grips, connector pliers, ACCU bypass connector, connector strap wrench, locker tool, and mechanical fingers. Photo taken by NASA JSC contract photographer Benny Benavides.

  16. Abstracts: 1984 AFOSR/ONR Contractors Meeting on Airbreathing Combustion Research Held on June 20-21, 1984, in Pittsburgh, Pennsylvania

    DTIC Science & Technology

    1984-06-21

    H.L. Beach, NASA Langley Research Center Topic: IGNITION/COMBUSTION ENHANCEMENT 10:30 - 11:00 M. Lavid, ML Energia, Inc. 11:00 - 11:30 W. Braun and...F49620-83-C-0133) Principal Investigator: Moshe Lavid, ML ENERGIA, Inc., P.O. Box 1468, Princeton, NJ 08542 SUMMARY/OVERVIEW: The radiative concept to...an iodine resonance lamp and a solar-blind photomultiplier. Under these circumstances the temperature modulation can be regulated from a

  17. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    PubMed Central

    Ramanujam, Nedunchelian; Kaliappan, Manivannan

    2016-01-01

    Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of a timestamp approach combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered structure, which yields a more coherent-looking summary. It extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate the score for the words to obtain the word frequency. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are examined against those of the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
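    A minimal sketch of the timestamp idea follows, assuming a trivial word-frequency score in place of the paper's Naïve Bayesian classifier: candidate sentences are selected by score and then ordered by their source document's timestamp (and in-document position) so the summary reads chronologically. All names and the scoring function are illustrative assumptions, not the paper's method.

    # Timestamp-ordered multidocument summarization sketch; placeholder scoring.
    from collections import Counter

    def timestamp_summary(documents, top_k=5):
        """documents: list of (timestamp, text) pairs; returns ordered summary sentences."""
        all_sents = []
        for stamp, text in documents:
            for pos, sent in enumerate(s.strip() for s in text.split('.') if s.strip()):
                all_sents.append((stamp, pos, sent))

        word_freq = Counter(w for _, _, s in all_sents for w in s.lower().split())

        def score(sent):
            words = sent.lower().split()
            return sum(word_freq[w] for w in words) / max(len(words), 1)

        selected = sorted(all_sents, key=lambda t: score(t[2]), reverse=True)[:top_k]
        # Ordering by timestamp, then in-document position, gives the summary coherence.
        return [s for _, _, s in sorted(selected, key=lambda t: (t[0], t[1]))]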

  18. Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data

    NASA Technical Reports Server (NTRS)

    Zion, P. M.

    1983-01-01

    The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.

  19. Modeling of biological intelligence for SCM system optimization.

    PubMed

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.

  20. Modeling of Biological Intelligence for SCM System Optimization

    PubMed Central

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  1. Research On Vehicle-Based Driver Status/Performance Monitoring; Development, Validation, And Refinement Of Algorithms For Detection Of Driver Drowsiness, Final Report

    DOT National Transportation Integrated Search

    1994-12-01

    This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...

  2. Summarizing Simulation Results using Causally-relevant States

    PubMed Central

    Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth

    2016-01-01

    As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally-relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620

  3. Medial Osteoectomy as a Routine Procedure in Rhinoplasty: Six-Year Experience with an Innovative Technique.

    PubMed

    Lykoudis, Efstathios G; Peristeri, Dimitra V; Lykoudis, Georgios E; Oikonomou, Georgios A

    2018-02-01

    Medial osteotomy is an integral part of most rhinoplasty procedures, and when improperly performed, it is associated with postoperative complications and nasal contour deformities. In this article, we present a minimally traumatic and easy-to-perform medial osteoectomy technique with a pair of pliers, as a routine procedure, instead of the traditional medial osteotomy with osteotome and hammer. We report our experience with the use of the technique in a series of rhinoplasty procedures and review in brief the existing literature. One hundred and thirty-five patients underwent rhinoplasty operations to correct aesthetic nose deformities, with the use of the suggested surgical technique. Two different types of medial osteoectomy, performed with the pliers, were used: Type I for dorsal nasal hump reduction and slight narrowing of the nose and Type II for the management of a wide nasal dorsum with or without hump removal. Postoperative results were favorable, by both clinical examination and comparison of preoperative and postoperative photographs, in 98.5% of patients. Only two patients with wide nasal dorsums had inadequate narrowing of their broad nose and underwent successful revision surgery. The suggested technique is easy to perform, has a short learning curve, provides high accuracy over the location and amount of the nasal bone to be removed, while inflicting minimal trauma. As a result of the aforementioned advantages, the risk of postoperative complications is low, and most importantly, reliable, consistent, and aesthetically pleasing results are easily ensured. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  4. Automatic and user-centric approaches to video summary evaluation

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.; Bentley, Frank

    2007-01-01

    Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods and developing summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.

  5. Using clustering and a modified classification algorithm for automatic text summarization

    NASA Astrophysics Data System (ADS)

    Aries, Abdelkrime; Oufaida, Houda; Nouali, Omar

    2013-01-01

    In this paper we describe a modified classification method intended for extractive summarization. The classification in this method does not need a learning corpus; it uses the input text itself. First, we cluster the document sentences to exploit the diversity of topics, then we apply a learning algorithm (here, Naive Bayes) to each cluster, treating it as a class. After obtaining the classification model, we calculate the score of each sentence in each class using a scoring model derived from the classification algorithm. These scores are then used to reorder the sentences and extract the top-ranked ones as the output summary. We conducted experiments using a corpus of scientific papers and compared our results to another summarization system called UNIS. We also examined the impact of tuning the clustering threshold on the resulting summary, as well as the impact of adding more features to the classifier. We found that this method is interesting and gives good performance, and that the addition of new features (which is simple with this method) can improve the summary's accuracy.
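
    The cluster-then-classify scheme described above can be sketched in a few lines with off-the-shelf components; the vectorizer, the KMeans clustering, and the per-cluster probability used as the sentence score are assumptions chosen to mirror the description, not the authors' exact pipeline.

```python
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

def classify_and_summarize(text, n_clusters=3, n_sentences=5):
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

    # Cluster the sentences to expose the topical diversity of the input text.
    counts = CountVectorizer().fit_transform(sentences)
    k = min(n_clusters, len(sentences))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(counts)

    # Treat each cluster as a class and fit Naive Bayes on the very same sentences.
    nb = MultinomialNB().fit(counts, labels)

    # Score every sentence by the model's probability for its own cluster, then
    # keep the best-scoring sentences in their original order as the summary.
    proba = nb.predict_proba(counts)
    col = {c: i for i, c in enumerate(nb.classes_)}
    scores = proba[np.arange(len(sentences)), [col[l] for l in labels]]
    top = sorted(np.argsort(scores)[::-1][:n_sentences])
    return [sentences[i] for i in top]
```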

  6. Effects of various debonding and adhesive clearance methods on enamel surface: an in vitro study.

    PubMed

    Fan, Xiao-Chuan; Chen, Li; Huang, Xiao-Feng

    2017-02-27

    The purpose of this study was to evaluate orthodontic debonding methods by comparing the surface roughness and enamel morphology of teeth after applying two different debonding methods and three different polishing techniques. Forty-eight human maxillary premolars, extracted for orthodontic reasons, were randomly divided into three groups. Brackets were bonded to teeth with RMGIC (Fuji Ortho LC, GC, Tokyo, Japan) (two groups, n = 18 each) after acid etching (30 s), light cured for 40 s, exposed to thermocycling, then underwent 2 different bracket debonding methods: debonding pliers (Shinye, Hangzhou, China) or enamel chisel (Jinzhong, Shanghai, China); the third group (n = 12) comprised untreated controls, with normal enamel surface roughness. In each debonded group, three cleanup techniques (n = 6 each) were tested, including (I) diamond bur (TC11EF, MANI, Tochigi, Japan) and One-Gloss (Midi, Shofu, Kyoto, Japan), (II) a Super-Snap disk (Shofu, Kyoto, Japan), and (III) One-Gloss polisher. The debonding methods were compared using the modified adhesive remnant index (ARI, 1-5). Cleanup efficiencies were assessed by recording operating times. Enamel surfaces were qualitatively and quantitatively evaluated with scanning electron microscopy (SEM) and a surface roughness tester, respectively. Two surface roughness variables were evaluated: Ra (average roughness) and Rz (10-point height of irregularities). The ARI scores of debonded teeth were similar with debonding pliers and enamel chisel (Chi-square = 2.19, P > 0.05). There were significant differences between mean operating time in each group (F = 52.615, P < 0.01). The diamond bur + One-Gloss took the shortest operating time (37.92 ± 3.82 s), followed by the Super-Snap disk (56.67 ± 7.52 s), and the One-Gloss polisher (63.50 ± 6.99 s). The SEM appearance provided by the One-Gloss polisher was the closest to the intact enamel surface, and surface roughness (Ra: 0.082 ± 0.046 μm; Rz: 0.499 ± 0.200 μm) was closest to the original enamel (Ra: 0.073 ± 0.048 μm; Rz: 0.438 ± 0.213 μm); the next best was the Super-Snap disk (Ra: 0.141 ± 0.073 μm; Rz: 1.156 ± 0.755 μm); then, the diamond bur + One-Gloss (Ra: 0.443 ± 0.172 μm; Rz: 2.202 ± 0.791 μm). Debonding pliers were safer than enamel chisels for removing brackets. Cleanup with the One-Gloss polisher provided enamel surfaces closest to the intact enamel, but took more time, and Super-Snap disks provided acceptable enamel surfaces and efficiencies. The diamond bur was not suitable for removing adhesive remnant.

  7. m-BIRCH: an online clustering approach for computer vision applications

    NASA Astrophysics Data System (ADS)

    Madan, Siddharth K.; Dana, Kristin J.

    2015-03-01

    We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering, and updates the clustering decisions when new data comes in. Modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent the different density regions well in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches. We use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
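
    The incremental-clustering pattern that m-BIRCH builds on can be demonstrated with the standard BIRCH implementation in scikit-learn; this sketch only shows the streaming partial_fit usage and does not reproduce the m-BIRCH modifications (data-driven parameter selection, varying-density handling) described in the abstract.

```python
import numpy as np
from sklearn.cluster import Birch

# Plain BIRCH, fed one chunk at a time as if the descriptors arrived from a stream.
birch = Birch(threshold=0.5, n_clusters=None)

rng = np.random.default_rng(0)
for _ in range(10):
    chunk = rng.normal(size=(1000, 128))   # stand-in for a batch of 128-D descriptors
    birch.partial_fit(chunk)               # clustering decisions are updated incrementally

print(len(birch.subcluster_centers_), "sub-cluster centres summarize the stream so far")
```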

  8. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
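
    As a hedged illustration of the generic CP iteration (not the CT-specific instances derived in the paper), the sketch below applies the primal-dual updates to a nonnegative least-squares problem min_x 0.5*||Ax - b||^2 with x >= 0; the dense matrix A, the step-size choice, and the iteration count are assumptions for illustration only.

```python
import numpy as np

def chambolle_pock_nnls(A, b, n_iter=300, theta=1.0):
    """Generic Chambolle-Pock iteration for min_x 0.5*||A x - b||^2 with x >= 0."""
    L = np.linalg.norm(A, 2)            # operator norm of A
    sigma = tau = 0.95 / L              # step sizes chosen so that sigma*tau*L**2 < 1

    x = np.zeros(A.shape[1])
    x_bar = x.copy()
    y = np.zeros(A.shape[0])

    for _ in range(n_iter):
        # Dual step: prox of sigma*F*, where F(y) = 0.5*||y - b||^2.
        v = y + sigma * (A @ x_bar)
        y = (v - sigma * b) / (1.0 + sigma)
        # Primal step: prox of tau*G, where G is the indicator of the nonnegative orthant.
        x_new = np.maximum(x - tau * (A.T @ y), 0.0)
        # Over-relaxation of the primal variable.
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# Tiny synthetic check: recover a nonnegative x from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.abs(rng.normal(size=20))
x_hat = chambolle_pock_nnls(A, A @ x_true, n_iter=2000)
print(np.linalg.norm(x_hat - x_true))   # should be small for this noiseless toy problem
```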

  9. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. In addition, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
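
    A toy sketch of the coarse-to-fine idea: pick finest-level key frames from color-histogram changes, then build a coarser level by merging the most similar temporally adjacent key frames. The histogram feature, thresholds, and the simple adjacent-merge rule are stand-ins for the paper's key-frame extraction extension and its pairwise K-means clustering; they are assumptions, not the published method.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Coarse RGB histogram of one frame (an HxWx3 uint8 array), used as the frame feature."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def finest_keyframes(frames, threshold=0.3):
    """Start a new key frame whenever the histogram drifts far (L1 distance) from the last one."""
    keyframes, last = [0], color_histogram(frames[0])
    for i in range(1, len(frames)):
        h = color_histogram(frames[i])
        if np.abs(h - last).sum() > threshold:
            keyframes.append(i)
            last = h
    return keyframes

def coarser_level(frames, keyframes):
    """Halve the summary by repeatedly merging the most similar temporally adjacent pair."""
    hists = [color_histogram(frames[i]) for i in keyframes]
    kept = list(range(len(keyframes)))
    while len(kept) > max(1, len(keyframes) // 2):
        gaps = [np.abs(hists[kept[j]] - hists[kept[j + 1]]).sum() for j in range(len(kept) - 1)]
        j = int(np.argmin(gaps))
        del kept[j + 1]                      # drop the later member of the closest pair
    return [keyframes[j] for j in kept]
```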

  10. STS-57 inflight maintenance (IFM) tool tray at Boeing FEPF bench review

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-57 inflight maintenance (IFM) tool tray is displayed on a table top during the bench review at Boeing's Flight Equipment Processing Facility (FEPF) located near JSC. The tool tray will be located on Endeavour's, Orbiter Vehicle (OV) 105's, middeck in forward locker MF57K and includes modified forceps, L-shaped hex wrenches, jeweler screwdrivers, adjustable wrench, bone saw, combination wrenches, override devices, switch guards, tape measure, torque driver, short screwdriver, long screwdriver, phillips screwdrivers, ratchet wrench, needlenose pliers, torque tips, adapter, universal joint, deepwell sockets, sockets, driver handle, seat adjustment tool, extensions, torque wrench, allen head drivers, hacksaw, and blades. Photo taken by NASA JSC contract photographer Benny Benavides.

  11. Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.

    1999-07-28

    As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses reflected light rays from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed, and are shown to provide the necessary corrective action.

  12. Design requirements and development of an airborne descent path definition algorithm for time navigation

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  13. Informationally Efficient Multi-User Communication

    DTIC Science & Technology

    2010-01-01

    DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the ... problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a ... the duality gap min_{λ1,λ2} D(λ1, λ2) − max_{P1,P2} f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm

  14. Automatic summarization of changes in biological image sequences using algorithmic information theory.

    PubMed

    Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath

    2009-08-01

    An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.

  15. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  16. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is a current research focus. Among the various surface reconstruction algorithms, reconstruction based on the medial axis is an important method. This paper summarizes the medial axis algorithms used for reconstruction, points out the problems that exist in the various methods and the places where they need to be improved, and also discusses subsequent surface reconstruction and the direction of future development of the axis-based approach.

  17. Research status of multi - robot systems task allocation and uncertainty treatment

    NASA Astrophysics Data System (ADS)

    Li, Dahui; Fan, Qi; Dai, Xuefeng

    2017-08-01

    The multi-robot coordination algorithm has become a hot research topic in the field of robotics in recent years, with a wide range of applications and good application prospects. This paper analyzes and summarizes the current research status of multi-robot coordination algorithms at home and abroad. From the perspectives of task allocation and the treatment of uncertainty, the paper discusses multi-robot coordination algorithms and presents the advantages and disadvantages of each commonly used method.

  18. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective development and applicability of negative selection algorithms and their influence on related areas are then speculated based on the discussion.
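
    One common real-valued variant of negative selection (used here only to illustrate the family of algorithms the review covers, not any specific method from it) generates random detectors, discards those that match the self set, and flags new samples that fall inside any surviving detector's region; the radius-based matching rule and all parameter values below are assumptions.

```python
import numpy as np

def train_detectors(self_samples, n_detectors=200, radius=0.1, seed=0):
    """Keep random candidate detectors that do NOT match (lie close to) any self sample."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(self_samples.shape[1])
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.1):
    """A sample is flagged as non-self when it falls inside any detector's matching region."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))

# Self region: the corner [0, 0.5]^2 of the unit square.
self_data = np.random.default_rng(1).random((500, 2)) * 0.5
detectors = train_detectors(self_data)
print(is_anomalous(np.array([0.2, 0.2]), detectors),   # inside self region -> likely False
      is_anomalous(np.array([0.9, 0.9]), detectors))   # far from self      -> likely True
```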

  19. Substructure System Identification for Finite Element Model Updating

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Blades, Eric L.

    1997-01-01

    This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.

  20. A spline-based parameter and state estimation technique for static models of elastic surfaces

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Daniel, P. L.; Armstrong, E. S.

    1983-01-01

    Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.

  1. Does technology acceleration equate to mask cost acceleration?

    NASA Astrophysics Data System (ADS)

    Trybula, Walter J.; Grenon, Brian J.

    2003-06-01

    The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. INTERNATIONAL SEMATECH has re-evaluated the projected cost of advanced technology masks. Building on the methodology developed in 1996 for mask costs, this work provided a critical review of mask yields and factors relating to the manufacture of photolithography masks. The impact of the yields provided insight into the learning curve for leading-edge mask manufacturing. The projected mask set cost was surprising, and the ability to provide first- and second-year cost estimates provided additional information on technology introduction. From this information, the impact of technology acceleration can be added to the projected yields to evaluate the impact on mask costs.

  2. Some observations on glass-knife making.

    PubMed

    Ward, R T

    1977-11-01

    The yield of usable knife edge per knife (for thin sectioning) was markedly increased when glass knives were made at an included angle of 55 degrees rather than the customary 45 degrees. A large number of measurements of edge check marks made with a routine light scattering method as well as observations made on a smaller number of test sections with the electron microscope indicated the superiority of 55 degrees knives. Knives were made with both taped pliers and an LKB Knifemaker. Knives were graded by methods easily applied in any biological electron microscope laboratory. Depending on the mode of fracture, the yield of knives having more than 33% of their edges free of check marks was 30 to 100 times greater at 55 degrees than 45 degrees.

  3. 49. View looking northwest, toward U.S. 24. Removal of west ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    49. View looking northwest, toward U.S. 24. Removal of west forebay decking. The technician in the right rear is operating a device used for pulling the spikes - an auto jack, chain and a pair of adapted 'vice grip' pliers welded to the chain. This photograph shows the 1 inch boards (in center right) which were laid atop the foundation timbers to locate the cribs for the workers. In addition, the flooring in the first crib is visible (center). This was the only crib which was floored; the other cribs were built directly above foundation timbers (center right). - Wabash & Erie Canal, Lock No. 2, 8 miles east of Fort Wayne, adjacent to U.S. Route 24, New Haven, Allen County, IN

  4. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  5. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.

  6. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    PubMed

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard: (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application-specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE

  7. A review of data fusion techniques.

    PubMed

    Castanedo, Federico

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.

  8. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  9. Cat swarm optimization based evolutionary framework for multi document summarization

    NASA Astrophysics Data System (ADS)

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

    Today, the World Wide Web has brought us an enormous quantity of on-line information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from a vast number of documents. Based on the number of documents considered for summarization, the task is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization because an accurate summary must be found across multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO based model is also compared with two other nature-inspired summarizers, a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. With respect to the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity and readability metric, to validate the non-redundancy, cohesiveness and readability of the summary, respectively. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.
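
    Whatever the search heuristic (CSO, HS or PSO), this family of summarizers optimizes an objective over candidate subsets of sentences. The sketch below shows one plausible such objective, rewarding coverage of the document set and penalizing redundancy between selected sentences; the TF-IDF representation, the cosine measures and the weighting are illustrative assumptions, not the fitness function used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summary_fitness(sentences, selected_idx, alpha=0.7):
    """Fitness of a candidate summary (a subset of sentence indices): coverage minus redundancy."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(tfidf.mean(axis=0))          # centroid of the whole document set
    chosen = tfidf[np.asarray(selected_idx)]
    if chosen.shape[0] == 0:
        return 0.0

    # Coverage: how close the summary centroid is to the collection centroid.
    coverage = cosine_similarity(np.asarray(chosen.mean(axis=0)), centroid)[0, 0]

    # Redundancy: mean pairwise similarity among the selected sentences.
    sims = cosine_similarity(chosen)
    n = chosen.shape[0]
    redundancy = (sims.sum() - np.trace(sims)) / max(n * (n - 1), 1)

    return alpha * coverage - (1 - alpha) * redundancy

sentences = ["Cats hunt at night.", "Dogs guard the house.", "Cats sleep during the day.",
             "Swarm methods search candidate summaries.", "Dogs and cats are pets."]
print(summary_fitness(sentences, [0, 3]), summary_fitness(sentences, [0, 2]))
```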

  10. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  11. Quick release latch for reactor scram

    DOEpatents

    Johnson, Melvin L.; Shawver, Bruce M.

    1976-01-01

    A simple, reliable, and fast-acting means for releasing a control element and allowing it to be inserted rapidly into the core region of a nuclear reactor for scram purposes. A latch mechanism grips a coupling head on a nuclear control element to connect the control element to the control drive assembly. The latch mechanism is closed by tensioning a cable or rod with an actuator. The control element is released by de-energizing the actuator, providing fail-safe, rapid release of the control element to effect reactor shutdown. A sensing rod provides indication that the control element is properly positioned in the latch. Two embodiments are illustrated, one involving a collet-type latch mechanism, the other a pliers-type latch mechanism with the actuator located inside the reactor vessel.

  12. Quick release latch for reactor scram

    DOEpatents

    Johnson, M.L.; Shawver, B.M.

    1975-09-16

    A simple, reliable, and fast-acting means for releasing a control element and allowing it to be inserted rapidly into the core region of a nuclear reactor for scram purposes is described. A latch mechanism grips a coupling head on a nuclear control element to connect the control element to the control drive assembly. The latch mechanism is closed by tensioning a cable or rod with an actuator. The control element is released by de-energizing the actuator, providing fail-safe, rapid release of the control element to effect reactor shutdown. A sensing rod provides indication that the control element is properly positioned in the latch. Two embodiments are illustrated, one involving a collet-type latch mechanism, the other a pliers-type latch mechanism with the actuator located inside the reactor vessel. (auth)

  13. Figure summarizer browser extensions for PubMed Central

    PubMed Central

    Agarwal, Shashank; Yu, Hong

    2011-01-01

    Summary: Figures in biomedical articles present visual evidence for research facts and help readers understand the article better. However, when figures are taken out of context, it is difficult to understand their content. We developed a summarization algorithm to summarize the content of figures and used it in our figure search engine (http://figuresearch.askhermes.org/). In this article, we report on the development of web browser extensions for Mozilla Firefox, Google Chrome and Apple Safari to display summaries for figures in PubMed Central and NCBI Images. Availability: The extensions can be downloaded from http://figuresearch.askhermes.org/articlesearch/extensions.php. Contact: agarwal@uwm.edu PMID:21493658

  14. A Review of Data Fusion Techniques

    PubMed Central

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502

  15. Gamma Ray Observatory (GRO) OBC attitude error analysis

    NASA Technical Reports Server (NTRS)

    Harman, R. R.

    1990-01-01

    This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of the TRW error analysis and of the ground simulations necessary to understand the onboard attitude determination process is performed. In addition, a plan is generated for the in-flight calibration and validation of OBC-computed attitudes. Pre-mission expected accuracies are summarized, and the sensitivity of the onboard algorithms to sensor anomalies and filter tuning parameters is addressed.

  16. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  17. Filetype Identification Using Long, Summarized N-Grams

    DTIC Science & Technology

    2011-03-01

    compressed or encrypted data. If the algorithm used to compress or encrypt the data can be determined, then it is frequently possible to uncompress ... fragments. His implementation utilized the bzip2 library to compress the file fragments. The bzip2 library is based on the Lempel-Ziv-Markov chain ... algorithm that uses a dictionary compression scheme to remove repeating data patterns within a set of data. The removed patterns are listed within the

  18. Characterizing X-ray Attenuation of Containerized Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birrer, N.; Divin, C.; Glenn, S.

    X-ray inspection systems can be used to detect radiological and nuclear threats in imported cargo. In order to better understand performance of these systems, the attenuation characteristics of imported cargo need to be determined. This project focused on developing image processing algorithms for segmenting cargo and using x-ray attenuation to quantify equivalent steel thickness to determine cargo density. These algorithms were applied to over 450 cargo radiographs. The results are summarized in this report.

  19. Climatological Characterization of Three-Dimensional Storm Structure from Operational Radar and Rain Gauge Data.

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Houze, Robert A., Jr.; Yuter, Sandra E.

    1995-09-01

    Three algorithms extract information on precipitation type, structure, and amount from operational radar and rain gauge data. Tests on one month of data from one site show that the algorithms perform accurately and provide products that characterize the essential features of the precipitation climatology. Input to the algorithms are the operationally executed volume scans of a radar and the data from a surrounding rain gauge network. The algorithms separate the radar echoes into convective and stratiform regions, statistically summarize the vertical structure of the radar echoes, and determine precipitation rates and amounts at high spatial resolution. The convective and stratiform regions are separated on the basis of the intensity and sharpness of the peaks of echo intensity. The peaks indicate the centers of the convective region. Precipitation not identified as convective is stratiform. This method avoids the problem of underestimating the stratiform precipitation. The separation criteria are applied in exactly the same way throughout the observational domain and the product generated by the algorithm can be compared directly to model output. An independent test of the algorithm on data for which high-resolution dual-Doppler observations are available shows that the convective-stratiform separation algorithm is consistent with the physical definitions of convective and stratiform precipitation. The vertical structure algorithm presents the frequency distribution of radar reflectivity as a function of height and thus summarizes in a single plot the vertical structure of all the radar echoes observed during a month (or any other time period). Separate plots reveal the essential differences in structure between the convective and stratiform echoes. Tests yield similar results (within less than 10%) for monthly rain statistics regardless of the technique used for estimating the precipitation, as long as the radar reflectivity values are adjusted to agree with monthly rain gauge data. It makes little difference whether the adjustment is by monthly mean rates or percentiles. Further tests show that 1-h sampling is sufficient to obtain an accurate estimate of monthly rain statistics.
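
    A simplified version of the intensity-and-peakedness rule described above can be sketched as follows: a pixel is convective if its reflectivity exceeds an absolute threshold or stands out from its local background mean by a fixed margin, and the remaining echo is stratiform. The radii, thresholds and the fixed (rather than background-dependent) peakedness margin are illustrative assumptions, not the tuned criteria of the published algorithm.

```python
import numpy as np

def convective_mask(dbz, intensity_threshold=40.0, background_halfwidth=5, peakedness=4.5):
    """Flag convective pixels on a 2-D reflectivity grid (dBZ); the rest of the echo is stratiform."""
    ny, nx = dbz.shape
    convective = np.zeros((ny, nx), dtype=bool)
    for j in range(ny):
        for i in range(nx):
            if np.isnan(dbz[j, i]):
                continue                                    # no echo at this pixel
            if dbz[j, i] >= intensity_threshold:
                convective[j, i] = True                     # intense echo is always convective
                continue
            j0, j1 = max(0, j - background_halfwidth), min(ny, j + background_halfwidth + 1)
            i0, i1 = max(0, i - background_halfwidth), min(nx, i + background_halfwidth + 1)
            background = np.nanmean(dbz[j0:j1, i0:i1])      # local background intensity
            if dbz[j, i] - background >= peakedness:
                convective[j, i] = True                     # sharp local peak is convective
    return convective
```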

  20. Swarm Intelligence in Text Document Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xiaohui; Potok, Thomas E

    2008-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior. The research field that attempts to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies is called Swarm Intelligence. Compared to traditional algorithms, swarm algorithms are usually flexible, robust, decentralized and self-organized. These characteristics make swarm algorithms suitable for solving complex problems, such as document collection clustering. The major challenge of today's information society is being overwhelmed with information on any topic people are searching for. Fast and high-quality document clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize this overwhelming information. In this chapter, we introduce three nature-inspired swarm intelligence clustering approaches for document clustering analysis. These clustering algorithms use stochastic and heuristic principles discovered from observing bird flocks, fish schools and ant food foraging.

  1. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  2. Four-probe measurements with a three-probe scanning tunneling microscope.

    PubMed

    Salomons, Mark; Martins, Bruno V C; Zikovsky, Janik; Wolkow, Robert A

    2014-04-01

    We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.

  3. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    PubMed Central

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research. PMID:24701191
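
    For the simplest case the survey covers, two-end synchronized measurements on a short (lumped-parameter) line, the fault distance follows from equating the voltage at the fault point computed from both terminals. The sketch below implements that classic textbook relation; the line model ignores shunt capacitance, and the numerical example is synthetic, not data from any of the surveyed papers.

```python
def fault_distance(Vs, Is, Vr, Ir, z_line):
    """Per-unit fault distance m from the sending end of a two-terminal line.

    Vs, Is, Vr, Ir: synchronized voltage and current phasors (complex) at the two ends,
    with currents taken as flowing into the line; z_line: total series impedance of the line.
    Equating the fault-point voltage seen from both ends,
        Vs - m*z_line*Is = Vr - (1 - m)*z_line*Ir,
    and solving for m gives the expression below.
    """
    m = (Vs - Vr + z_line * Ir) / (z_line * (Is + Ir))
    return m.real

# Synthetic example constructed with the fault placed at 40% of the line length.
print(fault_distance(Vs=53680 + 5920j, Is=400 - 150j,
                     Vr=54560 + 7824j, Ir=350 - 120j,
                     z_line=8 + 40j))   # prints 0.4
```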

  4. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February, 1999 - 12 February, 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.

  5. Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1

    DTIC Science & Technology

    2013-05-14

    include tensor contractions, the direct N-body algorithm, and database join. This indicates that this is the first of 5 times that matrix multiplication ... and database join. Section 8 summarizes our results, and outlines the contents of Part 2 of this paper. Part 2 will discuss how to compute lower ... contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. Geometric Model: We begin by reviewing the geometric

  6. Turning Search into Knowledge Management.

    ERIC Educational Resources Information Center

    Kaufman, David

    2002-01-01

    Discussion of knowledge management for electronic data focuses on creating a high quality similarity ranking algorithm. Topics include similarity ranking and unstructured data management; searching, categorization, and summarization of documents; query evaluation; considering sentences in addition to keywords; and vector models. (LRW)

  7. Aerosol Climate Time Series in ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2016-04-01

    Within the ESA Climate Change Initiative (CCI) Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from dual view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from star occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI) together with sensitivity information and an AAI model simulator is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from ATSR instruments and the POLDER sensor), absorption information and aerosol layer height. As a quasi-reference for validation in few selected regions with sparse ground-based observations the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWIFS) proved the high quality of the available datasets comparable to other satellite retrievals and revealed needs for algorithm improvement (for example for higher AOD values) which were taken into account for a reprocessing. The datasets contain pixel level uncertainty estimates which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Pixel level uncertainties validation will be summarized and discussed including unknown components and their potential usefulness and limitations. Opportunities for time series extension with successor instruments of the Sentinel family will be described and the complementarity of the different satellite aerosol products (e.g. dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  8. Viking lander camera radiometry calibration report, volume 2

    NASA Technical Reports Server (NTRS)

    Wolf, M. R.; Atwood, D. L.; Morrill, M. E.

    1977-01-01

    The requirements, performance validation, and interfaces for the RADCAM program, which converts Viking lander camera image data to radiometric units, were established. A proposed algorithm is described, and an appendix summarizing the planned reduction of camera test data is included.

  9. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  10. Wiener Chaos and Nonlinear Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lototsky, S.V.

    2006-11-15

    The paper discusses two algorithms for solving the Zakai equation in the time-homogeneous diffusion filtering model with possible correlation between the state process and the observation noise. Both algorithms rely on the Cameron-Martin version of the Wiener chaos expansion, so that the approximate filter is a finite linear combination of the chaos elements generated by the observation process. The coefficients in the expansion depend only on the deterministic dynamics of the state and observation processes. For real-time applications, computing the coefficients in advance improves the performance of the algorithms in comparison with most other existing methods of nonlinear filtering. The paper summarizes the main existing results about these Wiener chaos algorithms and resolves some open questions concerning the convergence of the algorithms in the noise-correlated setting. The presentation includes the necessary background on the Wiener chaos and optimal nonlinear filtering.

  11. Four-probe measurements with a three-probe scanning tunneling microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salomons, Mark; Martins, Bruno V. C.; Zikovsky, Janik

    2014-04-15

    We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.

  12. Primary Vaginal Calculus in a Woman with Disability: Case Report and Literature Review.

    PubMed

    Castellan, Pietro; Nicolai, Michele; De Francesco, Piergustavo; Di Tizio, Luciano; Castellucci, Roberto; Bada, Maida; Marchioni, Michele; Cindolo, Luca; Schips, Luigi

    2017-01-01

    Background: Vaginal stones are rare and often unknown entities. Most urologists may never see a case in their careers. Case Presentation: We present the case of a 34-year-old bedridden Caucasian woman with mental and physical disabilities who presented with a large primary vaginal calculus, which, surprisingly, had remained undiagnosed until the patient suffered a right renal colic caused by a ureteral stone. The vagina was completely filled and a digital examination was not possible. For this reason, the stone was removed using surgical pliers with some maneuvering. A vesicovaginal fistula was excluded, as well as foreign bodies or other nidi of infection. After, urethral lithotripsy was performed as planned. The postoperative course and follow-up were uneventful. Conclusion: Although vaginal calculi are extremely rare in literature, their differential diagnosis should be considered in women with incontinence and associated disabilities, paraplegia, or prolonged immobilization in recumbent position.

  13. Cooperative macromolecular device revealed by meta-analysis of static and time-resolved structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Zhong; Šrajer, Vukica; Knapp, James E.

    2013-04-08

    Here we present a meta-analysis of a large collection of static structures of a protein in the Protein Data Bank in order to extract the progression of structural events during protein function. We apply this strategy to the homodimeric hemoglobin HbI from Scapharca inaequivalvis. We derive a simple dynamic model describing how binding of the first ligand in one of the two chemically identical subunits facilitates a second binding event in the other partner subunit. The results of our ultrafast time-resolved crystallographic studies support this model. We demonstrate that HbI functions like a homodimeric mechanical device, such as pliers or scissors. Ligand-induced motion originating in one subunit is transmitted to the other via conserved pivot points, where the E and F' helices from two partner subunits are 'bolted' together to form a stable dimer interface permitting slight relative rotation but preventing sliding.

  14. Nanomechanical DNA origami 'single-molecule beacons' directly imaged by atomic force microscopy

    PubMed Central

    Kuzuya, Akinori; Sakai, Yusuke; Yamazaki, Takahiro; Xu, Yan; Komiyama, Makoto

    2011-01-01

    DNA origami involves the folding of long single-stranded DNA into designed structures with the aid of short staple strands; such structures may enable the development of useful nanomechanical DNA devices. Here we develop versatile sensing systems for a variety of chemical and biological targets at molecular resolution. We have designed functional nanomechanical DNA origami devices that can be used as 'single-molecule beacons', and function as pinching devices. Using 'DNA origami pliers' and 'DNA origami forceps', which consist of two levers ~170 nm long connected at a fulcrum, various single-molecule inorganic and organic targets ranging from metal ions to proteins can be visually detected using atomic force microscopy by a shape transition of the origami devices. Any detection mechanism suitable for the target of interest, pinching, zipping or unzipping, can be chosen and used orthogonally with differently shaped origami devices in the same mixture using a single platform. PMID:21863016

  15. Bending-induced folding, an actuation mechanism for plant reconfiguration.

    NASA Astrophysics Data System (ADS)

    Terwagne, Denis; Segers, Jérémy; trioS. lab-Soft Structures; Surfaces Lab Team

    Inspired by the sophisticated mechanism of the opening and closing of the ice seed plant valves (Aizoaceae), we present a simple model experiment of this mechanism based on an origami folding. By imposing a curvature on one of the plates connected to a fold designed along a curved path, we actuate its opening and closing. The imposed curvature induces inner mechanical constraints that give us precise control of the deflection angle, which ultimately leads the fold to close completely. In this talk, we will present an analysis and characterization of this mechanism as a function of the geometrical and mechanical parameters of the system. From these insights, we will show how to build origami pliers with tunable mechanical properties. Possible outcomes that might arise in various fields, ranging from deployable engineered structures to soft robotics and medical devices, are discussed. DT and JS thank the Belgian national science foundation F.R.S-FNRS for funding.

  16. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary source of power consumption, several researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. Besides that, it also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSNs). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  17. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present a convergence speed factor, which can adaptively change the convergence speed according to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The nonuniformity space relativity characteristic was established from extensive experimental statistical data and was used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
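
    The abstract does not give the update equations, but the general idea (a scene-based NUC whose correction step shrinks when the scene dynamic range is small) can be illustrated with a minimal sketch. The neighborhood-average reference, the percentile-based dynamic-range estimate, and all constants below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def nuc_update(frame, gain, offset, base_lr=0.01):
    """One scene-based NUC step with a dynamic-range-scaled learning rate.

    Illustrative sketch only: each corrected pixel is pulled toward its local
    spatial mean (neighborhood-averaging NUC), and the normalized-LMS-style
    step size is scaled down when the scene dynamic range is small, mimicking
    an adaptive convergence speed factor.
    """
    corrected = gain * frame + offset

    # Local 3x3 mean of the corrected frame serves as the desired value.
    pad = np.pad(corrected, 1, mode="edge")
    h, w = frame.shape
    local_mean = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    # Convergence speed factor: smaller steps for low-contrast scenes.
    dyn_range = np.percentile(frame, 99) - np.percentile(frame, 1)
    lr = base_lr * dyn_range / (dyn_range + 1000.0)

    err = corrected - local_mean                       # residual fixed-pattern estimate
    gain = gain - lr * err * frame / (frame ** 2 + 1e-9)   # normalized-LMS gain update
    offset = offset - lr * err
    return gain, offset

# Usage: run the update over an image sequence, starting from gain=1, offset=0.
frames = 1000.0 + 500.0 * np.random.rand(10, 64, 64)   # synthetic IR-like frames
gain, offset = np.ones((64, 64)), np.zeros((64, 64))
for f in frames:
    gain, offset = nuc_update(f, gain, offset)
```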

  18. TRL-6 for JWST wavefront sensing and control

    NASA Astrophysics Data System (ADS)

    Feinberg, Lee D.; Dean, Bruce H.; Aronstein, David L.; Bowers, Charles W.; Hayden, William; Lyon, Richard G.; Shiri, Ron; Smith, J. Scott; Acton, D. Scott; Carey, Larkin; Contos, Adam; Sabatke, Erin; Schwenker, John; Shields, Duncan; Towell, Tim; Shi, Fang; Meza, Luis

    2007-09-01

    NASA's Technology Readiness Level (TRL)-6 is documented for the James Webb Space Telescope (JWST) Wavefront Sensing and Control (WFSC) subsystem. The WFSC subsystem is needed to align the Optical Telescope Element (OTE) after all deployments have occurred, and achieves that requirement through a robust commissioning sequence consisting of unique commissioning algorithms, all of which are part of the WFSC algorithm suite. This paper identifies the technology need and algorithm heritage, describes the finished TRL-6 design platform, and summarizes the TRL-6 test results and compliance. Additionally, the performance requirements needed to satisfy JWST science goals, as well as the criteria that relate to the TRL-6 Testbed Telescope (TBT) performance requirements, are discussed.

  19. TRL-6 for JWST Wavefront Sensing and Control

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Dean, Bruce; Smith, Scott; Aronstein, David; Shiri, Ron; Lyon, Rick; Hayden, Bill; Bowers, Chuck; Acton, D. Scott; Shields, Duncan

    2007-01-01

    NASA's Technology Readiness Level (TRL)-6 is documented for the James Webb Space Telescope (JWST) Wavefront Sensing and Control (WFSC) subsystem. The WFSC subsystem is needed to align the Optical Telescope Element (OTE) after all deployments have occurred, and achieves that requirement through a robust commissioning sequence consisting of unique commissioning algorithms, all of which are part of the WFSC algorithm suite. This paper identifies the technology need and algorithm heritage, describes the finished TRL-6 design platform, and summarizes the TRL-6 test results and compliance. Additionally, the performance requirements needed to satisfy JWST science goals, as well as the criteria that relate to the TRL-6 Testbed Telescope (TBT) performance requirements, are discussed.

  20. Cosmology with the cosmic web

    NASA Astrophysics Data System (ADS)

    Forero-Romero, J. E.

    2017-07-01

    This talk summarizes different algorithms that can be used to trace the cosmic web both in simulations and observations. We present different applications in galaxy formation and cosmology. Finally, we show how the Dark Energy Spectroscopic Instrument (DESI) could be a good place to apply these techniques.

  1. Analyzing a 35-Year Hourly Data Record: Why So Difficult?

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    At the Goddard Distributed Active Archive Center, we have recently added a 35-Year record of output data from the North American Land Assimilation System (NLDAS) to the Giovanni web-based analysis and visualization tool. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure) offers a variety of data summarization and visualization to users that operate at the data center, obviating the need for users to download and read the data themselves for exploratory data analysis. However, the NLDAS data has proven surprisingly resistant to application of the summarization algorithms. Algorithms that were perfectly happy analyzing 15 years of daily satellite data encountered limitations both at the algorithm and system level for 35 years of hourly data. Failures arose, sometimes unexpectedly, from command line overflows, memory overflows, internal buffer overflows, and time-outs, among others. These serve as an early warning sign for the problems likely to be encountered by the general user community as they try to scale up to Big Data analytics. Indeed, it is likely that more users will seek to perform remote web-based analysis precisely to avoid the issues, or the need to reprogram around them. We will discuss approaches to mitigating the limitations and the implications for data systems serving the user communities that try to scale up their current techniques to analyze Big Data.

  2. caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts

    PubMed Central

    2011-01-01

    Background: In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu.
    Results: We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) When used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) Biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT.
    Conclusions: caCORRECT is shown to improve the accuracy of gene expression, and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data. PMID:21957981

  3. Innovative signal processing for Johnson Noise thermometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezell, N. Dianne Bull; Britton, Jr, Charles L.; Roberts, Michael

    This report summarizes the newly developed algorithm that subtracts electromagnetic interference (EMI). The EMI performance is very important to this measurement because any interference, in the form of pickup from external signal sources such as fluorescent lighting ballasts, motors, etc., can skew the measurement. Two methods of removing EMI were developed and tested at various locations. This report also summarizes the testing performed at different facilities outside Oak Ridge National Laboratory using both EMI removal techniques. The first EMI removal technique was reviewed in previous milestone reports; therefore, this report details the second method.

  4. Review assessment support in Open Journal System using TextRank

    NASA Astrophysics Data System (ADS)

    Manalu, S. R.; Willy; Sundjaja, A. M.; Noerlina

    2017-01-01

    In this paper, a review assessment support in Open Journal System (OJS) using TextRank is proposed. OJS is an open-source journal management platform that provides a streamlined journal publishing workflow. TextRank is an unsupervised, graph-based ranking model commonly used as extractive auto summarization of text documents. This study applies the TextRank algorithm to summarize 50 article reviews from an OJS-based international journal. The resulting summaries are formed using the most representative sentences extracted from the reviews. The summaries are then used to help OJS editors in assessing a review’s quality.
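
    As a rough illustration of the kind of extractive TextRank summarizer described (not the OJS integration itself), the sketch below builds a word-overlap similarity graph over sentences and ranks them with PageRank; the sentence splitter and similarity weighting are simplified assumptions.

```python
import math
import re
import networkx as nx

def textrank_summary(text, n_sentences=3):
    """Minimal extractive TextRank: rank sentences by PageRank over a
    word-overlap similarity graph and return the top ones in document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]

    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            overlap = len(words[i] & words[j])
            if overlap:
                # Overlap normalized by sentence lengths (Mihalcea & Tarau style).
                weight = overlap / (math.log(len(words[i]) + 1) + math.log(len(words[j]) + 1))
                graph.add_edge(i, j, weight=weight)

    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(top))

# Usage: condense a (toy) reviewer report into its two most central sentences.
review = ("The study is well motivated. The methods section lacks detail on sampling. "
          "Results support the conclusions. However, the statistical analysis should "
          "report effect sizes. Overall the paper needs minor revision.")
print(textrank_summary(review, n_sentences=2))
```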

  5. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, adaptive, and state space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and intensively investigates the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and state space predictor can achieve better compensation of transport delay than the McFarland predictor.

  6. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. Both reflective solar band software products, radiance and reflectance factor, are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  7. Electron and photon identification in the D0 experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abazov, V. M.; Abbott, B.; Acharya, B. S.

    2014-06-01

    The electron and photon reconstruction and identification algorithms used by the D0 Collaboration at the Fermilab Tevatron collider are described. The determination of the electron energy scale and resolution is presented. Studies of the performance of the electron and photon reconstruction and identification are summarized.

  8. JTSA: an open source framework for time series abstractions.

    PubMed

    Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana

    2015-10-01

    The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC to extract relevant patterns from data related to the long-term monitoring of diabetic patients. The proof that JTSA is a versatile tool to be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
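
    JTSA itself is a Java framework and its API is not reproduced here; purely as an illustration of the basic idea of a state temporal abstraction (turning sampled values into labelled intervals), a small sketch under assumed inputs follows.

```python
from datetime import datetime, timedelta

def state_abstraction(times, values, threshold, min_len=2):
    """Basic state temporal abstraction: collapse consecutive samples that are
    above a threshold into 'HIGH' intervals (and below into 'LOW'), keeping
    only runs of at least `min_len` samples."""
    intervals, start, state = [], 0, values[0] >= threshold
    for i in range(1, len(values) + 1):
        cur = values[i] >= threshold if i < len(values) else None
        if cur != state:
            if i - start >= min_len:
                intervals.append((times[start], times[i - 1], "HIGH" if state else "LOW"))
            start, state = i, cur
    return intervals

# Example: hourly glucose-like readings abstracted against a threshold of 140.
t0 = datetime(2024, 1, 1)
times = [t0 + timedelta(hours=h) for h in range(8)]
values = [110, 150, 160, 155, 120, 115, 145, 150]
print(state_abstraction(times, values, threshold=140))
```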

  9. Aerosol Climate Time Series Evaluation In ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, T.; de Leeuw, G.; Pinnock, S.

    2015-12-01

    Within the ESA Climate Change Initiative (CCI) Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. By the end of 2015 full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from dual view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from star occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI) together with sensitivity information and an AAI model simulator is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine mode AOD, mineral dust AOD (from the thermal IASI spectrometer), absorption information and aerosol layer height. As a quasi-reference for validation in few selected regions with sparse ground-based observations the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWIFS) proved the high quality of the available datasets comparable to other satellite retrievals and revealed needs for algorithm improvement (for example for higher AOD values) which were taken into account for a reprocessing. The datasets contain pixel level uncertainty estimates which are also validated. The paper will summarize and discuss the results of major reprocessing and validation conducted in 2015. The focus will be on the ATSR, GOMOS and IASI datasets. Pixel level uncertainties validation will be summarized and discussed including unknown components and their potential usefulness and limitations. Opportunities for time series extension with successor instruments of the Sentinel family will be described and the complementarity of the different satellite aerosol products (e.g. dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  10. Parameter identification for nonlinear aerodynamic systems

    NASA Technical Reports Server (NTRS)

    Pearson, Allan E.

    1992-01-01

    Continuing work on frequency analysis for transfer function identification is discussed. A new study was initiated into a 'weighted' least squares algorithm within the context of the Fourier modulating function approach. The first phase of applying these techniques to the F-18 flight data is nearing completion, and these results are summarized.

  11. A stochastic approach to online vehicle state and parameter estimation, with application to inertia estimation for rollover prevention and battery charge/health estimation.

    DOT National Transportation Integrated Search

    2013-08-01

    This report summarizes research conducted at Penn State, Virginia Tech, and West Virginia University on the development of algorithms based on the generalized polynomial chaos (gpc) expansion for the online estimation of automotive and transportation...

  12. On the Scientific Foundations of Level 2 Fusion

    DTIC Science & Technology

    2004-03-01

    Fragments from the report's briefing slides: "Development of Decision Aids", CMIF Report 2-99, Feb 1999; theories of groups, teams, coalitions, and adaptive behaviors; "Investigations of Trust-related System Vulnerabilities in Aided, Adversarial Decision Making", CMIF Report, Jan 2000; and a summarizing algorithm.

  13. Analyzing Fourier Transforms for NASA DFRC's Fiber Optic Strain Sensing System

    NASA Technical Reports Server (NTRS)

    Fiechtner, Kaitlyn Leann

    2010-01-01

    This document provides a basic overview of the fiber optic technology used for sensing stress, strain, and temperature. Also, the document summarizes the research concerning speed and accuracy of the possible mathematical algorithms that can be used for NASA DFRC's Fiber Optic Strain Sensing (FOSS) system.

  14. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  15. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  16. Earth resources data analysis program, phase 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The efforts and findings of the Earth Resources Data Analysis Program are summarized. Results of a detailed study of the needs of EOD with respect to an applications development system (ADS) for the analysis of remotely sensed data, including an evaluation of four existing systems with respect to these needs, are described. Recommendations as to possible courses for EOD to follow to obtain a viable ADS are presented. Algorithmic development, comprising several subtasks, is discussed. These subtasks include the following: (1) two algorithms for multivariate density estimation; (2) a data smoothing algorithm; (3) a method for optimally estimating prior probabilities of unclassified data; and (4) further applications of the modified Cholesky decomposition in various calculations. Little effort was expended on task 3; however, two reports were reviewed.

  17. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    PubMed

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from the traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
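
    The paper's dictionary-selection optimization is not reproduced in the abstract; as a loose illustration of the ingredients it names (a prior attention score per frame and a similar-inhibition term that discourages redundant key frames), here is a greedy selection sketch. The feature vectors, scores, and trade-off weight are illustrative assumptions.

```python
import numpy as np

def select_keyframes(features, attention, k=5, inhibit=1.0):
    """Greedy key-frame selection sketch: at each step pick the frame with the
    best trade-off between its attention (prior) score and its maximum cosine
    similarity to the frames already selected (similar-inhibition)."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in selected:
                continue
            redundancy = max((feats[i] @ feats[j] for j in selected), default=0.0)
            score = attention[i] - inhibit * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

# Usage with synthetic per-frame feature vectors and attention scores.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))      # e.g., color/texture descriptors per frame
attention = rng.random(200)                # gaze + content-change prior cue
print(select_keyframes(features, attention, k=5))
```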

  18. Graph-based biomedical text summarization: An itemset mining and sentence clustering approach.

    PubMed

    Nasr Azadani, Mozhgan; Ghadiri, Nasser; Davoodijam, Ensieh

    2018-06-12

    Automatic text summarization offers an efficient solution to access the ever-growing amounts of both scientific and clinical literature in the biomedical domain by summarizing the source documents while maintaining their most informative contents. In this paper, we propose a novel graph-based summarization method that takes advantage of domain-specific knowledge and a well-established data mining technique called frequent itemset mining. Our summarizer exploits the Unified Medical Language System (UMLS) to construct a concept-based model of the source document and maps the document to the concepts. Then, it discovers frequent itemsets to take the correlations among multiple concepts into account. The method uses these correlations to propose a similarity function based on which a represented graph is constructed. The summarizer then employs a minimum spanning tree based clustering algorithm to discover various subthemes of the document. Eventually, it generates the final summary by selecting the most informative and relevant sentences from all subthemes within the text. We perform an automatic evaluation over a large number of summaries using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. The results demonstrate that the proposed summarization system outperforms various baselines and benchmark approaches. The research carried out suggests that the incorporation of domain-specific knowledge and frequent itemset mining equips the summarization system in a better way to address the informativeness measurement of the sentences. Moreover, clustering the graph nodes (sentences) can enable the summarizer to target different main subthemes of a source document efficiently. The evaluation results show that the proposed approach can significantly improve the performance of summarization systems in the biomedical domain. Copyright © 2018. Published by Elsevier Inc.

  19. Eosinophilic pustular folliculitis: A proposal of diagnostic and therapeutic algorithms.

    PubMed

    Nomura, Takashi; Katoh, Mayumi; Yamamoto, Yosuke; Miyachi, Yoshiki; Kabashima, Kenji

    2016-11-01

    Eosinophilic pustular folliculitis (EPF) is a sterile inflammatory dermatosis of unknown etiology. In addition to classic EPF, which affects otherwise healthy individuals, an immunocompromised state can cause immunosuppression-associated EPF (IS-EPF), which may be referred to dermatologists in inpatient services for assessments. Infancy-associated EPF (I-EPF) is the least characterized subtype, being observed mainly in non-Japanese infants. Diagnosis of EPF is challenging because its lesions mimic those of other common diseases, such as acne and dermatomycosis. Furthermore, there is no consensus regarding the treatment for each subtype of EPF. Here, we created procedure algorithms that facilitate the diagnosis and selection of therapeutic options on the basis of published work available in the public domain. Our diagnostic algorithm comprised a simple flowchart to direct physicians toward proper diagnosis. Recommended regimens were summarized in an easy-to-comprehend therapeutic algorithm for each subtype of EPF. These algorithms would facilitate the diagnostic and therapeutic procedure of EPF. © 2016 Japanese Dermatological Association.

  20. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the strengths of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it using GSDA so as to obtain a higher-resolution RS image. The main points are summarized as follows: (1) the contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion; (2) the article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules; (3) the text puts forward the model operator and the observed operator as the fusion scheme for RS imagery based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  1. Challenges and Recent Developments in Hearing Aids: Part I. Speech Understanding in Noise, Microphone Technologies and Noise Reduction Algorithms

    PubMed Central

    Chung, King

    2004-01-01

    This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225

  2. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. Through analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
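
    As a minimal sketch of the reconstruction loop named in the abstract (an iterative projected Landweber step followed by soft-thresholding), the code below uses a plain fixed threshold and the canonical basis rather than the authors' entropy-aware wavelet-domain model; the operator sizes and constants are illustrative.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def projected_landweber(y, Phi, n_iter=200, thresh=0.02):
    """Projected Landweber sketch for compressive-sensing reconstruction:
    a gradient step toward consistency with the measurements y = Phi x,
    followed by soft-thresholding (the sparsity projection).  In practice a
    wavelet/DCT transform and an entropy-aware threshold would be used."""
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2)   # safe Landweber step size
    x = Phi.T @ y                                # initial back-projection
    for _ in range(n_iter):
        x = x + step * (Phi.T @ (y - Phi @ x))   # Landweber (gradient) step
        x = soft(x, thresh)                      # projection onto sparse signals
    return x

# Usage on a synthetic sparse signal.
rng = np.random.default_rng(1)
n, m = 256, 96
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true
x_hat = projected_landweber(y, Phi)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```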

  3. Combinatorial Algorithms to Enable Computational Science and Engineering: Work from the CSCAPES Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric

    2015-01-16

    This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley), as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.

  4. Treatment Algorithms Based on Tumor Molecular Profiling: The Essence of Precision Medicine Trials.

    PubMed

    Le Tourneau, Christophe; Kamal, Maud; Tsimberidou, Apostolia-Maria; Bedard, Philippe; Pierron, Gaëlle; Callens, Céline; Rouleau, Etienne; Vincent-Salomon, Anne; Servant, Nicolas; Alt, Marie; Rouzier, Roman; Paoletti, Xavier; Delattre, Olivier; Bièche, Ivan

    2016-04-01

    With the advent of high-throughput molecular technologies, several precision medicine (PM) studies are currently ongoing that include molecular screening programs and PM clinical trials. Molecular profiling programs establish the molecular profile of patients' tumors with the aim to guide therapy based on identified molecular alterations. The aim of prospective PM clinical trials is to assess the clinical utility of tumor molecular profiling and to determine whether treatment selection based on molecular alterations produces superior outcomes compared with unselected treatment. These trials use treatment algorithms to assign patients to specific targeted therapies based on tumor molecular alterations. These algorithms should be governed by fixed rules to ensure standardization and reproducibility. Here, we summarize key molecular, biological, and technical criteria that, in our view, should be addressed when establishing treatment algorithms based on tumor molecular profiling for PM trials. © The Author 2015. Published by Oxford University Press.

  5. Radiation and scattering from bodies of translation. Volume 2: User's manual, computer program documentation

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1980-04-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience are presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For more detailed understanding of the workings of the codes, special cross referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings, summarized in the present volume.

  6. The advanced progress of precoding technology in 5g system

    NASA Astrophysics Data System (ADS)

    An, Chenyi

    2017-09-01

    With the development of technology, people have begun to place higher requirements on mobile systems, and the emergence of 5G has reshaped the trajectory of mobile communication technology development. Among the core technologies of 5G mobile communication, large-scale MIMO and precoding are research hotspots. Current research on precoding in 5G systems analyzes the various linear precoding methods: the maximum ratio transmission (MRT) precoding algorithm, the zero-forcing (ZF) precoding algorithm, the minimum mean square error (MMSE) precoding algorithm, and precoding based on the maximum signal-to-leakage-and-noise ratio (SLNR). These precoding algorithms are analyzed and summarized in detail. At the same time, we also examine nonlinear precoding methods, such as dirty paper coding and Tomlinson-Harashima precoding (THP). Through this analysis, we can identify the advantages, disadvantages, and development trends of each algorithm, and grasp the current state of precoding technology in 5G systems. Therefore, the research results and data in this paper can serve as a reference for the development of precoding technology in 5G systems.
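
    As a small illustration of two of the linear precoders mentioned (MRT and ZF) for a multi-user MISO downlink, a numpy sketch follows; the channel model and per-column power normalization are assumptions made for the example, and the MMSE/SLNR variants are omitted.

```python
import numpy as np

def mrt_precoder(H):
    """Maximum ratio transmission: beamform along each user's channel."""
    W = H.conj().T                                   # N_t x K
    return W / np.linalg.norm(W, axis=0, keepdims=True)

def zf_precoder(H):
    """Zero-forcing: the channel pseudo-inverse cancels inter-user
    interference (requires N_t >= K)."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # N_t x K
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# K = 3 single-antenna users, N_t = 4 transmit antennas, Rayleigh channel.
rng = np.random.default_rng(2)
H = (rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))) / np.sqrt(2)
W_zf = zf_precoder(H)
print(np.round(np.abs(H @ W_zf), 3))   # ~diagonal: interference nulled
```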

  7. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  8. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  9. The GLAS Algorithm Theoretical Basis Document for Laser Footprint Location (Geolocation) and Surface Profiles

    NASA Technical Reports Server (NTRS)

    Shutz, Bob E.; Urban, Timothy J.

    2014-01-01

    This ATBD summarizes (and links with other ATBDs) the elements used to obtain the geolocated GLAS laser spot location, with respect to the Earth Center of Mass. Because of the approach used, the reference frame used to express the geolocation is linked to the reference frame used for POD and PAD, which are related to the ITRF. The geolocated spot coordinates (which include the elevation, or height, with respect to an adopted reference ellipsoid) give the inferred position of the laser spot, since the spot location is not directly measured. This document also summarizes the GLAS operation time periods.

  10. A Web-Based Search Service to Support Imaging Spectrometer Instrument Operations

    NASA Technical Reports Server (NTRS)

    Smith, Alexander; Thompson, David R.; Sayfi, Elias; Xing, Zhangfan; Castano, Rebecca

    2013-01-01

    Imaging spectrometers yield rich and informative data products, but interpreting them demands time and expertise. There is a continual need for new algorithms and methods for rapid first-draft analyses to assist analysts during instrument operations. Intelligent data analyses can summarize scenes to draft geologic maps, searching images to direct operator attention to key features. This validates data quality while facilitating rapid tactical decision making to select follow-up targets. Ideally these algorithms would operate in seconds, never grow bored, and be free from observation bias about the kinds of mineralogy that will be found.

  11. The art and science of switching antipsychotic medications, part 2.

    PubMed

    Weiden, Peter J; Miller, Alexander L; Lambert, Tim J; Buckley, Peter F

    2007-01-01

    In the presentation "Switching and Metabolic Syndrome," Weiden summarizes reasons to switch antipsychotics, highlighting weight gain and other metabolic adverse events as recent treatment targets. In "Texas Medication Algorithm Project (TMAP)," Miller reviews the TMAP study design, discusses results related to the algorithm versus treatment as usual, and concludes with the implications of the study. Lambert's presentation, "Dosing and Titration Strategies to Optimize Patient Outcome When Switching Antipsychotic Therapy," reviews the decision-making process when switching patients' medication, addresses dosing and titration strategies to effectively transition between medications, and examines other factors to consider when switching pharmacotherapy.

  12. Dynamics Modelling of Transmission Gear Rattle and Analysis on Influence Factors

    NASA Astrophysics Data System (ADS)

    He, Xiaona; Zhang, Honghui

    2018-02-01

    Based on vibration dynamics modeling of the single-stage gear of a transmission system, this paper seeks to understand the mechanism of transmission rattle. The dynamic model response is analyzed using MATLAB and a Runge-Kutta algorithm, and ways of reducing the rattle noise of the automotive transmission are summarized.
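
    The paper's model is not given in the abstract; as an illustration of the kind of Runge-Kutta simulation mentioned, the sketch below integrates a generic single-degree-of-freedom gear-pair model with backlash (zero mesh force inside the clearance), with purely illustrative parameters.

```python
import numpy as np

# Illustrative 1-DOF gear-pair model with backlash 2*b: the mesh force is
# zero inside the clearance and linear-elastic outside it.
m, c, k, b = 1.0, 0.05, 1.0e4, 0.1     # mass, damping, mesh stiffness, half-backlash
F0, F1, w = 0.0, 200.0, 30.0           # mean and fluctuating excitation terms

def mesh_force(x):
    if x > b:
        return k * (x - b)
    if x < -b:
        return k * (x + b)
    return 0.0                          # teeth separated: rattle condition

def deriv(t, s):
    x, v = s
    a = (F0 + F1 * np.cos(w * t) - c * v - mesh_force(x)) / m
    return np.array([v, a])

def rk4(f, s0, t_end, dt=1e-4):
    """Classical fourth-order Runge-Kutta integration."""
    ts = np.arange(0.0, t_end, dt)
    out = np.empty((len(ts), 2))
    s = np.array(s0, float)
    for i, t in enumerate(ts):
        out[i] = s
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return ts, out

ts, states = rk4(deriv, [0.0, 0.0], t_end=2.0)
print("peak relative displacement:", np.abs(states[:, 0]).max())
```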

  13. Reasoning abstractly about resources

    NASA Technical Reports Server (NTRS)

    Clement, B.; Barrett, A.

    2001-01-01

    This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources regardless of how each rover decides how to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.

  14. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class, and a map classifying the terrain imaged rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, temporal, and spatial arrangement.
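
    A minimal sketch of the clustering-for-downlink idea described (not the flight code): cluster per-image feature vectors with plain k-means and keep one representative image per class; the features and cluster count are illustrative.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy per-image feature vectors (e.g., color/texture statistics per frame).
rng = np.random.default_rng(3)
features = np.vstack([rng.normal(loc=mu, size=(40, 8)) for mu in (0.0, 3.0, -3.0)])
labels, centers = kmeans(features, k=3)

# Downlink selection: the image closest to each centroid represents its class.
reps = [int(np.argmin(np.linalg.norm(features - c, axis=1))) for c in centers]
print("representative image indices:", reps)
```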

  15. An algorithm for solving the system-level problem in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A multilevel optimization approach which is applicable to nonhierarchic coupled systems is presented. The approach includes a general treatment of design (or behavior) constraints and coupling constraints at the discipline level through the use of norms. Three different types of norms are examined: the max norm, the Kreisselmeier-Steinhauser (KS) norm, and the l_p norm. The max norm is recommended. The approach is demonstrated on a class of hub frame structures which simulate multidisciplinary systems. The max norm is shown to produce system-level constraint functions which are non-smooth. A cutting-plane algorithm is presented which adequately deals with the resulting corners in the constraint functions. The algorithm is tested on hub frames with increasing number of members (which simulate disciplines), and the results are summarized.
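
    For reference, the Kreisselmeier-Steinhauser (KS) aggregate of constraint functions g_i is commonly written in the following form (the paper may use a scaled variant); as the parameter rho grows it approaches the max norm from above, which is why it serves as a smooth, conservative stand-in for the max norm.

```latex
KS(g) \;=\; g_{\max} \;+\; \frac{1}{\rho}\,
  \ln\!\left(\sum_{i=1}^{m} e^{\rho\,(g_i - g_{\max})}\right),
\qquad g_{\max} = \max_i g_i .
```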

  16. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.

  17. Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision

    NASA Astrophysics Data System (ADS)

    Rojer, Alan S.; Schwartz, Eric L.

    1991-02-01

    Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems as opposed to the general nature of biological vision architectures. In the present paper we will provide a review of current knowledge of primate spatial vision design parameters and will present recent experimental and modeling work from our lab which demonstrates that a numerical conformal mapping, which is a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In this paper we will review recent work from our laboratory which has characterized some of the spatial architectures of the primate visual system. In particular, we will review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; and (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on a band-pass filtered white noise. We will also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the 'proto-column' algorithm. This work provides a reference-point for current engineering approaches to novel architectures for

  18. Design options for improving protective gloves for industrial assembly work.

    PubMed

    Dianat, Iman; Haslegrave, Christine M; Stedmon, Alex W

    2014-07-01

    The study investigated the effects of wearing two new designs of cotton glove on several hand performance capabilities and compared them against the effects of barehanded, single-layered and double cotton glove conditions when working with hand tools (screwdriver and pliers). The new glove designs were based on the findings of subjective hand discomfort assessments for this type of work and aimed to match the glove thickness to the localised pressure and sensitivity in different areas of the hand as well as to provide adequate dexterity for fine manipulative tasks. The results showed that the first prototype glove and the barehanded condition were comparable and provided better dexterity and higher handgrip strength than double thickness gloves. The results support the hypothesis that selective thickness in different areas of the hand could be applied by glove manufacturers to improve the glove design, so that it can protect the hands from the environment and at the same time allow optimal hand performance capabilities. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  19. Multilevel decomposition of complete vehicle configuration in a parallel computing environment

    NASA Technical Reports Server (NTRS)

    Bhatt, Vinay; Ragsdell, K. M.

    1989-01-01

    This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.

  20. ALGORITHMS FOR ESTIMATING RESTING METABOLIC RATE AND ACTIVITY SPECIFIC VENTILATION RATES FOR USE IN COMPLEX EXPOSURE AND INTAKE DOSE MODELS

    EPA Science Inventory

    This work summarizes advancements made that allow for better estimation of resting metabolic rate (RMR) and subsequent estimation of ventilation rates (i.e., total ventilation (VE) and alveolar ventilation (VA)) for individuals of both genders and all ages. ...

  1. Optimal tree-stem bucking of northeastern species of China

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Joseph McNeel

    2004-01-01

    An application of optimal tree-stem bucking to the northeastern tree species of China is reported. The bucking procedures used in this region are summarized, which are the basic guidelines for the optimal bucking design. The directed graph approach was adopted to generate the bucking patterns by using the network analysis labeling algorithm. A computer-based bucking...
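
    The directed-graph/labeling formulation mentioned in the abstract can be illustrated with a small dynamic program: cut positions along the stem are nodes, admissible logs are arcs, and the best bucking pattern is the maximum-value path. The product table, step size, and prices below are hypothetical.

```python
def optimal_bucking(stem_length, products, step=0.5):
    """Longest-path / labeling sketch for optimal tree-stem bucking.

    stem_length : merchantable stem length (m)
    products    : list of (log_length_m, value) options -- illustrative prices
    step        : cut-position resolution (m)
    Returns (best_value, list_of_log_lengths).
    """
    n = int(stem_length / step)
    best = [float("-inf")] * (n + 1)   # best value achievable up to node i
    back = [None] * (n + 1)            # predecessor label for path recovery
    best[0] = 0.0
    for i in range(n + 1):
        if best[i] == float("-inf"):
            continue
        for length, value in products:
            j = i + int(round(length / step))
            if j <= n and best[i] + value > best[j]:
                best[j], back[j] = best[i] + value, (i, length)
    # Take the best label over all reachable nodes (any leftover top is waste).
    end = max(range(n + 1), key=lambda i: best[i])
    logs, i = [], end
    while back[i]:
        i, length = back[i]
        logs.append(length)
    return best[end], logs[::-1]

# Hypothetical product table: (log length in m, value per log).
products = [(2.0, 18.0), (2.5, 24.0), (4.0, 45.0), (6.0, 80.0)]
print(optimal_bucking(12.3, products))
```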

  2. An Annotated Bibliography of Current Literature Dealing with the Effective Teaching of Computer Programming in High Schools.

    ERIC Educational Resources Information Center

    Taylor, Karen A.

    This review of the literature and annotated bibliography summarizes the available research relating to teaching programming to high school students. It is noted that, while the process of programming a computer could be broken down into five steps--problem definition, algorithm design, code writing, debugging, and documentation--current research…

  3. Two MODIS Aerosol Products Over Ocean on the Terra and Aqua CERES SSF Datasets

    NASA Technical Reports Server (NTRS)

    Ignatov, Alexander; Minnis, Patrick; Loeb, Norman; Wielicki, Bruce; Miller, Walter; Sun-Mack, Sunny; Tanre, Didier; Remer, Lorraine; Laszlo, Istvan; Geier, Erika

    2004-01-01

    Over ocean, two aerosol products are reported on the Terra and Aqua CERES SSFs. Both are derived from MODIS, but using different sampling and aerosol algorithms. This study briefly summarizes these products, and compares using 2 weeks of global Terra data from 15-21 December 2000, and 1-7 June 2001.

  4. ICASE

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in the areas of (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving Langley facilities and scientists; and (4) computer science.

  5. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  6. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE PAGES

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
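
    The circular (TOA) formulation reduces to a small nonlinear least-squares problem in the unknown source coordinates. As a hedged sketch of that idea (not the authors' implementation; the sensor layout and noiseless ranges below are made up), using SciPy:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def locate_toa(sensors, ranges, x0=None):
        """Estimate a 2-D source position from range (TOA x wave speed) measurements
        by nonlinear least squares over the residuals ||x - s_i|| - r_i."""
        sensors = np.asarray(sensors, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        if x0 is None:
            x0 = sensors.mean(axis=0)          # start from the sensor centroid
        residuals = lambda x: np.linalg.norm(sensors - x, axis=1) - ranges
        return least_squares(residuals, x0).x

    # Hypothetical example: four sensors and a source at (3, 2)
    sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]
    source = np.array([3.0, 2.0])
    ranges = [np.linalg.norm(np.array(s) - source) for s in sensors]
    print(locate_toa(sensors, ranges))         # approximately [3. 2.]
    ```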

  7. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.

    Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.

  8. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for the parallel algorithm of an arbitrary-multiple interpolation CIC filter and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the computation speed remains high. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is greater. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
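
    For orientation, the basic (non-parallel, uncompensated) structure of an N-stage CIC interpolator (comb sections at the input rate, zero-stuffing by R, integrator sections at the output rate) can be sketched directly; the paper's parallel decomposition and compensation filter are not reproduced here:

    ```python
    import numpy as np

    def cic_interpolate(x, R=8, N=3, M=1):
        """Plain N-stage CIC interpolator with rate-change factor R and
        differential delay M (a textbook sketch, not the parallel form)."""
        y = np.asarray(x, dtype=float)
        for _ in range(N):                        # comb sections: y[n] - y[n-M]
            y = y - np.concatenate((np.zeros(M), y[:-M]))
        up = np.zeros(len(y) * R)                 # zero-stuff to the high rate
        up[::R] = y
        for _ in range(N):                        # integrator sections (running sums)
            up = np.cumsum(up)
        return up / ((R * M) ** N / R)            # normalize by the nominal DC gain (RM)^N / R
    ```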

  9. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation Cascade-Integrator-Comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for the parallel algorithm of an arbitrary-multiple interpolation CIC filter and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the computation speed remains high. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is greater. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  10. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  11. Drinking Water Supply without Use of a Disinfectant

    NASA Astrophysics Data System (ADS)

    Rajnochova, Marketa; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    The paper focuses on the issue of drinking water supply without use of any disinfectants. Before the public water supply network operator begins to consider switching to operation without use of chemical disinfection, an initial assessment should be made of whether or not the water supply system in question is suitable for this type of operation. The assessment is performed by applying a decision algorithm. The initial assessment is followed by another decision algorithm which serves for managing and controlling the process of switching to drinking water supply without use of a disinfectant. The paper also summarizes previous experience and knowledge of public water supply systems operated in this way in the Czech Republic.

  12. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  13. Polynomial-Time Algorithms for Building a Consensus MUL-Tree

    PubMed Central

    Cui, Yun; Jansson, Jesper

    2012-01-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134

  14. Polynomial-time algorithms for building a consensus MUL-tree.

    PubMed

    Cui, Yun; Jansson, Jesper; Sung, Wing-Kin

    2012-09-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.

  15. Recognition of Protein-coding Genes Based on Z-curve Algorithms

    PubMed Central

    Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling

    2014-01-01

    Recognition of protein-coding genes, a classical bioinformatics issue, is an absolutely needed step for annotating newly sequenced genomes. The Z-curve algorithm, as one of the most effective methods on this issue, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with the other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027

  16. Empirical algorithms for ocean optics parameters

    NASA Astrophysics Data System (ADS)

    Smart, Jeffrey H.

    2007-06-01

    As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.

  17. Review of TRMM/GPM Rainfall Algorithm Validation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2004-01-01

    A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.

  18. Rotorcraft Brownout: Advanced Understanding, Control and Mitigation

    DTIC Science & Technology

    2008-12-31

    the Gauss-Seidel iterative method. The overall steps of the SIMPLER algorithm can be summarized as: 1. guess the velocity field, 2. calculate the momentum... techniques and numerical methods, and the team will begin to develop a methodology that is capable of integrating these solutions and highlighting... rotorcraft design optimization techniques will then be undertaken using the validated computational methods. Subject terms: rotorcraft.
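
    The report abstract above is fragmentary, but the Gauss-Seidel sweep it names as the inner iterative solver is standard; a generic sketch (not the report's CFD implementation) for a linear system Ax = b is:

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10000):
        """Gauss-Seidel sweeps for Ax = b; convergence is assumed to hold
        (e.g., A diagonally dominant), as in typical pressure-correction solves."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # use the most recent values for j < i and last-sweep values for j > i
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - sigma) / A[i, i]
            if np.max(np.abs(x - x_old)) < tol:
                break
        return x
    ```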

  19. GSFC Technology Development Center Report

    NASA Technical Reports Server (NTRS)

    Himwich, Ed; Gipson, John

    2013-01-01

    This report summarizes the activities of the GSFC Technology Development Center (TDC) for 2012 and forecasts planned activities for 2013. The GSFC TDC develops station software including the Field System (FS), scheduling software (SKED), hardware including tools for station timing and meteorology, scheduling algorithms, and operational procedures. It provides a pool of individuals to assist with station implementation, check-out, upgrades, and training.

  20. Scheduling language and algorithm development study. Volume 1: Study summary and overview

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A high level computer programming language and a program library were developed to be used in writing programs for scheduling complex systems such as the space transportation system. The objectives and requirements of the study are summarized and unique features of the specified language and program library are described and related to the why of the objectives and requirements.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Cheriyadat, Anil M; Bhaduri, Budhendra L

    The high rate of urbanization, political conflicts and ensuing internal displacement of population, and increased poverty in the 20th century has resulted in rapid increase of informal settlements. These unplanned, unauthorized, and/or unstructured homes, known as informal settlements, shantytowns, barrios, or slums, pose several challenges to the nations, as these settlements are often located in most hazardous regions and lack basic services. Though several World Bank and United Nations sponsored studies stress the importance of poverty maps in designing better policies and interventions, mapping slums of the world is a daunting and challenging task. In this paper, we summarize our ongoing research on settlement mapping through the utilization of Very high resolution (VHR) remote sensing imagery. Most existing approaches used to classify VHR images are single instance (or pixel-based) learning algorithms, which are inadequate for analyzing VHR imagery, as single pixels do not contain sufficient contextual information (see Figure 1). However, much needed spatial contextual information can be captured via feature extraction and/or through newer machine learning algorithms in order to extract complex spatial patterns that distinguish informal settlements from formal ones. In recent years, we made significant progress in advancing the state of art in both directions. This paper summarizes these results.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Day, David Minot; Mitchell, Scott A.

    This report summarizes the Combinatorial Algebraic Topology: software, applications & algorithms workshop (CAT Workshop). The workshop was sponsored by the Computer Science Research Institute of Sandia National Laboratories. It was organized by CSRI staff members Scott Mitchell and Shawn Martin. It was held in Santa Fe, New Mexico, August 29-30. The CAT Workshop website has links to some of the talk slides and other information, http://www.cs.sandia.gov/CSRI/Workshops/2009/CAT/index.html. The purpose of the report is to summarize the discussions and recap the sessions. There is a special emphasis on technical areas that are ripe for further exploration, and the plans for follow-up amongst the workshop participants. The intended audiences are the workshop participants, other researchers in the area, and the workshop sponsors.

  3. EOS Laser Atmosphere Wind Sounder (LAWS) investigation

    NASA Technical Reports Server (NTRS)

    1996-01-01

    In this final report, the set of tasks that evolved from the Laser Atmosphere Wind Sounder (LAWS) Science Team is reviewed, the major accomplishments are summarized, and a complete set of resulting references is provided. The tasks included preparation of a plan for the LAWS Algorithm Development and Evolution Laboratory (LADEL); participation in the preparation of a joint CNES/NASA proposal to build a space-based DWL; involvement in the Global Backscatter Experiments (GLOBE); evaluation of several DWL concepts including 'Quick-LAWS', SPNDL and several direct detection technologies; and an extensive series of system trade studies and Observing System Simulation Experiments (OSSEs). In this report, some of the key accomplishments are briefly summarized with reference to interim reports, special reports, conference/workshop presentations, and publications.

  4. Estimating summary statistics for electronic health record laboratory data for use in high-throughput phenotyping algorithms

    PubMed Central

    Elhadad, N.; Claassen, J.; Perotte, R.; Goldstein, A.; Hripcsak, G.

    2018-01-01

    We study the question of how to represent or summarize raw laboratory data taken from an electronic health record (EHR) using parametric model selection to reduce or cope with biases induced through clinical care. It has been previously demonstrated that the health care process (Hripcsak and Albers, 2012, 2013), as defined by measurement context (Hripcsak and Albers, 2013; Albers et al., 2012) and measurement patterns (Albers and Hripcsak, 2010, 2012), can influence how EHR data are distributed statistically (Kohane and Weber, 2013; Pivovarov et al., 2014). We construct an algorithm, PopKLD, which is based on information criterion model selection (Burnham and Anderson, 2002; Claeskens and Hjort, 2008), is intended to reduce and cope with health care process biases and to produce an intuitively understandable continuous summary. The PopKLD algorithm can be automated and is designed to be applicable in high-throughput settings; for example, the output of the PopKLD algorithm can be used as input for phenotyping algorithms. Moreover, we develop the PopKLD-CAT algorithm that transforms the continuous PopKLD summary into a categorical summary useful for applications that require categorical data such as topic modeling. We evaluate our methodology in two ways. First, we apply the method to laboratory data collected in two different health care contexts, primary versus intensive care. We show that the PopKLD preserves known physiologic features in the data that are lost when summarizing the data using more common laboratory data summaries such as mean and standard deviation. Second, for three disease-laboratory measurement pairs, we perform a phenotyping task: we use the PopKLD and PopKLD-CAT algorithms to define high and low values of the laboratory variable that are used for defining a disease state. We then compare the relationship between the PopKLD-CAT summary disease predictions and the same predictions using empirically estimated mean and standard deviation to a gold standard generated by clinical review of patient records. We find that the PopKLD laboratory data summary is substantially better at predicting disease state. The PopKLD or PopKLD-CAT algorithms are not meant to be used as phenotyping algorithms, but we use the phenotyping task to show what information can be gained when using a more informative laboratory data summary. In the process of evaluating our method, we show that the different clinical contexts and laboratory measurements necessitate different statistical summaries. Similarly, leveraging the principle of maximum entropy, we argue that while some laboratory data only have sufficient information to estimate a mean and standard deviation, other laboratory data captured in an EHR contain substantially more information than can be captured in higher-parameter models. PMID:29369797
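
    The PopKLD criterion itself is not reproduced here, but the underlying move from a fixed mean/standard-deviation summary to an information-criterion-selected parametric summary can be illustrated generically; the candidate distributions and the use of AIC below are stand-ins, not the paper's actual selection rule:

    ```python
    import numpy as np
    from scipy import stats

    def select_lab_summary(values, candidates=(stats.norm, stats.lognorm, stats.gamma)):
        """Fit several candidate parametric models to one laboratory variable and keep
        the lowest-AIC fit as its summary (illustrative, not PopKLD itself).
        Assumes positive-valued measurements so the lognormal and gamma fits are valid."""
        values = np.asarray(values, dtype=float)
        best = None
        for dist in candidates:
            params = dist.fit(values)
            loglik = np.sum(dist.logpdf(values, *params))
            aic = 2 * len(params) - 2 * loglik
            if best is None or aic < best[0]:
                best = (aic, dist.name, params)
        return best  # (AIC, distribution name, fitted parameters)
    ```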

  5. Estimating summary statistics for electronic health record laboratory data for use in high-throughput phenotyping algorithms.

    PubMed

    Albers, D J; Elhadad, N; Claassen, J; Perotte, R; Goldstein, A; Hripcsak, G

    2018-02-01

    We study the question of how to represent or summarize raw laboratory data taken from an electronic health record (EHR) using parametric model selection to reduce or cope with biases induced through clinical care. It has been previously demonstrated that the health care process (Hripcsak and Albers, 2012, 2013), as defined by measurement context (Hripcsak and Albers, 2013; Albers et al., 2012) and measurement patterns (Albers and Hripcsak, 2010, 2012), can influence how EHR data are distributed statistically (Kohane and Weber, 2013; Pivovarov et al., 2014). We construct an algorithm, PopKLD, which is based on information criterion model selection (Burnham and Anderson, 2002; Claeskens and Hjort, 2008), is intended to reduce and cope with health care process biases and to produce an intuitively understandable continuous summary. The PopKLD algorithm can be automated and is designed to be applicable in high-throughput settings; for example, the output of the PopKLD algorithm can be used as input for phenotyping algorithms. Moreover, we develop the PopKLD-CAT algorithm that transforms the continuous PopKLD summary into a categorical summary useful for applications that require categorical data such as topic modeling. We evaluate our methodology in two ways. First, we apply the method to laboratory data collected in two different health care contexts, primary versus intensive care. We show that the PopKLD preserves known physiologic features in the data that are lost when summarizing the data using more common laboratory data summaries such as mean and standard deviation. Second, for three disease-laboratory measurement pairs, we perform a phenotyping task: we use the PopKLD and PopKLD-CAT algorithms to define high and low values of the laboratory variable that are used for defining a disease state. We then compare the relationship between the PopKLD-CAT summary disease predictions and the same predictions using empirically estimated mean and standard deviation to a gold standard generated by clinical review of patient records. We find that the PopKLD laboratory data summary is substantially better at predicting disease state. The PopKLD or PopKLD-CAT algorithms are not meant to be used as phenotyping algorithms, but we use the phenotyping task to show what information can be gained when using a more informative laboratory data summary. In the process of evaluating our method, we show that the different clinical contexts and laboratory measurements necessitate different statistical summaries. Similarly, leveraging the principle of maximum entropy, we argue that while some laboratory data only have sufficient information to estimate a mean and standard deviation, other laboratory data captured in an EHR contain substantially more information than can be captured in higher-parameter models. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Research progress on quantum informatics and quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Yusheng

    2018-03-01

    Quantum informatics is an emerging interdisciplinary subject that developed from the combination of quantum mechanics, information science, and computer science in the 1980s. The birth and development of quantum information science has far-reaching significance for science and technology. At present, applying quantum information technology has become the direction of many research efforts. The preparation, storage, purification and regulation, transmission, and quantum coding and decoding of quantum states have become a research hotspot for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computers and the current state of domestic and foreign research, and then introduces the basic knowledge and basic concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's quantum algorithm, and quantum phase estimation.
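
    Of the algorithms listed, the quantum Fourier transform is the simplest to state concretely. As a small classical illustration (building the unitary matrix with NumPy and checking unitarity, not a quantum-hardware implementation):

    ```python
    import numpy as np

    def qft_matrix(n_qubits):
        """Unitary matrix of the quantum Fourier transform on n_qubits qubits:
        U[j, k] = omega**(j*k) / sqrt(N), with omega = exp(2*pi*i / N) and N = 2**n."""
        N = 2 ** n_qubits
        omega = np.exp(2j * np.pi / N)
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return omega ** (j * k) / np.sqrt(N)

    U = qft_matrix(3)
    assert np.allclose(U.conj().T @ U, np.eye(8))   # U is unitary
    ```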

  7. Validation of Community Models: Identifying Events in Space Weather Model Timelines

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter

    2009-01-01

    I develop and document a set of procedures which test the quality of predictions of solar wind speed and polarity of the interplanetary magnetic field (IMF) made by coupled models of the ambient solar corona and heliosphere. The Wang-Sheeley-Arge (WSA) model is used to illustrate the application of these validation procedures. I present an algorithm which detects transitions of the solar wind from slow to high speed. I also present an algorithm which processes the measured polarity of the outward directed component of the IMF. This removes high-frequency variations to expose the longer-scale changes that reflect IMF sector changes. I apply these algorithms to WSA model predictions made using a small set of photospheric synoptic magnetograms obtained by the Global Oscillation Network Group as input to the model. The results of this preliminary validation of the WSA model (version 1.6) are summarized.
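
    The transition-detection step is described only at a high level; one plausible reading is a two-threshold state machine over a smoothed speed series. The smoothing window and the slow/fast thresholds below are illustrative assumptions, not the values used in the WSA validation:

    ```python
    import numpy as np

    def detect_speed_transitions(times, speeds, slow=400.0, fast=500.0, window=24):
        """Flag slow-to-fast solar wind transitions: smooth with a running mean,
        then report times where the smoothed speed rises above `fast` after having
        been below `slow` (thresholds in km/s are assumptions)."""
        speeds = np.asarray(speeds, dtype=float)
        kernel = np.ones(window) / window
        smooth = np.convolve(speeds, kernel, mode="same")
        events, armed = [], False
        for t, v in zip(times, smooth):
            if v < slow:
                armed = True                    # currently in the slow-wind regime
            elif v > fast and armed:
                events.append(t)                # a slow-to-fast transition
                armed = False
        return events
    ```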

  8. The comparison and analysis of extracting video key frame

    NASA Astrophysics Data System (ADS)

    Ouyang, S. Z.; Zhong, L.; Luo, R. Q.

    2018-05-01

    Video key frame extraction is an important part of large-scale data processing. Based on previous work in key frame extraction, we summarize four important key frame extraction algorithms; these methods largely work by comparing the difference between pairs of frames, and if the difference exceeds a threshold value, the corresponding frames are taken as distinct key frames. Following this research, key frame extraction based on mutual information is proposed: information entropy is introduced, appropriate threshold values are selected to form initial classes, and finally frames with a similar mean mutual information are taken as candidate key frames. In this paper, several algorithms are used to extract key frames from tunnel traffic videos. Then, through analysis of the experimental results and comparison of the pros and cons of these algorithms, a basis for practical applications is provided.
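
    A minimal frame-differencing baseline of the kind the surveyed methods build on can be written with OpenCV; the grayscale mean-absolute-difference metric and threshold below are assumptions, and the mutual-information variant proposed in the paper is not shown:

    ```python
    import cv2
    import numpy as np

    def extract_key_frames(video_path, threshold=30.0):
        """Mark a frame as a key frame when its mean absolute difference from
        the previous frame exceeds `threshold` (value is an assumption)."""
        cap = cv2.VideoCapture(video_path)
        key_frames, prev, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is None or np.mean(cv2.absdiff(gray, prev)) > threshold:
                key_frames.append(idx)
            prev = gray
            idx += 1
        cap.release()
        return key_frames
    ```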

  9. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were assessed through this comparison and are reported here.

  10. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize the digital echocardiogram videos by temporally segmenting them into the constituent views and representing each view by the most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of the echocardiogram videos. Two different criteria are used: presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to different modes of echocardiograms present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave. The corresponding frame is chosen to be the key frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of the selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.

  12. Transcultural Endocrinology: Adapting Type-2 Diabetes Guidelines on a Global Scale.

    PubMed

    Nieto-Martínez, Ramfis; González-Rivas, Juan P; Florez, Hermes; Mechanick, Jeffrey I

    2016-12-01

    Type-2 diabetes (T2D) needs to be prevented and treated effectively to reduce its burden and consequences. White papers, such as evidence-based clinical practice guidelines (CPG) and their more portable versions, clinical practice algorithms and clinical checklists, may improve clinical decision-making and diabetes outcomes. However, CPG are underused and poorly validated. Protocols that translate and implement these CPG are needed. This review presents the global dimension of T2D, details the importance of white papers in the transculturalization process, compares relevant international CPG, analyzes cultural variables, and summarizes translation strategies that can improve care. Specific protocols and algorithmic tools are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.

  14. Tachycardia detection in ICDs by Boston Scientific : Algorithms, pearls, and pitfalls.

    PubMed

    Zanker, Norbert; Schuster, Diane; Gilkerson, James; Stein, Kenneth

    2016-09-01

    The aim of this study was to summarize how implantable cardioverter defibrillators (ICDs) by Boston Scientific sense, detect, discriminate rhythms, and classify episodes. Modern devices include multiple programming selections, diagnostic features, therapy options, memory functions, and device-related history features. Device operation includes logical steps from sensing, detection, discrimination, therapy delivery to history recording. The program is designed to facilitate the application of the device algorithms to the individual patient's clinical needs. Features and functions described in this article represent a selective excerpt by the authors from Boston Scientific publicly available product resources. Programming of ICDs may affect patient outcomes. Patient-adapted and optimized programming requires understanding of device operation and concepts.

  15. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
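
    As a compact illustration of the Metropolis step described above (a toy one-dimensional potential rather than a classical many-particle system):

    ```python
    import numpy as np

    def metropolis(energy, x0, beta=1.0, step=0.5, n_steps=10000, rng=None):
        """Metropolis sampling of the Boltzmann distribution exp(-beta * E(x))
        for a single scalar coordinate x."""
        rng = np.random.default_rng() if rng is None else rng
        x, e = x0, energy(x0)
        samples = []
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)        # symmetric trial move
            e_new = energy(x_new)
            # accept with probability min(1, exp(-beta * dE))
            if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
                x, e = x_new, e_new
            samples.append(x)
        return np.array(samples)

    # Example: sample a harmonic potential E(x) = x^2 / 2
    samples = metropolis(lambda x: 0.5 * x * x, x0=0.0)
    ```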

  16. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    PubMed

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either expensive or complex to apply regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with specific questionnaires regarding their data acquisition and processing methods. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of 22 papers failed to report relevant outcomes. The quality assessment of AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Management of patients infected with airborne-spread diseases: an algorithm for infection control professionals.

    PubMed

    Rebmann, Terri

    2005-12-01

    Many US hospitals lack the capacity to house safely a surge of potentially infectious patients, increasing the risk of secondary transmission. Respiratory protection and negative-pressure rooms are needed to prevent transmission of airborne-spread diseases, but US hospitals lack available and/or properly functioning negative-pressure rooms. Creating new rooms or retrofitting existing facilities is time-consuming and expensive. Safe methods of managing patients with airborne-spread diseases and establishing temporary negative-pressure and/or protective environments were determined by a literature review. Relevant data were analyzed and synthesized to generate a response algorithm. Ideal patient management and placement guidelines, including instructions for choosing respiratory protection and creating temporary negative-pressure or other protective environments, were delineated. Findings were summarized in a treatment algorithm. The threat of bioterrorism and emerging infections increases health care's need for negative-pressure and/or protective environments. The algorithm outlines appropriate response steps to decrease transmission risk until an ideal protective environment can be utilized. Using this algorithm will prepare infection control professionals to respond more effectively during a surge of potentially infectious patients following a bioterrorism attack or emerging infectious disease outbreak.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Hornback, Donald Eric; Johnson, Jeffrey O.

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  20. IonRayTrace: An HF Propagation Model for Communications and Radar Applications

    DTIC Science & Technology

    2014-12-01

    for modeling the impact of ionosphere variability on detection algorithms. Modification of IonRayTrace's source code to include flexible gridding and... [list-of-figures residue: plasma frequency color scale (MHz); ionospheric absorption (dB) versus...] ...Ionosphere for its environmental background [3]. IonRayTrace's operation is summarized briefly in Section 3. However, the scope of this document is primarily

  1. Structural assembly in space

    NASA Technical Reports Server (NTRS)

    Stokes, J. W.; Pruett, E. C.

    1980-01-01

    A cost algorithm for predicting assembly costs for large space structures is given. Assembly scenarios are summarized which describe the erection, deployment, and fabrication tasks for five large space structures. The major activities that impact total costs for structure assembly from launch through deployment and assembly to scientific instrument installation and checkout are described. Individual cost elements such as assembly fixtures, handrails, or remote manipulators are also presented.

  2. Reduced Kalman Filters for Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    2011-01-01

    This paper summarizes the author's work on timescales based on Kalman filters that act upon the clock comparisons. The natural Kalman timescale algorithm tends to optimize long-term timescale stability at the expense of short-term stability. By subjecting each post-measurement error covariance matrix to a non-transparent reduction operation, one obtains corrected clocks with improved short-term stability and little sacrifice of long-term stability.
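
    The covariance-reduction operation is specific to this work and is not reproduced, but the Kalman measurement update it acts on is standard; a generic sketch is:

    ```python
    import numpy as np

    def kalman_update(x, P, z, H, R):
        """Standard Kalman measurement update for state x with covariance P and a
        measurement z = H x + noise of covariance R (the paper's reduction of the
        post-measurement covariance is not shown)."""
        y = z - H @ x                              # innovation
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P       # post-measurement covariance
        return x_new, P_new
    ```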

  3. Causal diagrams and multivariate analysis III: confound it!

    PubMed

    Jupiter, Daniel C

    2015-01-01

    This commentary concludes my series concerning inclusion of variables in multivariate analyses. We take up the issues of confounding and effect modification and summarize the work we have thus far done. Finally, we provide a rough algorithm to help guide us through the maze of possibilities that we have outlined. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Empirical study of seven data mining algorithms on different characteristics of datasets for biomedical classification applications.

    PubMed

    Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi

    2017-11-02

    Various kinds of data mining algorithms are continually proposed with the development of related disciplines. The applicable scopes and performances of these algorithms differ. Hence, finding a suitable algorithm for a dataset is becoming important for biomedical researchers who need to solve practical problems promptly. In this paper, seven kinds of sophisticated algorithms, namely, C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 top-click UCI public datasets with the task of classification, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, the sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high accuracy of classification on most datasets. Moreover, random forest performs better than AdaBoost on the unbalanced dataset of the multi-class task. Simple algorithms, such as the naïve Bayes and logistic regression model, are suitable for a small dataset with high correlation between the task and other non-task attribute variables. K-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class task datasets. Support vector machine is more adept on the balanced small dataset of the binary-class task. No algorithm can maintain the best performance in all datasets. The applicability of the seven data mining algorithms on datasets with different characteristics was summarized to provide a reference for biomedical researchers or beginners in different fields.
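
    The seven algorithm families map onto common scikit-learn estimators, so a comparison of this kind can be reproduced with a short cross-validation harness; the built-in dataset below is a stand-in for the UCI datasets used in the study, and the decision tree is a CART approximation of C4.5:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for a UCI dataset
    models = {
        "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "AdaBoost": AdaBoostClassifier(),
        "k-nearest neighbor": KNeighborsClassifier(),
        "Naive Bayes": GaussianNB(),
        "Random forest": RandomForestClassifier(),
        "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validated accuracy
        print(f"{name:20s} mean accuracy {scores.mean():.3f}")
    ```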

  5. Development of an upwind, finite-volume code with finite-rate chemistry

    NASA Technical Reports Server (NTRS)

    Molvik, Gregory A.

    1995-01-01

    Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques and of a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data. This report summarizes the research that took place from August 1, 1994 to January 1, 1995.

  6. Automatic identification of comparative effectiveness research from Medline citations to support clinicians’ treatment information needs

    PubMed Central

    Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo

    2014-01-01

    Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677

  7. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing equivalent quality placement. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.

  8. Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis

    NASA Technical Reports Server (NTRS)

    Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo

    2010-01-01

    The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.

  9. Gene selection heuristic algorithm for nutrigenomics studies.

    PubMed

    Valour, D; Hue, I; Grimard, B; Valour, B

    2013-07-15

    Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends the classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast computing in "R." Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix converges the probabilities of interconnection between genes, providing a selection of disjointed subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the aim of biologists who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.

  10. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.

  11. An O(Nm(sup 2)) Plane Solver for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.

    1999-01-01

    A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles to attain convergence is dependent on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.

  12. A geometric approach to identify cavities in particle systems

    NASA Astrophysics Data System (ADS)

    Voyiatzis, Evangelos; Böhm, Michael C.; Müller-Plathe, Florian

    2015-11-01

    The implementation of a geometric algorithm to identify cavities in particle systems in an open-source python program is presented. The algorithm makes use of the Delaunay space tessellation. The present python software is based on platform-independent tools, leading to a portable program. Its successful execution provides information concerning the accessible volume fraction of the system, the size and shape of the cavities and the group of atoms forming each of them. The program can be easily incorporated into the LAMMPS software. An advantage of the present algorithm is that no a priori assumption on the cavity shape has to be made. As an example, the cavity size and shape distributions in a polyethylene melt system are presented for three spherical probe particles. This paper also serves as an introductory manual to the script. It summarizes the algorithm, its implementation, the required user-defined parameters as well as the format of the input and output files. Additionally, we demonstrate possible applications of our approach and compare its capability with those of well-documented cavity size estimators.
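    A rough sketch of the geometric idea follows, assuming an arbitrary probe radius and ignoring particle radii; it is not the published program.

```python
# Sketch only: use a Delaunay tessellation and flag simplices whose empty
# circumsphere can hold a spherical probe of radius r_probe.
import numpy as np
from scipy.spatial import Delaunay

def cavity_simplices(points, r_probe):
    tri = Delaunay(points)
    hits = []
    for simplex in tri.simplices:
        p = points[simplex]                       # 4 vertices of a tetrahedron
        A = 2.0 * (p[1:] - p[0])
        b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        try:
            center = np.linalg.solve(A, b)        # circumcenter
        except np.linalg.LinAlgError:
            continue                              # degenerate (flat) simplex
        radius = np.linalg.norm(center - p[0])
        if radius >= r_probe:                     # probe fits in the empty sphere
            hits.append((simplex, center, radius))
    return hits

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(200, 3))       # hypothetical particle positions
print(len(cavity_simplices(pts, r_probe=1.0)))
```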

  13. A systematic review of validated methods for identifying hypersensitivity reactions other than anaphylaxis (fever, rash, and lymphadenopathy), using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of hypersensitivity reactions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the hypersensitivity reactions of health outcomes of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify hypersensitivity reactions and including validation estimates of the coding algorithms. We identified five studies that provided validated hypersensitivity-reaction algorithms. Algorithm positive predictive values (PPVs) for various definitions of hypersensitivity reactions ranged from 3% to 95%. PPVs were high (i.e. 90%-95%) when both exposures and diagnoses were very specific. PPV generally decreased when the definition of hypersensitivity was expanded, except in one study that used data mining methodology for algorithm development. The ability of coding algorithms to identify hypersensitivity reactions varied, with decreasing performance occurring with expanded outcome definitions. This examination of hypersensitivity-reaction coding algorithms provides an example of surveillance bias resulting from outcome definitions that include mild cases. Data mining may provide tools for algorithm development for hypersensitivity and other health outcomes. Research needs to be conducted on designing validation studies to test hypersensitivity-reaction algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Foundations of Sequential Learning

    DTIC Science & Technology

    2018-02-01

    ... is not sufficient to estimate the payoff. The learner must use a strategy that balances exploration (learning new information by pulling arms that ...). ABSTRACT: This report summarizes the research done under FA8750-16-2-0173. This research advanced understanding of bandit algorithms and exploration in ...
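    Because the record above is fragmentary, a small illustration of the exploration/exploitation trade-off it refers to may help; the UCB1 loop below uses hypothetical Bernoulli payoffs and is not code from the report.

```python
# Illustrative only: a UCB1 bandit loop showing the exploration/exploitation
# trade-off.  The arm payoff probabilities are made up.
import numpy as np

def ucb1(true_means, horizon, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                  # pull each arm once to initialize
        else:
            means = sums / counts
            bonus = np.sqrt(2.0 * np.log(t) / counts)    # optimism (exploration) bonus
            arm = int(np.argmax(means + bonus))
        reward = rng.binomial(1, true_means[arm])        # Bernoulli payoff
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward, counts

print(ucb1([0.2, 0.5, 0.7], horizon=2000))
```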

  15. MSS D Multispectral Scanner System

    NASA Technical Reports Server (NTRS)

    Lauletta, A. M.; Johnson, R. L.; Brinkman, K. L. (Principal Investigator)

    1982-01-01

    The development and acceptance testing of the 4-band Multispectral Scanners to be flown on the LANDSAT D and LANDSAT D' Earth resources satellites are summarized. Emphasis is placed on the acceptance test phase of the program. Test history and acceptance test algorithms are discussed. Trend data of all the key performance parameters are included and discussed separately for each of the two multispectral scanner instruments. Anomalies encountered and their resolutions are included.

  16. Pediatric Benign Soft Tissue Oral and Maxillofacial Pathology.

    PubMed

    Glickman, Alexandra; Karlis, Vasiliki

    2016-02-01

    Despite the many types of oral pathologic lesions found in infants and children, the most commonly encountered are benign soft tissue lesions. The clinical features, diagnostic criteria, and treatment algorithms of pathologies in the age group from birth to 18 years of age are summarized based on their prevalence in each given age distribution. Treatment modalities include both medical and surgical management. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detection taking both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real-speed of each player, as well as relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.

  18. Bridging the semantic gap in sports

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim

    2003-01-01

    One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.

  19. Study of efficient video compression algorithms for space shuttle applications

    NASA Technical Reports Server (NTRS)

    Poo, Z.

    1975-01-01

    Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication with special emphasis on feasibility of hardware implementation. The primary factors for such a communication system in space flight application are: picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirements are summarized. Simulations of the study were performed on the improved LIM video controller which is computer-controlled by the META-4 CPU.

  20. Pipelining in structural health monitoring wireless sensor network

    NASA Astrophysics Data System (ADS)

    Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim

    2010-04-01

    Application of wireless sensor networks (WSNs) for structural health monitoring (SHM) is becoming widespread due to their ease of implementation and economic advantage over traditional sensor networks. Besides the advantages that have made wireless networks preferable, there are some concerns regarding their performance in some applications. In long-span bridge monitoring, the need to transfer data over long distances causes some challenges in the design of WSN platforms. Due to the geometry of bridge structures, using multi-hop data transfer between remote nodes and the base station is essential. This paper focuses on the performance of pipelining algorithms. We summarize several prevalent pipelining approaches, discuss their performance, and propose a new pipelining algorithm, which gives consideration to both boosting channel usage and simplicity of deployment.

  1. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number of 180, Re(sub tau) = 180, has shown good agreement with existing data.

  2. Investigation of cloud/water vapor motion winds from geostationary satellite

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This report summarizes the research work accomplished on the NASA grant contract NAG8-892 during 1992. Research goals of this contract are the following: to complete upgrades to the Cooperative Institute for Meteorological Satellite Studies (CIMSS) wind system procedures for assigning heights and incorporating first guess information; to evaluate these modifications using simulated tracer fields; to add an automated quality control system to minimize the need for manual editing, while maintaining product quality; and to benchmark the upgraded algorithm in tests with NMC and/or MSFC. Work progressed on all these tasks and is detailed. This work was done in collaboration with CIMSS NOAA/NESDIS scientists working on the operational winds software, so that NASA funded research can benefit NESDIS operational algorithms.

  3. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review

    PubMed Central

    Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.

    2015-01-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, either using substitution only (using fractals) and permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented where the permutation-phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided from short to long key to persist brute-force attacks. In addition, sensitivities of those different techniques to a one bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus many recent research with respect to the used generators, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
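    As an illustration of one permutation-only building block mentioned above (Arnold's cat map), here is a hedged sketch on an arbitrary square image; it is not any specific cipher from the survey.

```python
# Illustration: a permutation-only scrambling step built on Arnold's cat map.
import numpy as np

def arnold_cat_map(image, iterations=1):
    """Scramble an N x N image by gathering pixels through the cat-map index
    transform (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "square images only"
    out = image.copy()
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
scrambled = arnold_cat_map(img, iterations=5)
# The map is periodic, so applying enough further iterations recovers the image.
```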

  4. New clinical grading scales and objective measurement for conjunctival injection.

    PubMed

    Park, In Ki; Chun, Yeoun Sook; Kim, Kwang Gi; Yang, Hee Kyung; Hwang, Jeong-Min

    2013-08-05

    To establish a new clinical grading scale and objective measurement method to evaluate conjunctival injection. Photographs of conjunctival injection with variable ocular diseases in 429 eyes were reviewed. Seventy-three images with concordance by three ophthalmologists were classified into a 4-step and 10-step subjective grading scale, and used as standard photographs. Each image was quantified in four ways: the relative magnitude of the redness component of each red-green-blue (RGB) pixel; two different algorithms based on the area occupied by blood vessels (K-means clustering with LAB color model and contrast-limited adaptive histogram equalization [CLAHE] algorithm); and the presence of blood vessel edges, based on the Canny edge-detection algorithm. Areas under the receiver operating characteristic curves (AUCs) were calculated to summarize diagnostic accuracies of the four algorithms. The RGB color model, K-means clustering with LAB color model, and CLAHE algorithm showed good correlation with the clinical 10-step grading scale (R = 0.741, 0.784, 0.919, respectively) and with the clinical 4-step grading scale (R = 0.645, 0.702, 0.838, respectively). The CLAHE method showed the largest AUC, best distinction power (P < 0.001, ANOVA, Bonferroni multiple comparison test), and high reproducibility (R = 0.996). The CLAHE algorithm showed the best correlation with the 10-step and 4-step subjective clinical grading scales together with high distinction power and reproducibility. The CLAHE algorithm can be a useful method for assessment of conjunctival injection.
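    A rough sketch of how redness and vessel-area measures of this kind might be computed with OpenCV follows; the file name, parameters, and thresholding step are assumptions, not the published pipeline.

```python
# Sketch only: estimate relative redness and an approximate vessel-area
# fraction after CLAHE enhancement.  Input file and thresholds are placeholders.
import cv2
import numpy as np

bgr = cv2.imread("conjunctiva.png")                  # hypothetical input photo
b, g, r = [bgr[:, :, i].astype(np.float32) for i in range(3)]

# Relative redness: share of the red component per pixel, averaged.
redness = float(np.mean(r / (r + g + b + 1e-6)))

# CLAHE on the green channel (vessels appear dark there), then an Otsu
# threshold to approximate the area occupied by vessels.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(bgr[:, :, 1])                 # uint8 green channel
_, vessels = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
vessel_fraction = float(np.count_nonzero(vessels)) / vessels.size

print(redness, vessel_fraction)
```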

  5. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
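    To illustrate the kind of query GMT lets an analyst draw, the following uses NetworkX's exact VF2 subgraph matcher on an invented labeled graph; it is not the TruST heuristic.

```python
# Illustration only: exact subgraph-pattern search with NetworkX's VF2 matcher.
# The graph, node categories, and pattern are made up.
import networkx as nx
from networkx.algorithms import isomorphism

social = nx.Graph()
social.add_nodes_from([
    (1, {"category": "person"}), (2, {"category": "person"}),
    (3, {"category": "phone"}),  (4, {"category": "person"}),
])
social.add_edges_from([(1, 3), (2, 3), (2, 4)])

# Pattern: two people linked to the same phone.
pattern = nx.Graph()
pattern.add_nodes_from([
    ("p1", {"category": "person"}), ("p2", {"category": "person"}),
    ("ph", {"category": "phone"}),
])
pattern.add_edges_from([("p1", "ph"), ("p2", "ph")])

matcher = isomorphism.GraphMatcher(
    social, pattern, node_match=isomorphism.categorical_node_match("category", None)
)
for mapping in matcher.subgraph_isomorphisms_iter():   # host node -> pattern node
    print(mapping)
```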

  7. Opinion Summarization of Customer Comments

    NASA Astrophysics Data System (ADS)

    Fan, Miao; Wu, Guoshi

    Web 2.0 technologies have enabled more and more customers to freely comment on different kinds of entities, such as sellers, products and services. The large scale of information poses the need and challenge of automatic summarization. In many cases, each of the user-generated short comments implies the opinions which rate the target entity. In this paper, we aim to mine and to summarize all the customer comments of a product. The algorithm proposed in this research is more reliable for opinion identification because it is unsupervised and the accuracy of the result improves as the number of comments increases. Our research is performed in four steps: (1) mining the frequent aspects of a product that have been commented on by customers; (2) mining the infrequent aspects of a product which have been commented on by customers; (3) identifying opinion words in each comment and deciding whether each opinion word is positive, negative or neutral; (4) summarizing the comments. This paper proposes several novel techniques to perform these tasks. Our experimental results using comments of a number of products sold online demonstrate the effectiveness of the techniques.

  8. Summarizing Audiovisual Contents of a Video Program

    NASA Astrophysics Data System (ADS)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, seminars, etc., and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately, and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A Bipartite Graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.
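    A toy illustration of the bipartite assignment idea follows, using the Hungarian method from SciPy with invented sentence-to-shot affinity scores; the paper's own alignment algorithm is not reproduced here.

```python
# Illustration only: match audio-summary sentences to visual-summary shots by
# solving a maximum-weight bipartite assignment.  Affinity scores are made up,
# e.g. how well a shot shows the speaker of a sentence.
import numpy as np
from scipy.optimize import linear_sum_assignment

affinity = np.array([                 # rows: sentences, cols: shots
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.4, 0.7],
    [0.3, 0.5, 0.2],
])
rows, cols = linear_sum_assignment(-affinity)     # negate to maximize total affinity
for s, v in zip(rows, cols):
    print(f"sentence {s} -> shot {v} (score {affinity[s, v]:.2f})")
```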

  9. Framework for a space shuttle main engine health monitoring system

    NASA Technical Reports Server (NTRS)

    Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey

    1990-01-01

    A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.

  10. Contributed Review: Source-localization algorithms and applications using time of arrival and time difference of arrival measurements

    NASA Astrophysics Data System (ADS)

    Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.

    2016-04-01

    Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
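    A minimal sketch of hyperbolic (TDOA) position estimation by nonlinear least squares is shown below under an assumed receiver layout, noise level, and propagation speed; it is illustrative only and not taken from the review.

```python
# Sketch: estimate a 2-D source position from TDOA measurements at known
# receivers using nonlinear least squares.  All numbers are invented.
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                        # propagation speed, m/s
receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source_true = np.array([30.0, 70.0])

toa = np.linalg.norm(receivers - source_true, axis=1) / c
tdoa = toa[1:] - toa[0]                           # differences relative to receiver 0
tdoa += np.random.default_rng(0).normal(0.0, 1e-5, size=tdoa.shape)

def residuals(pos):
    ranges = np.linalg.norm(receivers - pos, axis=1)
    return (ranges[1:] - ranges[0]) / c - tdoa

fit = least_squares(residuals, x0=np.array([50.0, 50.0]))
print(fit.x)    # estimated source position, close to (30, 70)
```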

  11. Generating Global Leaf Area Index from Landsat: Algorithm Formulation and Demonstration

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Nemani, Ramakrishna R.; Zhang, Gong; Hashimoto, Hirofumi; Milesi, Cristina; Michaelis, Andrew; Wang, Weile; Votava, Petr; Samanta, Arindam; Melton, Forrest; hide

    2012-01-01

    This paper summarizes the implementation of a physically based algorithm for the retrieval of vegetation green Leaf Area Index (LAI) from Landsat surface reflectance data. The algorithm is based on the canopy spectral invariants theory and provides a computationally efficient way of parameterizing the Bidirectional Reflectance Factor (BRF) as a function of spatial resolution and wavelength. LAI retrievals from the application of this algorithm to aggregated Landsat surface reflectances are consistent with those of MODIS for homogeneous sites represented by different herbaceous and forest cover types. Example results illustrating the physics and performance of the algorithm suggest three key factors that influence the LAI retrieval process: 1) the atmospheric correction procedures to estimate surface reflectances; 2) the proximity of Landsat-observed surface reflectance and corresponding reflectances as characterized by the model simulation; and 3) the quality of the input land cover type in accurately delineating pure vegetated components as opposed to mixed pixels. Accounting for these factors, a pilot implementation of the LAI retrieval algorithm was demonstrated for the state of California utilizing the Global Land Survey (GLS) 2005 Landsat data archive. In a separate exercise, the performance of the LAI algorithm over California was evaluated by using the short-wave infrared band in addition to the red and near-infrared bands. Results show that the algorithm, while ingesting the short-wave infrared band, has the ability to delineate open canopies with understory effects and may provide useful information compared to a more traditional two-band retrieval. Future research will involve implementation of this algorithm at continental scales and a validation exercise will be performed in evaluating the accuracy of the 30-m LAI products at several field sites.

  12. Heading Toward Launch with the Integrated Multi-Satellite Retrievals for GPM (IMERG)

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.

    2012-01-01

    The Day-1 algorithm for computing combined precipitation estimates in GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). We plan for the period of record to encompass both the TRMM and GPM eras, and the coverage to extend to fully global as experience is gained in the difficult high-latitude environment. IMERG is being developed as a unified U.S. algorithm that takes advantage of strengths in the three groups that are contributing expertise: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA), which addresses inter-satellite calibration of precipitation estimates and monthly scale combination of satellite and gauge analyses; 2) the CPC Morphing algorithm with Kalman Filtering (KF-CMORPH), which provides quality-weighted time interpolation of precipitation patterns following cloud motion; and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS), which provides a neural-network-based scheme for generating microwave-calibrated precipitation estimates from geosynchronous infrared brightness temperatures. In this talk we summarize the major building blocks and important design issues driven by user needs and practical data issues. One concept being pioneered by the IMERG team is that the code system should produce estimates for the same time period but at different latencies to support the requirements of different groups of users. Another user requirement is that all these runs must be reprocessed as new IMERG versions are introduced. IMERG's status at meeting time will be summarized, and the processing scenario in the transition from TRMM to GPM will be laid out. Initially, IMERG will be run with TRMM-based calibration, and then a conversion to a GPM-based calibration will be employed after the GPM sensor products are validated. A complete reprocessing will be computed, which will complete the transition from TMPA.

  13. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or education value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
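    A hedged sketch of one such criterion, flagging blurry frames by the variance of the Laplacian, is given below; the threshold and file name are assumptions, and this is not the authors' full classifier.

```python
# Sketch only: drop blurry or "non-informative" bronchoscopy frames using a
# simple sharpness score.  Threshold would need tuning on real recordings.
import cv2

def is_informative(frame_bgr, sharpness_threshold=100.0):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= sharpness_threshold

def summarize(video_path="bronchoscopy.avi"):      # hypothetical input file
    capture = cv2.VideoCapture(video_path)
    kept = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if is_informative(frame):
            kept.append(frame)                      # keep this frame for the summary
    capture.release()
    return kept
```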

  14. Assessing semantic similarity of texts - Methods and algorithms

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining structured representative model of the documents in a corpus by means of extracting and selecting the features, characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at syntactical and semantic level. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space as well as similarity calculation are presented.
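    A minimal LSA sketch on toy documents follows: TF-IDF features are reduced with truncated SVD and documents are compared by cosine similarity in the latent space.

```python
# Minimal LSA sketch with assumed toy documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the student submitted the essay for grading",
    "the essay was graded automatically by the system",
    "rainfall statistics were collected by the weather station",
]

tfidf = TfidfVectorizer().fit_transform(docs)       # documents x terms
lsa = TruncatedSVD(n_components=2, random_state=0)
latent = lsa.fit_transform(tfidf)                   # documents x latent concepts

print(cosine_similarity(latent))                    # pairwise semantic similarity
```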

  15. Theoretical and experimental study on near infrared time-resolved optical diffuse tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Huijuan; Gao, Feng; Tanikawa, Yukari; Yamada, Yukio

    2006-08-01

    Parts of the work of our group over the past five years on near-infrared time-resolved (TR) optical tomography are summarized in this paper. The image reconstruction algorithm is based on a Newton-Raphson scheme with a datatype R generated from the modified Generalized Pulse Spectrum Technique. First, the algorithm is evaluated with simulated data from a 2-D model, and the datatype R is compared with other popularly used datatypes. In the second part of the paper, in vitro and in vivo NIR DOT imaging on a chicken leg and a human forearm, respectively, are presented for evaluating both the image reconstruction algorithm and the TR measurement system. The third part of this paper concerns the differential pathlength factor (DPF) of the human head while monitoring head activity with NIRS and applying the modified Lambert-Beer law. Benefiting from the TR system, the measured DPF maps of three important areas of the human head are presented.

  16. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1993-01-01

    Research activities performed during the period of 29 June 1993 through 31 August 1993 are summarized. The robust stability of systems whose transfer functions or characteristic polynomials are multilinear affine functions of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach that reduces the computational burden of checking the robust stability of systems with multilinear uncertainty was found. This technique is called 'stability by linear process,' and the description given here amounts to an algorithm. In analysis, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, and we also obtained a result for the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design and provide a framework for solving ACS. Finally, copies of the outline of our results are provided in the appendix, along with an administrative item.

  17. Performance and Accuracy of LAPACK's Symmetric TridiagonalEigensolvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n ε). The accuracy of BI and MR is generally O(n ε). (5) MR is preferable to BI for subset computations.
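    For readers wanting to experiment, SciPy exposes LAPACK's symmetric tridiagonal eigensolvers, including the subset computations discussed; a small example with an assumed 1-D Laplacian matrix follows.

```python
# Example with an invented test matrix; eigh_tridiagonal wraps LAPACK's
# symmetric tridiagonal eigensolvers and supports subset computations.
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 1000
d = np.full(n, 2.0)            # diagonal
e = np.full(n - 1, -1.0)       # off-diagonal (classic 1-D Laplacian matrix)

# All eigenpairs.
w_all, v_all = eigh_tridiagonal(d, e)

# Subset computation: only the 10 smallest eigenpairs, selected by index.
w_low, v_low = eigh_tridiagonal(d, e, select="i", select_range=(0, 9))
print(w_all[:3], w_low[:3])
```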

  18. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  19. A graphical automated detection system to locate hardwood log surface defects using high-resolution three-dimensional laser scan data

    Treesearch

    Liya Thomas; R. Edward Thomas

    2011-01-01

    We have developed an automated defect detection system and a state-of-the-art Graphic User Interface (GUI) for hardwood logs. The algorithm identifies defects at least 0.5 inch high and at least 3 inches in diameter on barked hardwood log and stem surfaces. To summarize defect features and to build a knowledge base, hundreds of defects were measured, photographed, and...

  20. SEASAT A satellite scatterometer

    NASA Technical Reports Server (NTRS)

    Bianchi, R.; Heath, A.; Marsh, S.; Borusiewicz, J.

    1978-01-01

    The analyses performed in the early period of the program which formed the basis of the sensor design is reviewed, along with the sensor design. The test program is outlined, listing all tests performed and the environmental exposure (simulated) for each, as applicable. Ground support equipment designed and built for assembly integration and field testing is described. The software developed during the program and the algorithms/flow diagrams which formed the bases for the software are summarized.

  1. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students that relates to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.

  2. Automating the design of scientific computing software

    NASA Technical Reports Server (NTRS)

    Kant, Elaine

    1992-01-01

    SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.

  3. China Report, Science and Technology, No. 188

    DTIC Science & Technology

    1983-02-18

    ... the model-oriented algorithm and, especially, this system does not need the time-consuming work of mathematical statistics during summarization of the ... the fact that the users can see the samples at the exhibit, several times the number of expected orders have been taken to demonstrate that a ...

  4. Understanding of the Cyber Security and the Development of CAPTCHA

    NASA Astrophysics Data System (ADS)

    Yang, Yu

    2018-04-01

    CAPTCHA is the abbreviation of "Completely Automated Public Turing Test to Tell Computers and Humans Apart", a program algorithm for distinguishing between computers and humans. It is able to generate and evaluate tests that are easy for humans to pass yet not possible for computers to solve. Common CAPTCHAs generally contain symbols, text, pictures, and even videos, and are mainly used for human-computer verification. With the popularization of the Internet and its related applications, many malicious attacks against websites, systems, and servers have gradually appeared. Therefore, research on CAPTCHA is especially important. This article briefly summarizes and introduces existing CAPTCHA technology and summarizes the common problems of network attacks and information security. After listing the common types of CAPTCHA, it finally proposes feasible suggestions for the development of CAPTCHA.

  5. Developing Wide-Field Spatio-Spectral Interferometry for Far-Infrared Space Applications

    NASA Technical Reports Server (NTRS)

    Leisawitz, David; Bolcar, Matthew R.; Lyon, Richard G.; Maher, Stephen F.; Memarsadeghi, Nargess; Rinehart, Stephen A.; Sinukoff, Evan J.

    2012-01-01

    Interferometry is an affordable way to bring the benefits of high resolution to space far-IR astrophysics. We summarize an ongoing effort to develop and learn the practical limitations of an interferometric technique that will enable the acquisition of high-resolution far-IR integral field spectroscopic data with a single instrument in a future space-based interferometer. This technique was central to the Space Infrared Interferometric Telescope (SPIRIT) and Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) space mission design concepts, and it will first be used on the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). Our experimental approach combines data from a laboratory optical interferometer (the Wide-field Imaging Interferometry Testbed, WIIT), computational optical system modeling, and spatio-spectral synthesis algorithm development. We summarize recent experimental results and future plans.

  6. Numerical method for predicting flow characteristics and performance of nonaxisymmetric nozzles. Part 2: Applications

    NASA Technical Reports Server (NTRS)

    Thomas, P. D.

    1980-01-01

    A computer implemented numerical method for predicting the flow in and about an isolated three dimensional jet exhaust nozzle is summarized. The approach is based on an implicit numerical method to solve the unsteady Navier-Stokes equations in a boundary conforming curvilinear coordinate system. Recent improvements to the original numerical algorithm are summarized. Equations are given for evaluating nozzle thrust and discharge coefficient in terms of computed flowfield data. The final formulation of models that are used to simulate flow turbulence effect is presented. Results are presented from numerical experiments to explore the effect of various quantities on the rate of convergence to steady state and on the final flowfield solution. Detailed flowfield predictions for several two and three dimensional nozzle configurations are presented and compared with wind tunnel experimental data.

  7. Summary on several key techniques in 3D geological modeling.

    PubMed

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
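    A small sketch of one of the listed techniques, spatial interpolation of scattered interface picks onto a regular grid, is given below; the data points are invented.

```python
# Sketch of spatial interpolation: turn scattered elevation picks of a
# geological interface (e.g. from boreholes) into a gridded surface.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 1000.0, size=(50, 2))          # scattered (x, y) sample points
z = 200.0 - 0.05 * xy[:, 0] + 5.0 * np.sin(xy[:, 1] / 100.0)   # interface elevation

gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
surface = griddata(xy, z, (gx, gy), method="cubic")  # gridded geological surface

print(np.nanmin(surface), np.nanmax(surface))
```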

  8. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  9. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.

  10. The Ensemble Kalman filter: a signal processing perspective

    NASA Astrophysics Data System (ADS)

    Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik

    2017-12-01

    The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions in the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are provided, as well as relations to sigma point KF and particle filters. The relevant EnKF literature is summarized in an extensive survey and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and high-dimensional nonlinear and non-Gaussian filtering in general.
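    A compact sketch of the stochastic EnKF analysis step in Kalman filter notation follows; the linear observation model and noise levels are assumed for illustration only.

```python
# Sketch of the stochastic EnKF analysis (measurement update) step.
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """ensemble: (n_states, n_members); y: observation; H: obs. matrix; R: obs. cov."""
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    # Sample covariances and Kalman gain (no explicit n x n state covariance needed).
    Pyy = (HXp @ HXp.T) / (N - 1) + R
    Pxy = (X @ HXp.T) / (N - 1)
    K = Pxy @ np.linalg.inv(Pyy)
    # Perturbed observations, one per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(3)
ens = rng.normal(loc=1.0, scale=0.5, size=(2, 100))   # prior ensemble, 2 states
H = np.array([[1.0, 0.0]])                            # observe the first state only
R = np.array([[0.01]])
posterior = enkf_analysis(ens, y=np.array([1.3]), H=H, R=R, rng=rng)
print(posterior.mean(axis=1))
```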

  11. Plan of Action for Inherited Cardiovascular Diseases: Synthesis of Recommendations and Action Algorithms.

    PubMed

    Barriales-Villa, Roberto; Gimeno-Blanes, Juan Ramón; Zorio-Grima, Esther; Ripoll-Vera, Tomás; Evangelista-Masip, Artur; Moya-Mitjans, Angel; Serratosa-Fernández, Luis; Albert-Brotons, Dimpna C; García-Pinilla, José Manuel; García-Pavía, Pablo

    2016-03-01

    The term inherited cardiovascular disease encompasses a group of cardiovascular diseases (cardiomyopathies, channelopathies, certain aortic diseases, and other syndromes) with a number of common characteristics: they have a genetic basis, a familial presentation, a heterogeneous clinical course, and, finally, can all be associated with sudden cardiac death. The present document summarizes some important concepts related to recent advances in sequencing techniques and understanding of the genetic bases of these diseases. We propose diagnostic algorithms and clinical practice recommendations and discuss controversial aspects of current clinical interest. We highlight the role of multidisciplinary referral units in the diagnosis and treatment of these conditions. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  12. A survey of artificial immune system based intrusion detection.

    PubMed

    Yang, Hua; Li, Tao; Hu, Xinlei; Wang, Feng; Zou, Yang

    2014-01-01

    In the area of computer security, Intrusion Detection (ID) is a mechanism that attempts to discover abnormal access to computers by analyzing various interactions. There is a lot of literature about ID, but this study only surveys the approaches based on Artificial Immune System (AIS). The use of AIS in ID is an appealing concept in current techniques. This paper summarizes AIS based ID methods from a new view point; moreover, a framework is proposed for the design of AIS based ID Systems (IDSs). This framework is analyzed and discussed based on three core aspects: antibody/antigen encoding, generation algorithm, and evolution mode. Then we collate the commonly used algorithms, their implementation characteristics, and the development of IDSs into this framework. Finally, some of the future challenges in this area are also highlighted.
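    As one concrete instance of a generation algorithm within such a framework, here is a toy negative-selection sketch with invented "self" data; it is not a specific IDS from the survey.

```python
# Toy negative-selection sketch: detectors are random vectors kept only if
# they do not match normal ("self") behaviour, then used to flag anomalies.
import numpy as np

def train_detectors(self_samples, n_detectors, self_radius, rng):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.uniform(0.0, 1.0, size=self_samples.shape[1])
        # Reject candidates that lie too close to any self sample.
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_intrusion(sample, detectors, detect_radius):
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= detect_radius))

rng = np.random.default_rng(4)
normal = rng.uniform(0.0, 0.3, size=(200, 4))          # hypothetical "self" traffic features
det = train_detectors(normal, n_detectors=50, self_radius=0.15, rng=rng)
print(is_intrusion(np.array([0.9, 0.8, 0.9, 0.7]), det, detect_radius=0.3))
```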

  13. Modeling and predicting abstract concept or idea introduction and propagation through geopolitical groups

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact throughout 5 demographic groups, we identify 13 cost of living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects of each group to this idea and how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.

  14. The Proteus Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily-modified, and well-documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.

  15. Applying MDA to SDR for Space to Model Real-time Issues

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2007-01-01

    NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.

  16. Preventing Exertional Death in Military Trainees: Recommendations and Treatment Algorithms From a Multidisciplinary Working Group.

    PubMed

    Webber, Bryant J; Casa, Douglas J; Beutler, Anthony I; Nye, Nathaniel S; Trueblood, Wesley E; O'Connor, Francis G

    2016-04-01

    Despite aggressive prevention programs and strategies, nontraumatic exertional sudden death events in military training continue to prove a difficult challenge for the Department of Defense. In November 2014, the 559th Medical Group at Joint Base San Antonio-Lackland, Texas, hosted a working group on sudden exertional death in military training. Their objectives were three-fold: (1) determine best practices to prevent sudden exertional death of military trainees, (2) determine best practices to establish safe and ethical training environments for military trainees with sickle cell trait, and (3) develop field-ready algorithms for managing military trainees who collapse during exertion. This article summarizes the major findings and recommendations of the working group. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  17. Tree tensor network approach to simulating Shor's algorithm

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Eugene

    2017-12-01

    Constructively simulating quantum systems furthers our understanding of qualitative and quantitative features which may be analytically intractable. In this paper, we directly simulate and explore the entanglement structure present in the paradigmatic example for exponential quantum speedups: Shor's algorithm. To perform our simulation, we construct a dynamic tree tensor network which manifestly captures two salient circuit features for modular exponentiation. These are the natural two-register bipartition and the invariance of entanglement with respect to permutations of the top-register qubits. Our construction helps identify the entanglement entropy properties, which we summarize by a scaling relation. Further, the tree network is efficiently projected onto a matrix product state from which we efficiently execute the quantum Fourier transform. Future simulation of quantum information states with tensor networks exploiting circuit symmetries is discussed.

  18. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring for a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs of alternate OSP algorithms, down-selecting the most relevant ones, and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem in which the key requirements of precision and reliability are imposed as constraints, and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem the computational efficiency largely depends on the "size" of the problem. Initially, a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design, in which a designer can explore various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©. The tool also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© which can be used for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
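
    To make the idea of sensor placement as a constrained combinatorial problem concrete, the sketch below enumerates candidate sensor subsets and picks the cheapest one whose fused measurement variance meets a precision target. It is only a toy stand-in for the Kalman-filter-based LMI/INLP formulations described in the report: the candidate sensors, their costs, the variance values, and the parallel-combination fusion rule are all assumptions made for illustration.

      from itertools import combinations

      # Hypothetical candidate sensors: (name, cost, variance of its measurement).
      CANDIDATES = [("TC_top", 1.0, 4.0), ("TC_mid", 1.0, 3.0),
                    ("TC_bot", 1.0, 5.0), ("optical", 4.0, 1.0)]

      def combined_variance(subset):
          # Toy fusion model: independent sensors combine like parallel resistances.
          return 1.0 / sum(1.0 / var for _, _, var in subset)

      def optimal_placement(max_variance):
          # Cheapest subset whose fused variance meets the precision constraint.
          best = None
          for r in range(1, len(CANDIDATES) + 1):
              for subset in combinations(CANDIDATES, r):
                  if combined_variance(subset) <= max_variance:
                      cost = sum(c for _, c, _ in subset)
                      if best is None or cost < best[0]:
                          best = (cost, [name for name, _, _ in subset])
          return best

      print(optimal_placement(max_variance=1.5))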

  19. Machine-Learning Algorithms to Code Public Health Spending Accounts

    PubMed Central

    Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David

    2017-01-01

    Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation. PMID:28363034
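
    The consensus-ensembling rule reported above (accept a label only when a minimum number of algorithms agree, otherwise leave the record unclassified) can be sketched in a few lines. The nine classifier outputs below are mocked, since the trained models and the expenditure records themselves are not reproduced here.

      from collections import Counter

      def consensus_label(predictions, min_agreement=6):
          # Return the majority label if at least `min_agreement` classifiers
          # agree on it; otherwise return None (the record stays unclassified).
          label, votes = Counter(predictions).most_common(1)[0]
          return label if votes >= min_agreement else None

      # Hypothetical predictions from nine classifiers for one expenditure record.
      preds = ["public health", "public health", "not public health",
               "public health", "public health", "public health",
               "maybe public health", "public health", "public health"]
      print(consensus_label(preds))        # -> "public health" (7 of 9 agree)
      print(consensus_label(preds, 8))     # -> None (unclassified at a stricter cutoff)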

  20. Dynamic vehicle routing with time windows in theory and practice.

    PubMed

    Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael

    2017-01-01

    The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications often the demands change during operation time. New orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamical vehicle routing address this problem but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high performing variant is identified. Finally, the algorithm is tested in situ: In a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and we summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm can get a much better solution on the academic benchmark problem and also can be integrated in a real-world environment.
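
    A core subproblem in this kind of dynamic routing is integrating a newly revealed order into an existing schedule without violating time windows. The sketch below shows a cheapest-feasible-insertion step under assumed Euclidean travel times and hypothetical customer data; the actual system described above uses a multiple ant colony algorithm with local search rather than this simple heuristic.

      import math

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def route_feasible(route, coords, windows, service=0.0):
          # Check time-window feasibility along the route, waiting when early.
          t, prev = 0.0, route[0]
          for stop in route[1:]:
              t += dist(coords[prev], coords[stop])
              early, late = windows[stop]
              if t > late:
                  return False
              t = max(t, early) + service
              prev = stop
          return True

      def cheapest_insertion(route, new, coords, windows):
          # Try every insertion point for the new order; keep the feasible one
          # with the smallest added travel distance (None if all are infeasible).
          best = None
          for i in range(1, len(route) + 1):
              cand = route[:i] + [new] + route[i:]
              if route_feasible(cand, coords, windows):
                  extra = (dist(coords[route[i - 1]], coords[new])
                           + (dist(coords[new], coords[route[i]]) if i < len(route) else 0)
                           - (dist(coords[route[i - 1]], coords[route[i]]) if i < len(route) else 0))
                  if best is None or extra < best[0]:
                      best = (extra, cand)
          return best

      # Hypothetical depot (0) and customers with (x, y) coordinates and time windows.
      coords = {0: (0, 0), 1: (2, 1), 2: (4, 0), 3: (1, 3)}
      windows = {0: (0, 100), 1: (0, 10), 2: (0, 20), 3: (2, 15)}
      print(cheapest_insertion([0, 1, 2], new=3, coords=coords, windows=windows))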

  1. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    PubMed

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.

  2. Linear SFM: A hierarchical approach to solving structure-from-motion problems by decoupling the linear and nonlinear components

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini

    2018-07-01

    This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.
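
    The joining step described above relies on combining local reconstructions together with their information matrices through a linear least-squares solve. The sketch below shows information-form fusion of two Gaussian estimates of the same 3-D point, assuming both are already expressed in a common frame; the numbers are hypothetical, and the full algorithm additionally applies a nonlinear frame transform that is omitted here.

      import numpy as np

      def fuse(estimates):
          # Information-form fusion of Gaussian estimates of the same quantity:
          #   I = sum(I_k),  x = I^-1 @ sum(I_k @ x_k)   (a linear least-squares solve)
          dim = estimates[0][0].shape[0]
          info = np.zeros((dim, dim))
          vec = np.zeros(dim)
          for x, cov in estimates:
              I_k = np.linalg.inv(cov)
              info += I_k
              vec += I_k @ x
          return np.linalg.solve(info, vec), np.linalg.inv(info)

      # Hypothetical: the same 3-D point reconstructed in two local maps (already in
      # a common frame), each with its own uncertainty.
      est_a = (np.array([1.02, 0.98, 5.1]), np.diag([0.04, 0.04, 0.25]))
      est_b = (np.array([0.97, 1.01, 4.9]), np.diag([0.09, 0.09, 0.16]))
      x, cov = fuse([est_a, est_b])
      print(np.round(x, 3))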

  3. A systematic review of validated methods for identifying erythema multiforme major/minor/not otherwise specified, Stevens-Johnson Syndrome, or toxic epidermal necrolysis using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of erythema multiforme and related conditions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the erythema multiforme HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify erythema multiforme, Stevens-Johnson syndrome, or toxic epidermal necrolysis and that included validation estimates of the coding algorithms. Our search revealed limited literature focusing on erythema multiforme and related conditions that provided administrative and claims data-based algorithms and validation estimates. Only four studies provided validated algorithms and all studies used the same International Classification of Diseases code, 695.1. Approximately half of cases subjected to expert review were consistent with erythema multiforme and related conditions. Updated research needs to be conducted on designing validation studies that test algorithms for erythema multiforme and related conditions and that take into account recent changes in the diagnostic coding of these diseases. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Parameter identification for nonlinear aerodynamic systems

    NASA Technical Reports Server (NTRS)

    Pearson, Allan E.

    1991-01-01

    Work continues on frequency analysis for transfer function identification, both with respect to the continued development of the underlying algorithms and in the identification study of two physical systems. Some new results of a theoretical nature were recently obtained that lend further insight into the frequency domain interpretation of the research. Progress in each of those areas is summarized. Although not related to the system identification problem, some new results were obtained on the feedback stabilization of linear time lag systems.

  5. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of the flow field. The progress realized in this study is summarized in this paper.

  6. Boundaries and Topological Algorithms

    DTIC Science & Technology

    1988-08-01

    phrases using "for" to measure an amount of time, as illustrated by Sentences 9-11: (9) The aide shredded incriminating documents for several minutes. (10) ...Bonnie passed her area exam for several minutes. (11) #Eric made a fresh pot of coffee for several minutes. ... State changes are distinguished from ... data are summarized by Poggio and Poggio (1984). Measured values for Panum's area seem to be approximately ±10 minutes of arc in both the

  7. Systematic generation of multibody equations of motion suitable for recursive and parallel manipulation

    NASA Technical Reports Server (NTRS)

    Nikravesh, Parviz E.; Gim, Gwanghum; Arabyan, Ara; Rein, Udo

    1989-01-01

    The formulation of a method known as the joint coordinate method for automatic generation of the equations of motion for multibody systems is summarized. For systems containing open or closed kinematic loops, the equations of motion can be reduced systematically to a minimum number of second order differential equations. The application of recursive and nonrecursive algorithms to this formulation, computational considerations and the feasibility of implementing this formulation on multiprocessor computers are discussed.

  8. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  9. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  10. A systematic review of validated methods for identifying pulmonary fibrosis and interstitial lung disease using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Review and Analysis of Algorithmic Approaches Developed for Prognostics on CMAPSS Dataset

    NASA Technical Reports Server (NTRS)

    Ramasso, Emannuel; Saxena, Abhinav

    2014-01-01

    Benchmarking of prognostic algorithms has been challenging due to limited availability of common datasets suitable for prognostics. In an attempt to alleviate this problem several benchmarking datasets have been collected by NASA's prognostic center of excellence and made available to the Prognostics and Health Management (PHM) community to allow evaluation and comparison of prognostics algorithms. Among those datasets are five C-MAPSS datasets that have been extremely popular due to their unique characteristics making them suitable for prognostics. The C-MAPSS datasets pose several challenges that have been tackled by different methods in the PHM literature. In particular, management of high variability due to sensor noise, effects of operating conditions, and presence of multiple simultaneous fault modes are some factors that have great impact on the generalization capabilities of prognostics algorithms. More than 70 publications have used the C-MAPSS datasets for developing data-driven prognostic algorithms. The C-MAPSS datasets are also shown to be well-suited for development of new machine learning and pattern recognition tools for several key preprocessing steps such as feature extraction and selection, failure mode assessment, operating conditions assessment, health status estimation, uncertainty management, and prognostics performance evaluation. This paper summarizes a comprehensive literature review of publications using C-MAPSS datasets and provides guidelines and references to further usage of these datasets in a manner that allows clear and consistent comparison between different approaches.

  12. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
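
    The key-frame selection idea above, ranking frames by the change in salient content between consecutive frames, can be sketched with a crude saliency proxy. The mean-absolute-gradient feature, the synthetic frames, and the top-k selection below are assumptions for illustration; the paper's actual saliency maps combine intensity, color, and temporal-contrast channels.

      import numpy as np

      def frame_saliency(frame):
          # Crude proxy for a saliency map: magnitude of the spatial gradient.
          gy, gx = np.gradient(frame.astype(float))
          return np.abs(gx) + np.abs(gy)

      def key_frames(frames, top_k=3):
          # Rank frames by the change in salient content vs. the previous frame.
          variations = []
          for i in range(1, len(frames)):
              diff = np.abs(frame_saliency(frames[i]) - frame_saliency(frames[i - 1]))
              variations.append((diff.mean(), i))
          return sorted(i for _, i in sorted(variations, reverse=True)[:top_k])

      # Hypothetical "video": random grayscale frames with an abrupt change at frame 5.
      rng = np.random.default_rng(0)
      frames = [rng.integers(0, 50, (48, 64)) for _ in range(5)]
      frames += [rng.integers(100, 255, (48, 64)) for _ in range(5)]
      print(key_frames(frames, top_k=2))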

  13. Prognostic Physiology: Modeling Patient Severity in Intensive Care Units Using Radial Domain Folding

    PubMed Central

    Joshi, Rohit; Szolovits, Peter

    2012-01-01

    Real-time scalable predictive algorithms that can mine big health data as the care is happening can become the new “medical tests” in critical care. This work describes a new unsupervised learning approach, radial domain folding, to scale and summarize the enormous amount of data collected and to visualize the degradations or improvements in multiple organ systems in real time. Our proposed system is based on learning multi-layer lower dimensional abstractions from routinely generated patient data in modern Intensive Care Units (ICUs), and is dramatically different from most of the current work being done in ICU data mining that rely on building supervised predictive models using commonly measured clinical observations. We demonstrate that our system discovers abstract patient states that summarize a patient’s physiology. Further, we show that a logistic regression model trained exclusively on our learned layer outperforms a customized SAPS II score on the mortality prediction task. PMID:23304406

  14. Summary on Several Key Techniques in 3D Geological Modeling

    PubMed Central

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029
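
    Of the key techniques listed above, spatial interpolation of scattered interface picks onto a planar mesh is the easiest to illustrate compactly. The sketch below uses inverse-distance weighting on hypothetical borehole data; real workflows commonly use kriging or other interpolators, so this is only one possible choice.

      import numpy as np

      def idw_surface(points, values, grid_x, grid_y, power=2.0, eps=1e-12):
          # Inverse-distance-weighted elevation of a geological interface on a grid.
          surface = np.empty((len(grid_y), len(grid_x)))
          for j, y in enumerate(grid_y):
              for i, x in enumerate(grid_x):
                  d = np.hypot(points[:, 0] - x, points[:, 1] - y) + eps
                  w = 1.0 / d**power
                  surface[j, i] = np.sum(w * values) / np.sum(w)
          return surface

      # Hypothetical borehole picks of one interface: (x, y) positions and elevation z.
      pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
      z = np.array([-100.0, -110.0, -95.0, -105.0, -102.0])
      gx = np.linspace(0, 10, 5)
      gy = np.linspace(0, 10, 5)
      print(np.round(idw_surface(pts, z, gx, gy), 1))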

  15. Abstract Representations of Object-Directed Action in the Left Inferior Parietal Lobule.

    PubMed

    Chen, Quanjing; Garcea, Frank E; Jacobs, Robert A; Mahon, Bradford Z

    2018-06-01

    Prior neuroimaging and neuropsychological research indicates that the left inferior parietal lobule in the human brain is a critical substrate for representing object manipulation knowledge. In the present functional MRI study we used multivoxel pattern analyses to test whether action similarity among objects can be decoded in the inferior parietal lobule independent of the task applied to objects (identification or pantomime) and stimulus format in which stimuli are presented (pictures or printed words). Participants pantomimed the use of objects, cued by printed words, or identified pictures of objects. Classifiers were trained and tested across task (e.g., training data: pantomime; testing data: identification), stimulus format (e.g., training data: word format; testing format: picture) and specific objects (e.g., training data: scissors vs. corkscrew; testing data: pliers vs. screwdriver). The only brain region in which action relations among objects could be decoded across task, stimulus format and objects was the inferior parietal lobule. By contrast, medial aspects of the ventral surface of the left temporal lobe represented object function, albeit not at the same level of abstractness as actions in the inferior parietal lobule. These results suggest compulsory access to abstract action information in the inferior parietal lobe even when simply identifying objects.

  16. Prototype to measure bracket debonding force in vivo.

    PubMed

    Tonus, Jéssika Lagni; Manfroi, Fernanda Borguetti; Borges, Gilberto Antonio; Grigolo, Eduardo Correa; Helegda, Sérgio; Spohr, Ana Maria

    2017-02-01

    Material biodegradation that occurs in the mouth may interfere with the bond strength between the bracket and the enamel, causing lower bond strength values in vivo in comparison with in vitro studies. The aim was to develop a prototype to measure bracket debonding force in vivo and to evaluate, in vitro, the bond strength obtained with the prototype. An original plier (3M Unitek) was modified by adding a strain gauge directly connected to its claw. An electronic circuit performed the reading of the strain gauge, and software installed on a computer recorded the values of the bracket debonding force, in kgf. Orthodontic brackets were bonded to the facial surface of 30 bovine incisors with adhesive materials. In Group 1 (n = 15), debonding was carried out with the prototype, while tensile bond strength testing was performed in Group 2 (n = 15). A universal testing machine was used for the second group. The adhesive remnant index (ARI) was recorded. According to Student's t test (α = 0.05), Group 1 (2.96 MPa) and Group 2 (3.08 MPa) were not significantly different. An ARI score of 3 was predominant in both groups. The prototype proved to be reliable for obtaining in vivo bond strength values for orthodontic brackets.

  17. Towards feasible and effective predictive wavefront control for adaptive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A; Veran, J

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over one year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.

  18. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  19. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel II. Distribution functions and moments.

    PubMed

    Langenbucher, Frieder

    2003-01-01

    MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.
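
    As a worked illustration of fitting the Weibull distribution to a cumulative release profile, the sketch below performs the same kind of parameter estimation in Python rather than Excel, assuming NumPy and SciPy are available; the dissolution data points are hypothetical. The mean dissolution time, a summarizing moment, follows from the fitted parameters as Td*Gamma(1 + 1/b).

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma

      def weibull_release(t, f_max, td, b):
          # Cumulative Weibull profile: F(t) = Fmax * (1 - exp(-(t/Td)**b)).
          return f_max * (1.0 - np.exp(-(t / td) ** b))

      # Hypothetical in vitro dissolution data: time (h) vs. cumulative % released.
      t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
      released = np.array([12.0, 22.0, 38.0, 60.0, 81.0, 95.0, 98.0])

      params, _ = curve_fit(weibull_release, t, released, p0=[100.0, 2.0, 1.0])
      f_max, td, b = params
      print(f"Fmax={f_max:.1f}%, Td={td:.2f} h, b={b:.2f}")

      # Mean dissolution time for the fitted Weibull profile.
      print(f"MDT ~ {td * gamma(1 + 1.0 / b):.2f} h")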

  20. NCCN Guidelines Insights: Head and Neck Cancers, Version 1.2018.

    PubMed

    Colevas, A Dimitrios; Yom, Sue S; Pfister, David G; Spencer, Sharon; Adelstein, David; Adkins, Douglas; Brizel, David M; Burtness, Barbara; Busse, Paul M; Caudell, Jimmy J; Cmelak, Anthony J; Eisele, David W; Fenton, Moon; Foote, Robert L; Gilbert, Jill; Gillison, Maura L; Haddad, Robert I; Hicks, Wesley L; Hitchcock, Ying J; Jimeno, Antonio; Leizman, Debra; Maghami, Ellie; Mell, Loren K; Mittal, Bharat B; Pinto, Harlan A; Ridge, John A; Rocco, James; Rodriguez, Cristina P; Shah, Jatin P; Weber, Randal S; Witek, Matthew; Worden, Frank; Zhen, Weining; Burns, Jennifer L; Darlow, Susan D

    2018-05-01

    The NCCN Guidelines for Head and Neck (H&N) Cancers provide treatment recommendations for cancers of the lip, oral cavity, pharynx, larynx, ethmoid and maxillary sinuses, and salivary glands. Recommendations are also provided for occult primary of the H&N, and separate algorithms have been developed by the panel for very advanced H&N cancers. These NCCN Guidelines Insights summarize the panel's discussion and most recent recommendations regarding evaluation and treatment of nasopharyngeal carcinoma. Copyright © 2018 by the National Comprehensive Cancer Network.

  1. Modeling and predicting community responses to events using cultural demographics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.

    2007-04-01

    This paper describes a novel capability for modeling and predicting community responses to events (specifically military operations) related to demographics. Demographics in the form of words and/or numbers are used. As an example, State of Alabama annual demographic data for retail sales, auto registration, wholesale trade, shopping goods, and population were used; from which we determined a ranked estimate of the sensitivity of the demographic parameters on the cultural group response. Our algorithm and results are summarized in this paper.

  2. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of Delta project (which objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target toward different machine architectures) is summarized. Included are the surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  3. Static Analysis Using Abstract Interpretation

    NASA Technical Reports Server (NTRS)

    Arthaud, Maxime

    2017-01-01

    Short presentation about static analysis and most particularly abstract interpretation. It starts with a brief explanation on why static analysis is used at NASA. Then, it describes the IKOS (Inference Kernel for Open Static Analyzers) tool chain. Results on NASA projects are shown. Several well known algorithms from the static analysis literature are then explained (such as pointer analyses, memory analyses, weak relational abstract domains, function summarization, etc.). It ends with interesting problems we encountered (such as C++ analysis with exception handling, or the detection of integer overflow).

  4. Semantic Mapping and Motion Planning with Turtlebot Roomba

    NASA Astrophysics Data System (ADS)

    Aslam Butt, Rizwan; Usman Ali, Syed M.

    2013-12-01

    In this paper, we have successfully demonstrated semantic mapping and motion planning experiments on the Turtlebot robot using Microsoft Kinect in the ROS environment. Moreover, we have performed comparative studies of various sampling-based motion planning algorithms with the Turtlebot in the Open Motion Planning Library. Our comparative analysis revealed that Expansive Space Trees (EST) outperformed all other approaches with respect to memory occupation and processing time. We also summarize related concepts of autonomous robotics, which we hope will be helpful for beginners.

  5. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging, and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges, and possible future research directions are discussed.

  6. Research in applied mathematics, numerical analysis, and computer science

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  7. Integration of Computational Geometry, Finite Element, and Multibody System Algorithms for the Development of New Computational Methodology for High-Fidelity Vehicle Systems Modeling and Simulation

    DTIC Science & Technology

    2013-04-11

    vehicle dynamics. ... Technical Representative: Dr. Paramsothy Jayakumar, TARDEC; Computational Dynamics Inc. Project Summary: This project aims at addressing and ... applications. This literature review is being summarized and incorporated into the paper. The commentary provided by Dr. Jayakumar was addressed and ...

  8. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some open problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  9. Collaborative Policies and Assured Information Sharing

    DTIC Science & Technology

    2013-09-12

    behavior (e.g., HHS audits, data breach disclosure laws). We designed models and algorithms for risk management in healthcare organizations in settings ... and data breach notification laws. A specific result published at IJCAI 2013 is summarized below: Effective enforcement of laws and policies requires ...

  10. A Survey of Artificial Immune System Based Intrusion Detection

    PubMed Central

    Li, Tao; Hu, Xinlei; Wang, Feng; Zou, Yang

    2014-01-01

    In the area of computer security, Intrusion Detection (ID) is a mechanism that attempts to discover abnormal access to computers by analyzing various interactions. There is a lot of literature about ID, but this study only surveys the approaches based on Artificial Immune System (AIS). The use of AIS in ID is an appealing concept in current techniques. This paper summarizes AIS based ID methods from a new view point; moreover, a framework is proposed for the design of AIS based ID Systems (IDSs). This framework is analyzed and discussed based on three core aspects: antibody/antigen encoding, generation algorithm, and evolution mode. Then we collate the commonly used algorithms, their implementation characteristics, and the development of IDSs into this framework. Finally, some of the future challenges in this area are also highlighted. PMID:24790549

  11. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM) which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress in the application or verifications phase was achieved. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module which incorporates an expert system shell for the implementation of a rule-based interface utilizing the experience and expertise of the user community. The Fast Probability Integration (FPI) Algorithm continues to demonstrate outstanding performance characteristics for the integration of probability density functions for multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.

  12. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation

    PubMed Central

    Masmoudi, Mohamed Slim; Masmoudi, Mohamed

    2016-01-01

    This paper describes the design and implementation of a trajectory tracking controller using fuzzy logic for a mobile robot navigating in indoor environments. Most previous works used two independent controllers for navigation and obstacle avoidance. The main contribution of this paper can be summarized in the fact that we use only one fuzzy controller for both navigation and obstacle avoidance. The mobile robot used is equipped with a DC motor, nine infrared range (IR) sensors to measure the distance to obstacles, and two optical encoders to provide the actual position and speeds. To evaluate the performance of the intelligent navigation algorithms, different trajectories are used and simulated using MATLAB software and the SIMIAM navigation platform. Simulation results show the performance of the intelligent navigation algorithms in terms of simulation times and travelled path. PMID:27688748
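
    A minimal flavor of fuzzy control of the kind discussed above is sketched below: a single-input rule base that maps the distance to the nearest obstacle to a forward speed via triangular membership functions and weighted-average defuzzification. The membership breakpoints, output speeds, and rule base are hypothetical and far simpler than the combined navigation/obstacle-avoidance controller in the paper.

      def tri(x, a, b, c):
          # Triangular membership function peaking at b, zero outside [a, c].
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_speed(front_distance_m):
          # Rule base: near -> slow, medium -> cruise, far -> fast,
          # defuzzified as a membership-weighted average of crisp speeds (m/s).
          near   = tri(front_distance_m, -0.5, 0.0, 1.0)
          medium = tri(front_distance_m,  0.5, 1.5, 2.5)
          far    = tri(front_distance_m,  2.0, 3.0, 6.0)
          terms = [(near, 0.05), (medium, 0.4), (far, 0.8)]
          total = sum(w for w, _ in terms)
          return sum(w * v for w, v in terms) / total if total else 0.0

      for d in (0.3, 1.5, 4.0):
          print(d, round(fuzzy_speed(d), 2))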

  13. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing system is often based on a specific application, but this idea is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on multi-core DSP platform is designed, and it is mainly suitable for the complex algorithms which are composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and summarizes the advantages of the mainstream model of multi-core DSP (the Master-Slave model and the Data Flow model), so that it has better performance. This paper uses three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing with the effectiveness of the Master-Slave and the Data Flow model.

  14. Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields

    NASA Technical Reports Server (NTRS)

    Rediniotis, Othon K.

    1999-01-01

    Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters all the way down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes), were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion. It consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles that were produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
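
    The conventional (polynomial, local least-squares) calibration idea mentioned above can be sketched as follows: fit a low-order bivariate polynomial in two non-dimensional pressure coefficients, using only the calibration points nearest the query point, and evaluate it to predict a flow angle. The calibration data, basis choice, and neighborhood size below are hypothetical stand-ins, not the actual probe calibration maps.

      import numpy as np

      def design_matrix(cp_alpha, cp_beta):
          # Second-order bivariate polynomial basis in the two pressure coefficients.
          a, b = cp_alpha, cp_beta
          return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

      # Hypothetical calibration set: pressure coefficients -> known pitch angle (deg).
      rng = np.random.default_rng(1)
      cp_a = rng.uniform(-1, 1, 200)
      cp_b = rng.uniform(-1, 1, 200)
      pitch = 12.0 * cp_a - 3.0 * cp_a * cp_b + rng.normal(0, 0.1, 200)

      def predict_pitch(q_a, q_b, k=30):
          # Local least squares: fit only the k calibration points nearest the query.
          d = np.hypot(cp_a - q_a, cp_b - q_b)
          idx = np.argsort(d)[:k]
          X = design_matrix(cp_a[idx], cp_b[idx])
          coeffs, *_ = np.linalg.lstsq(X, pitch[idx], rcond=None)
          return design_matrix(np.array([q_a]), np.array([q_b])) @ coeffs

      print(np.round(predict_pitch(0.2, -0.4), 2))   # roughly 12*0.2 - 3*0.2*(-0.4) = 2.64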

  15. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Centre, a project was initiated to assess the performance of eight different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with Sequential Unconstrained Minimizations Technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  16. Machine learning algorithms for the prediction of hERG and CYP450 binding in drug development.

    PubMed

    Klon, Anthony E

    2010-07-01

    The cost of developing new drugs is estimated at approximately $1 billion; the withdrawal of a marketed compound due to toxicity can result in serious financial loss for a pharmaceutical company. There has been a greater interest in the development of in silico tools that can identify compounds with metabolic liabilities before they are brought to market. The two largest classes of machine learning (ML) models, which will be discussed in this review, have been developed to predict binding to the human ether-a-go-go related gene (hERG) ion channel protein and the various CYP isoforms. Being able to identify potentially toxic compounds before they are made would greatly reduce the number of compound failures and the costs associated with drug development. This review summarizes the state of modeling hERG and CYP binding towards this goal since 2003 using ML algorithms. A wide variety of ML algorithms that are comparable in their overall performance are available. These ML methods may be applied regularly in discovery projects to flag compounds with potential metabolic liabilities.

  17. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification- map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
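
    The prediction-plus-context-modeling idea described above can be illustrated by tallying, for each (left, above) neighbor context, how the current pixel's class is distributed, and then computing the empirical code length an ideal context-adaptive entropy coder would achieve. The synthetic two-class map and the particular context choice below are assumptions for illustration; no actual entropy coder is included.

      import math
      from collections import defaultdict
      import numpy as np

      def context_stats(class_map):
          # For each (left, above) context, tally the distribution of the current class.
          # A context-adaptive entropy coder would code each pixel against this model.
          counts = defaultdict(lambda: defaultdict(int))
          h, w = class_map.shape
          for r in range(1, h):
              for c in range(1, w):
                  ctx = (class_map[r, c - 1], class_map[r - 1, c])
                  counts[ctx][class_map[r, c]] += 1
          return counts

      def estimated_bits(class_map):
          # Empirical entropy of the pixel given its context (ideal code length).
          counts = context_stats(class_map)
          bits = 0.0
          for dist in counts.values():
              total = sum(dist.values())
              for n in dist.values():
                  bits += -n * math.log2(n / total)
          return bits

      # Synthetic classification map: two large regions, so contexts are highly predictive.
      cmap = np.zeros((64, 64), dtype=int)
      cmap[:, 32:] = 1
      raw_bits = cmap.size * 1   # 1 bit per pixel for a two-class map
      print(f"raw: {raw_bits} bits, context model: {estimated_bits(cmap):.0f} bits")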

  18. Ternary alloy material prediction using genetic algorithm and cluster expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chong

    2015-12-01

    This thesis summarizes our study on crystal structure prediction for the Fe-V-Si system using a genetic algorithm and cluster expansion. Our goal is to explore and look for new stable compounds. We started from the ten currently known experimental phases and calculated the formation energies of those compounds using a density functional theory (DFT) package, namely VASP. The convex hull was generated based on the DFT calculations of the experimentally known phases. We then performed random searches on some metal-rich (Fe and V) compositions and found that the lowest-energy structures had a body-centered cubic (bcc) underlying lattice, under which we did our systematic computational searches using the genetic algorithm and cluster expansion. Among the hundreds of searched compositions, thirteen were selected and their DFT formation energies were obtained with VASP. The stability of those thirteen compounds was checked against the experimental convex hull. We found that the composition 24-8-16, i.e., Fe3VSi2, is a new stable phase, which may be very inspiring for future experiments.

  19. Space shuttle propulsion estimation development verification, volume 1

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performances, for each device are computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). The program LFILTER, also uses the U-D factorized algorithm form of the Kalman filter as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used for this application for dynamics and measurements, program description, and program operation are presented.

  20. Detecting Abnormal Machine Characteristics in Cloud Infrastructures

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.

    2011-01-01

    In the cloud computing environment resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of typical pay-peruse packages bought by the users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia1 which lacks system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine on the cloud in order to rank the machines in order of their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis and at the end of the analysis, our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
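
    The local, decentralized flavor of the approach above can be sketched by having each machine score its latest performance sample against its own history with a robust z-score and then ranking machines by that score. The metrics, the injected anomaly, and the scoring rule below are hypothetical; the algorithm in the abstract is a distributed method with error reporting, not this simple ranking.

      import numpy as np

      def local_anomaly_score(history, current):
          # Robust z-score of the latest sample against the machine's own history,
          # computed locally so no raw performance data has to be centralized.
          median = np.median(history, axis=0)
          mad = np.median(np.abs(history - median), axis=0) + 1e-9
          z = np.abs(current - median) / (1.4826 * mad)
          return float(z.max())          # the worst metric drives the score

      # Hypothetical per-machine metrics: [cpu_util, mem_util, disk_wait].
      rng = np.random.default_rng(2)
      machines = {f"node{i}": rng.normal([0.4, 0.5, 0.02], 0.05, size=(100, 3))
                  for i in range(4)}
      current = {name: hist[-1].copy() for name, hist in machines.items()}
      current["node2"] = np.array([0.95, 0.5, 0.4])   # inject an anomaly on one node

      scores = {name: local_anomaly_score(machines[name][:-1], current[name])
                for name in machines}
      for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(name, round(s, 1))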

  1. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it could improve the efficiency of the optimizers.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, D.; McInnes, L. C.; Woodward, C.

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  3. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for user-friendly applications such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two mutually perpendicular planes, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis, and volleyball, and we show that the feature detection performance is above 92% and the event detection performance is about 90%.

  4. Investigation of practical applications of H infinity control theory to the design of control systems for large space structures

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis

    1988-01-01

    The applicability of H infinity control theory to the problems of large space structures (LSS) control was investigated. A complete evaluation of any technique as a candidate for large space structure control involves analytical evaluation, algorithmic evaluation, evaluation via simulation studies, and experimental evaluation. The results of the analytical and algorithmic evaluations are documented. The analytical evaluation involves determining the appropriateness of the underlying assumptions inherent in H infinity theory, determining the capability of H infinity theory to achieve the design goals likely to be imposed on an LSS control design, and identifying any LSS-specific simplifications or complications of the theory. The results of the analytical evaluation are presented in the form of a tutorial on H infinity control theory with the LSS control designer in mind. The algorithmic evaluation of H infinity for LSS control pertains to the identification of general, high-level algorithms for effecting the application of H infinity to LSS control problems, the identification of specific, numerically reliable algorithms necessary for a computer implementation of the general algorithms, the recommendation of a flexible software system for implementing the H infinity design steps, and ultimately the actual development of the necessary computer codes. Finally, the state of the art in H infinity applications is summarized with a brief outline of the most promising areas of current research.
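
    For readers unfamiliar with the design objective discussed in the tutorial, the standard H infinity synthesis problem can be stated compactly; the notation below is generic and not drawn from the report:

    ```latex
    % Find a stabilizing controller K that bounds the closed-loop gain from the
    % exogenous inputs w to the performance outputs z, where F_l denotes the lower
    % linear fractional transformation of the generalized plant P with K.
    \min_{K\ \text{stabilizing}} \ \bigl\| F_{\ell}(P, K) \bigr\|_{\infty},
    \qquad
    \| T_{zw} \|_{\infty} = \sup_{\omega}\ \bar{\sigma}\bigl(T_{zw}(j\omega)\bigr)
    ```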

  5. New Features of the Collection 4 MODIS LAI and FPAR Product

    NASA Astrophysics Data System (ADS)

    Bin, T.; Yang, W.; Dong, H.; Shabanov, N.; Knyazikhin, Y.; Myneni, R.

    2003-12-01

    An algorithm based on the physics of radiative transfer in vegetation canopies for the retrieval of vegetation green leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) from MODIS surface reflectance data was developed and prototyped, and it has been in operational production at NASA computing facilities since June 2000. This poster highlights recent changes to the operational MODIS LAI and FPAR algorithm introduced for collection 4 data reprocessing. The changes are aimed at improving the agreement of retrieved LAI and FPAR with corresponding field measurements and the consistency of Quality Control (QC) definitions, together with miscellaneous bug fixes, as summarized below. * Improvement of LUTs for the main and back-up algorithms for biomes 1 and 3. Benefits: a) increase in quality of retrievals; b) non-physical peaks in the global LAI distribution have been removed; c) improved agreement with field measurements. * Improved QA scheme. Benefits: a) consistency between MODLAND and SCF quality flags has been achieved; b) ambiguity in QA has been resolved. * New 8-day compositing scheme. Benefits: a) compositing over best-quality retrievals, instead of all retrievals; b) lower LAI values, with reduced saturation and fewer pixels generated by the back-up algorithm. * The at-launch static IGBP land cover, input to the LAI/FPAR algorithm, was replaced with the MODIS land cover map. Benefits: a) crosswalking of the 17-class IGBP scheme to the 6-biome land cover scheme has been eliminated; b) uncertainties in the MODIS LAI/FPAR product due to uncertainties in the land cover map have been reduced.

  6. Lessons learned and way forward from 6 years of Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2017-04-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010-2017) has conducted intensive work to improve and qualify algorithms for the retrieval of aerosol information from European sensors. Meanwhile, several validated (multi-)decadal time series of different aerosol parameters from complementary sensors are available: Aerosol Optical Depth (AOD), stratospheric extinction profiles, a qualitative Absorbing Aerosol Index (AAI), fine-mode AOD, and mineral dust AOD; absorption information and aerosol layer height are in an evaluation phase, and the multi-pixel GRASP algorithm for the POLDER instrument is used for selected regions. Validation (against AERONET and MAN) and inter-comparison with other satellite datasets (MODIS, MISR, SeaWiFS) confirmed that the quality of the available datasets is comparable to other satellite retrievals and revealed needs for algorithm improvement (for example, for higher AOD values), which were addressed in an iterative evolution cycle. The datasets contain pixel-level uncertainty estimates, which were also validated and improved in the reprocessing. The use of an ensemble method was tested, in which several algorithms are applied to the same sensor. The presentation will summarize and discuss the lessons learned from the 6 years of intensive collaboration and highlight major achievements (significantly improved AOD quality, fine-mode AOD, dust AOD, pixel-level uncertainties, ensemble approach); limitations and remaining deficits will also be discussed. An outlook will discuss the way forward for continuous algorithm improvement and reprocessing, together with opportunities for time series extension with successor instruments of the Sentinel family and the complementarity of the different satellite aerosol products.

  7. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
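
    To make the class of transform involved more concrete, the sketch below applies a multilevel 3D wavelet decomposition to a synthetic hyperspectral cube with PyWavelets, thresholds small detail coefficients, and reconstructs. This is only an illustration of 3D wavelet-based lossy compression; it is not the ICER-3D codec, which additionally performs context modeling, entropy coding, and error containment.

    ```python
    import numpy as np
    import pywt

    # Synthetic hyperspectral cube: (bands, rows, cols).
    cube = np.random.rand(32, 64, 64).astype(np.float32)

    # Multilevel 3D wavelet decomposition across the spectral and spatial axes.
    coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)

    # Crude "lossy" step for illustration: zero out small detail coefficients.
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < 0.05] = 0.0
    coeffs_thresh = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")

    # Reconstruct (trimming any wavelet padding) and report the resulting error.
    recon = pywt.waverecn(coeffs_thresh, wavelet="db2")[:cube.shape[0], :cube.shape[1], :cube.shape[2]]
    print("max reconstruction error:", float(np.abs(recon - cube).max()))
    ```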

  8. Preprocessing of gene expression data by optimally robust estimators

    PubMed Central

    2010-01-01

    Background The preprocessing of gene expression data obtained from several platforms routinely includes the aggregation of multiple raw signal intensities to one expression value. Examples are the computation of a single expression measure based on the perfect match (PM) and mismatch (MM) probes for the Affymetrix technology, the summarization of bead level values to bead summary values for the Illumina technology or the aggregation of replicated measurements in the case of other technologies including real-time quantitative polymerase chain reaction (RT-qPCR) platforms. The summarization of technical replicates is also performed in other "-omics" disciplines like proteomics or metabolomics. Preprocessing methods like MAS 5.0, Illumina's default summarization method, RMA, or VSN show that the use of robust estimators is widely accepted in gene expression analysis. However, the selection of robust methods seems to be mainly driven by their high breakdown point and not by efficiency. Results We describe how optimally robust radius-minimax (rmx) estimators, i.e. estimators that minimize an asymptotic maximum risk on shrinking neighborhoods about an ideal model, can be used for the aggregation of multiple raw signal intensities to one expression value for Affymetrix and Illumina data. With regard to the Affymetrix data, we have implemented an algorithm which is a variant of MAS 5.0. Using datasets from the literature and Monte-Carlo simulations we provide some reasoning for assuming approximate log-normal distributions of the raw signal intensities by means of the Kolmogorov distance, at least for the discussed datasets, and compare the results of our preprocessing algorithms with the results of Affymetrix's MAS 5.0 and Illumina's default method. The numerical results indicate that when using rmx estimators an accuracy improvement of about 10-20% is obtained compared to Affymetrix's MAS 5.0 and about 1-5% compared to Illumina's default method. The improvement is also visible in the analysis of technical replicates where the reproducibility of the values (in terms of Pearson and Spearman correlation) is increased for all Affymetrix and almost all Illumina examples considered. Our algorithms are implemented in the R package named RobLoxBioC which is publicly available via CRAN, The Comprehensive R Archive Network (http://cran.r-project.org/web/packages/RobLoxBioC/). Conclusions Optimally robust rmx estimators have a high breakdown point and are computationally feasible. They can lead to a considerable gain in efficiency for well-established bioinformatics procedures and thus, can increase the reproducibility and power of subsequent statistical analysis. PMID:21118506
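
    As a generic illustration of robust summarization of replicated intensities (not the radius-minimax estimator described in the paper, which is derived from shrinking-neighborhood minimax theory), the sketch below aggregates replicate log-intensities with a Huber-type M-estimator computed by iteratively reweighted averaging.

    ```python
    import numpy as np

    def huber_mean(x: np.ndarray, k: float = 1.345, n_iter: int = 50) -> float:
        """Huber M-estimate of location via iteratively reweighted means.

        k is the tuning constant in units of the robust scale estimate.
        """
        x = np.asarray(x, dtype=float)
        mu = np.median(x)
        scale = np.median(np.abs(x - mu)) * 1.4826 + 1e-12   # MAD-based scale
        for _ in range(n_iter):
            r = (x - mu) / scale
            w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
            mu_new = np.sum(w * x) / np.sum(w)
            if abs(mu_new - mu) < 1e-10:
                break
            mu = mu_new
        return float(mu)

    # Example: summarize replicated log2 intensities containing one outlier.
    replicates = np.log2([1050, 990, 1010, 1005, 5200])
    print(huber_mean(replicates))   # close to log2(~1000) despite the outlier
    ```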

  9. Aerosol retrieval experiments in the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.; de Leeuw, G.; Martynenko, D.; Klüser, L.; Bevan, S.; Davies, W.; Ducos, F.; Deuzé, J. L.; Graigner, R. G.; Heckel, A.; von Hoyningen-Hüne, W.; Kolmonen, P.; Litvinov, P.; North, P.; Poulsen, C. A.; Ramon, D.; Siddans, R.; Sogacheva, L.; Tanre, D.; Thomas, G. E.; Vountas, M.; Descloitres, J.; Griesfeller, J.; Kinne, S.; Schulz, M.; Pinnock, S.

    2013-03-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010-2013) algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing pre-cursor algorithms three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components and their mixing ratios. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data qualitatively by visible analysis of monthly mean AOD maps and quantitatively by comparing global daily gridded satellite data against daily average AERONET sun photometer observations for the different versions of each algorithm. The analysis allowed an assessment of sensitivities of all algorithms which helped define the best algorithm version for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR.

  10. A GNC Perspective of the Launch and Commissioning of NASA's SMAP (Soil Moisture Active Passive) Spacecraft

    NASA Technical Reports Server (NTRS)

    Brown, Todd S.

    2016-01-01

    The NASA Soil Moisture Active Passive (SMAP) spacecraft was designed to use radar and radiometer measurements to produce global soil moisture measurements every 2-3 days. The SMAP spacecraft is a complicated dual-spinning design with a large 6 meter deployable mesh reflector mounted on a platform that spins at 14.6 rpm while the Guidance Navigation and Control algorithms maintain precise nadir pointing for the de-spun portion of the spacecraft. After launching in early 2015, the Guidance Navigation and Control software and hardware aboard the SMAP spacecraft underwent an intensive spacecraft checkout and commissioning period. This paper describes the activities performed by the Guidance Navigation and Control team to confirm the health and phasing of subsystem hardware and the functionality of the guidance and control modes and algorithms. The operations tasks performed, as well as anomalies that were encountered during the commissioning, are explained and results are summarized.

  11. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  12. Back to the Future: Consistency-Based Trajectory Tracking

    NASA Technical Reports Server (NTRS)

    Kurien, James; Nayak, P. Pandurand; Norvig, Peter (Technical Monitor)

    2000-01-01

    Given a model of a physical process and a sequence of commands and observations received over time, the task of an autonomous controller is to determine the likely states of the process and the actions required to move the process to a desired configuration. We introduce a representation and algorithms for incrementally generating approximate belief states for a restricted but relevant class of partially observable Markov decision processes with very large state spaces. The algorithm presented incrementally generates, rather than revises, an approximate belief state at any point by abstracting and summarizing segments of the likely trajectories of the process. This enables applications to efficiently maintain a partial belief state when it remains consistent with observations and revisit past assumptions about the process' evolution when the belief state is ruled out. The system presented has been implemented and results on examples from the domain of spacecraft control are presented.

  13. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    PubMed

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied in consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of treatment algorithms of the participating centers which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for successful collection of decision trees and summarizes important aspects at each point of the analysis.

  14. An empirical generative framework for computational modeling of language acquisition.

    PubMed

    Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-06-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.

  15. Self-similarity Clustering Event Detection Based on Triggers Guidance

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfei; Li, Bicheng; Tian, Yuxuan

    The traditional approach to Event Detection and Characterization (EDC) treats event detection as a classification problem. It uses words as training samples for the classifier, which leads to an imbalance between positive and negative samples, and it suffers from data sparseness when the corpus is small. Instead of classifying events with words as samples, this paper clusters events when judging event types. It uses self-similarity, guided by event triggers, to converge on the value of K in the K-means algorithm, and it optimizes the clustering algorithm. Then, by combining named entities and their relative position information, the method pins down the precise event type. The new method avoids the dependence on event templates found in traditional methods, and its event detection results can be used in automatic text summarization, text retrieval, and topic detection and tracking.
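
    The paper's trigger-guided self-similarity criterion for choosing K is not reproduced here; as a generic stand-in, the sketch below selects K for K-means by maximizing the silhouette score over a small range of candidate values.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def choose_k(X: np.ndarray, k_min: int = 2, k_max: int = 10) -> int:
        """Pick the number of clusters that maximizes the silhouette score."""
        best_k, best_score = k_min, -1.0
        for k in range(k_min, min(k_max, len(X) - 1) + 1):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            score = silhouette_score(X, labels)
            if score > best_score:
                best_k, best_score = k, score
        return best_k

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in (0, 3, 6)])  # three clear groups
        print(choose_k(X))   # expected: 3
    ```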

  16. Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.

    This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework for evaluating radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, and environmental shielding) on detector responses and algorithm performance using synthetic time series data. The work consists of data collection campaigns in a canonical, controlled environment for complete radiological characterization, to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. These data also provide an archival benchmark dataset that can be used by the radiation detection community. The data reported here span four data collection campaigns conducted between May 2015 and September 2016.

  17. DiffNet: automatic differential functional summarization of dE-MAP networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes

    2014-10-01

    The study of genetic interaction networks that respond to changing conditions is an emerging research problem. Recently, Bandyopadhyay et al. (2010) proposed a technique to construct a differential network (dE-MAP network) from two static gene interaction networks in order to map the interaction differences between them under environment or condition change (e.g., a DNA-damaging agent). This differential network is then manually analyzed to conclude that DNA repair is differentially affected by the condition change. Unfortunately, manual construction of a differential functional summary from a dE-MAP network that summarizes all pertinent functional responses is time-consuming, laborious and error-prone, impeding large-scale analysis. To this end, we propose DiffNet, a novel data-driven algorithm that leverages Gene Ontology (GO) annotations to automatically summarize a dE-MAP network and obtain a high-level map of functional responses due to condition change. We tested DiffNet on the dynamic interaction networks following MMS treatment and demonstrated the superiority of our approach in generating differential functional summaries compared to state-of-the-art graph clustering methods. We studied the effects of the parameters in DiffNet in controlling the quality of the summary. We also performed a case study that illustrates its utility. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Adaptive Greedy Dictionary Selection for Web Media Summarization.

    PubMed

    Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo

    2017-01-01

    Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis vectors from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the ℓ2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with state-of-the-art dictionary selection models, our model is not only more effective and efficient but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model, and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both of these tasks.
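
    A minimal sketch of forward greedy dictionary (column-subset) selection is given below: at each step it adds the training column that most reduces the least-squares reconstruction error of the whole data set. This illustrates only the baseline forward greedy idea; the paper's ℓ2,0 model, backward steps, and gradient-based acceleration are not reproduced.

    ```python
    import numpy as np

    def forward_greedy_selection(X: np.ndarray, n_atoms: int) -> list:
        """Select n_atoms columns of X that best reconstruct X in the least-squares sense.

        X: (n_features, n_samples) training data; the dictionary is a column subset of X.
        """
        selected = []
        for _ in range(n_atoms):
            best_j, best_err = None, np.inf
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                D = X[:, selected + [j]]
                # Least-squares codes for all samples given the candidate dictionary.
                codes, *_ = np.linalg.lstsq(D, X, rcond=None)
                err = np.linalg.norm(X - D @ codes)
                if err < best_err:
                    best_j, best_err = j, err
            selected.append(best_j)
        return selected

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(20, 40))
        print(forward_greedy_selection(X, n_atoms=5))
    ```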

  19. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    NASA Technical Reports Server (NTRS)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization then becomes a nonlinear constrained mixed-integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  1. Automatic Fringe Detection for Oil Film Interferometry Measurement of Skin Friction

    NASA Technical Reports Server (NTRS)

    Naughton, Jonathan W.; Decker, Robert K.; Jafari, Farhad

    2001-01-01

    This report summarizes two years of work on investigating algorithms for automatically detecting fringe patterns in images acquired using oil-drop interferometry for the determination of skin friction. Several different analysis methods were tested, and a combination of a windowed Fourier transform followed by a correlation was found to be most effective. The implementation of this method is discussed and details of the process are described. The results indicate that this method shows promise for automating the fringe detection process, but further testing is required.
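
    To illustrate the windowed-Fourier-transform stage of such fringe analysis, the sketch below estimates the dominant fringe spatial frequency in a one-dimensional slice of an interferogram. The window length and the synthetic fringe pattern are assumptions for illustration, not details from the report.

    ```python
    import numpy as np

    def dominant_fringe_frequency(profile: np.ndarray, window_len: int = 128) -> float:
        """Estimate the dominant spatial frequency (cycles/pixel) in a windowed slice."""
        segment = profile[:window_len] * np.hanning(window_len)       # apply window
        spectrum = np.abs(np.fft.rfft(segment - segment.mean()))
        freqs = np.fft.rfftfreq(window_len, d=1.0)                    # cycles per pixel
        return float(freqs[np.argmax(spectrum)])

    if __name__ == "__main__":
        x = np.arange(512)
        fringes = 1.0 + 0.5 * np.cos(2 * np.pi * 0.05 * x)            # 0.05 cycles/pixel
        fringes += np.random.default_rng(0).normal(0, 0.05, x.size)   # camera noise
        print(dominant_fringe_frequency(fringes))                     # approximately 0.05
    ```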

  2. Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers.

    PubMed

    Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose

    2014-01-06

    This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state of the art in the physical, medium access control and routing layers. It summarizes their security threats and surveys the currently proposed approaches. Currently envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols.

  3. Careflow Mining Techniques to Explore Type 2 Diabetes Evolution.

    PubMed

    Dagliati, Arianna; Tibollo, Valentina; Cogni, Giulia; Chiovato, Luca; Bellazzi, Riccardo; Sacchi, Lucia

    2018-03-01

    In this work we describe the application of a careflow mining algorithm to detect the most frequent patterns of care in a type 2 diabetes patients cohort. The applied method enriches the detected patterns with clinical data to define temporal phenotypes across the studied population. Novel phenotypes are discovered from heterogeneous data of 424 Italian patients, and compared in terms of metabolic control and complications. Results show that careflow mining can help to summarize the complex evolution of the disease into meaningful patterns, which are also significant from a clinical point of view.

  4. MiniBooNE Neutrino Physics at the University of Alabama

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stancu, Ion

    2007-04-27

    This report summarizes the activities conducted by the UA group under the auspices of the DoE/EPSCoR grant number DE-FG02-04ER46112 since the date of the previous progress report, i.e., since November 2005. It also provides a final report of the accomplishments achieved during the entire period of this grant (February 2004 to January 2007). The grant has fully supported the work of Dr. Yong Liu (postdoctoral research assistant, in residence at Fermilab) on the MiniBooNE reconstruction and particle identification (PID) algorithms.

  5. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in developing a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.

  6. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of the user's improper use and incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  7. Russian guidelines for the management of COPD: algorithm of pharmacologic treatment

    PubMed Central

    Aisanov, Zaurbek; Avdeev, Sergey; Arkhipov, Vladimir; Belevskiy, Andrey; Chuchalin, Alexander; Leshchenko, Igor; Ovcharenko, Svetlana; Shmelev, Evgeny; Miravitlles, Marc

    2018-01-01

    The high prevalence of COPD together with its high level of misdiagnosis and late diagnosis dictate the necessity for the development and implementation of clinical practice guidelines (CPGs) in order to improve the management of this disease. High-quality, evidence-based international CPGs need to be adapted to the particular situation of each country or region. A new version of the Russian Respiratory Society guidelines released at the end of 2016 was based on the proposal by Global Initiative for Obstructive Lung Disease but adapted to the characteristics of the Russian health system and included an algorithm of pharmacologic treatment of COPD. The proposed algorithm had to comply with the requirements of the Russian Ministry of Health to be included into the unified electronic rubricator, which required a balance between the level of information and the simplicity of the graphic design. This was achieved by: exclusion of the initial diagnostic process, grouping together the common pharmacologic and nonpharmacologic measures for all patients, and the decision not to use the letters A–D for simplicity and clarity. At all stages of the treatment algorithm, efficacy and safety have to be carefully assessed. Escalation and de-escalation is possible in the case of lack of or insufficient efficacy or safety issues. Bronchodilators should not be discontinued except in the case of significant side effects. At the same time, inhaled corticosteroid (ICS) withdrawal is not represented in the algorithm, because it was agreed that there is insufficient evidence to establish clear criteria for ICSs discontinuation. Finally, based on the Global Initiative for Obstructive Lung Disease statement, the proposed algorithm reflects and summarizes different approaches to the pharmacological treatment of COPD taking into account the reality of health care in the Russian Federation. PMID:29386887

  8. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and the application of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type; both the primal formulation and the dual formulation are discussed. For aircraft trajectories, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear is described; both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant-pitch trajectories and maximum-angle-of-attack trajectories. For spacecraft trajectories, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer is examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.
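
    For reference, the Bolza-type optimal control problem addressed by the sequential gradient-restoration algorithm can be written in a standard form (generic notation, not drawn from the report):

    ```latex
    % Bolza problem: minimize a terminal cost plus an integral cost, subject to the
    % state dynamics, the initial condition, and terminal constraints.
    \min_{u(\cdot)}\ J = g\bigl(x(t_f), t_f\bigr)
      + \int_{t_0}^{t_f} f_0\bigl(x(t), u(t), t\bigr)\,dt
    \quad \text{s.t.} \quad
    \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0, \qquad
    \psi\bigl(x(t_f), t_f\bigr) = 0
    ```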

  9. Russian guidelines for the management of COPD: algorithm of pharmacologic treatment.

    PubMed

    Aisanov, Zaurbek; Avdeev, Sergey; Arkhipov, Vladimir; Belevskiy, Andrey; Chuchalin, Alexander; Leshchenko, Igor; Ovcharenko, Svetlana; Shmelev, Evgeny; Miravitlles, Marc

    2018-01-01

    The high prevalence of COPD together with its high level of misdiagnosis and late diagnosis dictate the necessity for the development and implementation of clinical practice guidelines (CPGs) in order to improve the management of this disease. High-quality, evidence-based international CPGs need to be adapted to the particular situation of each country or region. A new version of the Russian Respiratory Society guidelines released at the end of 2016 was based on the proposal by Global Initiative for Obstructive Lung Disease but adapted to the characteristics of the Russian health system and included an algorithm of pharmacologic treatment of COPD. The proposed algorithm had to comply with the requirements of the Russian Ministry of Health to be included into the unified electronic rubricator, which required a balance between the level of information and the simplicity of the graphic design. This was achieved by: exclusion of the initial diagnostic process, grouping together the common pharmacologic and nonpharmacologic measures for all patients, and the decision not to use the letters A-D for simplicity and clarity. At all stages of the treatment algorithm, efficacy and safety have to be carefully assessed. Escalation and de-escalation is possible in the case of lack of or insufficient efficacy or safety issues. Bronchodilators should not be discontinued except in the case of significant side effects. At the same time, inhaled corticosteroid (ICS) withdrawal is not represented in the algorithm, because it was agreed that there is insufficient evidence to establish clear criteria for ICSs discontinuation. Finally, based on the Global Initiative for Obstructive Lung Disease statement, the proposed algorithm reflects and summarizes different approaches to the pharmacological treatment of COPD taking into account the reality of health care in the Russian Federation.

  10. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
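
    A minimal sketch of the consecutive-shot grouping idea, clustering adjacent shots whose descriptions are similar into scene-level segments, is shown below. The cosine similarity measure and the threshold are assumptions for illustration, not the system's actual context-clustering rules.

    ```python
    import numpy as np

    def group_consecutive_shots(shot_features: np.ndarray, threshold: float = 0.8) -> list:
        """Group consecutive shots into scenes when adjacent shots are similar enough.

        shot_features: (n_shots, n_dims) feature vector per shot (e.g. annotation embedding).
        Returns a list of (start_index, end_index_inclusive) scene segments.
        """
        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        scenes, start = [], 0
        for i in range(1, len(shot_features)):
            if cosine(shot_features[i - 1], shot_features[i]) < threshold:
                scenes.append((start, i - 1))     # similarity dropped: close the scene
                start = i
        scenes.append((start, len(shot_features) - 1))
        return scenes

    if __name__ == "__main__":
        feats = np.vstack([np.tile([1.0, 0.0], (4, 1)), np.tile([0.0, 1.0], (3, 1))])
        print(group_consecutive_shots(feats))     # [(0, 3), (4, 6)]
    ```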

  11. Application of the Trend Filtering Algorithm for Photometric Time Series Data

    NASA Astrophysics Data System (ADS)

    Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.

    2016-08-01

    Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for surveys with variable photometric precision and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetic and real data, to summarize the performance of TFA and the various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and its modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
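
    The core trend-filtering step, fitting a target light curve as a linear combination of template light curves and subtracting the fit, can be sketched as follows. The weighted variant discussed in the paper (incorporating measurement uncertainties and ancillary data) is not reproduced here.

    ```python
    import numpy as np

    def trend_filter(target: np.ndarray, templates: np.ndarray) -> np.ndarray:
        """Remove systematics shared with a template set from a light curve.

        target:    (n_epochs,) magnitudes of the star being filtered
        templates: (n_templates, n_epochs) light curves of (ideally non-variable) comparison stars
        Returns the filtered light curve (target minus the best-fit template combination).
        """
        A = templates.T                                        # design matrix (n_epochs, n_templates)
        coeffs, *_ = np.linalg.lstsq(A, target - target.mean(), rcond=None)
        systematics = A @ coeffs
        return target - systematics

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        trend = np.sin(np.linspace(0, 6, 300))                  # shared systematic trend
        templates = trend + rng.normal(0, 0.02, (10, 300))
        star = 12.0 + trend + rng.normal(0, 0.02, 300)          # non-variable star + trend
        print(np.std(star), np.std(trend_filter(star, templates)))  # dispersion drops after filtering
    ```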

  12. Strength in Numbers: Using Big Data to Simplify Sentiment Classification.

    PubMed

    Filippas, Apostolos; Lappas, Theodoros

    2017-09-01

    Sentiment classification, the task of assigning a positive or negative label to a text segment, is a key component of mainstream applications such as reputation monitoring, sentiment summarization, and item recommendation. Even though the performance of sentiment classification methods has steadily improved over time, their ever-increasing complexity renders them comprehensible by only a shrinking minority of expert practitioners. For all others, such highly complex methods are black-box predictors that are hard to tune and even harder to justify to decision makers. Motivated by these shortcomings, we introduce BigCounter: a new algorithm for sentiment classification that substitutes algorithmic complexity with Big Data. Our algorithm combines standard data structures with statistical testing to deliver accurate and interpretable predictions. It is also parameter free and suitable for use virtually "out of the box," which makes it appealing for organizations wanting to leverage their troves of unstructured data without incurring the significant expense of creating in-house teams of data scientists. Finally, BigCounter's efficient and parallelizable design makes it applicable to very large data sets. We apply our method on such data sets toward a study on the limits of Big Data for sentiment classification. Our study finds that, after a certain point, predictive performance tends to converge and additional data have little benefit. Our algorithmic design and findings provide the foundations for future research on the data-over-computation paradigm for classification problems.

  13. Study on Interference Suppression Algorithms for Electronic Noses: A Review

    PubMed Central

    Liang, Zhifang; Zhang, Ci; Sun, Hao; Liu, Tao

    2018-01-01

    Electronic noses (e-noses) are composed of an appropriate pattern recognition system and a gas sensor array with a certain degree of specificity and broad-spectrum characteristics. The gas sensors have the shortcoming of being highly sensitive to interferences, which affects the detection of target gases. When interferences are present, the performance of the e-nose deteriorates. Therefore, it is urgent to study interference suppression techniques for e-noses. This paper summarizes the sources of interference and reviews the advances made in recent years in interference suppression for e-noses. According to the factors that cause it, interference can be classified into two types: interference caused by changes in operating conditions and interference caused by hardware failures. Existing suppression methods are summarized and analyzed from these two aspects. Since the interferences affecting e-noses are uncertain and unstable, some nonlinear methods, such as those based on transfer learning and adaptive methods, are found to be effective for interference suppression. PMID:29649152

  14. A Graph Summarization Algorithm Based on RFID Logistics

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Hu, Kongfa; Lu, Zhipeng; Zhao, Li; Chen, Ling

    Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. The volume of data generated by a typical RFID application will be enormous, as each item generates a complete history of all the individual locations that it occupied at every point in time. The movement trails in such RFID data form a gigantic commodity flow graph representing the locations and durations of the path stages traversed by each item. In this paper, we use a graph to construct a warehouse of RFID commodity flows and introduce a database-style operation to summarize graphs, which produces a summary graph by grouping nodes based on user-selected node attributes and further allows users to control the hierarchy of summaries. It cuts down the size of the graphs and makes it convenient for users to study only the shrunken graph they are interested in. Through extensive experiments, we demonstrate the effectiveness and efficiency of the proposed method.
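
    The grouping operation described above, collapsing nodes that share user-selected attribute values into supernodes and aggregating the edges between groups, can be sketched in a few lines. The attribute names and the flow-count aggregation below are assumptions for illustration.

    ```python
    from collections import defaultdict

    def summarize_graph(nodes: dict, edges: list, group_by: tuple) -> tuple:
        """Collapse nodes into supernodes keyed by selected attributes.

        nodes: node_id -> attribute dict, e.g. {"loc": "dock", "stage": "inbound"}
        edges: list of (src, dst, count) commodity-flow edges
        group_by: attribute names to group on, e.g. ("loc",)
        Returns (supernodes, superedges), where superedges aggregate flow counts.
        """
        key = lambda n: tuple(nodes[n][a] for a in group_by)

        supernodes = defaultdict(list)
        for n in nodes:
            supernodes[key(n)].append(n)

        superedges = defaultdict(int)
        for src, dst, count in edges:
            superedges[(key(src), key(dst))] += count
        return dict(supernodes), dict(superedges)

    if __name__ == "__main__":
        nodes = {1: {"loc": "dock"}, 2: {"loc": "dock"}, 3: {"loc": "shelf"}}
        edges = [(1, 3, 40), (2, 3, 60)]
        print(summarize_graph(nodes, edges, group_by=("loc",)))
        # {('dock',): [1, 2], ('shelf',): [3]}  and  {(('dock',), ('shelf',)): 100}
    ```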

  15. Ontology-based structured cosine similarity in document summarization: with applications to mobile audio-based knowledge management.

    PubMed

    Yuan, Soe-Tsyr; Sun, Jerry

    2005-10-01

    The development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most text-clustering methods have been grounded in term-based measures of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (which lack rich document features) obtained from oral experience sharing conducted over wireless devices by the mobile workforce of enterprises, in support of audio-based knowledge management. In other words, the goal is to facilitate knowledge acquisition and sharing by speech. The evaluations show fairly promising results for our structured cosine similarity method.

  16. SlideSort: all pairs similarity search for short reads

    PubMed Central

    Shimizu, Kana; Tsuda, Koji

    2011-01-01

    Motivation: Recent progress in DNA sequencing technologies calls for fast and accurate algorithms that can evaluate sequence similarity for a huge amount of short reads. Searching similar pairs from a string pool is a fundamental process of de novo genome assembly, genome-wide alignment and other important analyses. Results: In this study, we designed and implemented an exact algorithm SlideSort that finds all similar pairs from a string pool in terms of edit distance. Using an efficient pattern growth algorithm, SlideSort discovers chains of common k-mers to narrow down the search. Compared to existing methods based on single k-mers, our method is more effective in reducing the number of edit distance calculations. In comparison to backtracking methods such as BWA, our method is much faster in finding remote matches, scaling easily to tens of millions of sequences. Our software has an additional function of single link clustering, which is useful in summarizing short reads for further processing. Availability: Executable binary files and C++ libraries are available at http://www.cbrc.jp/~shimizu/slidesort/ for Linux and Windows. Contact: slidesort@m.aist.go.jp; shimizu-kana@aist.go.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21148542
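
    To make the k-mer filtering idea concrete, the sketch below indexes reads by their k-mers and computes edit distance only for pairs of reads that share at least one k-mer. The chained and sliding k-mer machinery and the exact pruning bounds used by SlideSort are not reproduced here.

    ```python
    from collections import defaultdict
    from itertools import combinations

    def edit_distance(a: str, b: str) -> int:
        """Plain dynamic-programming edit distance (sufficient for short reads)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    def similar_pairs(reads: list, k: int = 8, max_dist: int = 2) -> set:
        """Find read pairs within max_dist edits, using shared k-mers to prune candidates."""
        index = defaultdict(set)
        for idx, r in enumerate(reads):
            for p in range(len(r) - k + 1):
                index[r[p:p + k]].add(idx)

        candidates = set()
        for ids in index.values():
            candidates.update(combinations(sorted(ids), 2))
        return {(i, j) for i, j in candidates if edit_distance(reads[i], reads[j]) <= max_dist}

    if __name__ == "__main__":
        reads = ["ACGTACGTACGTACGT", "ACGTACGTACGAACGT", "TTTTGGGGCCCCAAAA"]
        print(similar_pairs(reads))    # {(0, 1)}
    ```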

  17. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.

  18. Atmospheric infrared sounder

    NASA Technical Reports Server (NTRS)

    Rosenkranz, Philip W.; Staelin, David H.

    1995-01-01

    This report summarizes the activities of two Atmospheric Infrared Sounder (AIRS) team members during the first half of 1995. Changes to the microwave first-guess algorithm have separated processing of Advanced Microwave Sounding Unit A (AMSU-A) from AMSU-B data so that the different spatial resolutions of the two instruments may eventually be considered. Two-layer cloud simulation data was processed with this algorithm. The retrieved water vapor column densities and liquid water are compared. The information content of AIRS data was applied to AMSU temperature profile retrievals in clear and cloudy atmospheres. The significance of this study for AIRS/AMSU processing lies in the improvement attributable to spatial averaging and in the good results obtained with a very simple algorithm when all of the channels are used. Uncertainty about the availability of either a Microwave Humidity Sensor (MHS) or AMSU-B for EOS has motivated consideration of possible low-cost alternative designs for a microwave humidity sensor. One possible configuration would have two local oscillators (compared to three for MHS) at 118.75 and 183.31 GHz. Retrieval performances of the two instruments were compared in a memorandum titled 'Comparative Analysis of Alternative MHS Configurations', which is attached.

  19. A testbed for architecture and fidelity trade studies in the Bayesian decision-level fusion of ATR products

    NASA Astrophysics Data System (ADS)

    Erickson, Kyle J.; Ross, Timothy D.

    2007-04-01

    Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth technique bolstered by a strong theoretical foundation that requires no modification of the source algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations across these many options, by feeding an actual fusion algorithm via models of the system environment. Models and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute performance metrics such as probability of correct identification. Performance differences between the best of the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target, prior information, fusion algorithm complexity, and fusion gain. ATR as an unsolved problem provides the main challenges to fusion in its high cost and relative scarcity of training data, its variability in application, the inability to produce truly random samples, and its sensitivity to context. This paper summarizes the mathematics underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.
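
    A minimal, hedged illustration of Bayesian decision-level fusion (not the FITE testbed itself): each source reports a single declared class, its confusion matrix supplies the likelihood of that declaration for each true class, and, assuming conditional independence of the sources, the fused posterior is the normalized product of the prior with the per-source likelihood columns. All matrices below are made up for the example.

        import numpy as np

        def fuse_decisions(prior, confusion_matrices, declarations):
            # posterior(class) ∝ prior(class) * Π_sources P(declaration | class)
            posterior = np.asarray(prior, dtype=float)
            for conf, decl in zip(confusion_matrices, declarations):
                # Column decl: P(source declares decl | true class), rows = true class.
                posterior *= np.asarray(conf, dtype=float)[:, decl]
            return posterior / posterior.sum()

        # Two sources, two target classes; both sources declare class 0.
        prior = [0.5, 0.5]
        conf_a = [[0.8, 0.2], [0.3, 0.7]]
        conf_b = [[0.7, 0.3], [0.4, 0.6]]
        print(fuse_decisions(prior, [conf_a, conf_b], [0, 0]))  # ~[0.82, 0.18]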

  20. Prototype to measure bracket debonding force in vivo

    PubMed Central

    Tonus, Jéssika Lagni; Manfroi, Fernanda Borguetti; Borges, Gilberto Antonio; Grigolo, Eduardo Correa; Helegda, Sérgio; Spohr, Ana Maria

    2017-01-01

    ABSTRACT Introduction: Material biodegradation that occurs in the mouth may interfere with the bond strength between the bracket and the enamel, causing lower bond strength values in vivo, in comparison with in vitro studies. Objective: To develop a prototype to measure bracket debonding force in vivo and to evaluate, in vitro, the bond strength obtained with the prototype. Methods: An original plier (3M Unitek) was modified by adding one strain gauge directly connected to its claw. An electronic circuit performed the reading of the strain gauge, and software installed on a computer recorded the values of the bracket debonding force, in kgf. Orthodontic brackets were bonded to the facial surface of 30 bovine incisors with adhesive materials. In Group 1 (n = 15), debonding was carried out with the prototype, while tensile bond strength testing was performed in Group 2 (n = 15). A universal testing machine was used for the second group. The adhesive remnant index (ARI) was recorded. Results: According to Student’s t test (α = 0.05), Group 1 (2.96 MPa) and Group 2 (3.08 MPa) were not significantly different. An ARI score of 3 was predominant in the two groups. Conclusion: The prototype proved to be reliable for obtaining in vivo bond strength values for orthodontic brackets. PMID:28444011
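
    For orientation, the bond strength values reported above (in MPa) follow from the force recorded by the prototype (in kgf) divided by the bracket base area. The sketch below shows that conversion; the bracket area is an illustrative assumption, not a value from the study.

        # Simple sketch of the kgf-to-MPa conversion implied by the abstract.
        KGF_TO_NEWTON = 9.80665

        def bond_strength_mpa(force_kgf, bracket_area_mm2):
            force_n = force_kgf * KGF_TO_NEWTON
            return force_n / bracket_area_mm2   # N/mm^2 is numerically equal to MPa

        # e.g. ~3 MPa for an assumed 10 mm^2 bracket base debonded at ~3.1 kgf
        print(round(bond_strength_mpa(3.1, 10.0), 2))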

  1. Rover Wheel-Actuated Tool Interface

    NASA Technical Reports Server (NTRS)

    Matthews, Janet; Ahmad, Norman; Wilcox, Brian

    2007-01-01

    A report describes an interface for utilizing some of the mobility features of a mobile robot for general-purpose manipulation of tools and other objects. The robot in question, now undergoing conceptual development for use on the Moon, is the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) rover, which is designed to roll over gentle terrain or walk over rough or steep terrain. Each leg of the robot is a six-degree-of-freedom general purpose manipulator tipped by a wheel with a motor drive. The tool interface includes a square cross-section peg, equivalent to a conventional socket-wrench drive, that rotates with the wheel. The tool interface also includes a clamp that holds a tool on the peg, and a pair of fold-out cameras that provides close-up stereoscopic images of the tool and its vicinity. The field of view of the imagers is actuated by the clamp mechanism and is specific to each tool. The motor drive can power any of a variety of tools, including rotating tools for helical fasteners, drills, and such clamping tools as pliers. With the addition of a flexible coupling, it could also power another tool or remote manipulator at a short distance. The socket drive can provide very high torque and power because it is driven by the wheel motor.

  2. Experimental Study on Strength Evaluation Applied for Teeth Extraction: An In Vivo Study

    PubMed Central

    Cicciù, Marco; Bramanti, Ennio; Signorino, Fabrizio; Cicciù, Alessandra; Sortino, Francesco

    2013-01-01

    Purpose: The aim of this work was to analyse all the applied movements when extracting healthy upper and lower jaw premolars for orthodontic purposes. The authors wanted to demonstrate that the different bone densities of the mandible and maxilla are not a significant parameter when related to the extraction force applied. The buccal and palatal rocking movements, plus the twisting movements, were also measured in this in vivo study during premolar extraction for orthodontic purposes. Methods: The physical strains or forces transferred onto the teeth during extraction are the following three movements: gripping, twisting, and traction. A strain measurement gauge was attached to an ordinary dental plier. The strain measurement gauge consisted of an extensometric washer with three 45º grids. The system operation was correlated with the variation of electrical resistance. Results: The variations of resistance (∆R) and all the different forces applied to the teeth (∆V) were recorded by a computerized system. Data results were processed through Microsoft Excel. The results underlined the stress distribution on the extracted teeth during gripping, twisting and flexion. Conclusions: The obtained data showed that the strength required to effect teeth extraction is not influenced by the quality of the bone but is instead influenced by the shape of the tooth’s root. PMID:23539609

  3. Algorithm Updates for the Fourth SeaWiFS Data Reprocessing

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.

    2003-01-01

    The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes described in Chapter 4. Finally, Chapter 10 describes a comparison of results from the third and fourth reprocessings along the U.S. Northeast coast.

  4. Underwater Acoustic Wireless Sensor Networks: Advances and Future Trends in Physical, MAC and Routing Layers

    PubMed Central

    Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose

    2014-01-01

    This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state-of-the-art on the physical, medium access control and routing layers. It summarizes their security threads and surveys the currently proposed studies. Current envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols. PMID:24399155

  5. Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation

    NASA Technical Reports Server (NTRS)

    Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred

    2008-01-01

    Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
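
    As a minimal illustration of the genetic-algorithm machinery referred to above (not X-TOOLSS itself), the sketch below evolves a vector of design parameters against a caller-supplied objective using truncation selection, averaging crossover, and Gaussian mutation; all parameter values are illustrative assumptions.

        import random

        # Hedged sketch of a basic genetic algorithm for continuous design variables.
        def genetic_minimize(objective, n_params, pop_size=40, generations=100,
                             mutation_scale=0.1, seed=0):
            rng = random.Random(seed)
            pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=objective)
                parents = pop[:pop_size // 2]                       # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = rng.sample(parents, 2)
                    child = [(x + y) / 2 for x, y in zip(a, b)]     # averaging crossover
                    child = [x + rng.gauss(0, mutation_scale) for x in child]  # mutation
                    children.append(child)
                pop = parents + children
            return min(pop, key=objective)

        # Example objective: sphere function, optimum at the origin.
        best = genetic_minimize(lambda x: sum(v * v for v in x), n_params=3)
        print([round(v, 3) for v in best])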

  6. Cloud cover determination in polar regions from satellite imagery

    NASA Technical Reports Server (NTRS)

    Barry, R. G.; Key, J. R.; Maslanik, J. A.

    1988-01-01

    The principal objectives of this project are: to develop suitable validation data sets to evaluate the effectiveness of the ISCCP operational algorithm for cloud retrieval in polar regions and to validate model simulations of polar cloud cover; to identify limitations of current procedures for varying atmospheric surface conditions, and to explore potential means to remedy them using textural classifiers: and to compare synoptic cloud data from a control run experiment of the Goddard Institute for Space Studies (GISS) climate model 2 with typical observed synoptic cloud patterns. Current investigations underway are listed and the progress made to date is summarized.

  7. Fast algorithm for radio propagation modeling in realistic 3-D urban environment

    NASA Astrophysics Data System (ADS)

    Rauch, A.; Lianghai, J.; Klein, A.; Schotten, H. D.

    2015-11-01

    Next generation wireless communication systems will consist of a large number of mobile or static terminals and should be able to fulfill multiple requirements depending on the current situation. Low latency and high packet success transmission rates should be mentioned in this context and can be summarized as ultra-reliable communications (URC). Especially for domains like mobile gaming, mobile video services but also for security relevant scenarios like traffic safety, traffic control systems and emergency management URC will be more and more required to guarantee a working communication between the terminals all the time.

  8. Research in progress and other activities of the Institute for Computer Applications in Science and Engineering

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics and computer science during the period April 1, 1993 through September 30, 1993. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  9. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
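
    The core idea, maximizing a likelihood function with respect to unknown parameters, can be sketched in a few lines of modern numerical code. The example below is an illustration of the technique on a toy model, not the LRLTRAN implementation: the slope of a noisy linear system is identified by minimizing the Gaussian negative log-likelihood.

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic noisy measurements of y = a * t + noise, with unknown parameter a.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 50)
        true_a, sigma = 2.5, 0.3
        y = true_a * t + rng.normal(0.0, sigma, t.size)

        def neg_log_likelihood(params):
            a = params[0]
            residuals = y - a * t
            # Gaussian noise model; constant terms dropped.
            return 0.5 * np.sum(residuals**2) / sigma**2

        result = minimize(neg_log_likelihood, x0=[1.0])
        print(round(result.x[0], 3))   # close to 2.5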

  10. Laboratory Diagnosis of Infective Endocarditis

    PubMed Central

    Liesman, Rachael M.; Pritt, Bobbi S.; Maleszewski, Joseph J.

    2017-01-01

    ABSTRACT Infective endocarditis is life-threatening; identification of the underlying etiology informs optimized individual patient management. Changing epidemiology, advances in blood culture techniques, and new diagnostics guide the application of laboratory testing for diagnosis of endocarditis. Blood cultures remain the standard test for microbial diagnosis, with directed serological testing (i.e., Q fever serology, Bartonella serology) in culture-negative cases. Histopathology and molecular diagnostics (e.g., 16S rRNA gene PCR/sequencing, Tropheryma whipplei PCR) may be applied to resected valves to aid in diagnosis. Herein, we summarize recent knowledge in this area and propose a microbiologic and pathological algorithm for endocarditis diagnosis. PMID:28659319

  11. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  12. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. Furthermore, this document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  13. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abarbanel, Henry; Gill, Philip

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  14. Analytical procedures for estimating structural response to acoustic fields generated by advanced launch systems, phase 2

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Lin, Y. K.; Zhu, Li-Ping; Fang, Jian-Jie; Cai, G. Q.

    1994-01-01

    This report supplements a previous report of the same title submitted in June, 1992. It summarizes additional analytical techniques which have been developed for predicting the response of linear and nonlinear structures to noise excitations generated by large propulsion power plants. The report is divided into nine chapters. The first two deal with incomplete knowledge of boundary conditions of engineering structures. The incomplete knowledge is characterized by a convex set, and its diagnosis is formulated as a multi-hypothesis discrete decision-making algorithm with attendant criteria of adaptive termination.

  15. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  16. Towards automated visual flexible endoscope navigation.

    PubMed

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomical steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
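
    One lumen-centralization heuristic that appears in this literature is to steer toward the darkest region of the image, since the lumen is usually the least illuminated area. The sketch below illustrates that idea on a synthetic frame; the threshold fraction and all names are assumptions for illustration, not taken from the paper.

        import numpy as np

        # Hedged sketch: take the centroid of the darkest pixels as a steering target.
        def lumen_target(gray_image, dark_fraction=0.05):
            threshold = np.quantile(gray_image.ravel(), dark_fraction)
            rows, cols = np.nonzero(gray_image <= threshold)
            # Steering offset from the image center, in pixels.
            center = (np.array(gray_image.shape) - 1) / 2.0
            return rows.mean() - center[0], cols.mean() - center[1]

        # Synthetic frame: bright everywhere except a dark patch (the "lumen").
        frame = np.full((120, 160), 200.0)
        frame[20:60, 100:150] = 10.0
        print(lumen_target(frame))   # negative row offset (up), positive col offset (right)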

  17. An overview of methods to mitigate artifacts in optical coherence tomography imaging of the skin.

    PubMed

    Adabi, Saba; Fotouhi, Audrey; Xu, Qiuyun; Daveluy, Steve; Mehregan, Darius; Podoleanu, Adrian; Nasiriavanaki, Mohammadreza

    2018-05-01

    Optical coherence tomography (OCT) of skin delivers three-dimensional images of tissue microstructures. Although OCT imaging offers a promising high-resolution modality, OCT images suffer from some artifacts that lead to misinterpretation of tissue structures. Therefore, an overview of methods to mitigate artifacts in OCT imaging of the skin is of paramount importance. Speckle, intensity decay, and blurring are three major artifacts in OCT images. Speckle is due to the low-coherence light source used in the configuration of OCT. Intensity decay is a deterioration of light with respect to depth, and blurring is the consequence of deficiencies of optical components. Two speckle reduction methods (one based on an artificial neural network and one based on spatial compounding), an attenuation compensation algorithm (based on the Beer-Lambert law), and a deblurring procedure (using deconvolution) are described. Moreover, an optical property extraction algorithm based on the extended Huygens-Fresnel (EHF) principle, used to obtain additional information from OCT images, is discussed. In this short overview, we summarize some of the image enhancement algorithms for OCT images which address the abovementioned artifacts. The results showed a significant improvement in the visibility of the clinically relevant features in the images. The quality improvement was evaluated using several numerical assessment measures. Clinical dermatologists benefit from using these image enhancement algorithms to improve OCT diagnosis and essentially function as a noninvasive optical biopsy. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
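
    The Beer-Lambert-based attenuation compensation mentioned above can be illustrated as a depth-dependent gain applied to each A-scan: if intensity decays roughly as exp(-2*mu*z) with depth z, multiplying by the inverse exponential restores deeper reflectors. The attenuation coefficient below is an assumed, illustrative value, not one from the paper.

        import numpy as np

        # Hedged sketch of Beer-Lambert-style attenuation compensation for one A-scan.
        def compensate_attenuation(a_scan, mu_per_pixel=0.002):
            depth = np.arange(a_scan.size)
            return a_scan * np.exp(2.0 * mu_per_pixel * depth)

        # Synthetic A-scan: two reflectors of equal true strength, the deeper one dimmed.
        a_scan = np.zeros(1024)
        a_scan[[200, 800]] = np.exp(-2 * 0.002 * np.array([200, 800]))
        print(np.round(compensate_attenuation(a_scan)[[200, 800]], 3))  # both ~1.0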

  18. Spectral analysis of GEOS-3 altimeter data and frequency domain collocation. [to estimate gravity anomalies

    NASA Technical Reports Server (NTRS)

    Eren, K.

    1980-01-01

    The mathematical background in spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
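
    The structural trick behind the fast Toeplitz inversion can be illustrated with a modern library routine: a Levinson-type solver exploits the Toeplitz structure so the system is solved in roughly O(n^2) operations without factoring the full covariance matrix. The covariance model below is synthetic and purely illustrative, not the GEOS 3 data.

        import numpy as np
        from scipy.linalg import solve_toeplitz, toeplitz

        # Symmetric Toeplitz covariance from a lag-decaying (AR(1)-like) model.
        first_col = 0.9 ** np.arange(200)
        rhs = np.random.default_rng(1).normal(size=200)

        x_fast = solve_toeplitz(first_col, rhs)             # Levinson-based solve
        x_full = np.linalg.solve(toeplitz(first_col), rhs)  # dense reference solve
        print(np.allclose(x_fast, x_full))                  # True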

  19. Cloud Computing for Mission Design and Operations

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios

    2012-01-01

    The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired in the cloud computing model: associative, elastic, semantical, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.

  20. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA.

    PubMed

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and realize the automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interface. First, from the perspective of cognitive psychology, according to the information processing process, the cognitive model of human-machine interaction interface was established. Then, the human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interface were summarized as the constraints in layout design. Again, the expression form of fitness function, pheromone, and heuristic information for the layout optimization of cabin was studied. The layout design model of human-machine interaction interface was established based on GA-ACA. At last, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.

  1. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    PubMed

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.
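
    The ROC/AUC summary used in the validation described above can be illustrated in a few lines: actives are labeled 1, decoys 0, and the area under the ROC curve of a virtual-screening score measures enrichment (0.5 is a random ranking, 1.0 is perfect). The scores below are made up for illustration.

        from sklearn.metrics import roc_auc_score

        # Hedged sketch of an AUC-based enrichment summary on toy data.
        labels = [1, 1, 1, 0, 0, 0, 0, 0]                     # 1 = active, 0 = decoy
        scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.2, 0.1]     # screening scores
        print(round(roc_auc_score(labels, scores), 3))        # 0.933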

  2. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    PubMed Central

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  3. Layout Design of Human-Machine Interaction Interface of Cabin Based on Cognitive Ergonomics and GA-ACA

    PubMed Central

    Deng, Li; Wang, Guohua; Yu, Suihuai

    2016-01-01

    In order to consider the psychological cognitive characteristics affecting operating comfort and realize the automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interface. First, from the perspective of cognitive psychology, according to the information processing process, the cognitive model of human-machine interaction interface was established. Then, the human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interface were summarized as the constraints in layout design. Again, the expression form of fitness function, pheromone, and heuristic information for the layout optimization of cabin was studied. The layout design model of human-machine interaction interface was established based on GA-ACA. At last, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745

  4. Eigensystem realization algorithm modal identification experiences with mini-mast

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Schenk, Axel; Noll, Christopher

    1992-01-01

    This paper summarizes work performed under a collaborative research effort between the National Aeronautics and Space Administration (NASA) and the German Aerospace Research Establishment (DLR, Deutsche Forschungsanstalt fur Luft- und Raumfahrt). The objective is to develop and demonstrate system identification technology for future large space structures. Recent experiences using the Eigensystem Realization Algorithm (ERA) for modal identification of Mini-Mast are reported. Mini-Mast is a 20 m long deployable space truss used for structural dynamics and active vibration-control research at the Langley Research Center. A comprehensive analysis of 306 frequency response functions (3 excitation forces and 102 displacement responses) was performed. Emphasis is placed on two topics of current research: (1) gaining an improved understanding of ERA performance characteristics (theory vs. practice); and (2) developing reliable techniques to improve identification results for complex experimental data. Because of nonlinearities and numerous local modes, modal identification of Mini-Mast proved to be surprisingly difficult. Methods were available within ERA, however, for obtaining detailed, high-confidence results.
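
    For readers unfamiliar with ERA, the sketch below illustrates its core steps for a single input/output channel: stack impulse-response (Markov) parameters into shifted Hankel matrices, truncate an SVD at the chosen model order, form a realized state matrix, and read modal frequencies and damping from its eigenvalues. This is a textbook-style illustration on synthetic data, not the Mini-Mast analysis.

        import numpy as np

        def era_poles(markov, order, rows=20, dt=0.01):
            cols = len(markov) - rows - 1
            H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
            H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
            U, s, Vt = np.linalg.svd(H0, full_matrices=False)
            U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
            S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
            A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt   # realized state matrix
            z = np.linalg.eigvals(A)                        # discrete-time poles
            lam = np.log(z) / dt                            # continuous-time poles
            freq_hz = np.abs(lam) / (2 * np.pi)
            damping = -lam.real / np.abs(lam)
            return freq_hz, damping

        # Synthetic impulse response of a single 5 Hz mode with 2% damping.
        dt, wn, zeta = 0.01, 2 * np.pi * 5.0, 0.02
        t = np.arange(400) * dt
        wd = wn * np.sqrt(1 - zeta**2)
        markov = np.exp(-zeta * wn * t) * np.sin(wd * t)
        freq, damp = era_poles(markov, order=2, dt=dt)
        print(np.round(freq, 2), np.round(damp, 3))   # ~[5.0, 5.0] and ~[0.02, 0.02]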

  5. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  6. Towards topological quantum computer

    NASA Astrophysics Data System (ADS)

    Melnikov, D.; Mironov, A.; Mironov, S.; Morozov, A.; Morozov, An.

    2018-01-01

    Quantum R-matrices, the entangling deformations of non-entangling (classical) permutations, provide a distinguished basis in the space of unitary evolutions and, consequently, a natural choice for a minimal set of basic operations (universal gates) for quantum computation. Yet they play a special role in group theory, integrable systems and modern theory of non-perturbative calculations in quantum field and string theory. Despite recent developments in those fields the idea of topological quantum computing and use of R-matrices, in particular, practically reduce to reinterpretation of standard sets of quantum gates, and subsequently algorithms, in terms of available topological ones. In this paper we summarize a modern view on quantum R-matrix calculus and propose to look at the R-matrices acting in the space of irreducible representations, which are unitary for the real-valued couplings in Chern-Simons theory, as the fundamental set of universal gates for topological quantum computer. Such an approach calls for a more thorough investigation of the relation between topological invariants of knots and quantum algorithms.

  7. An overview of GOES-8 diurnal fire and smoke results for SCAR-B and 1995 fire season in South America

    NASA Astrophysics Data System (ADS)

    Prins, Elaine M.; Feltz, Joleen M.; Menzel, W. Paul; Ward, Darold E.

    1998-12-01

    The launch of the eighth Geostationary Operational Environmental Satellite (GOES-8) in 1994 introduced an improved capability for diurnal fire and smoke monitoring throughout the western hemisphere. In South America the GOES-8 automated biomass burning algorithm (ABBA) and the automated smoke/aerosol detection algorithm (ASADA) are being used to monitor biomass burning. This paper outlines GOES-8 ABBA and ASADA development activities and summarizes results for the Smoke, Clouds, and Radiation in Brazil (SCAR-B) experiment and the 1995 fire season. GOES-8 ABBA results document the diurnal, spatial, and seasonal variability in fire activity throughout South America. A validation exercise compares GOES-8 ABBA results with ground truth measurements for two SCAR-B prescribed burns. GOES-8 ASADA aerosol coverage and derived albedo results provide an overview of the extent of daily and seasonal smoke coverage and relative intensities. Day-to-day variability in smoke extent closely tracks fluctuations in fire activity.

  8. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grossman, Max; Pritchard Jr., Howard Porter; Budimlic, Zoran

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes an investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
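
    The computational kernel described above can be illustrated compactly: a breadth-first search over an undirected graph that returns a parent (predecessor) map representing the spanning tree. The sketch below is a shared-memory illustration only; the report's contribution concerns distributing this same kernel with OpenSHMEM.

        from collections import deque

        # Hedged sketch of the BFS spanning-tree kernel at the core of Graph500.
        def bfs_tree(adjacency, root):
            parent = {root: root}
            queue = deque([root])
            while queue:
                u = queue.popleft()
                for v in adjacency[u]:
                    if v not in parent:
                        parent[v] = u
                        queue.append(v)
            return parent

        graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
        print(bfs_tree(graph, 0))   # e.g. {0: 0, 1: 0, 2: 0, 3: 1}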

  9. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Volume 2, Part 2: Appendixes B, C, D and E

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    The derivation of the equations is presented, the rate control algorithm described, and simulation methodologies summarized. A set of dynamics equations that can be used recursively to calculate forces and torques acting at the joints of an n link manipulator, given the manipulator joint rates, is derived. The equations are valid for any n link manipulator system with any kind of joints connected in any sequence. The equations of motion for the class of manipulators consisting of n rigid links interconnected by rotary joints are derived. A technique is outlined for reducing the system of equations to eliminate constraint torques. The linearized dynamics equations for an n link manipulator system are derived. The general n link linearized equations are then applied to a two link configuration. The coordinated rate control algorithm used to compute individual joint rates when given end effector rates is described. A short discussion of simulation methodologies is presented.

  10. A microprocessor application to a strapdown laser gyro navigator

    NASA Technical Reports Server (NTRS)

    Giardina, C.; Luxford, E.

    1980-01-01

    The replacement of analog circuit control loops for laser gyros (path length control, cross axis temperature compensation loops, dither servo and current regulators) with digital filters residing in microcomputers is addressed. In addition to the control loops, a discussion is given on applying the microprocessor hardware to compensation for coning and sculling motion, where simple algorithms are processed at high speeds to compensate component output data (digital pulses) for linear and angular vibration motions. Highlights are given on the methodology and system approaches used in replacing differential equations describing the analog system in terms of the mechanized difference equations of the microprocessor. Standard one-for-one frequency-domain techniques are employed in replacing analog transfer functions by their transform counterparts. Direct digital design techniques are also discussed along with their associated benefits. Time and memory loading analyses are also summarized, as well as signal and microprocessor architecture. Trade-offs in algorithm, mechanization, time/memory loading, accuracy, and microprocessor architecture are also given.
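
    The one-for-one frequency-domain replacement described above is essentially the bilinear transform: an analog transfer function is mapped to a discrete-time filter that runs as a difference equation on the microprocessor. The sketch below applies it to an assumed first-order low-pass; the corner frequency and sample rate are illustrative values, not taken from the report.

        from scipy.signal import bilinear

        # Hedged sketch: map H(s) = wc / (s + wc) to a digital filter via the
        # bilinear transform so it can run as a difference equation in a loop.
        wc = 2.0 * 3.141592653589793 * 50.0   # assumed 50 Hz corner frequency
        fs = 2000.0                            # assumed loop sample rate in Hz
        b_digital, a_digital = bilinear([wc], [1.0, wc], fs=fs)
        print(b_digital, a_digital)
        # With a_digital normalized so a0 = 1, the loop executes
        # y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1].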

  11. Using Ground-Based Measurements and Retrievals to Validate Satellite Data

    NASA Technical Reports Server (NTRS)

    Dong, Xiquan

    2002-01-01

    The proposed research is to use the DOE ARM ground-based measurements and retrievals as the ground-truth references for validating satellite cloud results and retrieval algorithms. This validation effort includes four different comparisons: (1) cloud properties from different satellites, and therefore different sensors (TRMM VIRS and TERRA MODIS); (2) cloud properties at different climatic regions, such as the DOE ARM SGP, NSA, and TWP sites; (3) different cloud types, i.e., low- and high-level cloud properties; and (4) day and night retrieval algorithms. Validation of satellite-retrieved cloud properties is very difficult and a long-term effort because of significant spatial and temporal differences between the surface and satellite observing platforms. The ground-based measurements and retrievals, only when carefully analyzed and validated, can provide a baseline for estimating errors in the satellite products. Even though the validation effort is difficult, significant progress has been made during the proposed study period, and the major accomplishments are summarized in the following.

  12. Advances in time-of-flight PET

    PubMed Central

    Surti, Suleman; Karp, Joel S.

    2016-01-01

    This paper provides a review and an update on time-of-flight PET imaging with a focus on PET instrumentation, ranging from hardware design to software algorithms. We first present a short introduction to PET, followed by a description of TOF PET imaging and its history from the early days. Next, we introduce the current state of the art in TOF PET technology and briefly summarize the benefits of TOF PET imaging. This is followed by a discussion of the various technological advancements in hardware (scintillators, photo-sensors, electronics) and software (image reconstruction) that have led to the current widespread use of TOF PET technology, and future developments that have the potential for further improvements in TOF imaging performance. We conclude with a discussion of some new research areas that have opened up in PET imaging as a result of having good system timing resolution, ranging from new algorithms for attenuation correction, through efficient system calibration techniques, to potential for new PET system designs. PMID:26778577

  13. Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments: An LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EISLER, G. RICHARD

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Robust Planning for Autonomous Navigation of Mobile Robots in Unstructured, Dynamic Environments (AutoNav)". The project goal was to develop an algorithmic-driven, multi-spectral approach to point-to-point navigation characterized by: segmented on-board trajectory planning, self-contained operation without human support for mission duration, and the development of appropriate sensors and algorithms to navigate unattended. The project was partially successful in achieving gains in sensing, path planning, navigation, and guidance. One of three experimental platforms, the Minimalist Autonomous Testbed, used a repetitive sense-and-re-plan combination to demonstrate the majority of elements necessary for autonomous navigation. However, a critical goal for overall success in arbitrary terrain, that of developing a sensor that is able to distinguish true obstacles that need to be avoided as a function of vehicle scale, still needs substantial research to bring to fruition.

  14. Pre-Launch Evaluation of the NPP VIIRS Land and Cryosphere EDRs to Meet NASA's Science Requirements

    NASA Technical Reports Server (NTRS)

    Roman, Miguel O.; Justice, Chris; Csiszar, Ivan; Key, Jeffrey R.; Devadiga, Sadashiva; Davidson, Carol; Wolfe, Robert; Privette, Jeff

    2011-01-01

    This paper summarizes the NASA Visible Infrared Imaging Radiometer Suite (VIIRS) Land Science team's findings to date with respect to the utility of the VIIRS Land and Cryosphere EDRs to meet NASA's science requirements. Based on previous assessments and results from a recent 51-day global test performed by the Land Product Evaluation and Analysis Tool Element (Land PEATE), the NASA VIIRS Land Science team has determined that, if all the Land and Cryosphere EDRs are to serve the needs of the science community, a number of changes to several products and the Interface Data Processing Segment (IDPS) algorithm processing chain will be needed. In addition, other products will also need to be added to the VIIRS Land product suite to provide continuity for all of the MODIS land data record. As the NASA research program explores new global change research areas, the VIIRS instrument should also provide the polar-orbiting imager data from which new algorithms could be developed, produced, and validated.

  15. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  16. A hierarchical network-based algorithm for multi-scale watershed delineation

    NASA Astrophysics Data System (ADS)

    Castronova, Anthony M.; Goodall, Jonathan L.

    2014-11-01

    Watershed delineation is a process for defining a land area that contributes surface water flow to a single outlet point. It is commonly used in water resources analysis to define the domain in which hydrologic process calculations are applied. There has been a growing effort over the past decade to improve surface elevation measurements in the U.S., which has had a significant impact on the accuracy of hydrologic calculations. Traditional watershed processing on these elevation rasters, however, becomes more burdensome as data resolution increases. As a result, processing of these datasets can be troublesome on standard desktop computers. This challenge has resulted in numerous works that aim to provide high performance computing solutions to large data, high resolution data, or both. This work proposes an efficient watershed delineation algorithm for use in desktop computing environments that leverages existing data, U.S. Geological Survey (USGS) National Hydrography Dataset Plus (NHD+), and open source software tools to construct watershed boundaries. This approach makes use of U.S. national-level hydrography data that has been precomputed using raster processing algorithms coupled with quality control routines. Our approach uses carefully arranged data and mathematical graph theory to traverse river networks and identify catchment boundaries. We demonstrate this new watershed delineation technique, compare its accuracy with traditional algorithms that derive watersheds solely from digital elevation models, and then extend our approach to address subwatershed delineation. Our findings suggest that the open-source hierarchical network-based delineation procedure presented in this work is a promising approach to watershed delineation that can be used to summarize publicly available datasets for hydrologic model input pre-processing. Through our analysis, we explore the benefits of reusing the NHD+ datasets for watershed delineation, and find that our technique offers greater flexibility and extensibility than traditional raster algorithms.
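
    The network-traversal idea at the heart of this approach can be illustrated with a toy flow network. This is a hedged sketch, not the authors' code: given NHD+-style "from -> to" flow edges between catchments, the watershed draining to an outlet is the set of catchments reached by walking the network upstream from that outlet.

        from collections import defaultdict, deque

        def upstream_catchments(flow_edges, outlet):
            upstream = defaultdict(list)              # to-node -> list of from-nodes
            for frm, to in flow_edges:
                upstream[to].append(frm)
            visited, queue = {outlet}, deque([outlet])
            while queue:
                node = queue.popleft()
                for src in upstream[node]:
                    if src not in visited:
                        visited.add(src)
                        queue.append(src)
            return visited

        # Toy flow network: 1 and 2 drain into 3; 3 and 4 drain into 5 (the outlet).
        edges = [(1, 3), (2, 3), (3, 5), (4, 5)]
        print(sorted(upstream_catchments(edges, 5)))   # [1, 2, 3, 4, 5]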

  17. Querying Co-regulated Genes on Diverse Gene Expression Datasets Via Biclustering.

    PubMed

    Deveci, Mehmet; Küçüktunç, Onur; Eren, Kemal; Bozdağ, Doruk; Kaya, Kamer; Çatalyürek, Ümit V

    2016-01-01

    Rapid development and increasing popularity of gene expression microarrays have resulted in a number of studies on the discovery of co-regulated genes. One important way of discovering such co-regulations is the query-based search since gene co-expressions may indicate a shared role in a biological process. Although there exist promising query-driven search methods adapting clustering, they fail to capture many genes that function in the same biological pathway because microarray datasets are fraught with spurious samples or samples of diverse origin, or the pathways might be regulated under only a subset of samples. On the other hand, a class of clustering algorithms known as biclustering algorithms which simultaneously cluster both the items and their features are useful while analyzing gene expression data, or any data in which items are related in only a subset of their samples. This means that genes need not be related in all samples to be clustered together. Because many genes only interact under specific circumstances, biclustering may recover the relationships that traditional clustering algorithms can easily miss. In this chapter, we briefly summarize the literature using biclustering for querying co-regulated genes. Then we present a novel biclustering approach and evaluate its performance by a thorough experimental analysis.

  18. Statistically significant relational data mining :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  19. Final Progress Report: Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawool-Sullivan, Mohini; Bounds, John Alan; Brumby, Steven P.

    2012-04-30

    This is the final report of the project titled 'Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes,' PMIS project number LA10-HUMANID-PD03. It summarizes work performed over the FY10 time period. The goal of the work was to demonstrate principles of emulating a human analysis approach towards the data collected using radiation isotope identification devices (RIIDs). Human analysts begin analyzing a spectrum based on features in the spectrum - lines and shapes that are present in a given spectrum. The proposed work was to carry out a feasibility study that will pick out all gamma ray peaks and other features such as Compton edges, bremsstrahlung, presence/absence of shielding, and presence of neutrons and escape peaks. Ultimately, success of this feasibility study will allow us to collectively explain identified features and form a realistic scenario that produced a given spectrum in the future. We wanted to develop and demonstrate machine learning algorithms that will qualitatively enhance the automated identification capabilities of portable radiological sensors that are currently being used in the field.

  20. Clustervision: Visual Supervision of Unsupervised Clustering.

    PubMed

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exist a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large amount of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
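
    The core mechanism described above, running many clusterings and ranking them by quality metrics, can be reduced to a small sketch using a single metric. This is an illustration of the idea only, not the Clustervision system: synthetic data are clustered with several values of k, and the results are ranked by silhouette score.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        # Three well-separated synthetic blobs; k = 3 should rank highest.
        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in (0.0, 3.0, 6.0)])

        results = []
        for k in range(2, 7):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
            results.append((silhouette_score(data, labels), k))

        for score, k in sorted(results, reverse=True):
            print(f"k={k}: silhouette={score:.3f}")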

  1. WFC3/UVIS Dark Calibration: Monitoring Results and Improvements to Dark Reference Files

    NASA Astrophysics Data System (ADS)

    Bourque, M.; Baggett, S.

    2016-04-01

    The Wide Field Camera 3 (WFC3) UVIS detector possesses an intrinsic signal during exposures, even in the absence of light, known as dark current. A daily monitor program is employed every HST cycle to characterize and measure this current as well as to create calibration files which serve to subtract the dark current from science data. We summarize the results of the daily monitor program for all on-orbit data. We also introduce a new algorithm for generating the dark reference files that provides several improvements to their overall quality. Key features to the new algorithm include correcting the dark frames for Charge Transfer Efficiency (CTE) losses, using an anneal-cycle average value to measure the dark current, and generating reference files on a daily basis. This new algorithm is part of the release of the CALWF3 v3.3 calibration pipeline on February 23, 2016 (also known as "UVIS 2.0"). Improved dark reference files have been regenerated and re-delivered to the Calibration Reference Data System (CRDS) for all on-orbit data. Observers with science data taken prior to the release of CALWF3 v3.3 may request their data through the Mikulski Archive for Space Telescopes (MAST) to obtain the improved products.

  2. Aerosol retrieval experiments in the ESA Aerosol_cci project

    NASA Astrophysics Data System (ADS)

    Holzer-Popp, T.; de Leeuw, G.; Griesfeller, J.; Martynenko, D.; Klüser, L.; Bevan, S.; Davies, W.; Ducos, F.; Deuzé, J. L.; Graigner, R. G.; Heckel, A.; von Hoyningen-Hüne, W.; Kolmonen, P.; Litvinov, P.; North, P.; Poulsen, C. A.; Ramon, D.; Siddans, R.; Sogacheva, L.; Tanre, D.; Thomas, G. E.; Vountas, M.; Descloitres, J.; Griesfeller, J.; Kinne, S.; Schulz, M.; Pinnock, S.

    2013-08-01

    Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010-2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing pre-cursor algorithms three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol-type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that all these observations are mostly consistent for all five analyses (global land, global coastal, three regional), which can be understood well, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low and high absorption fine mode, sea salt and dust).

  3. Echocardiography in Infective Endocarditis: State of the Art.

    PubMed

    Afonso, Luis; Kottam, Anupama; Reddy, Vivek; Penumetcha, Anirudh

    2017-10-25

    In this review, we examine the central role of echocardiography in the diagnosis, prognosis, and management of infective endocarditis (IE). 2D transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE) have complementary roles and are unequivocally the mainstay of diagnostic imaging in IE. The advent of 3D and multiplanar imaging has greatly enhanced the ability of the imager to evaluate cardiac structure and function. Technologic advances in 3D imaging allow for the reconstruction of realistic anatomic images that in turn have positively impacted IE-related surgical planning and intervention. CT and metabolic imaging appear to be emerging as promising ancillary diagnostic tools that could be deployed in select scenarios to circumvent some of the limitations of echocardiography. Our review summarizes the indispensable and central role of various echocardiographic modalities in the management of infective endocarditis. The complementary roles of 2D TTE and TEE are discussed, and areas where 3D TEE offers incremental value are highlighted. An algorithm summarizing a contemporary approach to the workup of endocarditis is provided and major societal guidelines for timing of surgery are reviewed.

  4. Applications of fractional lower order S transform time frequency filtering algorithm to machine fault diagnosis

    PubMed Central

    Wang, Haibin; Zha, Daifeng; Li, Peng; Xie, Huicheng; Mao, Lili

    2017-01-01

    Stockwell transform (ST) time-frequency representation (ST-TFR) is a time-frequency analysis method that combines the short-time Fourier transform with the wavelet transform, and the ST time-frequency filtering (ST-TFF) method, which takes advantage of time-frequency localized spectra, can separate signals from Gaussian noise. The ST-TFR and ST-TFF methods are used to analyze fault signals, and they are reasonable and effective in general Gaussian noise cases. However, this paper shows that the mechanical bearing fault signal follows an alpha (α) stable distribution process (1 < α < 2), and in some special cases the noise is also α stable distributed. The performance of the ST-TFR method degrades in an α stable distribution noise environment, and consequently the ST-TFF method fails. Hence, a new fractional lower order ST time-frequency representation (FLOST-TFR) method employing fractional lower order moments, together with an inverse FLOST (IFLOST), is proposed in this paper. A new FLOST time-frequency filtering (FLOST-TFF) algorithm based on the FLOST-TFR method and IFLOST is also proposed, and a simplified version is presented. The discrete implementation of the FLOST-TFF algorithm is deduced, and the relevant steps are summarized. Simulation results demonstrate that the FLOST-TFR algorithm is clearly better than the existing ST-TFR algorithm under α stable distribution noise, still works well in a Gaussian noise environment, and is robust. The FLOST-TFF method can effectively filter out α stable distribution noise and restore the original signal. The performance of the FLOST-TFF algorithm is better than that of the ST-TFF method, yielding smaller mixed MSEs as α and the generalized signal-to-noise ratio (GSNR) change. Finally, the FLOST-TFR and FLOST-TFF methods are applied to analyze the outer race fault signal and extract its fault features under α stable distribution noise, where they show excellent performance.
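
    The following is an illustrative sketch of why fractional lower order moments are used here, not the FLOST-TFF algorithm itself: for α-stable noise with α < 2 the variance is infinite, so second-order statistics are unreliable, while fractional lower-order moments E[|X|^p] with p < α remain finite. The sample sizes and parameter values are arbitrary.

```python
# Second-order moments misbehave for alpha-stable noise (alpha < 2), while
# fractional lower-order moments E[|X|^p] with p < alpha stay well behaved.
# Illustration only; this is not the FLOST-TFF filtering algorithm.
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5                      # characteristic exponent (1 < alpha < 2)
x = levy_stable.rvs(alpha, beta=0.0, size=100_000, random_state=0)

p = 0.5 * alpha                  # fractional order p < alpha
print("sample variance (does not converge as samples grow):", np.var(x))
print("fractional lower-order moment E[|x|^p]:", np.mean(np.abs(x) ** p))
```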

  5. Progress towards NASA MODIS and Suomi NPP Cloud Property Data Record Continuity

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Meyer, K.; Holz, R.; Ackerman, S. A.; Heidinger, A.; Wind, G.; Platnick, S. E.; Wang, C.; Marchant, B.; Frey, R.

    2017-12-01

    The Suomi NPP VIIRS imager provides an opportunity to extend the 17+ year EOS MODIS climate data record into the next generation operational era. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used for high cloud detection and cloud-top property retrievals. In addition, there is a significant mismatch in the spectral location of the 2.2 μm shortwave-infrared channels used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between EOS MODIS and S-NPP/JPSS VIIRS, a merged MODIS-VIIRS cloud record to serve the science community in the coming decades requires different algorithm approaches than those used for MODIS alone. This new approach includes two parallel efforts: (1) imager-only algorithms using only spectral channels common to VIIRS and MODIS (i.e., eliminating use of the MODIS CO2 and NIR/IR water vapor channels); since the algorithms are run with similar spectral observations, they provide a basis for establishing a continuous cloud data record across the two imagers. (2) Merged imager and sounder measurements (i.e., MODIS-AIRS, VIIRS-CrIS) in lieu of the higher-spatial-resolution MODIS absorption channels absent on VIIRS. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud mask (MOD35), optical and microphysical properties product (MOD06), and the NOAA AWG Cloud Height Algorithm (ACHA). We discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems. In addition, we summarize efforts to reconcile apparent radiometric biases between analogous imager channels, a critical consideration for obtaining inter-sensor climate data record continuity.

  6. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  7. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the minimum amount of time. Given a list of numbers, try to find one or more solutions in which, if each number is compressed by use of the modulo function by some value, then a unique value is generated.
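
    A minimal sketch of the first subalgorithm type described above: search over right-shift amounts and bit masks until every key in the set maps to a unique value, which then supports constant-time membership tests. The key values, search ranges, and function names are illustrative, not taken from the original software.

```python
# Search over shift/mask combinations until every key maps to a unique value.
# Illustrative sketch of the shift-and-mask subalgorithm family; names and
# search ranges are assumptions, not from the original program.
def find_shift_mask(keys, max_shift=16, mask_bits=8):
    mask = (1 << mask_bits) - 1
    for shift in range(max_shift + 1):
        mapped = {(k >> shift) & mask for k in keys}
        if len(mapped) == len(keys):          # collision-free mapping found
            return shift, mask
    return None                               # no solution in this family

keys = [0x1A40, 0x2B80, 0x3CC0, 0x4D00]
solution = find_shift_mask(keys)
if solution is not None:
    shift, mask = solution
    table = {(k >> shift) & mask: k for k in keys}

    # Constant-time membership test: one shift, one mask, one lookup.
    def contains(x):
        return table.get((x >> shift) & mask) == x

    print(shift, mask, contains(0x2B80), contains(0x9999))
```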

  8. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition-beam model (AAA) and the deterministic photon transport-solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Dose optimization was accomplished with 6 MV using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing against RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%), and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the % low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy and with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may well be warranted.
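
    For readers unfamiliar with the normal-lung metrics above, a minimal sketch follows: Vx is the percentage of the (lung minus GTV) volume receiving at least x Gy. The dose values in the sketch are synthetic, not data from the study.

```python
# Vx metric: percent of the structure volume receiving at least x Gy.
# Synthetic dose values; equal voxel volumes are assumed for simplicity.
import numpy as np

def v_x(dose_gy, threshold_gy):
    """Percent of voxels with dose >= threshold."""
    return 100.0 * np.count_nonzero(dose_gy >= threshold_gy) / dose_gy.size

rng = np.random.default_rng(1)
lung_minus_gtv_dose = rng.gamma(shape=2.0, scale=4.0, size=50_000)  # synthetic Gy

print(f"V20 = {v_x(lung_minus_gtv_dose, 20.0):.1f}%")
print(f"V5  = {v_x(lung_minus_gtv_dose, 5.0):.1f}%")
```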

  9. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
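
    A minimal sketch in the spirit of the HIP idea: score a frame by the Shannon entropy of its patches' intensity histograms. The exact HIP definition in the paper may differ; this only illustrates the entropy-per-patch concept on a synthetic frame.

```python
# Score a frame by the mean Shannon entropy of its patch intensity histograms.
# Illustrative stand-in for an entropy-based patch heterogeneity measure;
# the paper's HIP index may be defined differently.
import numpy as np

def patch_entropy_score(frame, patch=16, bins=32):
    h, w = frame.shape
    entropies = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = frame[i:i + patch, j:j + patch]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128)).astype(float)  # synthetic frame
print("heterogeneity score:", round(patch_entropy_score(frame), 3))
```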

  10. Unsymmetric Lanczos model reduction and linear state function observer for flexible structures

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1991-01-01

    This report summarizes part of the research work accomplished during the second year of a two-year grant. The research, entitled 'Application of Lanczos Vectors to Control Design of Flexible Structures' concerns various ways to use Lanczos vectors and Krylov vectors to obtain reduced-order mathematical models for use in the dynamic response analyses and in control design studies. This report presents a one-sided, unsymmetric block Lanczos algorithm for model reduction of structural dynamics systems with unsymmetric damping matrix, and a control design procedure based on the theory of linear state function observers to design low-order controllers for flexible structures.

  11. Parallel auto-correlative statistics with VTK.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10] which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by the means of C++ code snippets and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the autocorrelative statistics engine.
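
    For orientation, a short sketch of the quantity an auto-correlative statistics engine computes: the autocorrelation of a series at a set of time lags. This is plain NumPy for illustration, not the VTK engine's API.

```python
# Autocorrelation of a series at lags 0..max_lag (plain NumPy illustration,
# not the VTK statistics engine interface).
import numpy as np

def autocorrelation(x, max_lag):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-lag] if lag else x, x[lag:]) / denom
            for lag in range(max_lag + 1)]

t = np.arange(500)
series = np.sin(2 * np.pi * t / 50) + 0.2 * np.random.default_rng(0).standard_normal(500)
print([round(r, 3) for r in autocorrelation(series, 5)])
```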

  12. Ab initio quantum chemistry: methodology and applications.

    PubMed

    Friesner, Richard A

    2005-05-10

    This Perspective provides an overview of state-of-the-art ab initio quantum chemical methodology and applications. The methods that are discussed include coupled cluster theory, localized second-order Moller-Plesset perturbation theory, multireference perturbation approaches, and density functional theory. The accuracy of each approach for key chemical properties is summarized, and the computational performance is analyzed, emphasizing significant advances in algorithms and implementation over the past decade. Incorporation of a condensed-phase environment by means of mixed quantum mechanical/molecular mechanics or self-consistent reaction field techniques is presented. A wide range of illustrative applications, focusing on materials science and biology, is discussed briefly.

  13. Generic construction of efficient matrix product operators

    NASA Astrophysics Data System (ADS)

    Hubig, C.; McCulloch, I. P.; Schollwöck, U.

    2017-01-01

    Matrix product operators (MPOs) are at the heart of the second-generation density matrix renormalization group (DMRG) algorithm formulated in matrix product state language. We first summarize the widely known facts on MPO arithmetic and representations of single-site operators. Second, we introduce three compression methods (rescaled SVD, deparallelization, and delinearization) for MPOs and show that it is possible to construct efficient representations of arbitrary operators using MPO arithmetic and compression. As examples, we construct powers of a short-ranged spin-chain Hamiltonian, a complicated Hamiltonian of a two-dimensional system and, as proof of principle, the long-range four-body Hamiltonian from quantum chemistry.
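
    A minimal sketch of the generic SVD-based compression step for a matricized bond: perform an SVD and truncate singular values below a relative cutoff to reduce the bond dimension. The paper's rescaled SVD, deparallelization, and delinearization add refinements not reproduced in this illustration.

```python
# Truncated-SVD compression of a matricized bond (generic illustration of the
# idea; the paper's rescaled SVD and deparallelization are more elaborate).
import numpy as np

def truncate_bond(M, cutoff=1e-10):
    """Compress a (left, right) matricization via a relative singular-value cutoff."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s > cutoff * s[0]
    return U[:, keep], np.diag(s[keep]) @ Vt[keep, :]

rng = np.random.default_rng(0)
# A low-rank matrix stands in for a compressible bond matrix.
M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
A, B = truncate_bond(M)
print("bond dimension:", min(M.shape), "->", A.shape[1])
print("reconstruction error:", np.linalg.norm(A @ B - M))
```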

  14. Analysis about modeling MEC7000 excitation system of nuclear power unit

    NASA Astrophysics Data System (ADS)

    Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming

    2018-02-01

    Given the importance of accurately modeling excitation systems in stability calculations for inland nuclear power plants, and the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also addresses the key issues of computing the I/O interface parameters and converting the measured excitation system model into a BPA simulation model. The simulation modeling of the MEC7000 excitation system is thereby completed for the first time domestically. A no-load small-disturbance check demonstrates that the proposed model and algorithm are correct and efficient.

  15. Obstetric Emergencies: Shoulder Dystocia and Postpartum Hemorrhage.

    PubMed

    Dahlke, Joshua D; Bhalwal, Asha; Chauhan, Suneet P

    2017-06-01

    Shoulder dystocia and postpartum hemorrhage represent two of the most common emergencies faced in obstetric clinical practice, both requiring prompt recognition and management to avoid significant morbidity or mortality. Shoulder dystocia is an uncommon, unpredictable, and unpreventable obstetric emergency and can be managed with appropriate intervention. Postpartum hemorrhage occurs more commonly and carries significant risk of maternal morbidity. Institutional protocols and algorithms for the prevention and management of shoulder dystocia and postpartum hemorrhage have become mainstays for clinicians. The goal of this review is to summarize the diagnosis, incidence, risk factors, and management of shoulder dystocia and postpartum hemorrhage.

  16. Geoelectric monitoring at the Boulder magnetic observatory

    USGS Publications Warehouse

    Blum, Cletus; White, Tim; Sauter, Edward A.; Stewart, Duff; Bedrosian, Paul A.; Love, Jeffrey J.

    2017-01-01

    Despite its importance to a range of applied and fundamental studies, and obvious parallels to a robust network of magnetic-field observatories, long-term geoelectric field monitoring is rarely performed. The installation of a new geoelectric monitoring system at the Boulder magnetic observatory of the US Geological Survey is summarized. Data from the system are expected, among other things, to be used for testing and validating algorithms for mapping North American geoelectric fields. An example time series of recorded electric and magnetic fields during a modest magnetic storm is presented. Based on our experience, we additionally present operational aspects of a successful geoelectric field monitoring system.

  17. Cushing’s disease

    PubMed Central

    2012-01-01

    Cushing's disease, or pituitary ACTH dependent Cushing's syndrome, is a rare disease responsible for increased morbidity and mortality. Signs and symptoms of hypercortisolism are usually nonspecific: obesity, signs of protein wasting, increased blood pressure, variable levels of hirsutism. Diagnosis is frequently difficult, and requires a strict algorithm. First-line treatment is based on transsphenoidal surgery, which cures 80% of ACTH-secreting microadenomas. The rate of remission is lower in macroadenomas. Other therapeutic modalities, including anticortisolic drugs, radiation techniques, or bilateral adrenalectomy, will thus be necessary to avoid the long-term risks (metabolic syndrome, osteoporosis, cardiovascular disease) of hypercortisolism. This review summarizes potential pathophysiological mechanisms, diagnostic approaches, and therapies. PMID:22710101

  18. Hybrid Bearing Prognostic Test Rig

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Certo, Joseph M.; Handschuh, Robert F.; Dimofte, Florin

    2005-01-01

    The NASA Glenn Research Center has developed a new Hybrid Bearing Prognostic Test Rig to evaluate the performance of sensors and algorithms in predicting failures of rolling element bearings for aeronautics and space applications. The failure progression of both conventional and hybrid (ceramic rolling elements, metal races) bearings can be tested from fault initiation to total failure. The effects of different lubricants on bearing life can also be evaluated. Test conditions monitored and recorded during the test include load, oil temperature, vibration, and oil debris. New diagnostic research instrumentation will also be evaluated for hybrid bearing damage detection. This paper summarizes the capabilities of this new test rig.

  19. High speed, precision motion strategies for lightweight structures

    NASA Technical Reports Server (NTRS)

    Book, Wayne J.

    1989-01-01

    Research on space telerobotics is summarized. Adaptive control experiments on the Robotic Arm, Large and Flexible (RALF) were performed and are documented, along with a joint controller design for the Small Articulated Manipulator (SAM), which is mounted on the RALF. A control algorithm is described as a robust decentralized adaptive control based on a bounded uncertainty approach. Dynamic interactions between SAM and RALF are examined. Instability of the manipulator is studied from the perspective that the inertial forces generated could actually be used to more rapidly damp out the flexible manipulator's vibration. Currently being studied is the modeling of the constrained dynamics of flexible arms.

  20. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.

  1. Research on the FDTD method of scattering effects of obliquely incident electromagnetic waves in time-varying plasma sheath on collision and plasma frequencies

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Guo, Li-xin; Li, Jiang-ting

    2017-04-01

    This study analyzes the scattering characteristics of obliquely incident electromagnetic (EM) waves in a time-varying plasma sheath. The finite-difference time-domain algorithm is applied. According to the empirical formula for the collision frequency in a plasma sheath, the plasma frequency, temperature, and pressure are assumed to vary with time as exponential rises. Some scattering problems of EM waves are discussed by calculating the radar cross section (RCS) of the time-varying plasma. The variation of the RCS with time is summarized for the L and S wave bands.

  2. An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.

    1979-01-01

    The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.

  3. NASA Tech Briefs, March 2013

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Topics covered include: Remote Data Access with IDL; Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters; Vectorized Rebinning Algorithm for Fast Data Down-Sampling; Display Provides Pilots with Real-Time Sonic-Boom Information; Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery; Monitoring and Acquisition Real-time System (MARS); Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End; Micro-Textured Black Silicon Wick for Silicon Heat Pipe Array; Robust Multivariable Optimization and Performance Simulation for ASIC Design; Castable Amorphous Metal Mirrors and Mirror Assemblies; Sandwich Core Heat-Pipe Radiator for Power and Propulsion Systems; Apparatus for Pumping a Fluid; Cobra Fiber-Optic Positioner Upgrade; Improved Wide Operating Temperature Range of Li-Ion Cells; Non-Toxic, Non-Flammable, -80 C Phase Change Materials; Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization; Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models; Hand-Based Biometric Analysis; The Next Generation of Cold Immersion Dry Suit Design Evolution for Hypothermia Prevention; Integrated Lunar Information Architecture for Decision Support Version 3.0 (ILIADS 3.0); Relay Forward-Link File Management Services (MaROS Phase 2); Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent; XTCE GOVSAT Tool Suite 1.0; Determining Temperature Differential to Prevent Hardware Cross-Contamination in a Vacuum Chamber; SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws; Remote Data Exploration with the Interactive Data Language (IDL); Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals; Partitioned-Interval Quantum Optical Communications Receiver; and Practical UAV Optical Sensor Bench with Minimal Adjustability.

  4. Stereo-vision-based terrain mapping for off-road autonomous navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  5. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  6. Development, Comparisons and Evaluation of Aerosol Retrieval Algorithms

    NASA Astrophysics Data System (ADS)

    de Leeuw, G.; Holzer-Popp, T.; Aerosol-cci Team

    2011-12-01

    The Climate Change Initiative (cci) of the European Space Agency (ESA) has brought together a team of European Aerosol retrieval groups working on the development and improvement of aerosol retrieval algorithms. The goal of this cooperation is the development of methods to provide the best possible information on climate and climate change based on satellite observations. To achieve this, algorithms are characterized in detail as regards the retrieval approaches, the aerosol models used in each algorithm, cloud detection and surface treatment. A round-robin intercomparison of results from the various participating algorithms serves to identify the best modules or combinations of modules for each sensor. Annual global datasets including their uncertainties will then be produced and validated. The project builds on 9 existing algorithms to produce spectral aerosol optical depth (AOD and Ångström exponent) as well as other aerosol information; two instruments are included to provide the absorbing aerosol index (AAI) and stratospheric aerosol information. The algorithms included are: - 3 for ATSR (ORAC developed by RAL / Oxford University, ADV developed by FMI, and the SU algorithm developed by Swansea University) - 2 for MERIS (BAER by Bremen University and the ESA standard handled by HYGEOS) - 1 for POLDER over ocean (LOA) - 1 for synergetic retrieval (SYNAER by DLR) - 1 for OMI retrieval of the absorbing aerosol index with averaging kernel information (KNMI) - 1 for GOMOS stratospheric extinction profile retrieval (BIRA) The first seven algorithms aim at the retrieval of the AOD. However, each of the algorithms differs in its approach, even for algorithms working with the same instrument such as ATSR or MERIS. To analyse the strengths and weaknesses of each algorithm, several tests are made. The starting point for comparison and measurement of improvements is a retrieval run for 1 month, September 2008. The data from the same month are subsequently used for several runs with a prescribed set of aerosol models and an a priori data set derived from the median of AEROCOM model runs. The aerosol models and a priori data can be used in several ways, i.e. fully prescribed or with some freedom to choose a combination of aerosol models, based on the a priori or not. Another test gives insight into the effect of the cloud masks used, i.e. retrievals using the same cloud mask (the AATSR APOLLO cloud mask for collocated instruments) are compared with runs using the standard cloud masks. Tests to determine the influence of surface treatment are planned as well. The results of all these tests are evaluated by an independent team which compares the retrieval results with ground-based remote sensing (in particular AERONET) and in-situ data, and by a scoring method. Results are compared with other satellites such as MODIS and MISR. Blind tests using synthetic data are part of the algorithm characterization. The presentation will summarize results of the ongoing phase 1 inter-comparison and evaluation work within the Aerosol_cci project.

  7. Spectral methods to detect surface mines

    NASA Astrophysics Data System (ADS)

    Winter, Edwin M.; Schatten Silvious, Miranda

    2008-04-01

    Over the past five years, advances have been made in the spectral detection of surface mines under minefield detection programs at the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). The problem of detecting surface land mines ranges from the relatively simple, the detection of large anti-vehicle mines on bare soil, to the very difficult, the detection of anti-personnel mines in thick vegetation. While spatial and spectral approaches can be applied to the detection of surface mines, spatial-only detection requires many pixels-on-target such that the mine is actually imaged and shape-based features can be exploited. This method is unreliable in vegetated areas because only part of the mine may be exposed, while spectral detection is possible without the mine being resolved. At NVESD, hyperspectral and multi-spectral sensors throughout the reflection and thermal spectral regimes have been applied to the mine detection problem. Data has been collected on mines in forest and desert regions and algorithms have been developed both to detect the mines as anomalies and to detect the mines based on their spectral signature. In addition to the detection of individual mines, algorithms have been developed to exploit the similarities of mines in a minefield to improve their detection probability. In this paper, the types of spectral data collected over the past five years will be summarized along with the advances in algorithm development.

  8. Exploring the performance of large-N radio astronomical arrays

    NASA Astrophysics Data System (ADS)

    Lonsdale, Colin J.; Doeleman, Sheperd S.; Cappallo, Roger J.; Hewitt, Jacqueline N.; Whitney, Alan R.

    2000-07-01

    New radio telescope arrays are currently being contemplated which may be built using hundreds, or even thousands, of relatively small antennas. These include the One Hectare Telescope of the SETI Institute and UC Berkeley, the LOFAR telescope planned for the New Mexico desert surrounding the VLA, and possibly the ambitious international Square Kilometer Array (SKA) project. Recent and continuing advances in signal transmission and processing technology make it realistic to consider full cross-correlation of signals from such a large number of antennas, permitting the synthesis of an aperture with much greater fidelity than in the past. In principle, many advantages in instrumental performance are gained by this 'large-N' approach to the design, most of which require the development of new algorithms. Because new instruments of this type are expected to outstrip the performance of current instruments by wide margins, much of their scientific productivity is likely to come from the study of objects which are currently unknown. For this reason, instrumental flexibility is of special importance in design studies. A research effort has begun at Haystack Observatory to explore large-N performance benefits, and to determine what array design properties and data reduction algorithms are required to achieve them. The approach to these problems, involving a sophisticated data simulator, algorithm development, and exploration of array configuration parameter space, will be described, and progress to date will be summarized.

  9. Methods to assess an exercise intervention trial based on 3-level functional data.

    PubMed

    Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J

    2015-10-01

    Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study.

  10. Collaborative autonomous sensing with Bayesians in the loop

    NASA Astrophysics Data System (ADS)

    Ahmed, Nisar

    2016-10-01

    There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for 'dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing 'human sensors' for information gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly 'talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables 'plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.

  11. A General-applications Direct Global Matrix Algorithm for Rapid Seismo-acoustic Wavefield Computations

    NASA Technical Reports Server (NTRS)

    Schmidt, H.; Tango, G. J.; Werby, M. F.

    1985-01-01

    A new matrix method for rapid wave propagation modeling in generalized stratified media, which has recently been applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken by the NATO SACLANT and NORDA numerical modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The present general-applications SAFARI program uses a Direct Global Matrix approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology and ultrasonic scattering.
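
    A minimal sketch of the kind of numerical step the text describes, solving a sparsely banded linear system by banded Gaussian elimination. It uses SciPy's generic banded solver on a small tridiagonal system; it is not the SAFARI/DGM code itself.

```python
# Solve a small tridiagonal (banded) system with banded Gaussian elimination.
# Generic SciPy illustration, not the SAFARI/DGM implementation.
import numpy as np
from scipy.linalg import solve_banded

n = 8
# Banded storage for a tridiagonal matrix: rows hold the upper, main, lower diagonals.
ab = np.zeros((3, n))
ab[0, 1:] = -1.0      # upper diagonal
ab[1, :] = 2.0        # main diagonal
ab[2, :-1] = -1.0     # lower diagonal
b = np.ones(n)

x = solve_banded((1, 1), ab, b)
print(x)
```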

  12. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
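
    A minimal two-level Monte Carlo sketch for a scalar SDE (geometric Brownian motion), not the Coulomb-collision Langevin model: fine and coarse Euler paths share the same Brownian increments, so their difference has low variance and the coarse level absorbs most of the sampling cost. All parameter values are illustrative.

```python
# Two-level MLMC for E[X(T)] of geometric Brownian motion (illustration only).
# Coupled fine/coarse Euler paths reuse the same Brownian increments.
import numpy as np

def euler_pair(n_paths, n_fine, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine (n_fine steps) and coarse (n_fine/2 steps) Euler paths."""
    dt = T / n_fine
    dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(dt)
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for k in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    for k in range(0, n_fine, 2):          # coarse path: twice the step size
        xc = xc + mu * xc * 2 * dt + sigma * xc * (dW[:, k] + dW[:, k + 1])
    return xf, xc

# Level 0: 2-step estimate alone; level 1: coupled correction E[fine - coarse].
p0, _ = euler_pair(20_000, 2, np.random.default_rng(1))
pf, pc = euler_pair(20_000, 4, np.random.default_rng(2))
estimate = p0.mean() + (pf - pc).mean()
print("two-level MLMC estimate of E[X(T)]:", round(float(estimate), 4))
```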

  13. Control/structure interaction design methodology

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.; Layman, William E.

    1989-01-01

    The Control Structure Interaction Program is a technology development program for spacecraft that exhibit interactions between the control system and structural dynamics. The program objectives include development and verification of new design concepts (such as active structure) and new tools (such as a combined structure and control optimization algorithm) and their verification in ground and possibly flight test. The new CSI design methodology is centered around interdisciplinary engineers using new tools that closely integrate structures and controls. Verification is an important CSI theme and analysts will be closely integrated to the CSI Test Bed laboratory. Components, concepts, tools and algorithms will be developed and tested in the lab and in future Shuttle-based flight experiments. The design methodology is summarized in block diagrams depicting the evolution of a spacecraft design and descriptions of analytical capabilities used in the process. The multiyear JPL CSI implementation plan is described along with the essentials of several new tools. A distributed network of computation servers and workstations was designed that will provide a state-of-the-art development base for the CSI technologies.

  14. A primer to frequent itemset mining for bioinformatics

    PubMed Central

    Naulaerts, Stefan; Meysman, Pieter; Bittremieux, Wout; Vu, Trung Nghia; Vanden Berghe, Wim; Goethals, Bart

    2015-01-01

    Over the past two decades, pattern mining techniques have become an integral part of many bioinformatics solutions. Frequent itemset mining is a popular group of pattern mining techniques designed to identify elements that frequently co-occur. An archetypical example is the identification of products that often end up together in the same shopping basket in supermarket transactions. A number of algorithms have been developed to address variations of this computationally non-trivial problem. Frequent itemset mining techniques are able to efficiently capture the characteristics of (complex) data and succinctly summarize it. Owing to these and other interesting properties, these techniques have proven their value in biological data analysis. Nevertheless, information about the bioinformatics applications of these techniques remains scattered. In this primer, we introduce frequent itemset mining and their derived association rules for life scientists. We give an overview of various algorithms, and illustrate how they can be used in several real-life bioinformatics application domains. We end with a discussion of the future potential and open challenges for frequent itemset mining in the life sciences. PMID:24162173
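
    A tiny frequent-itemset example in the shopping-basket spirit described above: count single items and pairs that appear in at least a minimum number of transactions, building candidate pairs only from frequent items (the Apriori pruning idea). This is a plain-Python illustration, not one of the optimized algorithms the primer surveys.

```python
# Count frequent items and frequent pairs in a toy transaction set.
# Pure-Python illustration of the Apriori pruning idea.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3

item_counts = Counter(item for t in transactions for item in t)
frequent_items = {i for i, c in item_counts.items() if c >= min_support}

# Candidate pairs are built only from frequent single items (Apriori pruning).
pair_counts = Counter(
    pair for t in transactions
    for pair in combinations(sorted(frequent_items & t), 2)
)
frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_items, frequent_pairs)
```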

  15. Area-to-point regression kriging for pan-sharpening

    NASA Astrophysics Data System (ADS)

    Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.

    2016-04-01

    Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by a local scheme in AATPRK, in cases where the residuals from a global regression model are such that their spatial character varies locally.
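
    A rough sketch of the regression component only: regress the coarse multispectral band on the coarse-aggregated PAN band, predict at the PAN resolution, then add back the residuals. Here the residuals are simply block-upsampled as a stand-in for the area-to-point kriging step, and all data are synthetic.

```python
# Regression part of a pan-sharpening scheme; block-upsampled residuals stand
# in for the area-to-point kriging step. Synthetic data for illustration.
import numpy as np

ratio = 4
rng = np.random.default_rng(0)
pan = rng.random((64, 64))                            # fine-resolution PAN band
# Coarse MS band: correlated with area-averaged PAN plus noise.
pan_coarse = pan.reshape(16, ratio, 16, ratio).mean(axis=(1, 3))
ms_coarse = 0.8 * pan_coarse + 0.1 + 0.02 * rng.standard_normal((16, 16))

# Fit the linear regression MS ~ a * PAN + b at the coarse scale.
a, b = np.polyfit(pan_coarse.ravel(), ms_coarse.ravel(), 1)
residual_coarse = ms_coarse - (a * pan_coarse + b)

# Predict at the fine scale and add the upsampled residuals (kriging stand-in).
residual_fine = np.kron(residual_coarse, np.ones((ratio, ratio)))
ms_sharpened = a * pan + b + residual_fine
print(ms_sharpened.shape, round(a, 3), round(b, 3))
```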

  16. Immersed boundary methods for simulating fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Yang, Xiaolei

    2014-02-01

    Fluid-structure interaction (FSI) problems commonly encountered in engineering and biological applications involve geometrically complex flexible or rigid bodies undergoing large deformations. Immersed boundary (IB) methods have emerged as a powerful simulation tool for tackling such flows due to their inherent ability to handle arbitrarily complex bodies without the need for expensive and cumbersome dynamic re-meshing strategies. Depending on the approach such methods adopt to satisfy boundary conditions on solid surfaces they can be broadly classified as diffused and sharp interface methods. In this review, we present an overview of the fundamentals of both classes of methods with emphasis on solution algorithms for simulating FSI problems. We summarize and juxtapose different IB approaches for imposing boundary conditions, efficient iterative algorithms for solving the incompressible Navier-Stokes equations in the presence of dynamic immersed boundaries, and strong and loose coupling FSI strategies. We also present recent results from the application of such methods to study a wide range of problems, including vortex-induced vibrations, aquatic swimming, insect flying, human walking and renewable energy. Limitations of such methods and the need for future research to mitigate them are also discussed.

  17. Interactive Data Exploration with Smart Drill-Down

    PubMed Central

    Joglekar, Manas; Garcia-Molina, Hector; Parameswaran, Aditya

    2017-01-01

    We present smart drill-down, an operator for interactively exploring a relational table to discover and summarize “interesting” groups of tuples. Each group of tuples is described by a rule. For instance, the rule (a, b, ⋆, 1000) tells us that there are a thousand tuples with value a in the first column and b in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are NP-Hard, and describe an algorithm for finding the approximately optimal list of rules to display when the user uses a smart drill-down, and a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets on our experimental prototype to demonstrate the usefulness of smart drill-down and study the performance of our algorithms. PMID:28210096
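
    A minimal sketch of the rule abstraction described above: a rule fixes values in some columns and wildcards the rest, and its count is the number of matching tuples. The paper's rule-selection optimization and sampling scheme are not reproduced here.

```python
# Rules over a relational table: fixed column values plus wildcards (None).
# Sketch of the abstraction only, not the smart drill-down optimizer.
STAR = None  # wildcard

def matches(row, rule):
    return all(r is STAR or r == v for v, r in zip(row, rule))

def rule_count(table, rule):
    return sum(matches(row, rule) for row in table)

table = [
    ("a", "b", "x"),
    ("a", "b", "y"),
    ("a", "c", "x"),
    ("d", "b", "x"),
]
rule = ("a", "b", STAR)          # analogous to the (a, b, *) rule in the text
print(rule, "->", rule_count(table, rule), "tuples")
```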

  18. Characterization of mesoscale convective systems over the eastern Pacific during boreal summer

    NASA Astrophysics Data System (ADS)

    Berthet, Sarah; Rouquié, Bastien; Roca, Rémy

    2015-04-01

    The eastern Pacific Ocean is one of the most active tropical disturbance formation regions on Earth. This preliminary study is part of a broader project that aims to investigate how mesoscale convective systems (MCS) may be related to these synoptic disturbances, with emphasis on local initiation of tropical depressions. As a first step, the main characteristics of the MCS over the eastern Pacific are documented with the help of the recently developed TOOCAN tracking algorithm (Fiolleau and Roca, 2013) applied to the infrared satellite imagery data from GOES-W and -E for the period JJAS 2012-2014. More specifically, the spatial distribution of the MCS population, the statistics of their spatial extensions and durations, as well as their trajectories and propagation speeds are summarized. In addition, the environment of the MCS will be investigated using various Global Precipitation Mission datasets and the Megha-Tropiques/SAPHIR humidity microwave sounder derived products. Reference: Fiolleau T. and R. Roca, (2013), An Algorithm For The Detection And Tracking Of Tropical Mesoscale Convective Systems Using Infrared Images From Geostationary Satellite, Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2012.2227762.

  19. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. Volume 1; Overviews (subsystem 0)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Cess, Robert D.; Charlock, Thomas P.; Coakley, James A.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 1 provides both summarized and detailed overviews of the CERES Release 1 data analysis system. CERES will produce global shortwave and longwave radiative fluxes at the top of the atmosphere, at the surface, and within the atmosphere by combining a large variety of measurements and models. The CERES processing system includes radiance observations from CERES scanning radiometers, cloud properties derived from coincident satellite imaging radiometers, temperature and humidity fields from meteorological analysis models, and high-temporal-resolution geostationary satellite radiances to account for unobserved times. CERES will provide a continuation of the ERBE record and the lowest-error climatology of consistent cloud properties and radiation fields. CERES will also substantially improve our knowledge of the Earth's surface radiation budget.

  20. Adapting Semantic Natural Language Processing Technology to Address Information Overload in Influenza Epidemic Management

    PubMed Central

    Keselman, Alla; Rosemblat, Graciela; Kilicoglu, Halil; Fiszman, Marcelo; Jin, Honglan; Shin, Dongwook; Rindflesch, Thomas C.

    2013-01-01

    Explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot-test in which two information specialists use the adapted application for a realistic information seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design. PMID:24311971

  1. Learning to rank-based gene summary extraction.

    PubMed

    Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei

    2014-01-01

    In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. It is therefore valuable for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we regard automatic summarization as a ranking problem and apply learning to rank to solve it. This paper uses three features as a basis for sentence selection: gene ontology relevance, topic relevance and TextRank. We then obtain the feature weight vector using the learning-to-rank algorithm, predict the scores of candidate summary sentences, and select the top-ranked sentences to generate the summary. ROUGE (a toolkit for the automatic evaluation of summaries) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features improves summary performance, and the use of learning to rank facilitates the further expansion of features for measuring the significance of sentences.
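
    A minimal sketch of the sentence-scoring step described above, assuming the three feature values and the learned weight vector are already available; the learning-to-rank training itself and the ROUGE evaluation are not shown, and all names and numbers are illustrative.

```python
# Sentence scoring as a weighted sum of three features (GO relevance, topic
# relevance, TextRank); the weight vector is assumed to come from the
# learning-to-rank step, and all values here are illustrative.
import numpy as np

def summarize(sentences, features, weights, top_k=2):
    """features: (n_sentences, 3) array; weights: (3,) array of learned weights."""
    scores = np.asarray(features, float) @ np.asarray(weights, float)
    keep = sorted(np.argsort(scores)[::-1][:top_k])   # keep original sentence order
    return [sentences[i] for i in keep]

sentences = ["Gene X regulates apoptosis.", "It was cloned in 1998.", "X interacts with p53."]
features  = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.3], [0.7, 0.6, 0.9]]
weights   = [0.5, 0.2, 0.3]                           # hypothetical learned weights
print(summarize(sentences, features, weights))
```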

  2. Application of Shuffled Frog Leaping Algorithm and Genetic Algorithm for the Optimization of Urban Stormwater Drainage

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Kaushal, D. R.; Gosain, A. K.

    2017-12-01

    Urban hydrology will have an increasing role to play in the sustainability of human settlements. Expansion of urban areas brings significant changes in the physical characteristics of land use. Problems with the administration of urban flooding have their roots in the concentration of population within a relatively small area. As watersheds are urbanized, infiltration decreases and the pattern of surface runoff changes, generating high peak flows and large runoff volumes from urban areas. Conceptual rainfall-runoff models have become a foremost tool for predicting surface runoff and flood forecasting. Manual calibration is often time consuming and tedious because of the subjectivity involved, which makes an automatic approach preferable. The calibration of parameters usually involves numerous criteria for evaluating performance with respect to the observed data, and deriving an objective function associated with the calibration of model parameters is quite challenging. Studies dealing with optimization methods have steered the adoption of evolution-based optimization algorithms. In this paper, a systematic comparison of two evolutionary approaches to multi-objective optimization, namely the shuffled frog leaping algorithm (SFLA) and genetic algorithms (GA), is presented. SFLA is a population-based cooperative search metaphor inspired by natural memetics, while GA is based on the principle of survival of the fittest and natural evolution. SFLA and GA have been employed to optimize the major parameters, i.e. width, imperviousness, Manning's coefficient and depression storage, for the highly urbanized catchment of Delhi, India. The study summarizes the auto-tuning of the widely used storm water management model (SWMM) by internally coupling SWMM with SFLA and GA separately. The values of statistical parameters such as the Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) were found to lie within acceptable limits, indicating reasonably good model performance. Overall, this study proved promising for assessing risk in urban drainage systems and should prove useful for improving the integrity and reliability of the urban system and for guiding inundation preparedness. Keywords: Hydrologic model, SWMM, Urbanization, SFLA, GA.
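
    For readers unfamiliar with the calibration criteria mentioned above, the sketch below shows the standard NSE and PBIAS formulas that either optimizer would use to score a candidate SWMM parameter set against observed runoff; the coupling to SWMM, and SFLA and GA themselves, are not reproduced, and the hydrograph values are invented.

```python
# Standard NSE and PBIAS formulas (common definitions; the sign convention for
# PBIAS varies between references). The hydrograph values are invented.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

observed  = [1.2, 3.4, 5.1, 2.0, 0.8]
simulated = [1.0, 3.6, 4.8, 2.2, 0.9]
print(f"NSE = {nse(observed, simulated):.3f}, PBIAS = {pbias(observed, simulated):.1f}%")
```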

  3. The application of prototype point processes for the summary and description of California wildfires

    USGS Publications Warehouse

    Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.

    2011-01-01

    A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.

  4. A survey on deep learning in medical image analysis.

    PubMed

    Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I

    2017-12-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
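
    AutoClass itself performs a fully Bayesian search over classifications; as a loose, hedged analogy only, the snippet below selects the number of classes for a Gaussian mixture by penalized likelihood (BIC) with scikit-learn, which mimics the idea of choosing the number of classes automatically from the data.

```python
# Loose analogy only (AutoClass itself is fully Bayesian): pick the number of
# mixture classes by penalized likelihood (BIC) with scikit-learn; data are toy.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (120, 2))])

best_k, best_bic = None, np.inf
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = gm.bic(data)
    if bic < best_bic:
        best_k, best_bic = k, bic
print("selected number of classes:", best_k)   # expect 2 for this toy data
```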

  6. IT as an enabler of sustainable use of data from innovative technical components for assisted living.

    PubMed

    Knaup, Petra; Schöpe, Lothar

    2012-01-01

    The authors see the major potential of systematically processing data from AAL technology in higher sustainability, higher technology acceptance, higher security, higher robustness, higher flexibility and better integration into existing structures and processes. This potential is currently underachieved and not yet systematically promoted. The authors have written a position paper on the potential and necessity of substantial IT research enhancing Ambient Assisted Living (AAL) applications. This paper summarizes the most important challenges in the fields of health care, data protection, operation and user interfaces. Research in medical informatics is necessary, among others, in the areas of flexible authorization concepts, medical information needs, algorithms to evaluate user profiles, and visualization of aggregated data.

  7. Extensions and improvements on XTRAN3S

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime, has concentrated on the following areas: (1) Maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) Extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) Modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) Extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.

  8. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly-developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated using a large number of test data collected over a wide variety of scene content from six different airborne and spaceborne sensors.
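
    A hedged illustration of the prediction step only (the Z-Chrome algorithm's actual filter design and entropy coder are not described in this abstract): each new band is modeled as a linear least-squares, Wiener-style combination of already-encoded bands, and the residual is what would be passed to a lossless coder. The data and function names are invented for the example.

```python
# Assumed-form sketch of the prediction step: fit a least-squares (Wiener-style)
# linear predictor of a new band from already-encoded bands and keep the
# residual that a lossless coder would then compress. Data are synthetic.
import numpy as np

def predict_band(encoded_bands, new_band):
    """encoded_bands: (k, n_pixels); new_band: (n_pixels,). Returns (residual, coeffs)."""
    X = np.vstack([encoded_bands, np.ones(encoded_bands.shape[1])])   # add a bias row
    coeffs, *_ = np.linalg.lstsq(X.T, new_band, rcond=None)
    return new_band - X.T @ coeffs, coeffs

rng = np.random.default_rng(0)
bands = rng.normal(size=(3, 1000)).cumsum(axis=0)     # toy spectrally correlated bands
residual, w = predict_band(bands[:2], bands[2])
print(residual.var(), "<", bands[2].var())            # residual is easier to compress
```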

  9. A Review of Industrial Heat Exchange Optimization

    NASA Astrophysics Data System (ADS)

    Yao, Junjie

    2018-01-01

    A heat exchanger is an energy exchange device that transfers heat from one working medium to another; it is widely used in the petrochemical industry, HVAC and refrigeration, aerospace and many other fields. The optimal design and efficient operation of heat exchangers and heat transfer networks are of great significance for the process industry in conserving energy and reducing production costs and energy consumption. In this paper, heat exchanger optimization, optimization algorithms, and heat exchanger optimization under different objective functions are discussed. Optimization of the heat exchanger and of the heat exchanger network under different conditions is then compared and analysed. Finally, the problems discussed are summarized and future directions are proposed.

  10. Cybersemiotics: a transdisciplinary framework for information studies.

    PubMed

    Brier, S

    1998-04-01

    This paper summarizes recent attempts by this author to create a transdisciplinary, non-Cartesian and non-reductionistic framework for information studies in natural, social, and technological systems. To confront, in a scientific way, the problems of modern information technology, where phenomenological man is dealing with socially constructed texts in algorithmically based digital bit-machines, we need a theoretical framework spanning from physics over biology and technological design to the phenomenological and social production of signification and meaning. I am working with such pragmatic theories as second-order cybernetics (coupled with autopoiesis theory), Lakoff's biologically oriented cognitive semantics, Peirce's triadic semiotics, and Wittgenstein's pragmatic language game theory. A coherent synthesis of these theories is what the cybersemiotic framework attempts to accomplish.

  11. Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.

    PubMed

    Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei

    2008-05-01

    Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.

  12. Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study

    NASA Astrophysics Data System (ADS)

    Kotchenova, Svetlana Y.; Vermote, Eric F.; Levy, Robert; Lyapustin, Alexei

    2008-05-01

    Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.

  13. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
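
    A toy illustration, not the paper's formulation, of an element-wise implicit-explicit split for the semi-discrete 1-D heat equation: elements assigned to the implicit group contribute to the left-hand-side matrix and explicit elements to the right-hand side, giving the update (I + dt*K_imp) T_new = (I - dt*K_exp) T_old. Mesh size, material values and the boundary treatment are arbitrary assumptions.

```python
# Toy element-wise implicit-explicit split for the 1-D heat equation (assumed
# setup, not the paper's): "implicit" elements assemble into the LHS matrix,
# "explicit" elements into the RHS, so each step solves
#   (I + dt*K_imp) T_new = (I - dt*K_exp) T_old.
import numpy as np

n_nodes, h, kappa, dt, steps = 11, 0.1, 1.0, 0.004, 200
ke = (kappa / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element conductivity matrix

K_imp = np.zeros((n_nodes, n_nodes))
K_exp = np.zeros((n_nodes, n_nodes))
for e in range(n_nodes - 1):                               # split elements into two groups
    target = K_exp if e < (n_nodes - 1) // 2 else K_imp
    target[e:e + 2, e:e + 2] += ke

T = np.zeros(n_nodes)
T[0] = 1.0                                                 # hot end (Dirichlet)
lhs = np.eye(n_nodes) + dt * K_imp
for _ in range(steps):
    rhs = (np.eye(n_nodes) - dt * K_exp) @ T
    T = np.linalg.solve(lhs, rhs)
    T[0], T[-1] = 1.0, 0.0                                 # crudely re-impose boundary values
print(np.round(T, 3))                                      # approaches a linear profile
```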

  14. In Situ and In Vivo Molecular Analysis by Coherent Raman Scattering Microscopy

    PubMed Central

    Liao, Chien-Sheng; Cheng, Ji-Xin

    2017-01-01

    Coherent Raman scattering (CRS) microscopy is a high-speed vibrational imaging platform with the ability to visualize the chemical content of a living specimen by using molecular vibrational fingerprints. We review technical advances and biological applications of CRS microscopy. The basic theory of CRS and the state-of-the-art instrumentation of a CRS microscope are presented. We further summarize and compare the algorithms that are used to separate the Raman signal from the nonresonant background, to denoise a CRS image, and to decompose a hyperspectral CRS image into concentration maps of principal components. Important applications of single-frequency and hyperspectral CRS microscopy are highlighted. Potential directions of CRS microscopy are discussed. PMID:27306307

  15. geoknife: Reproducible web-processing of large gridded datasets

    USGS Publications Warehouse

    Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.

    2016-01-01

    Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.

  16. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also post reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular product features, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed that lets the reader grasp the gist of the whole body of opinion. The idea emerged from review summarization, which summarizes the overall opinion based on the sentiment and features the reviews contain. In this study, the domain of main focus is digital cameras. The research consisted of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion, 2) identifying the features of the product, 3) identifying whether an opinion is positive or negative, and 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, one of the tools in Natural Language Processing (NLP), together with a knowledge-based dictionary that is useful for handling implicit features. The end result is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% for positive test data and 80.2% for negative test data, and feature extraction accuracy reaches 90.3%.
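
    The sentiment-classification step can be sketched with an off-the-shelf multinomial Naive Bayes model as below; the paper's dependency-based feature extraction and the dictionary handling of implicit features are not reproduced, and the reviews and labels are invented.

```python
# Minimal sentiment-classification sketch with multinomial Naive Bayes; the
# reviews, labels and test sentence are invented, and the dependency-based
# feature extraction used in the paper is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

reviews = ["the zoom is excellent", "battery life is terrible",
           "great image quality", "the flash is weak"]
labels = ["positive", "negative", "positive", "negative"]

vectorizer = CountVectorizer()
clf = MultinomialNB().fit(vectorizer.fit_transform(reviews), labels)
print(clf.predict(vectorizer.transform(["excellent battery life"])))
```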

  17. Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening

    NASA Technical Reports Server (NTRS)

    Diskin, Boris

    1999-01-01

    This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semi-coarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on the coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0, 2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing the cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0, 2) cycles on the target grid and just one V(0, 2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy; the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).

  18. An algorithmic and information-theoretic approach to multimetric index construction

    USGS Publications Warehouse

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Guntenspergen, Glenn R.; Mitchell, Brian R.; Miller, Kathryn M.; Little, Amanda M.

    2013-01-01

    The use of multimetric indices (MMIs), such as the widely used index of biological integrity (IBI), to measure, track, summarize and infer the overall impact of human disturbance on biological communities has been growing steadily in recent years. Initially, MMIs were developed for aquatic communities using pre-selected biological metrics as indicators of system integrity. As interest in these bioassessment tools has grown, so have the types of biological systems to which they are applied. For many ecosystem types the appropriate biological metrics to use as measures of biological integrity are not known a priori. As a result, a variety of ad hoc protocols for selecting metrics empirically has developed. However, the assumptions made by proposed protocols have not been explicitly described or justified, causing many investigators to call for a clear, repeatable methodology for developing empirically derived metrics and indices that can be applied to any biological system. An issue of particular importance that has not been sufficiently addressed is the way that individual metrics combine to produce an MMI that is a sensitive composite indicator of human disturbance. In this paper, we present and demonstrate an algorithm for constructing MMIs given a set of candidate metrics and a measure of human disturbance. The algorithm uses each metric to inform a candidate MMI, and then uses information-theoretic principles to select MMIs that capture the information in the multidimensional system response from among possible MMIs. Such an approach can be used to create purely empirical (data-based) MMIs or can, optionally, be influenced by expert opinion or biological theory through the use of a weighting vector to create value-weighted MMIs. We apply the algorithm to simulated data to illustrate the predictive capacity of the final MMIs and to real data from wetlands in Acadia and Rocky Mountain National Parks. For the Acadia wetland data, the algorithm identified 4 metrics that combined to produce a -0.88 correlation with the human disturbance index. Compared with other methods, this algorithmic approach resulted in MMIs that were more predictive and comprised fewer metrics.
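
    A simplified sketch of data-driven MMI assembly (the paper's selection criterion is information-theoretic; plain absolute Pearson correlation stands in for it here): greedily add the candidate metric whose inclusion in a z-score average most improves correlation with the human-disturbance gradient. Metric names and data are synthetic.

```python
# Simplified greedy MMI assembly; |Pearson r| with the disturbance gradient is
# used as a stand-in for the paper's information-theoretic criterion.
import numpy as np

def zscore(x):
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

def build_mmi(metrics, disturbance, max_metrics=4):
    """metrics: dict name -> 1-D array; disturbance: 1-D array of the same length."""
    chosen, best_r = [], 0.0
    while len(chosen) < max_metrics:
        trial = None
        for name in metrics:
            if name in chosen:
                continue
            mmi = np.mean([zscore(metrics[m]) for m in chosen + [name]], axis=0)
            r = abs(np.corrcoef(mmi, disturbance)[0, 1])
            if r > best_r:
                best_r, trial = r, name
        if trial is None:          # no remaining metric improves the index
            break
        chosen.append(trial)
    return chosen, best_r

rng = np.random.default_rng(1)
dist = rng.uniform(0, 1, 50)       # synthetic human-disturbance gradient
metrics = {"richness": -2 * dist + rng.normal(0, 0.3, 50),
           "tolerant_frac": dist + rng.normal(0, 0.3, 50),
           "noise": rng.normal(0, 1, 50)}
print(build_mmi(metrics, dist))
```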

  19. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
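
    A minimal sketch of the blink-grouping idea, not the authors' implementation: successive localizations are merged into one molecule when they fall within a spatial threshold (dThresh) and a temporal gap threshold (tThresh) of the last member of an open cluster. The thresholds and coordinates below are arbitrary.

```python
# Toy blink-grouping: merge localizations within a spatial threshold (d_thresh)
# and a frame-gap threshold (t_thresh) of a cluster's most recent member.
import math

def group_localizations(locs, d_thresh, t_thresh):
    """locs: list of (frame, x, y) sorted by frame; returns a list of clusters."""
    clusters = []                      # each cluster is a list of (frame, x, y)
    for frame, x, y in locs:
        home = None
        for c in clusters:
            f0, x0, y0 = c[-1]         # compare against the cluster's latest member
            if frame - f0 <= t_thresh and math.hypot(x - x0, y - y0) <= d_thresh:
                home = c
                break
        if home is not None:
            home.append((frame, x, y))
        else:
            clusters.append([(frame, x, y)])
    return clusters

locs = sorted([(1, 10.0, 10.0), (2, 10.1, 9.9), (3, 50.0, 50.0), (8, 10.05, 10.1)])
print(len(group_localizations(locs, d_thresh=0.5, t_thresh=3)))   # expect 3 clusters here
```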

  20. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm

    PubMed Central

    Kitchen, Levi; Lawrence, Matthew; Speicher, Matthew; Frumkin, Kenneth

    2016-01-01

    Introduction Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of the study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. Discussion The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaits further evidence. Conclusion When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient’s risk factors for both thrombus propagation and complications of anticoagulation. PMID:27429688
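
    Purely as an illustration of the decision flow described in this abstract, and not as clinical guidance, the suggested work-up after proximal DVT has been excluded can be restated as a small decision function; the function name and inputs are assumptions.

```python
# Illustrative restatement of the described decision flow only; not clinical
# guidance. After proximal DVT is excluded: a negative D-dimer ends the work-up;
# a positive D-dimer leads to serial CUS or a single deferred whole-leg study.
def ic_dvt_workup(d_dimer_positive: bool, deferred_wlus_available: bool) -> str:
    if not d_dimer_positive:
        return "IC-DVT effectively excluded; no further imaging"
    if deferred_wlus_available:
        return "discharge with a single deferred whole-leg ultrasound"
    return "discharge with serial compression ultrasound follow-up"

print(ic_dvt_workup(d_dimer_positive=True, deferred_wlus_available=False))
```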

  1. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm.

    PubMed

    Kitchen, Levi; Lawrence, Matthew; Speicher, Matthew; Frumkin, Kenneth

    2016-07-01

    Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of the study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. The Figure combines D-dimer testing with serial CUS or a single deferred FLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaits further evidence. When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient's risk factors for both thrombus propagation and complications of anticoagulation.

  2. Chin plate with a detachable C-tube head serves for both osteotomy fixation and orthodontic anchorage.

    PubMed

    Seo, Kyung-Won; Nahm, Kyung-Yen; Kim, Seong-Hun; Chung, Kyu-Rhim; Nelson, Gerald

    2013-07-01

    This article reports the dual function of a double-Y miniplate with a detachable C-tube head (C-chin plate; Jin Biomed Co., Bucheon, Korea) used to fixate an anterior segmental osteotomy and provide skeletal anchorage during orthodontic tooth movement. Cases were selected for this study from patients who underwent anterior segmental osteotomy under local anesthesia. A detachable C-tube head portion was combined with a double-Y chin plate. The double-Y chin plates were fixated between the osteotomy segments and the mandibular base with screws in a conventional way. The C-tube head portion exited the tissue near the mucogingival junction. Biocreative Chin Plates were placed on the anterior segmental osteotomy sites. The device allowed: (1) minor post-osteotomy vertical adjustment of the segment during healing; (2) minor shift of the midline during healing; and (3) use as a temporary skeletal anchorage device during the post-anterior segmental osteotomy orthodontic treatment. When tooth movement goals are accomplished, the C-tube head of the chin plate can be easily detached from the fixation miniplate by twisting the head with a Weingart plier under local anesthesia. This dual-purpose device spares the patient from the need for 2 separate installations for stabilization of osteotomy segments. The dual-purpose double-Y miniplate combined with a C-tube head (Biocreative Chin Plate) provided versatile application of 3 points of post-osteotomy fixation and of temporary skeletal anchorage for orthodontic tooth movement.

  3. The prognostic value of visually assessing enamel microcracks: Do debonding and adhesive removal contribute to their increase?

    PubMed

    Dumbryte, Irma; Jonavicius, Tomas; Linkeviciene, Laura; Linkevicius, Tomas; Peciuliene, Vytaute; Malinauskas, Mangirdas

    2016-05-01

    The aim was to find a correlation between the severity of enamel microcracks (EMCs) and their increase during debonding and residual adhesive removal (RAR). Following their examination with scanning electron microscopy (SEM), 90 extracted human premolars were divided into three groups of 30: group 1, teeth having pronounced EMCs (visible with the naked eye under normal room illumination); group 2, teeth showing weak EMCs (not apparent under normal room illumination but visible by SEM); and group 3, a control group. EMCs were classified as weak or pronounced, based on their visibility. Metal brackets (MB) and ceramic brackets (CB), 15 of each type, were bonded to all the teeth from groups 1 and 2. Debonding was performed with pliers, followed by RAR. The location, length, and width of the longest EMCs were measured using SEM before and after debonding. The mean overall width (Woverall) was higher for pronounced EMCs before and after debonding CB (P < .05), and after the removal of MB. Pronounced EMCs showed greater length values using both types of brackets. After debonding, the increase in Woverall of pronounced EMCs was 0.57 µm with MB (P < .05) and 0.30 µm with CB; for weak EMCs, -0.32 µm with MB and 0.30 µm with CB. Although the teeth having pronounced EMCs showed higher width and length values, this did not predispose them to a greater increase in EMCs after debonding of MB and CB followed by RAR.

  4. In vitro comparison of debonding force and intrapulpal temperature changes during ceramic orthodontic bracket removal using a carbon dioxide laser.

    PubMed

    Ma, T; Marangoni, R D; Flint, W

    1997-02-01

    The aim of this study was to develop a method to reduce the fracture of ceramic orthodontic brackets during debonding procedures. Lasers have been used to thermally soften the bonding resin, which reduces the tensile debonding force. Thermal effects of lasers may create adverse effects in the dental pulp. Previous studies have shown that no pulpal injury occurs when the maximum intrapulpal temperature rise stays below 2 degrees C. This study investigated the effect of lasing time on intrapulpal temperature increase and tensile debonding force with an 18-watt carbon dioxide laser. Ceramic brackets were bonded to mandibular deciduous bovine teeth and human mandibular first premolars with a photoactivated bonding resin. Modified debonding pliers were used to accurately position the laser beam onto the ceramic bracket. The lasing time required to keep the maximum intrapulpal temperature rise below 2 degrees C was determined by the use of thermocouples inserted into the pulp chambers of the specimens. A tensile debonding force was applied to the control group without lasing, and the experimental group was debonded after applying a predetermined lasing time with a carbon dioxide laser. It was found that there was a significant difference (P < 0.05) in tensile debonding force between the control group and the experimental group. It is feasible to use a laser for the debonding of ceramic brackets while keeping the intrapulpal temperature rise below the threshold of pulpal damage.

  5. Histologic investigation of the human pulp after thermodebonding of metal and ceramic brackets.

    PubMed

    Jost-Brinkmann, P G; Stein, H; Miethke, R R; Nakata, M

    1992-11-01

    Twenty-five human permanent teeth scheduled for extraction for orthodontic reasons were used to study the effect of thermodebonding on the pulp tissue. One week before brackets were removed the teeth were bonded with either metal or ceramic brackets, with two alternative adhesives. For debonding, three different techniques were used: (1) debonding of ceramic brackets warmed up indirectly by resistance heating of a metallic bow applied to the bracket slot, (2) debonding of metal brackets warmed up directly by inductive heating of the bracket itself, and (3) debonding of ceramic brackets warmed up indirectly by inductive heating of metallic plier tips, applied to the mesial and distal bracket surfaces. Teeth with metal brackets removed without heat by squeezing the wings together served as a control group. The teeth were extracted 24 hours after debonding and subjected to a light microscopic study after histologic preparation and staining. In addition, the location of adhesive remnants was evaluated. While the thermodebonding of metal brackets worked properly and without any obvious pulp damage, there were problems related to the thermodebonding of ceramic brackets: (1) if more than one heating cycle was necessary, several teeth showed localized damage of the pulp with slight infiltration of inflammatory cells, (2) bracket fractures occurred frequently, and enamel damage could be shown, and (3) often with Transbond (Unitek/3M, Monrovia, Calif.) as the adhesive, more than one heating cycle was necessary for bracket removal, and thus patients complained about pain.

  6. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.

  7. Mammographic images segmentation based on chaotic map clustering algorithm

    PubMed Central

    2014-01-01

    Background This work investigates the applicability of a novel clustering approach to the segmentation of mammographic digital images. The chaotic map clustering algorithm is used to group together similar subsets of image pixels resulting in a medically meaningful partition of the mammography. Methods The image is divided into pixels subsets characterized by a set of conveniently chosen features and each of the corresponding points in the feature space is associated to a map. A mutual coupling strength between the maps depending on the associated distance between feature space points is subsequently introduced. On the system of maps, the simulated evolution through chaotic dynamics leads to its natural partitioning, which corresponds to a particular segmentation scheme of the initial mammographic image. Results The system provides a high recognition rate for small mass lesions (about 94% correctly segmented inside the breast) and the reproduction of the shape of regions with denser micro-calcifications in about 2/3 of the cases, while being less effective on identification of larger mass lesions. Conclusions We can summarize our analysis by asserting that due to the particularities of the mammographic images, the chaotic map clustering algorithm should not be used as the sole method of segmentation. It is rather the joint use of this method along with other segmentation techniques that could be successfully used for increasing the segmentation performance and for providing extra information for the subsequent analysis stages such as the classification of the segmented ROI. PMID:24666766

  8. Educational Data Mining Application for Estimating Students Performance in Weka Environment

    NASA Astrophysics Data System (ADS)

    Gowri, G. Shiyamala; Thulasiram, Ramasamy; Amit Baburao, Mahindra

    2017-11-01

    Educational data mining (EDM) is a multi-disciplinary research area that combines artificial intelligence, statistical modeling and data mining with the data generated by an educational institution. EDM applies computational approaches to analyse educational data in order to address educational questions. To make a country stand out among the other nations of the world, the education system has to undergo a major transition by redesigning its framework. Hidden patterns and knowledge can be extracted from various information repositories by adopting data mining techniques. In order to summarize the performance of students along with their credentials, we examine the use of data mining in academics. The Apriori algorithm is applied to the student database for a broad classification into various categories, and the k-means procedure is applied to the same database to group records into clusters. Apriori mines association rules in order to extract similar patterns, together with their associations, across sets of records drawn from academic information repositories. The parameters used in this study give more importance to psychological traits than to academic features. Undesirable student conduct can be clearly identified with such data mining frameworks, and the algorithms prove effective for profiling students in any educational environment. The ultimate objective of the study is to detect whether or not a student is prone to violence.
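
    The clustering half of such a study can be sketched with k-means from scikit-learn as below; the Apriori rule-mining half is omitted, and the student attributes and values are invented.

```python
# Toy clustering of student records with k-means (scikit-learn); the attributes
# and values below are invented, and the Apriori rule-mining step is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: attendance %, test score, disciplinary incidents
students = np.array([[95, 82, 0], [60, 45, 4], [88, 75, 1], [55, 40, 5], [92, 90, 0]])
X = StandardScaler().fit_transform(students)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # one cluster id per student
```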

  9. A new theory of development: the generation of complexity in ontogenesis.

    PubMed

    Barbieri, Marcello

    2016-03-13

    Today there is a very wide consensus on the idea that embryonic development is the result of a genetic programme and of epigenetic processes. Many models have been proposed in this theoretical framework to account for the various aspects of development, and virtually all of them have one thing in common: they do not acknowledge the presence of organic codes (codes between organic molecules) in ontogenesis. Here it is argued instead that embryonic development is a convergent increase in complexity that necessarily requires organic codes and organic memories, and a few examples of such codes are described. This is the code theory of development, a theory that was originally inspired by an algorithm capable of reconstructing structures from incomplete information; the algorithm is briefly summarized here because it makes intuitive how a convergent increase in complexity can be achieved. The main thesis of the new theory is that the presence of organic codes in ontogenesis is not only a theoretical necessity but, first and foremost, an idea that can be tested and that has already been found to be in agreement with the evidence. © 2016 The Author(s).

  10. Application of Genetic Algorithm for Discovery of Core Effective Formulae in TCM Clinical Data

    PubMed Central

    Yang, Ming; Poon, Josiah; Wang, Shaomo; Jiao, Lijing; Poon, Simon; Cui, Lizhi; Chen, Peiqi; Sze, Daniel Man-Yuen; Xu, Ling

    2013-01-01

    Research on core and effective formulae (CEF) not only summarizes traditional Chinese medicine (TCM) treatment experience, it also helps to reveal the underlying knowledge in the formulation of a TCM prescription. In this paper, CEF discovery from tumor clinical data is discussed. The concepts of confidence, support, and effectiveness of a CEF are defined. A genetic algorithm (GA) is applied to find the CEF in a lung cancer dataset with 595 records from 161 patients. The results yielded 9 CEF with positive fitness values, involving 15 distinct herbs. All the CEF had relatively high average confidence and support. A herb-herb network was constructed, and it shows that all the herbs in the CEF are core herbs. The dataset was divided into a CEF group and a non-CEF group; the effective proportions of the former group are significantly greater than those of the latter group. A synergy index (SI) was defined to evaluate the interaction between two herbs, and 4 pairs of herbs had high SI values, indicating synergy between those herbs. All the results agree with TCM theory, which demonstrates the feasibility of our approach. PMID:24288577

  11. Existing Fortran interfaces to Trilinos in preparation for exascale ForTrilinos development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J.; Young, Mitchell T.; Collins, Benjamin S.

    This report summarizes the current state of Fortran interfaces to the Trilinos library within several key applications of the Exascale Computing Program (ECP), with the aim of informing developers about strategies to develop ForTrilinos, an exascale-ready Fortran interface software package within Trilinos. The two software projects assessed are the DOE Office of Science's Accelerated Climate Model for Energy (ACME) atmosphere component, CAM, and the DOE Office of Nuclear Energy's core-simulator portion of VERA, a nuclear reactor simulation code. Trilinos is an object-oriented, C++ based software project, and spans a collection of algorithms and other enabling technologies such as uncertainty quantification and mesh generation. To date, Trilinos has enabled these codes to achieve large-scale simulation results; however, the simulation needs of CAM and VERA-CS will approach exascale over the next five years. A Fortran interface to Trilinos that enables efficient use of programming models and more advanced algorithms is necessary. Where appropriate, the needs of the CAM and VERA-CS software to achieve their simulation goals are called out specifically. With this report, a design document and execution plan for ForTrilinos development can proceed.

  12. Simulation of short period Lg, expansion of three-dimensional source simulation capabilities and simulation of near-field ground motion from the 1971 San Fernando, California, earthquake. Final report 1 Oct 79-30 Nov 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, T.C.; Swanger, H.J.; Shkoller, B.

    1981-07-01

    This report summarizes three efforts performed during the past fiscal year. The first of these efforts is a study of the theoretical behavior of the regional seismic phase Lg in various tectonic provinces. Synthetic seismograms are used to determine the sensitivity of Lg to source and medium properties. The primary issues addressed concern the relationship of regional Lg characteristics to the crustal attenuation properties, the comparison of Lg in many crustal structures, and the source depth dependence of Lg. The second effort described is an expansion of the capabilities of the three-dimensional finite difference code TRES. The present capabilities are outlined with comparisons of the performance of the code on three computer systems. The last effort described is the development of an algorithm for simulation of the near-field ground motions from the 1971 San Fernando, California, earthquake. A computer code implementing this algorithm has been provided to the Mission Research Corporation for simulation of the acoustic disturbances from such an earthquake.

  13. [Lake eutrophication modeling considering climatic factor change: a review].

    PubMed

    Su, Jie-Qiong; Wang, Xuan; Yang, Zhi-Feng

    2012-11-01

    Climatic factors are considered key factors affecting the trophic status and its dynamics in most lakes. Against the background of global climate change, incorporating the variations of climatic factors into lake eutrophication models could provide solid technical support for analyzing the trophic evolution trend of a lake and for decision-making in lake environment management. This paper analyzed the effects of climatic factors such as air temperature, precipitation, sunlight, and atmosphere on lake eutrophication, and summarized research results on lake eutrophication modeling that considers climatic factor change, including modeling based on statistical analysis, ecological dynamic analysis, system analysis, and intelligent algorithms. Prospective approaches to improve the accuracy of lake eutrophication modeling under climatic factor change were put forward, including 1) strengthening the analysis of the mechanisms by which climatic factor change affects lake trophic status, 2) identifying appropriate simulation models to generate scenarios under proper temporal and spatial scales and resolutions, and 3) integrating climatic factor change simulation, hydrodynamic models, ecological simulation, and intelligent algorithms into a general modeling system to achieve accurate prediction of lake eutrophication under climate change.

  14. Decision-making for complex scapula and ipsilateral clavicle fractures: a review.

    PubMed

    Hess, Florian; Zettl, Ralph; Smolen, Daniel; Knoth, Christoph

    2018-03-23

    Complex scapula fractures with ipsilateral clavicle fractures remain a challenge, and treatment recommendations are still missing. This review provides an overview of the evolution of the definition, classification and treatment strategies for complex scapula and ipsilateral clavicle fractures. As with other rare conditions, consensus has not been reached on the most suitable management strategies to treat these patients. The aim of this review is twofold: to compile and summarize the currently available literature on this topic, and to recommend treatment approaches. Included in the review are the following topics: biomechanics of scapula and ipsilateral clavicle fractures, preoperative radiological evaluation, surgical treatment of the clavicle only, surgical treatment of both the clavicle and scapula, and nonsurgical treatment options. A decision-making algorithm is proposed for different treatment strategies based on preoperative parameters, and an example of a case treated at our institution is presented to illustrate use of the algorithm. The role of instability in complex scapula with ipsilateral clavicle fractures remains unclear. The question of stability is preoperatively less relevant than the question of whether the dislocated fragments lead to compromised shoulder function.

  15. A fast forward algorithm for real-time geosteering of azimuthal gamma-ray logging.

    PubMed

    Qin, Zhen; Pan, Heping; Wang, Zhonghao; Wang, Bintao; Huang, Ke; Liu, Shaohua; Li, Gang; Amara Konaté, Ahmed; Fang, Sinan

    2017-05-01

    Geosteering is an effective method to increase the reservoir drilling rate in horizontal wells. Based on the features of an azimuthal gamma-ray logging tool and the spatial location of strata, a fast forward calculation method for azimuthal gamma-ray logging is derived using the natural gamma-ray distribution equation in the formation. The response characteristics of azimuthal gamma-ray logging while drilling in layered formation models with different thicknesses and positions are simulated and summarized using this method. The results indicate that the method calculates quickly and, when the tool nears a boundary, can be used to identify the boundary and determine the distance from the logging tool to the boundary in time. Additionally, a simple method is proposed to determine the formation parameters of the algorithm in the field from offset-well information. Therefore, the forward method can be used for geosteering in the field. A field example validates that the forward method can be used to determine the distance from the azimuthal gamma-ray logging tool to the boundary for real-time geosteering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Infrared Spectroscopic Imaging: The Next Generation

    PubMed Central

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas— data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  17. B-cell Ligand Processing Pathways Detected by Large-scale Comparative Analysis

    PubMed Central

    Towfic, Fadi; Gupta, Shakti; Honavar, Vasant; Subramaniam, Shankar

    2012-01-01

    The initiation of B-cell ligand recognition is a critical step for the generation of an immune response against foreign bodies. We sought to identify the biochemical pathways involved in the B-cell ligand recognition cascade and sets of ligands that trigger similar immunological responses. We utilized several comparative approaches to analyze the gene coexpression networks generated from a set of microarray experiments spanning 33 different ligands. First, we compared the degree distributions of the generated networks. Second, we utilized a pairwise network alignment algorithm, BiNA, to align the networks based on the hubs in the networks. Third, we aligned the networks based on a set of KEGG pathways. We summarized our results by constructing a consensus hierarchy of pathways that are involved in B cell ligand recognition. The resulting pathways were further validated through literature for their common physiological responses. Collectively, the results based on our comparative analyses of degree distributions, alignment of hubs, and alignment based on KEGG pathways provide a basis for molecular characterization of the immune response states of B-cells and demonstrate the power of comparative approaches (e.g., gene coexpression network alignment algorithms) in elucidating biochemical pathways involved in complex signaling events in cells. PMID:22917187
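
    As a rough illustration of the first comparative step (comparing degree distributions), the hedged Python sketch below builds thresholded coexpression networks from hypothetical expression matrices and compares their degree distributions with a two-sample KS statistic; the threshold and data are assumptions, and the BiNA-style network alignment is not reproduced.

        import numpy as np
        from scipy.stats import ks_2samp

        def coexpression_degrees(expr, threshold=0.7):
            """Degrees of a gene coexpression network built by thresholding the
            absolute Pearson correlation between genes (rows of `expr`)."""
            corr = np.corrcoef(expr)
            adj = (np.abs(corr) >= threshold).astype(int)
            np.fill_diagonal(adj, 0)
            return adj.sum(axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Hypothetical expression matrices for two ligand stimulations (genes x arrays).
            expr_a = rng.normal(size=(200, 12))
            expr_b = rng.normal(size=(200, 12))
            deg_a, deg_b = coexpression_degrees(expr_a), coexpression_degrees(expr_b)
            stat, p = ks_2samp(deg_a, deg_b)
            print(f"KS statistic between degree distributions: {stat:.3f} (p = {p:.3f})")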

  18. Simulation-based planning for theater air warfare

    NASA Astrophysics Data System (ADS)

    Popken, Douglas A.; Cox, Louis A., Jr.

    2004-08-01

    Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
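
    The middle-level search can be illustrated with a minimal stochastic hill-climbing sketch in Python: a hypothetical noisy payoff function stands in for the Lanchester-type attrition simulation, and a candidate allocation is accepted when its replicated-average simulated payoff improves. The role weights, noise level and fleet size are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_payoff(allocation):
            """Hypothetical stochastic attrition simulation: returns a noisy score
            for a Blue allocation of aircraft across four roles."""
            weights = np.array([1.0, 0.8, 0.6, 0.9])          # notional role values
            return float(allocation @ weights + rng.normal(0, 2.0))

        def hill_climb(total_aircraft=40, n_roles=4, iters=500, replications=8):
            """Stochastic hill climbing: move one aircraft between roles and keep the
            move if the replicated-average simulated payoff improves."""
            alloc = np.full(n_roles, total_aircraft // n_roles)
            best = np.mean([simulate_payoff(alloc) for _ in range(replications)])
            for _ in range(iters):
                i, j = rng.choice(n_roles, size=2, replace=False)
                if alloc[i] == 0:
                    continue
                cand = alloc.copy()
                cand[i] -= 1
                cand[j] += 1
                score = np.mean([simulate_payoff(cand) for _ in range(replications)])
                if score > best:
                    alloc, best = cand, score
            return alloc, best

        if __name__ == "__main__":
            print(hill_climb())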

  19. An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System

    NASA Technical Reports Server (NTRS)

    Fuhrmann, Henri D.; Stewart, Eric C.

    1996-01-01

    Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
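
    The estimation step can be sketched as a small nonlinear root-finding problem: the in-plane velocities induced by a counter-rotating point-vortex pair are evaluated at two hypothetical sensor locations (four equations) and solved for the pair's position, spacing and strength. The Python sketch below uses scipy.optimize.fsolve as a stand-in for the secant-based iteration described in the report; the sensor positions, noise level and starting guess are assumptions.

        import numpy as np
        from scipy.optimize import fsolve

        def induced_velocity(point, vortex_pos, gamma):
            """In-plane velocity induced at `point` by a 2D point vortex of
            circulation `gamma` located at `vortex_pos` (y, z)."""
            dy, dz = point[0] - vortex_pos[0], point[1] - vortex_pos[1]
            r2 = dy * dy + dz * dz
            return np.array([-gamma * dz, gamma * dy]) / (2.0 * np.pi * r2)

        def pair_velocity(point, yc, zc, spacing, gamma):
            """Velocity from a counter-rotating pair (+gamma, -gamma) of equal strength."""
            left = induced_velocity(point, (yc - spacing / 2.0, zc), +gamma)
            right = induced_velocity(point, (yc + spacing / 2.0, zc), -gamma)
            return left + right

        # Hypothetical sensor locations (e.g., the two wing tips of the detecting aircraft).
        SENSORS = [np.array([-15.0, 0.0]), np.array([15.0, 0.0])]

        def residuals(params, measurements):
            yc, zc, spacing, gamma = params
            model = np.concatenate([pair_velocity(p, yc, zc, spacing, gamma) for p in SENSORS])
            return model - measurements

        if __name__ == "__main__":
            truth = (5.0, -20.0, 30.0, 400.0)                      # yc, zc, spacing, Gamma
            meas = np.concatenate([pair_velocity(p, *truth) for p in SENSORS])
            meas += np.random.normal(0, 0.01, size=meas.shape)     # imperfect measurements
            est = fsolve(residuals, (0.0, -10.0, 25.0, 300.0), args=(meas,))
            print("estimated (yc, zc, spacing, Gamma):", np.round(est, 2))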

  20. Algorithms and physical parameters involved in the calculation of model stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Merlo, D. C.

    This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We focus on implementing methods that modify the originally adopted temperature distribution in the atmosphere in order to obtain a constant energy flux throughout; we identify limitations and correct numerical instabilities. We integrate the transfer equation by directly solving the integral equation involving the source function. As a by-product, we calculate updated atomic partition functions of the light elements. Also, we discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.
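
    One of the highlighted steps, integrating the hydrostatic equation, can be sketched as dP/dtau = g/kappa(P, T(tau)) integrated over optical depth. The Python sketch below uses a grey-atmosphere temperature law and a toy power-law opacity; the surface gravity, effective temperature, opacity form and boundary pressure are assumptions standing in for the detailed physics treated in the thesis.

        import numpy as np
        from scipy.integrate import solve_ivp

        G_SURF = 10 ** 4.44          # cm s^-2, roughly solar log g (assumption)
        T_EFF = 5777.0               # K (assumption)

        def temperature(tau):
            """Grey-atmosphere temperature stratification T(tau)."""
            return T_EFF * (0.75 * (tau + 2.0 / 3.0)) ** 0.25

        def kappa(pressure, temp):
            """Toy power-law mass opacity (cm^2 g^-1); a stand-in for the detailed
            monochromatic absorption discussed in the thesis."""
            return 1e-2 * (pressure / 1e4) ** 0.6 * (temp / T_EFF) ** 4

        def hydrostatic_rhs(tau, p):
            # dP/dtau = g / kappa(P, T)
            return [G_SURF / kappa(p[0], temperature(tau))]

        if __name__ == "__main__":
            sol = solve_ivp(hydrostatic_rhs, (1e-6, 10.0), [1e2],   # start high in the atmosphere
                            dense_output=True, rtol=1e-8)
            for tau in (0.01, 0.1, 1.0, 10.0):
                print(f"tau = {tau:5.2f}  P = {sol.sol(tau)[0]:.3e} dyn cm^-2")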

  1. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptation strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  2. Visual to Parametric Interaction (V2PI)

    PubMed Central

    Maiti, Dipayan; Endert, Alex; North, Chris

    2013-01-01

    Typical data visualizations result from linear pipelines that start by characterizing data using a model or algorithm to reduce the dimension and summarize structure, and end by displaying the data in a reduced dimensional form. Sensemaking may take place at the end of the pipeline when users have an opportunity to observe, digest, and internalize any information displayed. However, some visualizations mask meaningful data structures when model or algorithm constraints (e.g., parameter specifications) contradict information in the data. Yet, due to the linearity of the pipeline, users do not have a natural means to adjust the displays. In this paper, we present a framework for creating dynamic data displays that rely on both mechanistic data summaries and expert judgement. The key is that we develop both the theory and methods of a new human-data interaction which we refer to as “Visual to Parametric Interaction” (V2PI). With V2PI, the pipeline becomes bi-directional in that users are embedded in the pipeline; users learn from visualizations and the visualizations adjust to expert judgement. We demonstrate the utility of V2PI and a bi-directional pipeline with two examples. PMID:23555552

  3. Plant Condition Remote Monitoring Technique

    NASA Technical Reports Server (NTRS)

    Fotedar, L. K.; Krishen, K.

    1996-01-01

    This paper summarizes the results of a radiation transfer study conducted on houseplants using controlled environmental conditions. These conditions included: (1) air and soil temperature; (2) incident and reflected radiation; and (3) soil moisture. The reflectance, transmittance, and emittance measurements were conducted in six spectral bands: microwave, red, yellow, green, violet and infrared, over a period of three years. Measurements were taken on both healthy and diseased plants. The data was collected on plants under various conditions which included: variation in plant bio-mass, diurnal variation, changes in plant pathological conditions (including changes in water content), different plant types, various disease types, and incident light wavelength or color. Analysis of this data was performed to yield an algorithm for detecting plant disease from the remotely sensed data.

  4. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    PubMed

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained attraction among scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding the interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, the protocol for running MD simulations, and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.

  5. Advances and trends in the development of computational models for tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Tanner, J. A.

    1985-01-01

    Status and some recent developments of computational models for tires are summarized. Discussion focuses on a number of aspects of tire modeling and analysis including: tire materials and their characterization; evolution of tire models; characteristics of effective finite element models for analyzing tires; analysis needs for tires; and impact of the advances made in finite element technology, computational algorithms, and new computing systems on tire modeling and analysis. An initial set of benchmark problems has been proposed in concert with the U.S. tire industry. Extensive sets of experimental data will be collected for these problems and used for evaluating and validating different tire models. Also, the new Aircraft Landing Dynamics Facility (ALDF) at NASA Langley Research Center is described.

  6. Simulator for Microlens Planet Surveys

    NASA Astrophysics Data System (ADS)

    Ipatov, Sergei I.; Horne, Keith; Alsubai, Khalid A.; Bramich, Daniel M.; Dominik, Martin; Hundertmark, Markus P. G.; Liebig, Christine; Snodgrass, Colin D. B.; Street, Rachel A.; Tsapras, Yiannis

    2014-04-01

    We summarize the status of a computer simulator for microlens planet surveys. The simulator generates synthetic light curves of microlensing events observed with specified networks of telescopes over specified periods of time. Particular attention is paid to models for sky brightness and seeing, calibrated by fitting to data from the OGLE survey and RoboNet observations in 2011. Time intervals during which events are observable are identified by accounting for positions of the Sun and the Moon, and other restrictions on telescope pointing. Simulated observations are then generated for an algorithm that adjusts target priorities in real time with the aim of maximizing planet detection zone area summed over all the available events. The exoplanet detection capability of observations was compared for several telescopes.

  7. Decision making and problem solving with computer assistance

    NASA Technical Reports Server (NTRS)

    Kraiss, F.

    1980-01-01

    In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver and trouble shooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely by increased automation, but by a reasonable sharing of tasks in a human-computer team, where the computer supports the human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making and problem solving. The applied artificial intelligence algorithms comprise pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain basic principles.

  8. Autoclass: An automatic classification system

    NASA Technical Reports Server (NTRS)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
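
    AutoClass itself performs a Bayesian mixture-model search; as a rough stand-in (not the AutoClass algorithm), the Python sketch below fits Gaussian mixtures with increasing numbers of components and keeps the one with the lowest BIC, which mimics the idea of automatically choosing the number of classes. The synthetic data and the BIC criterion are assumptions for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def select_classes(data, max_classes=10, seed=0):
            """Fit Gaussian mixtures with 1..max_classes components and keep the one
            with the lowest BIC -- a rough, commonly used stand-in for AutoClass's
            Bayesian search over the number of classes."""
            best_model, best_bic = None, np.inf
            for k in range(1, max_classes + 1):
                gm = GaussianMixture(n_components=k, n_init=3, random_state=seed).fit(data)
                bic = gm.bic(data)
                if bic < best_bic:
                    best_model, best_bic = gm, bic
            return best_model

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            data = np.vstack([rng.normal(loc, 0.5, size=(150, 2)) for loc in (-3, 0, 3)])
            model = select_classes(data)
            print("chosen number of classes:", model.n_components)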

  9. The Security Email Based on Smart Card

    NASA Astrophysics Data System (ADS)

    Lina, Zhang; Jiang, Meng Hai.

    Email has become one of the most important communication tools in modern internet society, and its security is an important issue that cannot be ignored. The security requirements of email can be summarized as confidentiality, integrity, authentication and non-repudiation. Recently much research on IBE (identity-based encryption) has been carried out to solve these security problems. However, because of IBE's fatal flaws and the great advantages of PKI (Public Key Infrastructure), PKI is found to be still irreplaceable, especially in applications based on smart cards. In this paper, a construction of a secure email system is presented, then the design of the relevant cryptographic algorithms and the configuration of certificates are elaborated, and finally the security of the proposed system is discussed.

  10. Diagnostic classification of cancer using DNA microarrays and artificial intelligence.

    PubMed

    Greer, Braden T; Khan, Javed

    2004-05-01

    The application of artificial intelligence (AI) to microarray data has been receiving much attention in recent years because of the possibility of automated diagnosis in the near future. Studies have been published predicting tumor type, estrogen receptor status, and prognosis using a variety of AI algorithms. The performance of intelligent computing decisions based on gene expression signatures is in some cases comparable to or better than the current clinical decision schemas. The goal of these tools is not to make clinicians obsolete, but rather to give clinicians one more tool in their armamentarium to accurately diagnose and hence better treat cancer patients. Several such applications are summarized in this chapter, and some of the common pitfalls are noted.

  11. Determination of atmospheric moisture structure and infrared cooling rates from high resolution MAMS radiance data

    NASA Technical Reports Server (NTRS)

    Menzel, W. Paul; Moeller, Christopher C.; Smith, William L.

    1991-01-01

    This program has applied Multispectral Atmospheric Mapping Sensor (MAMS) high resolution data to the problem of monitoring atmospheric quantities of moisture and radiative flux at small spatial scales. MAMS, with 100-m horizontal resolution in its four infrared channels, was developed to study small scale atmospheric moisture and surface thermal variability, especially as related to the development of clouds, precipitation, and severe storms. High-resolution Interferometer Sounder (HIS) data has been used to develop a high spectral resolution retrieval algorithm for producing vertical profiles of atmospheric temperature and moisture. The results of this program are summarized and a list of publications resulting from this contract is presented. Selected publications are attached as an appendix.

  12. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
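
    A core ingredient of many sparsity-exploiting reconstructions is an l1-regularized least-squares solve; the Python sketch below implements plain ISTA (iterative shrinkage-thresholding) on a synthetic underdetermined system. It is illustrative of the compressive-sensing idea only, not of any specific CT or MRI reconstruction discussed at the meeting; the sensing matrix, sparsity level and regularization weight are assumptions.

        import numpy as np

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, b, lam=0.05, iters=300):
            """Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
            the basic workhorse behind many sparsity-exploiting reconstructions."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, lam / L)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n, m, k = 200, 80, 8                    # signal length, measurements, sparsity
            x_true = np.zeros(n)
            x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
            A = rng.normal(size=(m, n)) / np.sqrt(m)
            b = A @ x_true + 0.01 * rng.normal(size=m)
            x_hat = ista(A, b)
            print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))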

  13. Atomization simulations using an Eulerian-VOF-Lagrangian method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Chen, C. P.

    1994-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservations are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present innovative approach by simulating benchmark problems including the coaxial jet atomization.

  14. FUSE: a profit maximization approach for functional summarization of biological networks.

    PubMed

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes; Yu, Hanry

    2012-03-21

    The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein interaction network (PPI) using graph theoretic analysis. Despite the recent progress, systems level analysis of PPIs remains a daunting task as it is challenging to make sense out of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high level views of their functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits the Minimum Description Length (MDL) principle to maximize information gain of the summary graph while satisfying the level of detail constraint. We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using the AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.

  15. Force Modeling, Identification, and Feedback Control of Robot-Assisted Needle Insertion: A Survey of the Literature

    PubMed Central

    Xie, Yu; Liu, Shuang; Sun, Dong

    2018-01-01

    Robot-assisted surgery is of growing interest in the surgical and engineering communities. The use of robots allows surgery to be performed with precision using smaller instruments and incisions, resulting in shorter healing times. However, using current technology, an operator cannot directly feel the operation because the surgeon-instrument and instrument-tissue interaction force feedbacks are lost during needle insertion. Advancements in force feedback and control not only help reduce tissue deformation and needle deflection but also provide the surgeon with better control over the surgical instruments. The goal of this review is to summarize the key components surrounding the force feedback and control during robot-assisted needle insertion. The literature search was conducted during the middle months of 2017 using mainstream academic search engines with a combination of keywords relevant to the field. In total, 166 articles with valuable contents were analyzed and grouped into five related topics. This survey systemically summarizes the state-of-the-art force control technologies for robot-assisted needle insertion, such as force modeling, measurement, the factors that influence the interaction force, parameter identification, and force control algorithms. All studies show force control is still at its initial stage. The influence factors, needle deflection or planning remain open for investigation in future. PMID:29439539

  16. Force Modeling, Identification, and Feedback Control of Robot-Assisted Needle Insertion: A Survey of the Literature.

    PubMed

    Yang, Chongjun; Xie, Yu; Liu, Shuang; Sun, Dong

    2018-02-12

    Robot-assisted surgery is of growing interest in the surgical and engineering communities. The use of robots allows surgery to be performed with precision using smaller instruments and incisions, resulting in shorter healing times. However, using current technology, an operator cannot directly feel the operation because the surgeon-instrument and instrument-tissue interaction force feedbacks are lost during needle insertion. Advancements in force feedback and control not only help reduce tissue deformation and needle deflection but also provide the surgeon with better control over the surgical instruments. The goal of this review is to summarize the key components surrounding the force feedback and control during robot-assisted needle insertion. The literature search was conducted during the middle months of 2017 using mainstream academic search engines with a combination of keywords relevant to the field. In total, 166 articles with valuable contents were analyzed and grouped into five related topics. This survey systemically summarizes the state-of-the-art force control technologies for robot-assisted needle insertion, such as force modeling, measurement, the factors that influence the interaction force, parameter identification, and force control algorithms. All studies show force control is still at its initial stage. The influence factors, needle deflection or planning remain open for investigation in future.

  17. Communicability across evolving networks.

    PubMed

    Grindrod, Peter; Parsons, Mark C; Higham, Desmond J; Estrada, Ernesto

    2011-04-01

    Many natural and technological applications generate time-ordered sequences of networks, defined over a fixed set of nodes; for example, time-stamped information about "who phoned who" or "who came into contact with who" arise naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time's arrow is captured naturally through the noncommutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
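
    The dynamic-walk idea can be sketched with an ordered product of Katz-style resolvents, one per time slice, so that the resulting matrix respects time's arrow; row and column sums then give broadcast and receive scores. The Python sketch below is one common realization of this construction and reuses the A-B morning / B-C afternoon example from the abstract; the downweighting parameter a is an assumption and must stay below the reciprocal of the largest spectral radius of the adjacency matrices.

        import numpy as np

        def dynamic_communicability(adjacency_sequence, a=0.1):
            """Ordered product of resolvents Q = prod_k (I - a*A[k])^{-1}, so that
            Q[i, j] counts downweighted dynamic walks from i to j that respect the
            time ordering of the network sequence."""
            n = adjacency_sequence[0].shape[0]
            Q = np.eye(n)
            for A in adjacency_sequence:
                Q = Q @ np.linalg.inv(np.eye(n) - a * A)
            broadcast = Q.sum(axis=1)   # ability of each node to send information forward in time
            receive = Q.sum(axis=0)     # ability of each node to receive information
            return Q, broadcast, receive

        if __name__ == "__main__":
            # Morning: A-B interact; afternoon: B-C interact (nodes 0, 1, 2).
            A_morning = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
            A_afternoon = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], float)
            Q, b, r = dynamic_communicability([A_morning, A_afternoon])
            print("Q[A, C] =", round(Q[0, 2], 4), " Q[C, A] =", round(Q[2, 0], 4))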

  18. Hybrid fuzzy cluster ensemble framework for tumor clustering from biomolecular data.

    PubMed

    Yu, Zhiwen; Chen, Hantao; You, Jane; Han, Guoqiang; Li, Le

    2013-01-01

    Cancer class discovery using biomolecular data is one of the most important tasks for cancer diagnosis and treatment. Tumor clustering from gene expression data provides a new way to perform cancer class discovery. Most of the existing research works adopt single clustering algorithms to perform tumor clustering from biomolecular data, which lack robustness, stability, and accuracy. To further improve the performance of tumor clustering from biomolecular data, we introduce fuzzy theory into the cluster ensemble framework for tumor clustering from biomolecular data, and propose four kinds of hybrid fuzzy cluster ensemble frameworks (HFCEF), named HFCEF-I, HFCEF-II, HFCEF-III, and HFCEF-IV, respectively, to identify samples that belong to different types of cancers. The difference between HFCEF-I and HFCEF-II is that they adopt different ensemble generator approaches to generate a set of fuzzy matrices in the ensemble. Specifically, HFCEF-I applies the affinity propagation algorithm (AP) to perform clustering on the sample dimension and generates a set of fuzzy matrices in the ensemble based on the fuzzy membership function and base samples selected by AP. HFCEF-II adopts AP to perform clustering on the attribute dimension, generates a set of subspaces, and obtains a set of fuzzy matrices in the ensemble by performing fuzzy c-means on subspaces. Compared with HFCEF-I and HFCEF-II, HFCEF-III and HFCEF-IV consider the characteristics of HFCEF-I and HFCEF-II. HFCEF-III combines HFCEF-I and HFCEF-II in a serial way, while HFCEF-IV integrates HFCEF-I and HFCEF-II in a concurrent way. HFCEFs adopt suitable consensus functions, such as the fuzzy c-means algorithm or the normalized cut algorithm (Ncut), to summarize the generated fuzzy matrices and obtain the final results. The experiments on real data sets from the UCI machine learning repository and cancer gene expression profiles illustrate that 1) the proposed hybrid fuzzy cluster ensemble frameworks work well on real data sets, especially biomolecular data, and 2) the proposed approaches are able to provide more robust, stable, and accurate results when compared with the state-of-the-art single clustering algorithms and traditional cluster ensemble approaches.
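
    The consensus step can be sketched in a simplified way: average the fuzzy co-association matrices U U^T over the ensemble members and partition the resulting affinity with spectral clustering, a stand-in for the Ncut or fuzzy c-means consensus functions used by the HFCEF frameworks. The membership matrices in the Python sketch below are synthetic, and the whole sketch is illustrative rather than a reimplementation of the frameworks.

        import numpy as np
        from sklearn.cluster import SpectralClustering
        from sklearn.preprocessing import normalize

        def consensus_from_fuzzy(memberships, n_clusters):
            """Simplified consensus function: average the fuzzy co-association matrices
            U @ U.T over the ensemble, then cut the resulting affinity with spectral
            clustering (a stand-in for the Ncut consensus step)."""
            n = memberships[0].shape[0]
            S = np.zeros((n, n))
            for U in memberships:
                U = normalize(U, norm="l1", axis=1)     # membership rows sum to one
                S += U @ U.T
            S /= len(memberships)
            model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                       random_state=0)
            return model.fit_predict(S)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Hypothetical ensemble of fuzzy membership matrices (samples x clusters).
            labels_true = np.repeat([0, 1, 2], 30)
            ensemble = []
            for _ in range(5):
                U = rng.random((90, 3)) * 0.2
                U[np.arange(90), labels_true] += 1.0     # make the true cluster dominant
                ensemble.append(U)
            print(consensus_from_fuzzy(ensemble, n_clusters=3)[:10])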

  19. Land Surface Temperature Measurements from EOS MODIS Data

    NASA Technical Reports Server (NTRS)

    Wan, Zheng-Ming

    2004-01-01

    This report summarizes the accomplishments made by the MODIS LST (Land-Surface Temperature) group at University of California, Santa Barbara, under NASA contract. Version 1 of the MODIS Land-Surface Temperature Algorithm Theoretical Basis Document (ATBD) was reviewed in June 1994, version 2 reviewed in November 1994, version 3.1 in August 1996, and version 3.3 updated in April 1999. Based on the ATBD, two LST algorithms were developed, one the generalized split-window algorithm and the other the physics-based day/night LST algorithm. These two LST algorithms were implemented into the production generation executive code (PGE 16) for the daily standard MODIS LST products at level-2 (MOD11_L2) and level-3 (MOD11A1 at 1 km resolution and MOD11B1 at 5 km resolution). PGE codes for the 8-day 1 km LST product (MOD11A2) and the daily, 8-day and monthly LST products at 0.05 degree latitude/longitude climate model grids (CMG) were also delivered. Four to six field campaigns have been conducted each year since 2000 to validate the daily LST products generated by PGE16 and the calibration accuracies of the MODIS TIR bands used for the LST/emissivity retrieval from versions 2-4 of Terra MODIS data and versions 3-4 of Aqua MODIS data. Validation results from temperature-based and radiance-based methods indicate that the MODIS LST accuracy is better than 1 C in most clear-sky cases in the range from -10 to 58 C. One of the major lessons learned from multi-year temporal analysis of the consistent V4 daily Terra MODIS LST products in 2000-2003 over some selected target areas including lakes, snow/ice fields, and semi-arid sites is that there are variable numbers of cloud-contaminated LSTs in the MODIS LST products depending on surface elevation, land cover types, and atmospheric conditions. A cloud-screen scheme with constraints on spatial and temporal variations in LSTs was developed to remove cloud-contaminated LSTs. The 5 km LST product was indirectly validated through comparisons to the 1 km LST product. Twenty-three papers related to the LST research work were published in journals over the last decade.
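
    The generalized split-window algorithm has a compact functional form: LST is a linear combination of the mean and difference of the band-31/32 brightness temperatures, with coefficients modulated by the mean emissivity and the emissivity difference of the two bands. The Python sketch below shows that functional form only; the coefficients are placeholders (operational values depend on viewing angle and column water vapor and are not reproduced here).

        def split_window_lst(t31, t32, emis31, emis32, coeffs=None):
            """Generalized split-window functional form for land-surface temperature.
            The coefficients below are placeholders for illustration only."""
            if coeffs is None:
                coeffs = dict(c=1.0, a1=1.0, a2=0.1, a3=-0.3, b1=4.0, b2=10.0, b3=-30.0)
            eps = 0.5 * (emis31 + emis32)          # mean band emissivity
            d_eps = emis31 - emis32                # emissivity difference
            t_mean = 0.5 * (t31 + t32)             # mean brightness temperature
            t_diff = 0.5 * (t31 - t32)             # brightness-temperature difference
            lst = (coeffs["c"]
                   + (coeffs["a1"] + coeffs["a2"] * (1 - eps) / eps
                      + coeffs["a3"] * d_eps / eps ** 2) * t_mean
                   + (coeffs["b1"] + coeffs["b2"] * (1 - eps) / eps
                      + coeffs["b3"] * d_eps / eps ** 2) * t_diff)
            return lst

        if __name__ == "__main__":
            # Brightness temperatures in kelvin and typical land emissivities.
            print(f"LST = {split_window_lst(300.2, 299.1, 0.985, 0.975):.1f} K")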

  20. Detecting brain dynamics during resting state: a tensor based evolutionary clustering approach

    NASA Astrophysics Data System (ADS)

    Al-sharoa, Esraa; Al-khassaweneh, Mahmood; Aviyente, Selin

    2017-08-01

    The human brain is a complex network with connections across different regions. Understanding the functional connectivity (FC) of the brain is important both during resting state and during task performance, as disruptions in connectivity patterns are indicators of different psychopathological and neurological diseases. In this work, we study the resting state functional connectivity networks (FCNs) of the brain from fMRI BOLD signals. Recent studies have shown that FCNs are dynamic even during resting state, and understanding the temporal dynamics of FCNs is important for differentiating between different conditions. Therefore, it is important to develop algorithms to track the dynamic formation and dissociation of FCNs of the brain during resting state. In this paper, we propose a two-step tensor-based community detection algorithm to identify and track the brain network community structure across time. First, we introduce an information-theoretic function to reduce the dynamic FCN and identify the time points that are topologically similar, which are combined into a tensor. These time points will be used to identify the different FC states. Second, a tensor-based spectral clustering approach is developed to identify the community structure of the constructed tensors. The proposed algorithm applies Tucker decomposition to the constructed tensors and extracts the orthogonal factor matrices along the connectivity mode to determine the common subspace within each FC state. The detected community structure is summarized and described as FC states. The results illustrate the dynamic structure of resting state networks (RSNs), including the default mode network, somatomotor network, subcortical network and visual network.
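
    The tensor step can be approximated with a lightweight HOSVD: take the SVD of each mode unfolding, keep the leading left singular vectors as factor matrices, and cluster the rows of the node-mode factor. The Python sketch below does exactly that on a synthetic stack of noisy connectivity matrices; it is a stand-in for the full Tucker-plus-tensor-spectral-clustering pipeline, and the ranks, noise level and k-means consensus are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def mode_unfold(tensor, mode):
            """Unfold a 3-way tensor along the given mode into a matrix."""
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        def hosvd_factors(tensor, ranks):
            """HOSVD: the mode-n factor is the leading left singular vectors of the
            mode-n unfolding (a simple stand-in for a full Tucker decomposition)."""
            factors = []
            for mode, rank in enumerate(ranks):
                U, _, _ = np.linalg.svd(mode_unfold(tensor, mode), full_matrices=False)
                factors.append(U[:, :rank])
            return factors

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n_nodes, n_windows = 60, 20
            labels_true = np.repeat([0, 1, 2], 20)
            # Hypothetical stack of noisy FC matrices with block-community structure.
            base = (labels_true[:, None] == labels_true[None, :]).astype(float)
            tensor = np.stack([base + 0.3 * rng.normal(size=(n_nodes, n_nodes))
                               for _ in range(n_windows)], axis=2)
            node_factor = hosvd_factors(tensor, ranks=(3, 3, 2))[0]   # connectivity mode
            communities = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(node_factor)
            print(communities)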

  1. Using TRMM Data To Understand Interannual Variations In the Tropical Water Balance

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Fitzjarrald, Dan; Arnold, James E. (Technical Monitor)

    2002-01-01

    A significant element of the science rationale for TRMM centered on assembling rainfall data needed to validate climate models-- climatological estimates of precipitation, its spatial and temporal variability, and vertical modes of latent heat release. Since the launch of TRMM, great interest has emerged in the science community in quantifying interannual variability (IAV) of precipitation and its relationship to sea-surface temperature (SST) changes. The fact that TRMM has sampled one strong warm/cold ENSO couplet, together with the prospect for a mission lifetime approaching ten years, has bolstered interest in these longer time scales. Variability on a regional basis as well as for the tropics as a whole is of concern. Our analysis of TRMM results so far has shown a surprising lack of concordance between various algorithms in quantifying IAV of precipitation. The first objective of this talk is to quantify the sensitivity of tropical precipitation to changes in SSTs. We analyze performance of the 3A11, 3A25, and 3B31 algorithms and investigate their relationship to scattering-based algorithms constructed from SSM/I and TRMM 85 GHz data. The physical basis for the differences (and similarities) in depicting tropical oceanic and land rainfall will be discussed. We argue that scattering-based estimates of variability constitute a useful upper bound for precipitation variations. These results lead to the second question addressed in this talk-- How do TRMM precipitation / SST sensitivities compare to estimates of oceanic evaporation and what are the implications of these uncertainties in determining interannual changes in large-scale moisture transport? We summarize results of an analysis performed using COADS data supplemented by SSM/I estimates of near-surface variables to assess evaporation sensitivity to SST. The response of nearly 5 W/m2/K is compared to various TRMM precipitation sensitivities. Implied moisture convergence over the tropics and its sensitivity to errors of these algorithms is discussed.

  2. Porcelain surface alterations and refinishing after use of two orthodontic bonding methods.

    PubMed

    Herion, Drew T; Ferracane, Jack L; Covell, David A

    2010-01-01

    To compare porcelain surfaces at debonding after use of two surface preparation methods and to evaluate a method for restoring the surface. Lava Ceram feldspathic porcelain discs (n = 40) underwent one of two surface treatments prior to bonding orthodontic brackets. Half the discs had sandblasting, hydrofluoric acid, and silane (SB + HF + S), and the other half, phosphoric acid and silane (PA + S). Brackets were debonded using bracket removing pliers, and resin was removed with a 12-fluted carbide bur. The surface was refinished using a porcelain polishing kit, followed by diamond polishing paste. Measurements for surface roughness (Ra), gloss, and color were made before bonding (baseline), after debonding, and after each step of refinishing. Surfaces were also examined by scanning electron microscopy (SEM). Data was analyzed with 2-way ANOVA followed by Tukey HSD tests (alpha = 0.05). The SB + HF + S bonding method increased Ra (0.160 to 1.121 microm), decreased gloss (41.3 to 3.7) and altered color (DeltaE = 4.37; P < .001). The PA + S method increased Ra (0.173 to 0.341 microm; P < .001), but the increase in Ra was significantly less than that caused by the SB + HF + S bonding method (P < .001). The PA + S method caused insignificant changes in gloss (41.7 to 38.0) and color (DeltaE = 0.50). The measurements and SEM observations showed that changes were fully restored to baseline with refinishing. The PA + S method caused significantly less damage to porcelain than the SB + HF + S method. The refinishing protocol fully restored the porcelain surfaces.

  3. Pulpal response in electrothermal debonding.

    PubMed

    Takla, P M; Shivapuja, P K

    1995-12-01

    An alternative method to conventional bracket removal that minimizes the potential for ceramic bracket failure as well as trauma to the enamel surface is electrothermal debonding (ETD). However, the potential for pulpal damage using ETD on ceramic brackets still needs assessment. The purpose of this research is to investigate and assess any pulpal damage caused by ETD. Ten patients requiring four premolar extractions each were randomly selected (5 boys and 5 girls). Ceramic brackets were bonded to experimental and control teeth. A total of 30 teeth were used to provide histologic material of the human pulp. Fifteen teeth were extracted 24 hours after ETD, seven were extracted 28 to 32 days after ETD, and eight were the control teeth and debonded by a conventional method, with pliers. The pulp was normal in most cases in the control group. There was significant hyperemia seen 24 hours after debonding in teeth debonded by ETD. Teeth extracted 30 days after ETD showed varied responses, ranging from complete recovery in some cases to persistence of inflammation and pulpal fibrosis. Teeth subjected to the conventional debonding were normal histologically. The teeth in our research were healthy teeth with a rich blood supply and were from a younger age group. Patients with compromised teeth that have large restorations or a questionable pulpal status could respond more adversely to this significant amount of heat applied. In compromised cases and on older patients, performing pulp vitality tests before ETD may inform the operator about the status of the pulp and thereby prevent the potential for pulpal damage.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Comparative transcriptomics indicate changes in cell wall organization and stress response in seedlings during spaceflight.

    PubMed

    Johnson, Christina M; Subramanian, Aswati; Pattathil, Sivakumar; Correll, Melanie J; Kiss, John Z

    2017-08-21

    Plants will play an important role in the future of space exploration as part of bioregenerative life support. Thus, it is important to understand the effects of microgravity and spaceflight on gene expression in plant development. We analyzed the transcriptome of Arabidopsis thaliana using the Biological Research in Canisters (BRIC) hardware during Space Shuttle mission STS-131. The bioinformatics methods used included RMA (robust multi-array average), MAS5 (Microarray Suite 5.0), and PLIER (probe logarithmic intensity error estimation). Glycome profiling was used to analyze cell wall composition in the samples. In addition, our results were compared to those of two other groups using the same hardware on the same mission (BRIC-16). In our BRIC-16 experiments, we noted expression changes in genes involved in hypoxia and heat shock responses, DNA repair, and cell wall structure between spaceflight samples compared to the ground controls. In addition, glycome profiling supported our expression analyses in that there was a difference in cell wall components between ground control and spaceflight-grown plants. Comparing our studies to those of the other BRIC-16 experiments demonstrated that, even with the same hardware and similar biological materials, differences in results in gene expression were found among these spaceflight experiments. A common theme from our BRIC-16 space experiments and those of the other two groups was the downregulation of water stress response genes in spaceflight. In addition, all three studies found differential regulation of genes associated with cell wall remodeling and stress responses between spaceflight-grown and ground control plants. © 2017 Botanical Society of America.

  5. Advances in metaheuristics for gene selection and classification of microarray data.

    PubMed

    Duval, Béatrice; Hao, Jin-Kao

    2010-01-01

    Gene selection aims at identifying a (small) subset of informative genes from the initial data in order to obtain high predictive accuracy for classification. Gene selection can be considered as a combinatorial search problem and thus be conveniently handled with optimization methods. In this article, we summarize some recent developments of using metaheuristic-based methods within an embedded approach for gene selection. In particular, we put forward the importance and usefulness of integrating problem-specific knowledge into the search operators of such a method. To illustrate the point, we explain how ranking coefficients of a linear classifier such as support vector machine (SVM) can be profitably used to reinforce the search efficiency of Local Search and Evolutionary Search metaheuristic algorithms for gene selection and classification.
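
    The use of SVM ranking coefficients to guide the search can be sketched as a simple recursive-elimination loop in Python: fit a linear SVM, rank genes by the magnitude of their coefficients, and drop the weakest fraction until a target subset size is reached. This is a simplification of the embedded metaheuristic described in the article; the synthetic data, drop fraction and target size are assumptions.

        import numpy as np
        from sklearn.svm import LinearSVC

        def svm_guided_selection(X, y, n_keep=20, drop_frac=0.2, seed=0):
            """Iteratively drop the genes with the smallest |SVM coefficient| until
            n_keep remain -- a simplified use of ranking coefficients to guide the
            search, not the embedded metaheuristic of the article."""
            keep = np.arange(X.shape[1])
            while keep.size > n_keep:
                clf = LinearSVC(C=1.0, dual=False, max_iter=5000, random_state=seed)
                clf.fit(X[:, keep], y)
                ranks = np.abs(clf.coef_).sum(axis=0)          # importance per remaining gene
                n_drop = max(1, int(drop_frac * keep.size))
                keep = keep[np.argsort(ranks)[n_drop:]]        # discard the weakest genes
            return keep

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(60, 500))                     # 60 samples x 500 genes
            y = (X[:, :5].sum(axis=1) > 0).astype(int)         # only the first 5 genes matter
            selected = svm_guided_selection(X, y)
            print("selected genes:", np.sort(selected))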

  6. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  7. Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology

    NASA Astrophysics Data System (ADS)

    Morgan, T. W.; Thurgood, R. L.

    1984-05-01

    This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible use of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.

  8. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  9. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
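
    The idea of leveraging a spectral library can be sketched by seeding k-means with the library spectra of materials expected in the target area plus a few free centroids, so the resulting cluster centers form the image summary. The library, noise level and number of free clusters in the Python sketch below are assumptions, and the sketch omits the semi-supervised refinements of the actual method.

        import numpy as np
        from sklearn.cluster import KMeans

        def seeded_summary(pixels, library_spectra, n_free_clusters=2, seed=0):
            """Semi-supervised summarization: initialize k-means with known library
            spectra plus a few free centroids, so clusters are anchored to materials
            expected in the target area (a simplified sketch of the idea)."""
            rng = np.random.default_rng(seed)
            free = pixels[rng.choice(len(pixels), n_free_clusters, replace=False)]
            init = np.vstack([library_spectra, free])
            km = KMeans(n_clusters=len(init), init=init, n_init=1, random_state=seed)
            labels = km.fit_predict(pixels)
            return km.cluster_centers_, labels       # the centers are the image summary

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n_bands = 50
            library = rng.random((3, n_bands))                       # hypothetical known materials
            pixels = np.vstack([lib + 0.02 * rng.normal(size=(300, n_bands)) for lib in library])
            centers, labels = seeded_summary(pixels, library)
            print("summary size:", centers.shape, " cluster sizes:", np.bincount(labels))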

  10. Temperature and melt solid interface control during crystal growth

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1990-01-01

    Findings on the adaptive control of a transparent Bridgman crystal growth furnace are summarized. The task of the process controller is to establish a user-specified axial temperature profile by controlling the temperatures in eight heating zones. The furnace controller is built around a computer. Adaptive PID (Proportional Integral Derivative) and pole placement control algorithms are applied. The need for an adaptive controller stems from the fact that the zone dynamics change over time. The controller was tested extensively on lead bromide crystal growth. Several different temperature profiles and ampoule translation rates were tried. The feasibility of solid-liquid interface quantification by image processing was determined. The interface is observed by a color video camera and the image data file is processed to determine whether the interface is flat, convex or concave.
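
    The zone-temperature loop can be illustrated with a minimal discrete PID controller driving a toy first-order zone model in Python; the gains, time constant and heater gain below are hypothetical, and the online adaptation of the PID parameters described in the report is omitted for brevity.

        class PID:
            """Discrete PID controller (fixed gains; the furnace described above adapts
            its gains online, which is omitted here)."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_error = 0.0, 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        if __name__ == "__main__":
            # Toy first-order zone model: dT/dt = (-T + gain * heater_power) / tau.
            temp, setpoint, dt = 25.0, 350.0, 1.0
            pid = PID(kp=2.0, ki=0.05, kd=0.5, dt=dt)     # hypothetical gains
            for step in range(600):
                power = max(0.0, pid.update(setpoint, temp))
                temp += dt * (-temp + 0.8 * power) / 60.0
            print(f"zone temperature after 600 s: {temp:.1f} C")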

  11. Clustering of Variables for Mixed Data

    NASA Astrophysics Data System (ADS)

    Saracco, J.; Chavent, M.

    2016-05-01

    This chapter presents clustering of variables, whose aim is to lump together strongly related variables. The proposed approach works on a mixed data set, i.e. a data set which contains numerical variables and categorical variables. Two algorithms for clustering of variables are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
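
    A numeric-only Python simplification of variable clustering (the chapter's ClustOfVar and PCAmix are R packages and also handle categorical variables) is to use 1 - r^2 as the dissimilarity between columns and apply average-linkage hierarchical clustering; the synthetic two-factor data below are an assumption for illustration.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def cluster_variables(X, n_clusters):
            """Hierarchical clustering of (numerical) variables using 1 - r^2 as the
            dissimilarity between columns -- a numeric-only simplification of the
            mixed-data approach described above."""
            r2 = np.corrcoef(X, rowvar=False) ** 2
            dissim = 1.0 - r2
            np.fill_diagonal(dissim, 0.0)
            Z = linkage(squareform(dissim, checks=False), method="average")
            return fcluster(Z, t=n_clusters, criterion="maxclust")

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            base = rng.normal(size=(200, 2))                        # two latent factors
            X = np.column_stack([base[:, 0] + 0.1 * rng.normal(size=200) for _ in range(3)]
                                + [base[:, 1] + 0.1 * rng.normal(size=200) for _ in range(3)])
            print(cluster_variables(X, n_clusters=2))               # expect two groups of three variables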

  12. [G-CSF (Neupogen Roche) in the treatment of patients with chronic aplastic anemia with severe neutropenia].

    PubMed

    Novotný, J; Zvarová, M; Prazáková, L; Jandlová, M; Konvicková, L

    1995-10-01

    Aplastic anaemia (AA) of the chronic type with severe cytopenia is very frequently a difficult therapeutic problem. Patients with granulocyte values below 0.5 G/l are threatened by infections, incl. sepsis possibly with a fatal outcome. If the pool of stem cells for granulocytes is not completely exhausted and can respond to growth factors, these patients can be treated either chronically and/or in risk situations (e.g. injury, surgery) with preparations of the type of a recombinant, granulocyte colony stimulating factor (rhG-CSF), or granulocyte and monocyte colony stimulating factor (rhGM-CSF). The authors present a review of diagnostic and therapeutic algorithms in patients with the AA syndrome and summarize their own experience with the preparation Neupogen Roche (rhG-CSF).

  13. Multiphysics Simulations: Challenges and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keyes, David; McInnes, Lois C.; Woodward, Carol

    2013-02-12

    We consider multiphysics applications from algorithmic and architectural perspectives, where ‘‘algorithmic’’ includes both mathematical analysis and computational complexity, and ‘‘architectural’’ includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.

  14. Illicit Cosmetic Silicone Injection: A Recent Reiteration of History.

    PubMed

    Leonardi, Nicholas R; Compoginis, John M; Luce, Edward A

    2016-10-01

    The injection of liquid silicone for cosmetic augmentation has a history of both legal and illicit practice in the United States and worldwide. Recently, the American Society of Plastic Surgeons has launched a public awareness campaign through patient stories and various statements in response to the rise in deaths related to this illicit practice. A particular segment of the population that has become a target is the transgender patient group. A brief review of the history of industrial liquid silicone injection is provided, including the pathophysiology, to fully describe and review silicone injection injury. Three cases of soft tissue cellulitis and wound necrosis treated at our institution are summarized, and a treatment algorithm is proposed based on a literature review of treatment options and our own experience.

  15. State of the art on nuclear heating in a mixed (n/γ) field in research reactors

    NASA Astrophysics Data System (ADS)

    Amharrak, H.; Salvo, J. Di; Lyoussi, A.; Carette, M.; Reynard-Carette, C.

    2014-06-01

    This article aims at inventorying the knowledge on nuclear heating measurements in a mixed (n,γ) field in low-power research reactors using ThermoLuminescent Detectors (TLDs), Optically Stimulated Luminescent Detectors (OSLDs) and Ionization Chambers. The difficulty in measuring a mixed (n,γ) field in a reactor configuration lies in quantifying the contribution of the gamma photons and neutrons to the full signal measured by these detectors. The algorithms and experimental protocols developed together with the calculation methods used to assess the contribution of the neutron dose to the total integrated dose as measured by these detectors will be described in this article. This 'inventory' will be used to summarize the best methods to be used in relation to the requirements.

  16. Guidance, Navigation, and Control Technology Assessment for Future Planetary Science Missions

    NASA Technical Reports Server (NTRS)

    Beauchamp, Pat; Cutts, James; Quadrelli, Marco B.; Wood, Lincoln J.; Riedel, Joseph E.; McHenry, Mike; Aung, MiMi; Cangahuala, Laureano A.; Volpe, Rich

    2013-01-01

    Future planetary explorations envisioned by the National Research Council's (NRC's) report titled Vision and Voyages for Planetary Science in the Decade 2013-2022, developed for NASA Science Mission Directorate (SMD) Planetary Science Division (PSD), seek to reach targets of broad scientific interest across the solar system. This goal requires new capabilities such as innovative interplanetary trajectories, precision landing, operation in close proximity to targets, precision pointing, multiple collaborating spacecraft, multiple target tours, and advanced robotic surface exploration. Advancements in Guidance, Navigation, and Control (GN&C) and Mission Design in the areas of software, algorithm development and sensors will be necessary to accomplish these future missions. This paper summarizes the key GN&C and mission design capabilities and technologies needed for future missions pursuing SMD PSD's scientific goals.

  17. Diagnosis and management of carotid stenosis: a review.

    PubMed

    Nussbaum, E S

    2000-01-01

    Since its introduction in the 1950s, carotid endarterectomy has become one of the most frequently performed operations in the United States. The tremendous appeal of a procedure that decreases the risk of stroke, coupled with the large number of individuals in the general population with carotid stenosis, has contributed to its popularity. To provide optimal patient care, the practicing physician must have a firm understanding of the proper evaluation and management of carotid stenosis. Nevertheless, because of the large number of clinical trials performed over the last decade addressing the treatment of stroke and carotid endarterectomy, the care of patients with carotid stenosis remains a frequently misunderstood topic. This review summarizes the current evaluation and treatment options for carotid stenosis and provides a rational management algorithm for this prevalent disease process.

  18. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress made by the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  19. A statistical physics perspective on alignment-independent protein sequence comparison.

    PubMed

    Chattopadhyay, Amit K; Nasiev, Diar; Flower, Darren R

    2015-08-01

    Within bioinformatics, the textual alignment of amino acid sequences has long dominated the determination of similarity between proteins, with all that implies for shared structure, function and evolutionary descent. Despite the relative success of modern-day sequence alignment algorithms, so-called alignment-free approaches offer a complementary means of determining and expressing similarity, with potential benefits in certain key applications, such as regression analysis of protein structure-function studies, where alignment-based similarity has performed poorly. Here, we offer a fresh, statistical physics-based perspective focusing on the question of alignment-free comparison, in the process adapting results from 'first passage probability distribution' to summarize statistics of ensemble averaged amino acid propensity values. In this article, we introduce and elaborate this approach. © The Author 2015. Published by Oxford University Press.

  20. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.
