A Tutorial on Parallel and Concurrent Programming in Haskell
NASA Astrophysics Data System (ADS)
Peyton Jones, Simon; Singh, Satnam
This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs, using annotations to express opportunities for parallelism and to help control its granularity for effective execution on modern operating systems and processors. We then describe the mechanisms Haskell provides for writing explicitly parallel programs, focusing on the use of software transactional memory to share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
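The semi-explicit annotations described in this tutorial are GHC's `par` and `pseq` combinators. A minimal sketch, assuming GHC (the combinators are imported here from `GHC.Conc` in `base`; the `parallel` package's `Control.Parallel` is the more conventional import); the threshold value 15 is an illustrative granularity cutoff, not a recommendation:

```haskell
import GHC.Conc (par, pseq)  -- Control.Parallel (package `parallel`) re-exports these

-- Naive Fibonacci, used only to generate work for the example.
nfib :: Int -> Int
nfib n | n < 2     = 1
       | otherwise = nfib (n - 1) + nfib (n - 2)

-- `x `par` e` sparks x for possible evaluation on another core;
-- `y `pseq` e` evaluates y before e, giving the spark time to be
-- picked up. The cutoff keeps sparks coarse enough to pay for
-- their overhead (granularity control).
parFib :: Int -> Int
parFib n
  | n < 15    = nfib n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

main :: IO ()
main = print (parFib 20)  -- same value as nfib 20
```

Compiled with `-threaded` and run with `+RTS -N`, the sparks may be evaluated on other cores; the result is deterministic either way.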
ERIC Educational Resources Information Center
Warrington, Jacinta
2017-01-01
Haskell Indian Nations University opened 133 years ago, on September 17, 1884, as the U.S. Training and Industrial School--one of three original tribal boarding schools funded by the United States Congress. Three years later the school changed its name to Haskell Institute in honor of Dudley Chase Haskell, a U.S. representative from the Second…
76 FR 8788 - Riverside Casualty, Inc.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
.../search/search.htm or by calling (202) 551-8090. Applicant's Representations 1. The Haskell Company (``THC..., construction, real estate and facility management services. All of the outstanding shares of THC's common stock are owned by The Haskell Company Employee Stock Ownership Trust (``THC ESOP''); Preston H. Haskell III...
Book Learning and Life Lessons: Chris Sindone of Haskell Indian Nations University
ERIC Educational Resources Information Center
Sorensen, Barbara Ellen
2017-01-01
American Indian Higher Education (AIHEC) Student Congress president Chris Sindone (Pawnee of Oklahoma) was headed down a rough road, until Haskell Indian Nations University helped turn his life around. This profile describes Sindone's path to Haskell, highlights his successes and influences, as well as his plans for the future.
Ask-Elle: An Adaptable Programming Tutor for Haskell Giving Automated Feedback
ERIC Educational Resources Information Center
Gerdes, Alex; Heeren, Bastiaan; Jeuring, Johan; van Binsbergen, L. Thomas
2017-01-01
Ask-Elle is a tutor for learning the higher-order, strongly-typed functional programming language Haskell. It supports the stepwise development of Haskell programs by verifying the correctness of incomplete programs, and by providing hints. Programming exercises are added to Ask-Elle by providing a task description for the exercise, one or more…
2. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (b) EXT.-FRONT, LOOKING NORTHWEST - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
3. Historic American Buildings Survey Arthur C. Haskell, Photographer (c) EXT.- PART OF FRONT ELEVATION, LOOKING NORTHEAST - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
1. Historic American Buildings Survey Arthur C. Haskell, Photographer Feb. 15, 1938 (a) EXT.- FRONT & SIDE, LOOKING NORTHEAST - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
2. Historic American Buildings Survey Arthur C. Haskell, Photographer May 17, 1938 (b) EXT.- FRONT & SIDE, LOOKING NORTHWEST - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
2. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1937 (From snapshot made by Survey employee.) (b) Ext- Main building, south end. - Pollard Tavern, Great Road, Bedford, Middlesex County, MA
1. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1937 (From snapshot made by Survey Employee.) (a) Ext- General view from Southeast. - Pollard Tavern, Great Road, Bedford, Middlesex County, MA
3. Historic American Buildings Survey Arthur C. Haskell, Photographer Mar. 30, 1939 (c) EXT.- FRONT & SIDE, LOOKING NORTHEAST - M.I.T., Rogers Building, 491 Boylston Street, Boston, Suffolk County, MA
8. Historic American Buildings Survey, Arthur C. Haskell, Photographer. July 1934. (i) Paneled fireplace end, S.E. Room, second floor. - Captain Samuel Trevett House, 65 Washington Street, Marblehead, Essex County, MA
9. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (i) INT.- MANTEL, SOUTHEAST ROOM, 2nd. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
7. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (g) INT.- MANTEL, SOUTHWEST ROOM, 1st. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
9. Historic American Buildings Survey, Arthur C. Haskell, Photographer 1936 (g) Int- Paneled fireplace wall, Room A, (Dining Room) S.W. Corner. - Jabez Wilder House, Main Street, Hingham, Plymouth County, MA
10. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (j) INT.-MANTEL, SOUTHWEST ROOM, 2nd. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
1. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (a) EXT.-VIEW OF FRONT, LOOKING NORTHWEST - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
13. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (l) INT.-PANELED WALL & FIREPLACE, GUEST HOUSE - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
6. Historic American Buildings Survey Arthur C. Haskell, Photographer. 1935 (b) Ext-View of remaining East Portion from Atlantic Ave. - India Wharf Stores, 306-308 Atlantic Avenue, Boston, Suffolk County, MA
12. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 3, 1938 (k) INT.- MANTLE, NORTHEAST ROOM, 2nd. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
11. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1936 (k) Int- Paneled fireplace end, southeast room, first floor. - Squire William Sever House, 2 Linden Street, Kingston, Plymouth County, MA
5. Historic American Buildings Survey Arthur C. Haskell, Photographer May 17, 1938 (d) EXT.- FRONT ENTRANCE PORCH, LOOKING NORTHWEST - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
4. Historic American Buildings Survey Arthur C. Haskell, Photographer Feb. 15, 1938 (e) EXT.- FRONT ENTRANCE PORCH, LOOKING WEST - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
6. Historic American Buildings Survey Arthur C. Haskell, Photographer May 17, 1938 (f) EXT.- DETAILS OF FRONT ENTRANCE PORCH - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
9. Historic American Buildings Survey Arthur C. Haskell, Photographer Feb. 15, 1939 (n) INT.- MANTEL, SOUTHWEST ROOM, 1st. FLOOR - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
2. Historic American Buildings Survey Arthur C. Haskell, Photographer October, 1934 (c) GENERAL DETAIL OF LAMP STANDARD, AND PORCH FROM WEST - Iron Standard & Gate, Tremont Place, Boston, Suffolk County, MA
6. Historic American Buildings Survey Arthur C. Haskell, Photographer October, 1934 (d) TICKNOR HOUSE LAMP STANDARD AND RAILING FROM NORTHWEST - Amory-Ticknor House, 9 Park Street, Boston, Suffolk County, MA
1. Historic American Buildings Survey Arthur C. Haskell, Photographer October, 1934 (b) LAMP STANDARD, GATE AND RAILING, TREMONT PLACE, FROM NORTHWEST - Iron Standard & Gate, Tremont Place, Boston, Suffolk County, MA
9. Historic American Buildings Survey Arthur C. Haskell, Photographer Apr. 1, 1939 (i) INT.- STAIR HALL, 1st. FLOOR, LOOKING NORTH - M.I.T., Rogers Building, 491 Boylston Street, Boston, Suffolk County, MA
11. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (m) INT.-WALL PAPER, SOUTHWEST ROOM, 2nd. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
8. Historic American Buildings Survey Arthur C. Haskell, Photographer Aug. 30, 1938 (h) INT.- NORTH WALL, SOUTHEAST ROOM, 1st. FLOOR - Captain William Wildes House, 872 Commercial Street, Weymouth, Norfolk County, MA
7. Historic American Buildings Survey Arthur C. Haskell, Photographer. 1935 (c) Ext-General view remaining corner from S.W. corner Storer Street. - India Wharf Stores, 306-308 Atlantic Avenue, Boston, Suffolk County, MA
12. Historic American Buildings Survey Arthur C. Haskell, Photographer Apr. 1, 1939 (l) INT.- STAIRWAY, 4th FLOOR, LOOKING SOUTH - M.I.T., Rogers Building, 491 Boylston Street, Boston, Suffolk County, MA
10. Historic American Buildings Survey Arthur C. Haskell, Photographer Feb. 15, 1938 (o) INT.- WALL PAPER, SOUTHEAST ROOM, 1st. FLOOR - Major Israel Forster House, State Route 127, Manchester, Essex County, MA
8. Historic American Buildings Survey Arthur C. Haskell, Photographer October, 1934 (e) DETAIL VIEW OF TICKNOR HOUSE STANDARD AND STEPS FROM N.W. - Amory-Ticknor House, 9 Park Street, Boston, Suffolk County, MA
1. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (a) Ext- General front view from southeast. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA
3. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (c) Ext-Detail entrance on south. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA
4. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. (e) Int-Staircase from Jonathan Watson House, formerly on High St., Medford. - Colonel Isaac Royall Slave Quarters, 15 George Street, Medford, Middlesex County, MA
9. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. (k) Int-Detail of Corner Fireplace in Parlor (Living) (SE) Room, First Floor. - Daniel Shute House, Main & South Pleasant Streets, Hingham, Plymouth County, MA
8. Historic American Buildings Survey, Arthur C. Haskell, Photographer, Dec. 1934. (d) After-- Corner Birch and Summer Sts., looking Northwest. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
7. Historic American Buildings Survey, Arthur C. Haskell, Photographer. April, 1934. (d) Before--Corner Birch and Summer Sts. looking Northwest. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
9. Historic American Buildings Survey, Arthur C. Haskell, Photographer. April, 1934. (e) Before- Looking south along Summer St. from Merrimack St. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
10. Historic American Buildings Survey, Arthur C. Haskell, Photographer. Dec. 1934. (e) After-- Looking south along Summer St. from Merrimack St. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
2. Historic American Buildings Survey, Arthur C. Haskell, Photographer. 1935. From snapshot made by a Survey employee. (b) Ext- General view rear, looking from north. - Lucy Gray House, Indian Hill Road, North Tisbury, Dukes County, MA
9. Historic American Buildings Survey Arthur C. Haskell, Photographer Nov. 2, 1937 (i) INT.- SOUTHEAST WALL, WEST ROOM, 1st. FLOOR - General Joseph Dwight House, U.S. Route 7 & State Route 23, Great Barrington, Berkshire County, MA
8. Historic American Buildings Survey Arthur C. Haskell, Photographer. 1935 (d) Ext-Detail view S.W. Corner of remaining portion of old building. Corner Storer Street. - India Wharf Stores, 306-308 Atlantic Avenue, Boston, Suffolk County, MA
4. Historic American Buildings Survey, Arthur C. Haskell, Photographer. Dec. 1934. (b) After- View westerly from Summer St. between Washington and Birch Sts. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
3. Historic American Buildings Survey, Arthur C. Haskell, Photographer, April, 1934. (b) Before--View westerly from Summer St. between Washington and Birch Sts. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
8. ORIGINAL HELIUM COMPRESSOR, CIRCA 1957, BY HASKELL ENGINEERING, GLENDALE, CALIFORNIA. Looking north. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Helium Compression Plant, Test Area 1-115, intersection of Altair & Saturn Boulevards, Boron, Kern County, CA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
... Four Federal Coal Lease Applications in Haskell and LeFlore Counties, OK; Correction AGENCY: Bureau of... Four Federal Coal Lease Applications in Haskell and LeFlore Counties, Oklahoma. The notice omitted a...
1. Historic American Buildings Survey, Arthur C. Haskell, Photographer April, 1934 (a) Before- Summer St. looking south from between Washington and Birch Streets. (Stockman House) - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
5. Historic American Buildings Survey, Arthur C. Haskell, Photographer April, 1934. (c) Before- View looking south along Summer St. from corner of Washington St - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
2. Historic American Buildings Survey, Arthur C. Haskell, Photographer. Dec. 1934. (b) After- view of Summer St. looking south from between Washington and Birch Streets. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
6. Historic American Buildings Survey, Arthur C. Haskell, Photographer. Dec. 1934. (c) After- View looking south along Summer St. from corner of Washington St. - Highway Cut-off Demolition Area, Summer, Winter, High & Merrimac Streets, Newburyport, Essex County, MA
Student Handbook--Haskell Indian Junior College, Lawrence, Kansas.
ERIC Educational Resources Information Center
Haskell Indian Junior Coll., Lawrence, KS.
Designed for prospective and incoming American Indian students, this handbook on Haskell Indian Junior College presents information on the following: (1) School Calendar; (2) Office Directory; (3) History and Traditions (school hymn and song, historical development, and statement of school philosophy); (4) Academic Life (degree programs,…
Haskell before Haskell: Curry's Contribution to Programming (1946-1950)
NASA Astrophysics Data System (ADS)
de Mol, Liesbeth; Bullynck, Maarten; Carlé, Martin
This paper discusses Curry's work on how to implement the problem of inverse interpolation on the ENIAC (1946) and his subsequent work on developing a theory of program composition (1948-1950). It is shown that Curry anticipated automatic programming and that his logical work influenced his composition of programs.
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science that could otherwise only be undertaken on state-of-the-art supercomputers to be carried out on local workstations.
A method to calculate synthetic waveforms in stratified VTI media
NASA Astrophysics Data System (ADS)
Wang, W.; Wen, L.
2012-12-01
Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to VTI media. GRTM has the advantage of remaining stable in high-frequency calculations, unlike the Haskell matrix method (Haskell 1964), which explicitly excludes the exponential growth terms in the propagation matrix and is limited to low-frequency computation. In the implementation, we also improve GRTM in two aspects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence. This improvement is especially important when the depths of the source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because each frequency is computed independently, the program can also be implemented effectively in parallel. Our method provides a powerful tool for synthesizing broadband seismograms in VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
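The Shanks transformation mentioned above is a standard series-acceleration device: from three consecutive partial sums it forms S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} - 2 A_n + A_{n-1}). A small, self-contained sketch in Haskell; the slowly convergent Leibniz series for pi/4 is used here purely as a stand-in for the wavenumber sums in the paper:

```haskell
-- One pass of the Shanks transformation over a sequence of partial sums.
-- Each output element is built from three consecutive inputs.
shanks :: [Double] -> [Double]
shanks (a : b : c : rest) =
  ((c * a - b * b) / (c - 2 * b + a)) : shanks (b : c : rest)
shanks _ = []

-- Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4,
-- which converges very slowly on its own.
leibniz :: [Double]
leibniz = scanl1 (+) [ (-1) ^^ k / fromIntegral (2 * k + 1) | k <- [0 ..] ]

main :: IO ()
main = do
  print (4 * (leibniz !! 9))         -- plain 10-term partial sum, ~3.0418
  print (4 * (shanks leibniz !! 8))  -- one Shanks pass, ~3.1419
```

A single pass already recovers several digits of pi from only ten terms; in the paper's setting the same idea accelerates convergence of the wavenumber integrals when source and receiver depths are close.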
Temporally Aware Reactive Systems
2005-03-01
programming language, as does O'Haskell. However, there are significant differences. In OCaml, state, objects and concurrency are orthogonal aspects. They... difference between the two languages is that OCaml is strict, while expression evaluation in O'Haskell is lazy. That difference, however, is not... languages; Static Checking; Overload tolerance; Graceful degradation
West Central Texas. Homework pays off for Originala in Haskell County
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickey, V.
1983-03-01
Originala Petroleum Corp., Fort Worth, is finding Bend Conglomerate and Caddo oil in NW Haskell County, Texas. The most encouraging find to date is the company's No. 1 June L. White, which potentialed in Sept. 1981 for 493 bopd from perforations in the Caddo at 5638-60 ft. This discovery, along with promising Bend Conglomerate drill stem test and log shows in other wells in the region, supports continued exploration efforts in this geologically complex province. The key to overcoming the exploration challenges in NW Haskell County is to depend primarily upon seismic data for structural control. Accurate seismic interpretation is only a part of the preparation, and is integrated with other geologic data-collecting methods such as gravity and structural mapping based solely on subsurface control.
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing
(KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.
Extension of Alvis compiler front-end
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl
2015-12-31
Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system): execution of any language statement is a transition between formally defined states of the model. The LTS graph is generated from a middle-stage Haskell representation of the Alvis model. Moreover, Haskell is used as a part of the Alvis language itself, to define parameter types and the operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages such as Java or C would make it possible to use these languages as part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.
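As a toy illustration of the role Haskell plays in such a setting, defining parameter types and operations on them, consider the following sketch. The `Packet` type and `forward` function are purely hypothetical illustrations and not part of Alvis's actual library:

```haskell
-- Hypothetical parameter type an agent-based model might carry on
-- its ports (illustrative only, not Alvis's API).
data Packet = Packet { src :: Int, dst :: Int, payload :: String }
  deriving (Show, Eq)

-- A pure operation on the parameter type: hand the packet to the
-- next hop, recording where it came from.
forward :: Int -> Packet -> Packet
forward hop p = p { src = dst p, dst = hop }

main :: IO ()
main = print (forward 3 (Packet 1 2 "ping"))
-- prints: Packet {src = 2, dst = 3, payload = "ping"}
```

Because such operations are pure Haskell values, the model's state transitions stay deterministic, which is what lets the compiler enumerate them into an LTS graph.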
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base comprises over 6 million lines of code organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers, and is used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and from project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CernVM-FS.
The use of parallelism, caching and code optimisation reduced software build time and environment setup time by several times, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the rupture models obtained. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that it can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
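For intuition about the Haskell rupture model used here: the classical far-field pulse of a uniform, unilaterally rupturing (Haskell) source is the convolution of two boxcars, one of width equal to the rise time and one of width equal to the apparent rupture duration, which yields a trapezoid. A discrete numerical sketch; the time step and durations below are illustrative values, not parameters from this abstract:

```haskell
-- Unit-area boxcar of the given width, sampled at step dt.
boxcar :: Double -> Double -> [Double]
boxcar dt width = replicate n (1 / fromIntegral n)
  where n = max 1 (round (width / dt))

-- Discrete linear convolution of two finite signals.
conv :: [Double] -> [Double] -> [Double]
conv xs ys =
  [ sum [ xs !! i * ys !! (k - i)
        | i <- [max 0 (k - m + 1) .. min k (n - 1)] ]
  | k <- [0 .. n + m - 2] ]
  where n = length xs
        m = length ys

-- Trapezoidal far-field pulse: rise time 0.5 s convolved with an
-- apparent rupture duration of 1.0 s, at dt = 0.1 s. Unit area is
-- preserved by the convolution.
main :: IO ()
main = print (conv (boxcar 0.1 0.5) (boxcar 0.1 1.0))
```

Each MHS sub-event contributes one such trapezoid, time-shifted and stretched by directivity, so a sum of a few sub-events can reproduce quite complex apparent source time functions.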
Optimising the Parallelisation of OpenFOAM Simulations
2014-06-01
UNCLASSIFIED. Optimising the Parallelisation of OpenFOAM Simulations. Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. ABSTRACT: The OpenFOAM computational fluid dynamics toolbox allows parallel computation of... performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel Sandy Bridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques, together with optimisations for alleviating the NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation, yet our review of existing work has found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation.
To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and algorithms handle a design optimisation with over 200 design variables successfully and efficiently.
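At its core, the kind of optimiser-to-solver integration described above reduces to a loop that proposes candidate designs and farms their evaluation out to parallel workers. The sketch below is hypothetical and minimal: `cfd_objective` is a cheap analytic stand-in for a real solver run, and a thread pool stands in for dispatching jobs to a cluster.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def cfd_objective(x):
    """Stand-in for one CFD evaluation (a cheap analytic proxy here);
    a real system would launch a solver job and parse its result files."""
    return sum((xi - 0.3) ** 2 for xi in x)

def random_search(n_iters=20, dim=5, pop=8, seed=0):
    """Optimiser layer: propose a population of designs, evaluate them
    in parallel, keep the best design seen so far."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(n_iters):
            cand = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                    for _ in range(pop)]
            for x, f in zip(cand, pool.map(cfd_objective, cand)):
                if f < best_f:
                    best_x, best_f = x, f
    return best_x, best_f
```

In a layered architecture of this kind, swapping the objective (code-level integration) or exchanging design vectors and result files (data-level integration) leaves the optimisation loop unchanged.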
Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian
2005-01-01
This report describes a method to culture insect cells in 24 deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents, and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead-time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., including (but not limited to) school or institution custodial or maintenance personnel, and whose services... Arts, and Haskell Indian Junior College, and those operated at Tribally controlled community colleges...
Improved packing of protein side chains with parallel ant colonies.
Quan, Lijun; Lü, Qiang; Li, Haiou; Xia, Xiaoyan; Wu, Hongjie
2014-01-01
The accurate packing of protein side chains is important for many computational biology problems, such as ab initio protein structure prediction, homology modelling, protein design and ligand docking applications. Many existing solutions are modelled as a computational optimisation problem. Beyond the design of the search algorithm, most solutions suffer from an inaccurate energy function for judging whether a prediction is good or bad: even if the search has found the lowest energy, there is no certainty of obtaining a protein structure with correct side chains. We present a side-chain modelling method, pacoPacker, which uses a parallel ant colony optimisation strategy based on sharing a single pheromone matrix. This parallel approach combines different sources of energy functions and generates protein side-chain conformations with the lowest energies jointly determined by the various energy functions. We further optimised the selected rotamers to construct subrotamers by rotamer minimisation, which reasonably improved the discreteness of the rotamer library. We focused on improving the accuracy of side-chain conformation prediction. For a testing set of 442 proteins, 87.19% of X1 and 77.11% of X12 angles were predicted correctly within 40° of the X-ray positions. We compared the accuracy of pacoPacker with state-of-the-art methods such as CIS-RR and SCWRL4, and analysed the results from different perspectives, in terms of whole protein chains and individual residues. In this comprehensive benchmark testing, 51.5% of proteins within a length of 400 amino acids predicted by pacoPacker were superior to the results of both CIS-RR and SCWRL4 simultaneously. Finally, we also showed the advantage of the subrotamers strategy. All results confirmed that our parallel approach is competitive with state-of-the-art solutions for packing side chains. This parallel approach combines various sources of searching intelligence and energy functions to pack protein side chains.
It provides a framework for combining objective functions of differing accuracy and usefulness by designing parallel heuristic search algorithms.
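The shared-pheromone scheme can be sketched as follows. The rotamer energies here are a separable toy (a real packer's energy terms couple neighbouring residues), and all names are illustrative, not pacoPacker's API; colonies running in parallel would simply read from and deposit into the same `tau` matrix.

```python
import math
import random

def aco_pack(energies, n_ants=20, n_iters=30, rho=0.1, seed=1):
    """Toy ant-colony rotamer selection with one shared pheromone matrix.
    energies[p][r] is the (toy, separable) energy of rotamer r at
    position p; lower total energy is better."""
    rng = random.Random(seed)
    tau = [[1.0] * len(row) for row in energies]             # shared pheromone
    eta = [[math.exp(-e) for e in row] for row in energies]  # heuristic bias
    best, best_e = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            sol = []
            for p, row in enumerate(energies):               # one ant builds
                w = [tau[p][r] * eta[p][r] for r in range(len(row))]
                sol.append(rng.choices(range(len(row)), weights=w)[0])
            e = sum(energies[p][r] for p, r in enumerate(sol))
            if e < best_e:
                best, best_e = sol, e
        for p, r in enumerate(best):                         # evaporate+deposit
            tau[p] = [(1 - rho) * t for t in tau[p]]
            tau[p][r] += rho * (1.0 + 1.0 / (1.0 + best_e))
    return best, best_e
```

Because every ant, wherever it runs, updates the single `tau`, good rotamer choices found under one energy function reinforce the search driven by the others.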
Importance of a 3D forward modeling tool for surface wave analysis methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Le Feuvre, Mathieu; Donatienne, Leparoux; Philippe, Côte; Yann, Capdeville
2016-04-01
In recent years, seismic surface wave analysis methods (SWM) have been widely developed and tested in the context of subsurface characterization and have demonstrated their effectiveness for sounding and monitoring purposes, e.g., high-resolution tomography of the principal geological units of California or real-time monitoring of the Piton de la Fournaise volcano. Historically, these methods were mostly developed under the assumption of a semi-infinite 1D layered medium without topography. The forward modeling is generally based on the Thomson-Haskell matrix modeling algorithm and the inversion is driven by Monte-Carlo sampling. Given their efficiency, SWM have been transferred to several scales, including civil engineering structures, in order to, e.g., determine the so-called Vs30 parameter or assess other critical constructional parameters in pavement engineering. However, at this scale, many structures exhibit 3D surface variations which drastically limit the efficiency of SWM. Indeed, even in the case of a homogeneous structure, 3D geometry can bias the dispersion diagram of Rayleigh waves to the point of producing discontinuous phase velocity curves, which drastically impacts the 1D mean velocity model obtained from dispersion inversion. Taking advantage of accessible high-performance computing centers and advances in wave propagation modeling algorithms, it is now possible to use a 3D elastic forward modeling algorithm instead of the Thomson-Haskell method in the SWM inversion process. We use a parallelized 3D elastic modeling code based on the spectral element method, which allows us to obtain accurate synthetic data with very low numerical dispersion at a reasonable numerical cost. In this study, we choose dike embankments as an illustrative example. We first show that their longitudinal geometry may have a significant effect on dispersion diagrams of Rayleigh waves.
Then, we demonstrate the necessity of 3D elastic modeling as a forward problem for the inversion of dispersion curves.
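For context, the Thomson-Haskell forward model that SWM inversions traditionally rely on is just a product of 2x2 layer propagator matrices. The sketch below is the textbook SH/Love-wave case (Rayleigh waves need 4x4 matrices), not the authors' code: a free-surface displacement-stress vector is propagated down the layer stack and a decay condition is imposed in the half-space.

```python
import math

def layer_matrix(omega, c, h, beta, rho):
    """Propagator for the SH displacement-stress vector [v, tau] across
    one layer: thickness h, shear velocity beta, density rho."""
    mu = rho * beta * beta
    arg = (omega / beta) ** 2 - (omega / c) ** 2
    if arg > 0:                      # c > beta: oscillatory solution
        nu = math.sqrt(arg)
        return [[math.cos(nu * h), math.sin(nu * h) / (mu * nu)],
                [-mu * nu * math.sin(nu * h), math.cos(nu * h)]]
    g = math.sqrt(-arg)              # c < beta: evanescent solution
    return [[math.cosh(g * h), math.sinh(g * h) / (mu * g)],
            [mu * g * math.sinh(g * h), math.cosh(g * h)]]

def love_dispersion(omega, c, layers, beta_h, rho_h):
    """Changes sign at Love-wave phase velocities: propagate the
    free-surface vector [1, 0] down the stack, then require decay
    in the half-space (beta_h, rho_h)."""
    v, tau = 1.0, 0.0
    for h, beta, rho in layers:
        m = layer_matrix(omega, c, h, beta, rho)
        v, tau = m[0][0] * v + m[0][1] * tau, m[1][0] * v + m[1][1] * tau
    mu_h = rho_h * beta_h ** 2
    g = math.sqrt((omega / c) ** 2 - (omega / beta_h) ** 2)
    return tau + mu_h * g * v

def bisect(f, a, b, n=60):
    """Simple bisection root finder; assumes f(a) and f(b) differ in sign."""
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)
```

For a single layer over a faster half-space, bisecting between the two shear velocities recovers the fundamental Love mode; this reduces to the classical dispersion equation tan(nu*h) = mu_H*gamma_H / (mu_1*nu_1). The speed of this 1D computation, against a full 3D spectral-element solve, is exactly the trade-off the abstract weighs.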
NASA Astrophysics Data System (ADS)
Advocate, Dev L.
The matter of the viscosity of the mantle has started to become serious. In 1935, Norm Haskell estimated the viscosity to be about 10^20 poise and there the matter stood for about half a century. For a little while, people worried about excess ellipticity of the Earth and attributed this to a "fossil bulge" that lagged the rotation rate. For this same little while, 10^25 poise was thought to be the viscosity of the lower mantle, but then it was discovered that the equator was also out of shape by about the same amount, ruling out the "fossil bulge" idea. To cover their embarrassment, geodynamicists upped the viscosity of the mantle to 10^21 by adopting S.I. (Satan's Invention) units. No one noticed for some time since it didn't really matter whether viscosity was given in stokes, poise, or pascal seconds. It was just a large number with a large uncertainty and no one had a feel for it anyway.
2007-09-17
been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et...Coello et al (Coello, Veldhuizen et al. 2002). 4.4.2 HIERARCHICAL POPULATION TOPOLOGY A hierarchical population topology, when integrated into...to hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen, Zydallis et al. 2003); it uses a master-slave
ERIC Educational Resources Information Center
Godfrey, George; Wildcat, Daniel
2002-01-01
Describes a science and cultural exchange between Haskell Indian Nations University and Gorno Altaisk State University in the Federation of Russia. Reports that students and faculty focused on water quality and began development of a "train-the-trainers" program for sampling drinking water. (NB)
Comparing models for perfluorooctanoic acid pharmacokinetics using Bayesian analysis
Selecting the appropriate pharmacokinetic (PK) model given the available data is investigated for perfluorooctanoic acid (PFOA), which has been widely analyzed with an empirical, one-compartment model. This research examined the results of experiments [Kemper R. A., DuPont Haskel...
Acoustic Resonator Optimisation for Airborne Particle Manipulation
NASA Astrophysics Data System (ADS)
Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian
Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. An optimised resonator design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the effect of acoustic attenuation, which is frequency dependent, is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.
Activities commemorating John B. Herrington as first Native American astronaut
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. -- Chickasaw Nation Cultural Resources Director Haskell Alexander (left) presents a gift to Joyce and James Herrington, parents of John Herrington, mission specialist on mission STS-113. Herrington is the first Native American to go into space.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
On the Computer Generation of Adaptive Numerical Libraries
2010-05-01
D.; Borowski, P.; Clark, T.; Clerc, D.; Dachsel, H.; Deegan, M.; Dyall, K.; Elwood, D.; Bibliography 123 Glendening, E.; Gutowski, M.; Hess, A...Science, pages 72–83. Springer, 2007. 84 Curry, Haskell B.; Feys, Robert; Craig, William. Combinatory Logic, volume 1. North-Holland Publishing
Transfer of Instructional Practices from Freedom Schools to the Classroom
ERIC Educational Resources Information Center
Stanford, Myah D.
2017-01-01
The instructional practices of three current classroom teachers who formerly served as Servant Leader Interns (SLIs) in the Children's Defense Fund Freedom Schools (CDFFS) Program were examined. Haskell ("Transfer of learning: cognition, instruction, and reasoning." Academic Press, San Diego, 2001) outlined eleven principles of transfer…
Adaptive Behavioral Outcomes: Assurance of Learning and Assessment
ERIC Educational Resources Information Center
Baker, David S.; Stewart, Geoffrey T.
2012-01-01
Business schools are currently being criticized for lacking relevance to the applied working environment in which students are supposed to be prepared to make immediate contributions and reasoned independent decisions in a fluidly changing market (Haskell and Beliveau, 2010, and Michlitsch and Sidle, 2002). While technical skills (accounting,…
CYTOKINE PROFILING FOR CHEMICAL RESPIRATORY SENSITIZERS
CYTOKINE PROFILING FOR CHEMICAL RESPIRATORY SENSITIZERS. LM Plitnick1, SE Loveless2, GS Ladics2, MP Holsapple3, MJ Selgrade4, DM Sailstad4 & RJ Smialowicz4. 1UNC, Chapel Hill, NC; 2DuPont Co., Haskell Laboratory, Newark, DE; 3Dow Chemical, Midland, MI & 4USEPA, NHEERL, RTP, NC.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
.... Scoping for the environmental assessment (EA) on use of specified genetically modified crops in... of genetically modified crops in association with the cooperative farming program was released on... assessment of using specified genetically modified crops into the CCP and determined that an environmental...
Creating Meaningful Study Abroad Programs for American Indian Postsecondary Students.
ERIC Educational Resources Information Center
Calhoon, J. Anne; Wildcat, David; Annett, Cynthia; Pierotti, Raymond; Griswold, Wendy
2003-01-01
A study-abroad exchange program for American Indian students at Haskell Indian Nations University (Kansas) and indigenous Altaian students at a Siberian university studied water quality issues common to both countries. Connectedness with the global Indigenous community was enhanced by comparing traditional knowledge. Mentoring and traveling as a…
Developing a GIS Program at a Tribal College
ERIC Educational Resources Information Center
Kostelnick, John C.; Rowley, Rex J.; McDermott, David; Bowen, Carol
2009-01-01
Programs in geographic information systems (GIS) and related areas (e.g., GPS, remote sensing) have become important additions to the curriculum at colleges and universities of all sizes and types, including tribal colleges and universities (TCUs) such as Haskell Indian Nations University. This article discusses the recent development of a GIS…
Keeping It Alive: Centers Contribute to Cultural Renaissance on College Campuses.
ERIC Educational Resources Information Center
Simonelli, Richard
2003-01-01
Describes how AIHEC's Cultural Learning Centers share the people's stories through photos, artwork, Native languages, exhibits, and gardens. Gives examples of a variety of learning centers including Where The Water Stops, Omaeqnomenewak Pematesenewak, Haskell Center For Healing, and the Spirit of the Plains. Concludes the future of Cultural…
Boarding School Seasons: American Indian Families, 1900-1940.
ERIC Educational Resources Information Center
Child, Brenda J.
This book draws on hundreds of letters by students, parents, and school officials to explore American Indian, specifically Ojibwa, perspectives of the boarding school experience in the period from 1900-1940. The three institutions studied are Haskell Institute (Kansas), Flandreau School (South Dakota), and Pipestone School (Minnesota). Chapter 1…
ERIC Educational Resources Information Center
Pember, Mary Annette
2011-01-01
In response to his kindness, Roger Bollinger was exposed to an ugly side of history. Like most Americans, Bollinger was blissfully unaware of the painful story of American Indian boarding schools. A civic-minded and concerned citizen, he supports education and cultural understanding. Such sentiments moved him to donate to Haskell Indian Nations…
Language Policy: Lessons from Global Models (1st, Monterey, California, September 2001).
ERIC Educational Resources Information Center
Baker, Steven J., Ed.
These papers come from a 2001 conference that explored language policy issues at the global, U.S. national, and California regional levels. There are 15 papers in five sections. Section 1, "National Language Policy," includes (1) "Language and Globalization: Why National Policies Matter" (Chester D. Haskell) and (2) "Real…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-27
... Headquarters, Wilbur Wright Room, 55 Invernes Drive East, Englewood, Colorado, 80112, USA, John Kasten, E-mail: [email protected] , telephone (303) 328-4535, mobile (303) 260-9652. Alternate Contact, Lisa Haskell, E- mail: lisa[email protected] , telephone (303) 328-6891. FOR FURTHER INFORMATION CONTACT...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-24
... Oklahoma Resource Management Plan, as amended, and associated Environmental Assessment (EA) in response to... Intent To Prepare a Resource Management Plan Amendment (RMPA) and Associated Environmental Assessment Addressing Four Federal Coal Lease Applications in Haskell and LeFlore Counties, OK AGENCY: Bureau of Land...
CURRENT STATE OF PREDICTING THE RESPIRATORY ALLERGY POTENTIAL OF CHEMICALS: WHAT ARE THE ISSUES?
Current State of Predicting the Respiratory Allergy Potential of Chemicals: What Are the Issues? M I. Gilmour1 and S. E. Loveless2, 1USEPA, Research Triangle Park, NC and 2DuPont Haskell Laboratory, Newark, DE.
Many chemicals are clearly capable of eliciting immune respon...
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support at up to nine amount levels. As an efficient tool for high-throughput screening, and perfectly matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is used to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
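A genetic search over such a combinatorial composition space can be sketched as below. The encoding (three metal/amount-level genes) mirrors the search space described, but `toy_activity` is an invented stand-in for the monolith-reactor conversion measurements, and none of the names come from the paper.

```python
import random

def evolve(fitness, n_genes=3, n_metals=49, n_levels=9,
           pop_size=30, n_gens=25, p_mut=0.1, seed=2):
    """Toy genetic algorithm over catalyst compositions: an individual is
    a list of (metal, amount-level) genes; higher fitness is better."""
    rng = random.Random(seed)
    def rand_ind():
        return [(rng.randrange(n_metals), rng.randrange(n_levels))
                for _ in range(n_genes)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < p_mut:              # point mutation
                i = rng.randrange(n_genes)
                child[i] = (rng.randrange(n_metals), rng.randrange(n_levels))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def toy_activity(ind):
    """Invented objective: reward metals with index < 5 at high loading;
    a real run would score parallel CO/HC conversion measurements."""
    return sum(level for metal, level in ind if metal < 5)
```

In the screening workflow described, one generation of individuals corresponds to one parallel run of the multi-channel reactor, with the measured conversions taking the place of `toy_activity`.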
History of an Indian Library and Challenges for Today
ERIC Educational Resources Information Center
Zuber-Chall, Susan
2010-01-01
Tommaney Library at Haskell Indian Nations University has existed for more than 100 years as reflection of the struggle to assimilate Indians in America. Its history is one that mirrors that of the struggle of our indigenous people to this day. This article is about that history and how today the library manifests the dichotomy between Indians and…
Classification of six ordinary chondrites from Texas
NASA Astrophysics Data System (ADS)
Ehlmann, Arthur J.; Keil, Klaus
1988-12-01
Based on optical microscopy, modal and electron microprobe analyses, six ordinary chondrites from Texas were classified in compositional groups, petrologic types, and shock facies. These meteorites are Comanche (stone), L5c; Haskell, L5c; Deport (a), H4b; Naruna (a), H4b; Naruna (b), H4b; and Clarendon (b), H5d.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-16
... the United States of America in lands located in LeFlore and Haskell Counties, Oklahoma. DATES: This... State Office, P. O. Box 27115, Santa Fe, New Mexico 87502-0115 and Vale Exploration USA, Inc., 1209... (BLM) regulations, all interested parties are hereby invited to participate with Vale Exploration USA...
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach.
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
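The basket idea - decoupling "work to do" from "which event produced it" so that any computing resource can pick up the next vector of particles - can be sketched with a plain work queue. This is a toy in Python threads, not the Geant-V scheduler, and the "transport step" is a stand-in increment.

```python
import queue
import threading

def process_baskets(particles, n_workers=4, basket_size=16):
    """Group particles into fixed-size baskets and let a pool of workers
    drain the shared queue; no worker is tied to a particular event."""
    work = queue.Queue()
    for i in range(0, len(particles), basket_size):
        work.put(particles[i : i + basket_size])
    done, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                basket = work.get_nowait()
            except queue.Empty:
                return
            stepped = [p + 1 for p in basket]   # stand-in transport step
            with lock:
                done.extend(stepped)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Processing a whole basket at a time is also what makes the per-particle physics amenable to SIMD: within a basket, the same step is applied to a vector of similar particles.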
A novel artificial immune clonal selection classification and rule mining with swarm learning model
NASA Astrophysics Data System (ADS)
Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.
2013-06-01
Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the artificial immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of clonal selection and the speed and self-organisation merits of particle swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm requires less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy and comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with five commonly used CSAs, namely AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA, using eight benchmark datasets. We also compared it with five other methods, namely Naïve Bayes, SVM, MLP, CART, and RFB. The results show that the proposed algorithm is comparable to the 10 studied algorithms. The hybridisation of CSA and PSO thus lets each component develop its respective merits, compensate for the other's weaknesses, and improve both the effectiveness and the speed of the search.
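The hybrid's core idea - replacing clonal selection's blind hypermutation with a PSO-style pull toward the best antibody found so far - can be sketched on a toy minimisation problem. All names and parameters here are illustrative, not the CS2 paper's.

```python
import random

def cs2_sketch(fitness, dim=2, pop=12, n_clones=4, iters=40, c1=1.5, seed=3):
    """Clonal selection whose clones mutate by moving toward the global
    best (PSO-style) plus small Gaussian noise; minimises `fitness`."""
    rng = random.Random(seed)
    antibodies = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    gbest = min(antibodies, key=fitness)
    for _ in range(iters):
        new_pop = []
        for a in antibodies:
            clones = [a]                       # keep the parent (elitism)
            for _ in range(n_clones):
                clones.append([a[d] + c1 * rng.random() * (gbest[d] - a[d])
                               + rng.gauss(0.0, 0.1) for d in range(dim)])
            new_pop.append(min(clones, key=fitness))   # clonal selection
        antibodies = new_pop
        gbest = min(antibodies + [gbest], key=fitness)  # shared swarm memory
    return gbest
```

The shared `gbest` is the "information sharing between the clonal selection population and the particle swarm" in miniature: every antibody's clones are biased by the single best solution any of them has found.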
Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks
NASA Astrophysics Data System (ADS)
Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang
2016-01-01
The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Given the difficulty of controller design under multiple constraints in the multirate switching model, the model can be converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that this closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller or a single-rate parallel distributed compensation under the same conditions.
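Parallel distributed compensation itself is easy to demonstrate: one state-feedback gain per fuzzy rule, blended by the same membership functions as the model. The toy below (invented numbers, a scalar single-rate system, Euler integration) shows only the mechanism; it is not the paper's multirate design, which additionally handles delays and dropouts via a convex (LMI) problem.

```python
def simulate_pdc(x0=2.0, dt=0.01, steps=500):
    """Two-rule T-S model dx = a_i*x + u with PDC gains k_i chosen so
    that each closed-loop rule is dx = -x; since the memberships sum
    to one, the blended closed loop is dx = -x as well."""
    a = [1.0, 2.0]        # local (unstable) modes
    k = [-2.0, -3.0]      # PDC gains: a_i + k_i = -1 for each rule
    x = x0
    for _ in range(steps):
        h1 = 1.0 / (1.0 + x * x)        # membership functions, h1 + h2 = 1
        h = [h1, 1.0 - h1]
        u = sum(h[i] * k[i] * x for i in range(2))
        dx = sum(h[i] * a[i] * x for i in range(2)) + u
        x += dt * dx
    return x
```

Both local modes are unstable on their own, yet the blended controller drives the state to zero regardless of how the memberships vary along the trajectory, which is the essence of PDC.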
Waugh, Caryll; Cromer, Deborah; Grimm, Andrew; Chopra, Abha; Mallal, Simon; Davenport, Miles; Mak, Johnson
2015-04-09
Massively parallel sequencing is a potent tool for dissecting the regulation of biological processes by revealing the dynamics of the cellular RNA profile under different conditions. Similarly, massively parallel sequencing can be used to reveal the complexity of viral quasispecies that are often found in the RNA virus infected host. However, the production of cDNA libraries for next-generation sequencing (NGS) necessitates the reverse transcription of RNA into cDNA and the amplification of the cDNA template using PCR, which may introduce artefacts in the form of phantom nucleic acid species that can bias the composition and interpretation of the original RNA profiles. Using HIV as a model, we have characterised the major sources of error during the conversion of viral RNA to cDNA, namely excess RNA template and the RNaseH activity of the polymerase enzyme, reverse transcriptase. In addition, we have analysed the effect of PCR cycle number on the detection of recombinants and assessed the contribution of transfection of highly similar plasmid DNA to the formation of recombinant species during the production of our control viruses. We have identified RNA template concentration, RNaseH activity of reverse transcriptase, and PCR conditions as key parameters that must be carefully optimised to minimise chimeric artefacts. Using our optimised RT-PCR conditions, in combination with our modified PCR amplification procedure, we have developed a reliable technique for accurate determination of RNA species using NGS technology.
NASA Astrophysics Data System (ADS)
Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.
2017-12-01
In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into a single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
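The two formulations differ only in how candidate schedules are compared: P1 scalarises the (makespan, rejection-cost) pair into one number, while P2 keeps the Pareto non-dominated set. Both comparisons fit in a few lines (illustrative helpers, not LACO/PACO internals):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """P2-style comparison: keep the non-dominated (makespan, cost) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def scalarised(points, alpha=0.5):
    """P1-style comparison: linear combination of makespan and cost."""
    return min(points, key=lambda p: alpha * p[0] + (1 - alpha) * p[1])
```

In an ACO setting, `scalarised` would rank the ants of LACO for pheromone deposit, while PACO would archive and reinforce every member of `pareto_front`.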
Resonance Phenomena in Goupillaud-type Media
2010-10-01
time-harmonic forcing function at one end, with the other end fixed. Analytical stress solutions are derived from a global system of recursion...relationships using z-transform methods, where the determinant of the resulting global system matrix |Am| in the z-space is a palindromic polynomial with real...media (35). The present treatment uses a global matrix method that is attributed to Knopoff (36), rather than the Thomson-Haskell transfer matrix
A domain specific language for performance portable molecular dynamics algorithms
NASA Astrophysics Data System (ADS)
Saunders, William Robert; Grant, James; Müller, Eike Hermann
2018-03-01
Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be both experts in their own domain (physics/chemistry/biology) and specialists in the low-level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
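The "separation of concerns" idea can be sketched as a per-pair kernel handed to a generic pairwise executor. The names (`lj_kernel`, `pairwise_loop`) and the serial loop below are illustrative assumptions, not the paper's actual DSL or code generator, which would emit optimised parallel code from the same kernel.

```python
# Hedged sketch: the scientist writes only a per-pair kernel; a generic
# executor applies it over all particle pairs. In the paper this loop is
# what gets code-generated for OpenMP/MPI/GPU back-ends.
import itertools

def lj_kernel(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a function of squared distance r2."""
    s6 = (sigma * sigma / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def pairwise_loop(positions, kernel):
    """Generic executor: plain serial loop over all unordered pairs."""
    total = 0.0
    for (xi, yi, zi), (xj, yj, zj) in itertools.combinations(positions, 2):
        r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
        total += kernel(r2)
    return total

pos = [(0.0, 0.0, 0.0), (1.12246, 0.0, 0.0)]   # near the LJ minimum 2**(1/6)
energy = pairwise_loop(pos, lj_kernel)          # close to -eps at the minimum
```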
Index of Oral Histories Relating to Naval Research and Development
1985-01-01
Repositories: NWC, DTNSRDC, NHC. Individuals mentioned: Amlie, Dr. Thomas S.; LaBerge, Dr. Walter; McLean, Dr. William B.; Parsons, RADM William S.; Smith...future of R&D in the Navy. Repositories: NWC, DTNSRDC, NHC. Individuals mentioned: Bennett, Dr. Ira; Hollingsworth, Dr. Guilford L.; LaBerge, Dr. Walter...DTNSRDC, NHC. Individuals mentioned: Hunter, Dr. Hugh; LaBerge, Dr. Walter; McLean, Dr. William B.; Brode, Dr. Wallace C.; Sage, Dr. Bruce; Wilson, Dr. Haskell
1982-09-01
Eufaula Lake, the largest body of water in Oklahoma, extends into McIntosh, Haskell, Pittsburg and Okmulgee counties, Oklahoma. Construction of the...Table 1: Eufaula Lake Project, summary of pertinent physical characteristics. Table 2: Eufaula Project, comparison of terrestrial habitat affected by
Reverse engineering a gene network using an asynchronous parallel evolution strategy
2010-01-01
Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
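The island idea behind the piES can be sketched in a few lines: several independent evolution-strategy islands search the same objective, and the best individual is taken across islands. The (1+1)-ES, the toy 1-D objective and all parameters below are illustrative assumptions; the gap-gene model, the MPI layer and the asynchronous migration scheme of the paper are not reproduced.

```python
# Minimal island-ES sketch on a toy objective (all values illustrative).
import random

random.seed(0)

def fitness(x):                      # toy objective: minimise (x - 3)^2
    return (x - 3.0) ** 2

def evolve_island(x, sigma=0.5, steps=200):
    """(1+1)-ES: accept a Gaussian mutation only if it improves fitness."""
    for _ in range(steps):
        y = x + random.gauss(0.0, sigma)
        if fitness(y) < fitness(x):
            x = y
    return x

islands = [evolve_island(random.uniform(-10.0, 10.0)) for _ in range(4)]
best = min(islands, key=fitness)     # 'migration' reduced to a final pick
```

In the asynchronous version each island would run on its own node and exchange migrants without waiting for the others, which is what keeps the communication overhead low.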
Devils in the Dialogue: The Air Force and Congress
2011-06-01
the United States, Article I, Section 8. 111th Cong., 1st sess., 2009, S. Doc. 111-4. 118 John Haskell, Congress in Context (Philadelphia, PA...Congress. 12 January 2005. http://digital.library.unt.edu/ark:/67531/metacrs7624/m1/1/high_res_d/98-558_2005Jan12.pdf 141 HR 91-1570, FY71 DOD...February 2011 260 Shachtman, Noah. "Pentagon Chief Rips Heart Out of Army's 'Future'," Wired, 6 April 2009, http://www.wired.com/dangerroom/2009/04
1992-08-17
01731-5000 UP, No. 1106. 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSORING/MONITORING AGENCY REPORT NUMBER DARPA/NMRO 3701 North...the peaceful uses of nuclear explosives, UCRL-5414, Lawrence Livermore National Laboratory, 1973. Nordyke, M.D., A review of Soviet data on the peaceful...Lawrence Livermore National Laboratory, UCRL-JC-107941, preprint. Haskell, N. A. (1964). Radiation pattern of surface waves from point sources in a
Efficient characterisation of large deviations using population dynamics
NASA Astrophysics Data System (ADS)
Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.
2018-05-01
We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
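A single cloning/resampling step of the population dynamics can be sketched as follows: walkers are replicated in proportion to an exponential bias exp(-s·A) on their time-averaged observable A. The exclusion-process dynamics and the observable values below are illustrative only, not the paper's model.

```python
# Hedged sketch of one cloning step from population dynamics for large
# deviations. Observable values and the bias parameter s are invented.
import math
import random

random.seed(1)

def clone_step(observables, s, n_target):
    """Resample walker indices with probability proportional to exp(-s*A)."""
    weights = [math.exp(-s * a) for a in observables]
    return random.choices(range(len(observables)), weights=weights, k=n_target)

obs = [0.1, 0.5, 0.9, 0.2]           # time-averaged observable per walker
survivors = clone_step(obs, s=5.0, n_target=len(obs))
```

On a parallel platform the propagation of walkers between cloning steps is embarrassingly parallel; the resampling step is the synchronisation point whose cost the paper discusses.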
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than the one obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
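The joint search over pre-processing and hyperparameters can be sketched with a toy genetic algorithm. The real GENOPT-SVM scores candidates by cross-validated SVM accuracy; here an invented surrogate score stands in so the sketch stays self-contained, and the pre-processing names and parameters are illustrative assumptions.

```python
# Toy sketch of a GA searching jointly over a pre-processing choice and a
# log-scaled SVM-style hyperparameter. The score function is a surrogate.
import random

random.seed(2)
PREPROCS = ["none", "mean_centre", "snv", "derivative"]

def score(chrom):
    """Surrogate for CV accuracy: best with 'snv' and log10(C) near 2."""
    pre, log_c = chrom
    return (1.0 if pre == "snv" else 0.0) - (log_c - 2.0) ** 2

def mutate(chrom):
    pre, log_c = chrom
    if random.random() < 0.3:
        pre = random.choice(PREPROCS)
    return (pre, log_c + random.gauss(0.0, 0.3))

pop = [(random.choice(PREPROCS), random.uniform(-3.0, 3.0)) for _ in range(20)]
for _ in range(40):                  # simple (mu + lambda) generational loop
    pop = sorted(pop + [mutate(c) for c in pop], key=score, reverse=True)[:20]
best = pop[0]
```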
NASA Astrophysics Data System (ADS)
Gill, Andy; Bull, Tristan; Kimmell, Garrin; Perrins, Erik; Komp, Ed; Werling, Brett
Kansas Lava is a domain specific language for hardware description. Though there have been a number of previous implementations of Lava, we have found the design space rich, with unexplored choices. We use a direct (Chalmers style) specification of circuits, and make significant use of Haskell overloading of standard classes, leading to concise circuit descriptions. Kansas Lava supports both simulation (inside GHCi), and execution via VHDL, by having a dual shallow and deep embedding inside our Signal type. We also have a lightweight sized-type mechanism, allowing for MATLAB style matrix based specifications to be directly expressed in Kansas Lava.
1983-09-30
investigators - Keiiti Aki, 617/253-6397; M. Nafi Toksoz, 617/253-6382. Program Manager - William J. Best, 202/767-4908. Short Title of Work - Effects...Distribution is unlimited. The work presented in this technical report illustrates results obtained...of such models are those by Ben-Menahem [1961], Haskell [1964], Savage [1966], Molnar et al. [1973], Sato and Hirasawa [1973], and Dahlen [1974]. The
Laboratory automation in a functional programming language.
Runciman, Colin; Clare, Amanda; Harkness, Rob
2014-12-01
After some years of use in academic and research settings, functional languages are starting to enter the mainstream as an alternative to more conventional programming languages. This article explores one way to use Haskell, a functional programming language, in the development of control programs for laboratory automation systems. We give code for an example system, discuss some programming concepts that we need for this example, and demonstrate how the use of functional programming allows us to express and verify properties of the resulting code. © 2014 Society for Laboratory Automation and Screening.
Microscale bioprocess optimisation.
Micheletti, Martina; Lye, Gary J
2006-12-01
Microscale processing techniques offer the potential to speed up the delivery of new drugs to the market, reducing development costs and increasing patient benefit. These techniques have application across both the chemical and biopharmaceutical sectors. The approach involves the study of individual bioprocess operations at the microlitre scale using either microwell or microfluidic formats. In both cases the aim is to generate quantitative bioprocess information early on, so as to inform bioprocess design and speed translation to the manufacturing scale. Automation can enhance experimental throughput and will facilitate the parallel evaluation of competing biocatalyst and process options.
Smith, D.E.; Aagaard, Brad T.; Heaton, T.H.
2005-01-01
We investigate whether a shallow-dipping thrust fault is prone to wave-slip interactions via surface-reflected waves affecting the dynamic slip. If so, can these interactions create faults that are opaque to radiated energy? Furthermore, in this case of a shallow-dipping thrust fault, can incorrectly assuming a transparent fault while using dislocation theory lead to underestimates of seismic moment? Slip time histories are generated in three-dimensional dynamic rupture simulations while allowing for varying degrees of wave-slip interaction controlled by fault-friction models. Based on the slip time histories, P and SH seismograms are calculated for stations at teleseismic distances. The overburdening pressure caused by gravity eliminates mode I opening except at the tip of the fault near the surface; hence, mode I opening has no effect on the teleseismic signal. Normalizing by a Haskell-like traditional kinematic rupture, we find teleseismic peak-to-peak displacement amplitudes are approximately 1.0 for both P and SH waves, except for the unrealistic case of zero sliding friction. Zero sliding friction has peak-to-peak amplitudes of 1.6 for P and 2.0 for SH waves; the fault slip oscillates about its equilibrium value, resulting in a large nonzero (0.08 Hz) spectral peak not seen in other ruptures. These results indicate wave-slip interactions associated with surface-reflected phases in real earthquakes should have little to no effect on teleseismic motions. Thus, Haskell-like kinematic dislocation theory (transparent fault conditions) can be safely used to simulate teleseismic waveforms in the Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, A.; Davis, A.; University of Wisconsin-Madison, Madison, WI 53706
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
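The 'long history' problem can be illustrated with a minimal weight-window splitting sketch: a particle arriving with weight far above the window top is split into many daughters, and capping the splitting factor (loosely mimicking the dynamic window adjustment described above, not MCNP's actual algorithm) bounds the history length while preserving total weight. All numbers are illustrative.

```python
# Illustrative sketch of weight-window splitting and a cap on the
# splitting factor. Not MCNP's actual implementation.
def split_particle(weight, window_top, max_split):
    """Return the list of daughter weights for one arriving particle."""
    if weight <= window_top:
        return [weight]
    n = min(int(weight / window_top), max_split)   # capped splitting factor
    return [weight / n] * n                        # total weight preserved

uncapped = split_particle(1000.0, window_top=1.0, max_split=10**9)
capped = split_particle(1000.0, window_top=1.0, max_split=10)
```

The uncapped case produces 1000 daughters (a 'long history' on one CPU while the rest sit idle); the capped case produces 10, trading some variance-reduction quality for parallel efficiency.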
Evaluation of French and English MeSH Indexing Systems with a Parallel Corpus
Névéol, Aurélie; Mork, James G.; Aronson, Alan R.; Darmoni, Stefan J.
2005-01-01
Objective This paper presents the evaluation of two MeSH® indexing systems for French and English on a parallel corpus. Material and methods We describe two automatic MeSH indexing systems - MTI for English, and MAIF for French. The French version of the evaluation resources has been manually indexed with MeSH keyword/qualifier pairs. This professional indexing is used as our gold standard in the evaluation of both systems on keyword retrieval. Results The English system (MTI) obtains significantly better precision and recall (78% precision and 21% recall at rank 1, vs. 37% precision and 6% recall for MAIF). Moreover, the performance of both systems can be optimised by the breakage function used by the French system (MAIF), which selects an adaptive number of descriptors for each resource indexed. Conclusion MTI achieves better performance. However, both systems have features that can benefit each other. PMID:16779103
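Precision and recall at a rank cut-off, the measures quoted above, can be computed as follows. The candidate and gold-standard MeSH terms below are invented for illustration, not taken from the study's corpus.

```python
# Precision and recall at rank k against a gold-standard term set.
def precision_recall_at_k(predicted, gold, k):
    """predicted: ranked list of terms; gold: set of reference terms."""
    top = predicted[:k]
    hits = len(set(top) & set(gold))
    return hits / len(top), hits / len(gold)

predicted = ["Humans", "Neoplasms", "Risk Factors"]   # system output, ranked
gold = {"Neoplasms", "Humans", "Smoking", "Adult"}    # professional indexing
p, r = precision_recall_at_k(predicted, gold, k=3)    # 2 of 3 correct, 2 of 4 found
```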
Karayianni, Katerina N; Grimaldi, Keith A; Nikita, Konstantina S; Valavanis, Ioannis K
2015-01-01
This paper aims to elucidate the complex etiology underlying obesity by analysing data from a large nutrigenetics study, in which nutritional and genetic factors associated with obesity were recorded for around two thousand individuals. In our previous work, these data were analysed using artificial neural network methods, which identified optimised subsets of factors to predict an individual's obesity status. These methods did not, however, reveal how the selected factors interact with each other in the obtained predictive models. For that reason, parallel Multifactor Dimensionality Reduction (pMDR) was used here to further analyse the pre-selected subsets of nutrigenetic factors. Within pMDR, predictive models using up to eight factors were constructed, further reducing the input dimensionality, while rules describing the interactive effects of the selected factors were derived. In this way, it was possible to identify specific genetic variations and their interactive effects with particular nutritional factors, which are now under further study.
1993-04-01
or 1-Chloro-1,1-difluoroethane. Its physical properties are listed in Table 3. HCFC-142b has very low acute toxicity, with an LC50...Ninety-day Inhalation Exposure of Rats and Dogs to Vapors of 2,2-dichloro-1,1,1-trifluoroethane (FC-123). Haskell Laboratory Report No. 229-78. Drysdale...AL-TR-1993-0047, AD-A272 695: A Proposed Methodology for Combustion Toxicology Testing of Combined Halon
GPC: General Polygon Clipper library
NASA Astrophysics Data System (ADS)
Murta, Alan
2015-12-01
The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may comprise multiple disjoint contours. Contour vertices may be given in any order - clockwise or anticlockwise - and contours may be convex, concave or self-intersecting, and may be nested (i.e. polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result.
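For a flavour of what a polygon clipper does, here is a minimal Sutherland-Hodgman intersection of a subject polygon with a convex clip polygon. This is far more limited than GPC - it cannot handle concave or self-intersecting clip polygons, holes, or the other three set operations - and is meant purely as an illustration; the polygons are invented.

```python
# Minimal Sutherland-Hodgman clip: subject polygon against each edge of a
# CONVEX, counter-clockwise clip polygon. Illustration only, not GPC.
def clip(subject, clipper):
    def inside(p, a, b):            # p on the left of directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):      # segment pq with the infinite line ab
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    output = list(subject)
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        inp, output = output, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

square = [(0, 0), (2, 0), (2, 2), (0, 2)]    # both polygons are CCW
window = [(1, 1), (3, 1), (3, 3), (1, 3)]
overlap = clip(square, window)               # the unit square (1,1)-(2,2)
```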
High-level ab initio studies of NO(X2Π)-O2(X3Σg-) van der Waals complexes in quartet states
NASA Astrophysics Data System (ADS)
Grein, Friedrich
2018-05-01
Geometry optimisations were performed on nine different structures of NO(X2Π)-O2(X3Σg-) van der Waals complexes in their quartet states, using the explicitly correlated RCCSD(T)-F12b method with basis sets up to the cc-pVQZ-F12 level. For the most stable configurations, counterpoise-corrected optimisations as well as extrapolations to the complete basis set (CBS) were performed. The X structure in the 4A′ state was found to be most stable, with a CBS binding energy of -157 cm-1. The slipped tilted structure with N closer to O2 (Slipt-N), as well as the slipped parallel structure with O of NO closer to O2 (Slipp-O), in 4A″ states have binding energies of about -130 cm-1. C2v and linear complexes are less stable. According to calculated harmonic frequencies, the X isomer is bound. Isotropic hyperfine coupling constants of the complex are compared with those of the monomers.
Recent advances in characterisation of subsonic axisymmetric nozzles
NASA Astrophysics Data System (ADS)
Tesař, Václav
2018-06-01
Nozzles are devices that generate jets. They are widely used in fluidics and also in active control of flows past bodies. Since a nozzle is practically always a component of a larger system, design and optimisation of that system require characterisation of nozzle properties by an invariant quantity. Perhaps surprisingly, no suitable invariant has so far been introduced. This article surveys approaches to characterisation quantities and presents several examples of their typical use in systems, such as parallel operation of two nozzles, matching a nozzle to its fluid supply source, apparent resistance increase in flows with pulsation, and the secondary invariants of a family of quasi-similar nozzles.
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
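The CSR storage scheme and the sparse matrix-vector product used for query processing can be sketched as follows: rows index documents, columns index terms, and the query is a weighted term vector. The tiny matrix below is illustrative, not from the paper.

```python
# CSR sparse matrix-vector product: y = A @ x with A held as
# (data, indices, indptr). This is the query-scoring kernel shape.
def csr_matvec(data, indices, indptr, x):
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):   # nonzeros of this row
            s += data[k] * x[indices[k]]
        y.append(s)
    return y

# CSR encoding of the dense matrix [[1, 0, 2],
#                                   [0, 0, 3],
#                                   [4, 5, 0]]
data, indices, indptr = [1, 2, 3, 4, 5], [0, 2, 2, 0, 1], [0, 2, 3, 5]
scores = csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0])   # [3.0, 3.0, 9.0]
```

On an FPGA, the inner loop over each row's nonzeros is what gets replicated across parallel processing units.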
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Laflin, Brenda E. Gile; Kemmerly, Guy T.; Campbell, Bryan A.
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Astrophysics Data System (ADS)
Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio
2017-07-01
The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
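The value of a compile-time search for contraction order can be illustrated on the simplest case, a matrix chain: the same product A(2x100) B(100x100) C(100x100) costs vastly different FLOP counts depending on parenthesisation. The shapes are illustrative; the framework's actual search is over general tensor networks, not just matrix chains.

```python
# Toy FLOP-count comparison of two parenthesisations of A @ B @ C.
def matmul_flops(m, n, p):
    """Multiplying (m x n) by (n x p): m*p dot products of length n."""
    return 2 * m * n * p

a, b, c = (2, 100), (100, 100), (100, 100)
left_first = matmul_flops(a[0], a[1], b[1]) + matmul_flops(a[0], b[1], c[1])
right_first = matmul_flops(b[0], b[1], c[1]) + matmul_flops(a[0], a[1], c[1])
# (A B) C needs 80,000 FLOPs; A (B C) needs 2,040,000
```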
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Indian Affairs.
A Senate committee hearing received testimony on the Equity in Educational Land Grant Status Act, which would extend land-grant status and concomitant federal aid to 29 Indian tribal colleges and postsecondary institutions. Senators and representatives of the National Association of State Universities and Land Grant Colleges, Navajo Community…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamida, B A; Cheng, X S; Harun, S W
A wideband and flat-gain erbium-doped fibre amplifier (EDFA) is demonstrated using a hybrid gain medium of a zirconia-based erbium-doped fibre (Zr-EDF) and a high-concentration erbium-doped fibre (EDF). The amplifier has two stages comprising a 2-m-long Zr-EDF and a 9-m-long EDF optimised for C- and L-band operations, respectively, in a double-pass parallel configuration. A chirped fibre Bragg grating (CFBG) is used in both stages to ensure double propagation of the signal and thus to increase the attainable gain in both the C- and L-band regions. At an input signal power of 0 dBm, a flat gain of 15 dB is achieved with a gain variation of less than 0.5 dB within a wide wavelength range from 1530 to 1605 nm. The corresponding noise figure varies from 6.2 to 10.8 dB within this wavelength region.
NASA Astrophysics Data System (ADS)
Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang
2016-08-01
For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate the regenerative braking and the hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid composite mode and the parallel composite mode, the coefficients distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and response surfaces related to the driving state parameters were established. Meanwhile, the six-sigma method was applied to deal with uncertainty and ensure reliability. Additionally, the pure hydraulic mode is activated to ensure braking safety and stability when a predictive failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and energy regeneration rate, and the reliability of the braking strategy was guaranteed under general braking conditions.
Hueston, W; Travis, D; van Klink, E
2011-04-01
The effectiveness of risk mitigation may be compromised by informal trade, including illegal activities, parallel markets and extra-legal activities. While no regulatory system is 100% effective in eliminating the risk of disease transmission through animal and animal product trade, extreme risk aversion in formal import health regulations may increase informal trade, with the unintended consequence of creating additional risks outside regulatory purview. Optimal risk mitigation on a national scale requires scientifically sound yet flexible mitigation strategies that can address the competing risks of formal and informal trade. More robust risk analysis and creative engagement of nontraditional partners provide avenues for addressing informal trade.
A Case Study in Web 2.0 Application Development
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.
2010-12-01
Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of JQuery in Django templates. Single sign-on and authentication throughout all NRAO web services is accomplished via the Central Authentication Service protocol, or CAS.
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is a method of oil exploration that uses seismic information; that is, according to the inversion of seismic information, useful information about the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by their large volume and abundant information, and their inversion can yield rich information about the reservoir parameters. Owing to the large amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs, so an efficient and fast method for solving the inversion problem of pre-stack seismic data is urgently needed. The optimisation of the elastic parameters by using a genetic algorithm easily falls into a local optimum, which results in a non-obvious inversion effect, especially for the optimisation of the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. This algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operations of the algorithm; the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
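The map/reduce split for population-based inversion can be sketched in a few lines: the map phase scores candidate parameter sets independently (the part that parallelises across the pre-stack data), and the reduce phase selects the fittest. The misfit function and parameter values below are invented stand-ins for the seismic forward model, not the paper's formulation.

```python
# Hedged sketch of map/reduce fitness evaluation for parameter inversion.
from functools import reduce

def misfit(params):                  # invented stand-in for the forward model
    vp, density = params
    return (vp - 2.5) ** 2 + (density - 2.0) ** 2

population = [(2.0, 1.8), (2.5, 2.0), (3.0, 2.4), (2.4, 2.1)]
scored = [(misfit(p), p) for p in population]        # 'map' phase
best_misfit, best_params = reduce(min, scored)       # 'reduce' phase
```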
Focal ratio degradation: a new perspective
NASA Astrophysics Data System (ADS)
Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss
2008-07-01
We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
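The Voigt function described above is the convolution of a Gaussian with a Lorentzian, which can be sketched numerically in a few lines of Python (an illustrative discretised convolution, not the authors' fitting code):

```python
import numpy as np

def gaussian(x, sigma):
    """Unit-area Gaussian (modal-diffusion contribution)."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def lorentzian(x, gamma):
    """Unit-area Lorentzian (scattering contribution)."""
    return gamma / (np.pi * (x**2 + gamma**2))

def voigt(x, sigma, gamma):
    """Numerical Voigt profile: convolution of the Gaussian and Lorentzian
    contributions sampled on a uniform grid."""
    dx = x[1] - x[0]
    return np.convolve(gaussian(x, sigma), lorentzian(x, gamma), mode="same") * dx
```

In practice a closed-form Voigt (e.g. via the Faddeeva function) is fitted to the measured FRD distribution; the convolution form above just makes the Gaussian-plus-Lorentzian decomposition explicit.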
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2017-01-01
Knee joint kinematics derived from multi-body optimisation (MBO) still requires evaluation. The objective of this study was to corroborate model-derived kinematics of osteoarthritic knees obtained using four generic knee joint models used in musculoskeletal modelling - spherical, hinge, degree-of-freedom coupling curves and parallel mechanism - against reference knee kinematics measured by stereo-radiography. Root mean square errors ranged from 0.7° to 23.4° for knee rotations and from 0.6 to 9.0 mm for knee displacements. Model-derived knee kinematics computed from generic knee joint models were inaccurate. Future developments and experiments should improve the reliability of osteoarthritic knee models in MBO and musculoskeletal modelling.
Resonant tunneling based graphene quantum dot memristors.
Pan, Xuan; Skafidas, Efstratios
2016-12-08
In this paper, we model two-terminal all graphene quantum dot (GQD) based resistor-type memory devices (memristors). The resistive switching is achieved by resonant electron tunneling. We show that parallel GQDs can be used to create multi-state memory circuits. The number of states can be optimised with additional voltage sources, whilst the noise margin for each state can be controlled by appropriately choosing the branch resistance. A three-terminal GQD device configuration is also studied. The addition of an isolated gate terminal can be used to add further or modify the states of the memory device. The proposed devices provide a promising route towards volatile memory devices utilizing only atomically thin two-dimensional graphene.
Study of Background Rejection Systems for the IXO Mission.
NASA Astrophysics Data System (ADS)
Laurent, Philippe; Limousin, O.; Tatischeff, V.
2009-01-01
The scientific performance of the IXO mission will necessitate a very low detector background level. This will imply thorough background simulations and efficient background rejection systems. It also requires a very good knowledge of the detectors to be shielded. At APC, Paris, and CEA, Saclay, we gained experience in these activities by designing and optimising in parallel the high-energy detector and the active and passive background rejection system of the Simbol-X mission. Considering that this work may be naturally extended to other X-ray missions, we have initiated with CNES an R&D project on the study of background rejection systems, mainly in view of the IXO project. We detail this activity in the poster.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
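The core modification described above, guiding each swarm by nondominated solutions rather than another swarm's single best, rests on Pareto dominance. A minimal Python sketch of the nondominated filter (illustrative only; the paper's full VEPSO update is not reproduced here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Filter a list of objective vectors down to its nondominated subset,
    which can then serve as the guidance archive for a swarm."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

Each particle would then be attracted toward a member of this archive instead of the single best solution of a companion swarm.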
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre
2017-06-01
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
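The NEF computation that such hardware implements can be sketched in software: linear decoders are found by regularised least squares over neuron activities. The following Python sketch uses an invented toy population of rectified-linear "neurons" (the actual system uses spiking neuron cores on FPGA; all names and parameters here are illustrative assumptions):

```python
import numpy as np

def nef_decoders(activities, targets, reg=0.01):
    """Regularised least-squares solve for NEF-style linear decoders."""
    n = activities.shape[1]
    G = activities.T @ activities + reg * np.eye(n)
    return np.linalg.solve(G, activities.T @ targets)

# toy population of rectified-linear "neurons" with alternating encoders
x = np.linspace(-1.0, 1.0, 200)[:, None]          # represented value
enc = np.array([[1.0 if i % 2 == 0 else -1.0 for i in range(30)]])
gain = np.linspace(0.5, 2.0, 30)
bias = np.linspace(-0.9, 0.9, 30)
A = np.maximum(0.0, x @ enc * gain + bias)        # neuron activities
d = nef_decoders(A, x[:, 0])                      # decoders for f(x) = x
xhat = A @ d                                      # decoded estimate
```

On the FPGA, the 64-neuron time-multiplexed core evaluates the encoding side while the precomputed decoders are applied as weighted sums.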
Richard, Vincent; Lamberto, Giuliano; Lu, Tung-Wu; Cappozzo, Aurelio; Dumas, Raphaël
2016-01-01
The use of multi-body optimisation (MBO) to estimate joint kinematics from stereophotogrammetric data while compensating for soft tissue artefact is still open to debate. Presently used joint models embedded in MBO, such as mechanical linkages, constitute a considerable simplification of joint function, preventing a detailed understanding of it. The present study proposes a knee joint model where femur and tibia are represented as rigid bodies connected through an elastic element the behaviour of which is described by a single stiffness matrix. The deformation energy, computed from the stiffness matrix and joint angles and displacements, is minimised within the MBO. Implemented as a "soft" constraint using a penalty-based method, this elastic joint description challenges the strictness of "hard" constraints. In this study, estimates of knee kinematics obtained using MBO embedding four different knee joint models (i.e., no constraints, spherical joint, parallel mechanism, and elastic joint) were compared against reference kinematics measured using bi-planar fluoroscopy on two healthy subjects ascending stairs. Bland-Altman analysis and sensitivity analysis investigating the influence of variations in the stiffness matrix terms on the estimated kinematics substantiate the conclusions. The difference between the reference knee joint angles and displacements and the corresponding estimates obtained using MBO embedding the stiffness matrix showed an average bias and standard deviation for kinematics of 0.9±3.2° and 1.6±2.3 mm. These values were lower than when no joint constraints (1.1±3.8°, 2.4±4.1 mm) or a parallel mechanism (7.7±3.6°, 1.6±1.7 mm) were used and were comparable to the values obtained with a spherical joint (1.0±3.2°, 1.3±1.9 mm). The study demonstrated the feasibility of substituting an elastic joint for more classic joint constraints in MBO. PMID:27314586
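The penalty-based "soft" constraint described above amounts to adding a half-quadratic deformation energy to the marker-residual objective. A minimal Python sketch of that cost structure (illustrative only; the stiffness matrix values and function names are invented, and the real MBO optimises over full segment poses):

```python
import numpy as np

def deformation_energy(q, K):
    """Half-quadratic deformation energy of the elastic joint element, where q
    holds the 6 joint angles and displacements relative to the neutral pose."""
    q = np.asarray(q, dtype=float)
    return 0.5 * q @ K @ q

def mbo_cost(markers_model, markers_measured, q, K, weight=1.0):
    """Penalty-based ('soft' constraint) MBO objective: skin-marker residuals
    plus the weighted joint deformation energy."""
    residual = np.sum((markers_model - markers_measured) ** 2)
    return residual + weight * deformation_energy(q, K)
```

Minimising this cost trades marker-tracking fidelity against joint deformation, in contrast to "hard" mechanical-linkage constraints that forbid deformation entirely.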
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller.
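The abstract does not give the details of the shark smell optimisation algorithm, but the PSO baseline it is compared against can be sketched in a few lines. The following minimal particle swarm optimiser (illustrative; inertia and acceleration coefficients are assumed values, and the true cost would be a synchronisation-error metric rather than the toy function shown in the test) tunes a vector of controller gains:

```python
import random

def pso(cost, dim, n=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimiser for tuning controller gains."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    pcost = [cost(x) for x in xs]
    g = list(pbest[min(range(n), key=lambda i: pcost[i])])
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (0.7 * vs[i][d] + 1.5 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * r2 * (g[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            c = cost(xs[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, list(xs[i])
                if c < cost(g):
                    g = list(xs[i])
    return g, cost(g)
```

For controller tuning, `cost` would simulate the master-slave Colpitts system under the candidate backstepping gains and return an integrated synchronisation error.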
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek
2017-04-24
Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.
Genetically improved BarraCUDA.
Langdon, W B; Lam, Brian Yee Hong
2017-01-01
BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.
Hind, Daniel; Parkin, James; Whitworth, Victoria; Rex, Saleema; Young, Tracey; Hampson, Lisa; Sheehan, Jennie; Maguire, Chin; Cantrill, Hannah; Scott, Elaine; Epps, Heather; Main, Marion; Geary, Michelle; McMurchie, Heather; Pallant, Lindsey; Woods, Daniel; Freeman, Jennifer; Lee, Ellen; Eagle, Michelle; Willis, Tracey; Muntoni, Francesco; Baxter, Peter
2017-01-01
Standard treatment of Duchenne muscular dystrophy (DMD) includes regular physiotherapy. There are no data to show whether adding aquatic therapy (AT) to land-based exercises helps maintain motor function. We assessed the feasibility of recruiting and collecting data from boys with DMD in a parallel-group pilot randomised trial (primary objective), also assessing how intervention and trial procedures work. Ambulant boys with DMD aged 7-16 years established on steroids, with North Star Ambulatory Assessment (NSAA) score ≥8, who were able to complete a 10-m walk test without aids or assistance, were randomly allocated (1:1) to 6 months of either optimised land-based exercises 4 to 6 days/week, defined by local community physiotherapists, or the same 4 days/week plus AT 2 days/week. Those unable to commit to a programme, with >20% variation between NSAA scores 4 weeks apart, or contraindications to AT were excluded. The main outcome measures included feasibility of recruiting 40 participants in 6 months from six UK centres, clinical outcomes including NSAA, independent assessment of treatment optimisation, participant/therapist views on acceptability of intervention and research protocols, value of information (VoI) analysis and cost-impact analysis. Over 6 months, 348 boys were screened: most lived too far from centres or were enrolled in other trials; 12 (30% of the target) were randomised to AT (n = 8) or control (n = 4). The mean change in NSAA at 6 months was -5.5 (SD 7.8) in the control arm and -2.8 (SD 4.1) in the AT arm. Harms included fatigue in two boys, pain in one. Physiotherapists and parents valued AT but believed it should be delivered in community settings. Randomisation was unattractive to families, who had already decided that AT was useful and who often preferred to enrol in drug studies. The AT prescription was considered to be optimised for three boys, with other boys given programmes that were too extensive and insufficiently focused. 
Recruitment was insufficient for VoI analysis. Neither a UK-based RCT of AT nor a twice weekly AT therapy delivered at tertiary centres is feasible. Our study will help in the optimisation of AT service provision and the design of future research. ISRCTN41002956.
Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe
2014-02-26
Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We newly adapted a randomised phase II trial by integrating an early safety decision rule, emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled <15%. Flexibility in trial conduct is greater with the continuous Bayesian rule. 
A 12-month gain in timelines is expected by this optimised design. Other existing designs such as bivariate or seamless phase I/II designs did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating early stage clinical development of HIV vaccine strategies.
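The single-stage Fleming evaluation described above compares the observed proportion of responders per arm against an unacceptably low proportion, and the design's operating characteristics are assessed by simulation. A minimal Python sketch (illustrative; the sample size, unacceptable rate, and alpha below are assumed values, not those of the trial):

```python
import random
from math import comb

def arm_promising(responders, n, p0, alpha=0.05):
    """Single-stage Fleming rule: declare the arm promising when the exact
    binomial tail probability of the observed responder count under the
    unacceptably low rate p0 falls below alpha."""
    pval = sum(comb(n, k) * p0**k * (1.0 - p0)**(n - k)
               for k in range(responders, n + 1))
    return pval < alpha

def operating_characteristic(n, p0, p_true, alpha=0.05, reps=2000, seed=3):
    """Monte Carlo probability of declaring an arm promising when the true
    response rate is p_true."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        responders = sum(rng.random() < p_true for _ in range(n))
        hits += arm_promising(responders, n, p0, alpha)
    return hits / reps
```

Such simulations are how the risks of erroneous conclusions quoted in the abstract (<15%) would be estimated for each candidate decision rule.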
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims: We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods: We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results: We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. 
A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion: The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured by the extent of warpage of the moulded parts, while productivity is measured by the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches, which have been proven to enhance the quality of the moulded parts produced. To improve the productivity of the injection moulding process, some researchers have proposed the application of conformal cooling channels, which have been proven to reduce the moulding cycle time. Therefore, this paper presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and conformal cooling channel moulds. This study examined the warpage of the moulded parts before and after the optimisation was applied for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels compared to Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage, which was reduced by 39.1% after optimisation for the straight-drilled cooling channels; cooling time is the most significant factor contributing to warpage, which was reduced by 38.7% after optimisation for the MGSS conformal cooling channels. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part produced.
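The RSM step described above fits a low-order polynomial surface to sampled process settings and their simulated warpage, then searches that surface for the optimum. A minimal one-factor Python sketch (illustrative only: the melt-temperature sweep and warpage values are invented, and the closed-form vertex stands in for the GSO search used in the paper):

```python
import numpy as np

def fit_response_surface(x, y):
    """Fit a second-order (quadratic) response surface to warpage samples."""
    return np.polyfit(x, y, 2)   # coefficients a, b, c of a*x^2 + b*x + c

def optimal_setting(coeffs):
    """Stationary point of the fitted surface (a minimum when a > 0)."""
    a, b, _ = coeffs
    return -b / (2.0 * a)

# hypothetical melt-temperature sweep (degC) vs simulated warpage (mm)
temps = np.array([200.0, 210.0, 220.0, 230.0, 240.0])
warpage = np.array([0.62, 0.48, 0.41, 0.44, 0.57])
coeffs = fit_response_surface(temps, warpage)
best_temp = optimal_setting(coeffs)
```

With several factors (melt temperature, cooling time, packing pressure, ...), the surface becomes multivariate and a swarm method such as GSO searches it instead of a closed-form solve.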
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. 
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
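The Gaussian-regression surrogate at the heart of the approach above can be sketched compactly: fit a Gaussian process to pre-sampled draping results, then evaluate its posterior mean cheaply at new geometry parameters. A minimal numpy sketch (illustrative; the kernel length scale, noise level, and the smooth toy response standing in for a draping simulation are all assumptions):

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential covariance between two sets of input points."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xq, length=0.3, noise=1e-6):
    """Posterior mean of a zero-mean Gaussian-process regressor at query points Xq."""
    K = rbf_kernel(X, X, length) + noise * np.eye(len(X))
    return rbf_kernel(Xq, X, length) @ np.linalg.solve(K, y)

# surrogate of an expensive draping simulation: here a smooth toy response
X = np.linspace(0.0, 1.0, 15)[:, None]   # sampled geometry parameter
y = np.sin(3.0 * X[:, 0])                # pre-computed "formability" samples
pred = gp_predict(X, y, np.array([[0.5]]))
```

Because each evaluation of the meta-model is just a kernel product, design exploration (robustness analysis, optimisation loops) runs in seconds rather than hours of FE draping simulation.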
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the accuracy of EWB of 59.86%. In addition to rough sets providing the plausibilities of the estimated HIV status, they also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
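To make the hill-climbing variant concrete, here is a minimal Python sketch of optimising a single partition boundary to maximise classification accuracy (a drastically simplified stand-in: the paper optimises full multi-attribute rough set partitions, and the data below are invented):

```python
def accuracy(cut, xs, labels):
    """Accuracy of the one-split partition rule: predict 1 when x >= cut."""
    return sum((x >= cut) == bool(l) for x, l in zip(xs, labels)) / len(xs)

def hill_climb(xs, labels, start, step=0.1, iters=100):
    """Greedy hill climbing on the partition boundary: move to a neighbouring
    cut whenever it strictly improves accuracy; stop at a local optimum."""
    best, best_acc = start, accuracy(start, xs, labels)
    for _ in range(iters):
        moved = False
        for cand in (best - step, best + step):
            acc = accuracy(cand, xs, labels)
            if acc > best_acc:
                best, best_acc, moved = cand, acc, True
        if not moved:
            break
    return best, best_acc
```

GA and SA replace the greedy neighbour move with population-based and temperature-controlled search over the same accuracy objective.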
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
The Simulated Annealing method is tested for optimising the sealing piston ring geometry. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on a cylinder simply by being bent to fit the cylinder. A method for FEM analysis of an arbitrary piston ring geometry is applied in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further optimisation improvement is proposed.
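The optimisation loop described above can be sketched generically: simulated annealing perturbs the geometry parameters and accepts worse candidates with a temperature-dependent probability, minimising the deviation from the demanded pressure. The Python sketch below is purely illustrative (a two-parameter toy pressure model replaces the FEM evaluation, and the demanded profile is a constant; none of this is from the paper):

```python
import math
import random

def demanded_pressure(theta):
    """Demanded contact-pressure profile (constant here for illustration)."""
    return 1.0

def ring_pressure(theta, a, b):
    """Hypothetical two-parameter pressure model for the bent ring."""
    return a + b * math.cos(theta)

def objective(params, n=36):
    """Sum of squared deviations from the demanded pressure around the ring."""
    a, b = params
    return sum((ring_pressure(2.0 * math.pi * k / n, a, b)
                - demanded_pressure(2.0 * math.pi * k / n)) ** 2
               for k in range(n))

def anneal(obj, x0, t0=1.0, cooling=0.95, iters=400, seed=5):
    """Minimal simulated annealing: Gaussian perturbations with
    Metropolis acceptance and geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = list(x0), obj(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [v + rng.gauss(0.0, 0.1) for v in x]
        fc = obj(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

In the paper's setting, `objective` would instead run an ANSYS FEM analysis of the polynomial ring geometry and compare the resulting contact pressure with Iskra's demanded profile.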
A Dynamic Finite Element Method for Simulating the Physics of Faults Systems
NASA Astrophysics Data System (ADS)
Saez, E.; Mora, P.; Gross, L.; Weatherley, D.
2004-12-01
We introduce a dynamic Finite Element method using a novel high-level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208-processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. Stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using a Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, described in that work, to the Finite Element method. The approach uses a frictional relation along faults that is slip- and slip-rate-dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. To illustrate the new Finite Element model, single- and multi-fault simulation examples are presented.
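The explicit time integration at the heart of such a solver can be illustrated on a toy problem. The Python sketch below advances the 1-D elastic wave equation with a velocity-Verlet-style update (a drastically simplified stand-in for the 3-D escript/Finley code; grid size, pulse shape, and boundary handling are all invented for illustration):

```python
import numpy as np

def step(u, v, dt, dx, c):
    """One explicit (velocity-Verlet-style) time step of the 1-D elastic wave
    equation on a string with fixed ends."""
    a = np.zeros_like(u)
    a[1:-1] = c**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    v_half = v + 0.5 * dt * a
    u_new = u + dt * v_half
    a_new = np.zeros_like(u)
    a_new[1:-1] = c**2 * (u_new[2:] - 2.0 * u_new[1:-1] + u_new[:-2]) / dx**2
    v_new = v_half + 0.5 * dt * a_new
    return u_new, v_new

# a Gaussian displacement pulse splits into two counter-propagating waves
n, dx, c = 201, 0.01, 1.0
dt = 0.5 * dx / c                      # CFL-stable time step
x = np.arange(n) * dx
u = np.exp(-((x - 1.0) / 0.1) ** 2)
v = np.zeros(n)
for _ in range(100):
    u, v = step(u, v, dt, dx, c)
```

In the full method, the spatial operator comes from the FE weak form assembled by Finley (including the fault discontinuities and frictional terms) rather than the finite-difference Laplacian used here.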
Design and implementation of a high performance network security processor
NASA Astrophysics Data System (ADS)
Wang, Haixin; Bai, Guoqiang; Chen, Hongyi
2010-03-01
The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs) that perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
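The semantics a CPA simulator emulates — every processing element executing the same instruction on its own pixel and its neighbourhood — can be sketched as follows. This is a reference implementation under assumed conventions (4-neighbourhood, clamped borders, mean filter), not APRON's optimised core:

```python
# Hedged sketch of one SIMD step of a cellular processor array: each PE
# applies the same local rule (here a 5-point mean) to its neighbourhood.

def cpa_step(grid):
    h, w = len(grid), len(grid[0])
    def at(r, c):                      # clamp coordinates at the array border
        return grid[min(max(r, 0), h - 1)][min(max(c, 0), w - 1)]
    return [[(at(r, c) + at(r - 1, c) + at(r + 1, c)
              + at(r, c - 1) + at(r, c + 1)) / 5.0
             for c in range(w)] for r in range(h)]

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 5.0                       # single bright pixel
out = cpa_step(grid)
```

One step diffuses the bright pixel into its 4-neighbourhood, which is the topographic, fine-grained behaviour the abstract refers to.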
ABLE project: Development of an advanced lead-acid storage system for autonomous PV installations
NASA Astrophysics Data System (ADS)
Lemaire-Potteau, Elisabeth; Vallvé, Xavier; Pavlov, Detchko; Papazov, G.; Borg, Nico Van der; Sarrau, Jean-François
In the advanced battery for low-cost renewable energy (ABLE) project, the partners have developed an advanced storage system for small and medium-size PV systems. It is composed of an innovative valve-regulated lead-acid (VRLA) battery, optimised for reliability and manufacturing cost, and an integrated regulator for optimal battery management and protection against fraudulent use. The ABLE battery's performance is comparable to that of flooded tubular batteries, which are the reference in medium-size PV systems. The ABLE regulator has several innovative features regarding energy management and modular series/parallel association. The storage system has been validated by indoor, outdoor and field tests, and it is expected that this concept could be a major improvement for large-scale implementation of PV within the framework of national rural electrification schemes.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time needed for multidisciplinary design optimisation is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
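The class of stochastic search the paper surveys can be illustrated by simulated annealing, in which the proposal variates are exactly the "random variates which gradually become more constricted". The objective below is a toy 1-D function, not a dose model:

```python
# Hedged sketch: simulated annealing with a shrinking proposal width and
# a cooling temperature. Illustrative only; not a radiotherapy objective.

import math, random

def simulated_annealing(f, x0, steps=2000, t0=1.0, sigma0=2.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        frac = k / steps
        temp = t0 * (1.0 - frac) + 1e-9        # cooling schedule
        sigma = sigma0 * (1.0 - frac) + 1e-3   # shrinking proposal width
        cand = x + rng.gauss(0.0, sigma)
        fc = f(cand)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

best, fbest = simulated_annealing(lambda x: (x - 1.0) ** 2, x0=5.0)
```

Early on the wide proposals explore globally; as sigma and temperature shrink the walk constricts around the optimum, mirroring the convergence behaviour the abstract describes.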
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed and, using the equation from RSM, particle swarm optimisation (PSO) was applied. The optimisation method yields optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage as reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the warpage improvement of PSO over RSM is only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
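The RSM-then-PSO step can be sketched as follows. The quadratic below is a stand-in for the fitted RSM warpage equation (whose real coefficients the abstract does not give), with two of the five process parameters retained for readability:

```python
# Hedged sketch of PSO minimising a response-surface model. The surface
# and its minimum at (60, 240) are illustrative assumptions.

import random

def pso(f, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Stand-in response surface, e.g. mould temperature vs melt temperature.
surface = lambda p: (p[0] - 60.0) ** 2 + 0.5 * (p[1] - 240.0) ** 2
best, warpage = pso(surface, [(40.0, 80.0), (200.0, 280.0)])
```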
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be approximated as an optimisation problem with boundary conditions as constraints. The target is to minimise the WRF (i.e. error function) constructed in approximation of BVPs. The scheme involves generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. error evaluator metric). Four test problems including two linear and two non-linear singular BVPs are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers including the particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. Optimisation results obtained show that the suggested technique can be successfully applied for approximate solving of singular BVPs.
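The weighted-residual construction can be made concrete on a simple (non-singular) BVP. Here the trial solution is a one-term Fourier sine series for y'' = -π² sin(πx) with y(0) = y(1) = 0 (exact solution sin(πx), i.e. coefficient c = 1), and plain random search stands in for the PSO/WCA/HS optimisers of the paper:

```python
# Hedged sketch: the WRF (error function) sums squared residuals of the
# trial Fourier expansion at collocation points; boundary conditions are
# satisfied by construction since sin(k*pi*x) vanishes at x = 0 and 1.

import math, random

PI = math.pi

def wrf(c, m=20):
    """Sum of squared residuals of y = c*sin(pi x) at m collocation points."""
    total = 0.0
    for j in range(1, m + 1):
        x = j / (m + 1)
        ypp = -c * PI ** 2 * math.sin(PI * x)          # trial y''
        residual = ypp + PI ** 2 * math.sin(PI * x)    # y'' + pi^2 sin(pi x)
        total += residual ** 2
    return total

rng = random.Random(2)
best_c = min((rng.uniform(0.0, 2.0) for _ in range(500)), key=wrf)
```

Minimising the WRF drives the trial coefficient to the exact value c = 1; in the paper the same target is handed to the three metaheuristic optimisers.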
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
Formal semantics for a subset of VHDL and its use in analysis of the FTPP scoreboard circuit
NASA Technical Reports Server (NTRS)
Bickford, Mark
1994-01-01
In the first part of the report, we give a detailed description of an operational semantics for a large subset of VHDL, the VHSIC Hardware Description Language. The semantics is written in the functional language Caliban, similar to Haskell, used by the theorem prover Clio. We also describe a translator from VHDL into Caliban semantics and give some examples of its use. In the second part of the report, we describe our experience in using the VHDL semantics to try to verify a large VHDL design. We were not able to complete the verification due to certain complexities of VHDL which we discuss. We propose a VHDL verification method that addresses the problems we encountered but which builds on the operational semantics described in the first part of the report.
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models that allow the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
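The tail mean the paper builds on can be computed directly: for a minimised cost and tolerance level β, it averages the worst β-fraction of scenario outcomes, with the worst case and the plain mean as the two limiting cases. A minimal sketch (using a whole-scenario rounding; the exact conditional β-mean puts a fractional weight on the marginal scenario):

```python
# Hedged sketch of the tail mean over discrete scenarios. Rounding the
# tail size up with ceil is a simplification of the exact definition.

import math

def tail_mean(costs, beta):
    """Mean of the worst (largest) beta-fraction of scenario costs."""
    k = max(1, math.ceil(beta * len(costs)))
    worst = sorted(costs, reverse=True)[:k]
    return sum(worst) / k

costs = [1.0, 2.0, 3.0, 4.0]
```

With β = 1 this is the ordinary mean (2.5), with β small it is the worst case (4.0), and intermediate β blends the worst scenarios (β = 0.5 gives 3.5) — exactly the interpolation between conservative and neutral criteria the abstract describes.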
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived. The framework for managing radiation dose, based on the derived optimisation process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality. The optimisation process comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality. The system thus becomes a reactive activity, only to a certain extent engaging the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the three stages in the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level. A reasonable radiation dose for a single patient lies within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process. The optimisation process constitutes a variety of complementary activities, where managing radiation dose is only one part. This emphasises the need to take a holistic approach integrating the optimisation process in different clinical activities.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
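The kind of task BluePyOpt automates can be illustrated by a stripped-down evolutionary loop fitting one model parameter — here the decay constant τ of an exponential trace — to synthetic "experimental" data. This is NOT the BluePyOpt API, only a hedged sketch of the underlying stochastic optimisation idea:

```python
# Hedged sketch: truncation-selection evolutionary search fitting tau of
# an exponential decay to a synthetic target trace (true tau = 2.0).

import math, random

ts = [0.1 * i for i in range(50)]

def response(tau):
    return [math.exp(-t / tau) for t in ts]

target = response(2.0)                         # synthetic "experimental" data

def error(tau):
    return sum((m, d) == () or (m - d) ** 2 for m, d in zip(response(tau), target))

def error(tau):                                # squared-error fitness
    return sum((m - d) ** 2 for m, d in zip(response(tau), target))

rng = random.Random(3)
pop = [rng.uniform(0.1, 5.0) for _ in range(20)]
for gen in range(50):
    pop.sort(key=error)
    parents = pop[:5]                          # elitist truncation selection
    sigma = 0.5 * 0.95 ** gen                  # shrinking mutation width
    pop = parents + [max(0.1, rng.choice(parents) + rng.gauss(0.0, sigma))
                     for _ in range(15)]
best_tau = min(pop, key=error)
```

BluePyOpt's contribution is to standardise exactly these pieces — evaluator, objective, algorithm, parallel execution — as reusable components rather than ad hoc scripts like this one.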
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
Wett, B; Schoen, M; Phothilangka, P; Wackerle, F; Insam, H
2007-01-01
Different digestion technologies for various substrates are addressed by the generic process description of Anaerobic Digestion Model No. 1 (ADM1). In the case of manure or agricultural wastes, a priori knowledge about the substrate in terms of ADM1 compounds is lacking, and influent characterisation becomes a major issue. The present project was initiated to promote biogas technology in agriculture and to extend profitability to rather small-capacity systems. In order to avoid costly individual planning and installation of each facility, a standardised design approach needs to be elaborated. This calls for biokinetic modelling as a systematic tool for process design and optimisation. Co-fermentation was observed under field conditions; quality and flow data were recorded and mass flow balances were calculated. In the laboratory, different substrates were digested separately in parallel under specified conditions. A configuration of four ADM1 model reactors was set up. Model calibration identified the disintegration rate, the decay rates for sugar degraders and the half-saturation constant for sugar as the three most sensitive parameters, showing values (except the latter) about one order of magnitude higher than the default parameters. Finally, the model is applied to the comparison of different reactor configurations and volume partitions. Another optimisation objective is robustness and load flexibility, i.e. the same configuration should adapt to different load situations by a simple recycle control alone, in order to establish a standardised design.
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) for improving the dissolution rate and eventually the oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using the Box-Behnken optimisation design. Optimised ATC-SNEDDS was characterised for various physicochemical properties. Pharmacokinetic, pharmacodynamic and histological studies were performed in rats. Optimised ATC-SNEDDS resulted in a droplet size of 5.66 nm, a zeta potential of -19.52 mV, a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of optimised ATC-SNEDDS in rats was 2.34-fold higher than that of ATC suspension. Pharmacodynamic studies revealed a significant reduction in serum lipids of rats with fatty liver. Photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
A radiation-tolerant electronic readout system for portal imaging
NASA Astrophysics Data System (ADS)
Östling, J.; Brahme, A.; Danielsson, M.; Iacobaeus, C.; Peskov, V.
2004-06-01
A new electronic portal imaging device (EPID) is under development at the Karolinska Institutet and the Royal Institute of Technology. Due to considerable demands on radiation tolerance in the radiotherapy environment, a dedicated electronic readout system has been designed. The most interesting aspect of the readout system is that it allows ~1000 pixels to be read out in parallel, with all electronics placed outside the radiation beam, making the detector more radiation resistant. In this work we present the function of a small prototype (6×100 pixels) of the electronic readout board that has been tested. Tests were made with continuous X-rays (10-60 keV) and with α particles. The results show that, even without an optimised gas mixture and with an early prototype only, the electronic readout system works very well.
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximising the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations, considering the underground mine planning challenges. Finally, the results are evaluated and compared.
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and a Hall effect module separated by an elastomer. The aim was to minimise the sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes.
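The Pareto-set extraction step mentioned above is a generic operation. A minimal sketch, with two objectives both taken as minimised (the pairing of objectives here is an assumption for illustration, not the paper's exact formulation):

```python
# Hedged sketch: keep only non-dominated points from a set of candidate
# designs scored on two objectives that are both minimised.

def pareto_front(points):
    """points: list of (f1, f2) to minimise; returns the non-dominated subset."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both
        # objectives (weak dominance suffices for this distinct-valued data).
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
front = pareto_front(designs)
```

The surviving designs are the trade-off curve from which a final sensor configuration is picked, exactly the "Pareto sets of designs" role in the abstract.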
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations of the GeSIDA/PNS (2015) Consensus, and its applicability in Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) Consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (infectious disease specialists and hospital pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for the deductions stated in RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of the total triple ART drug cost). The most feasible strategies (>40% of patients being candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings of between €653 and €4,797 per patient per year, depending on the baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based on dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability.
Optimisation of the hybrid renewable energy system by HOMER, PSO and CPSO for the study area
NASA Astrophysics Data System (ADS)
Khare, Vikas; Nema, Savita; Baredar, Prashant
2017-04-01
This study is based on simulation and optimisation of the renewable energy system of the police control room at Sagar in central India. To analyse this hybrid system, the meteorological data of solar insolation and hourly wind speeds of Sagar in central India (longitude 78°45′ and latitude 23°50′) have been considered. The pattern of load consumption is studied and suitably modelled for optimisation of the hybrid energy system using HOMER software. The results are compared with those of the particle swarm optimisation and the chaotic particle swarm optimisation algorithms. The use of these two algorithms to optimise the hybrid system leads to a higher quality result with faster convergence. Based on the optimisation results, it has been found that replacing conventional energy sources with the solar-wind hybrid renewable energy system will be a feasible solution for the distribution of electric power as a stand-alone application at the police control room. This system is more environmentally friendly than the conventional diesel generator, and reduces the fuel cost by approximately 70-80% relative to the conventional diesel generator.
Optimisation of nano-silica modified self-compacting high-volume fly ash mortar
NASA Astrophysics Data System (ADS)
Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2017-05-01
The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution based on a desirability function that simultaneously optimises the variables and the responses. A desirability value of 0.811 gives the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
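The desirability-function idea used by Design-Expert can be sketched as follows: each response is mapped to [0, 1] and the overall desirability is the geometric mean. The targets, ranges and response values below are illustrative assumptions, not the paper's data:

```python
# Hedged sketch of Derringer-style desirability functions (linear ramps)
# combined by a geometric mean. All numbers are illustrative.

def d_larger_is_better(y, lo, hi):
    """Desirability for a response to maximise (e.g. compressive strength)."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def d_smaller_is_better(y, lo, hi):
    """Desirability for a response to minimise (e.g. porosity)."""
    return 1.0 - d_larger_is_better(y, lo, hi)

def overall(ds):
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

D = overall([d_larger_is_better(55.0, 30.0, 60.0),      # strength, MPa
             d_smaller_is_better(8.0, 5.0, 15.0),       # porosity, %
             d_larger_is_better(700.0, 600.0, 750.0)])  # slump flow, mm
```

The optimiser then searches the mix variables for the setting maximising D, which is how a single figure such as the paper's 0.811 arises from three conflicting responses.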
NASA Astrophysics Data System (ADS)
Soriano, Allan N.; Adamos, Kristoni G.; Bonifacio, Pauline B.; Adornado, Adonis P.; Bungay, Vergel C.; Vairavan, Rajendaran
2017-11-01
The fate of antibiotics entering the environment has raised concerns about the possible development of antimicrobial-resistant bacteria. The fate and transport of these substances need to be predicted, in particular via the diffusion coefficient of the antibiotic in water at infinite dilution. A systematic determination of the infinite-dilution diffusion coefficient in water of five different kinds of livestock antibiotics, namely Amtyl, Ciprotyl, Doxylak Forte, Trisullak, and Vetracin Gold, in the 293.15 to 313.15 K temperature range is reported, using a method based on electrolytic conductivity measurements. A continuous stirred tank reactor is used to measure the electrolytic conductivities of the systems considered. These conductivities are correlated using the Nernst-Haskell equation to determine the infinite-dilution diffusion coefficient. The determined diffusion coefficients are based on the assumption that in dilute solution these antibiotics behave as strong electrolytes, with the H+ cation dissociating from the antibiotic's anion.
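The Nernst-Haskell correlation used above estimates the infinite-dilution diffusion coefficient of an electrolyte from the limiting ionic conductivities. Since the abstract does not give the antibiotics' ionic data, the sketch below applies the standard form of the equation to NaCl as a check case:

```python
# Hedged sketch of the Nernst-Haskell equation,
#   D0 = R*T*(1/n+ + 1/n-) / (F^2 * (1/lambda+ + 1/lambda-)),
# with textbook NaCl data as a worked check (not the paper's systems).
# Units: lambda in S*cm^2/equiv, D in cm^2/s.

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/equiv

def nernst_haskell(T, n_plus, n_minus, lam_plus, lam_minus):
    """Infinite-dilution diffusion coefficient of a strong electrolyte."""
    return (R * T * (1.0 / n_plus + 1.0 / n_minus)
            / (F ** 2 * (1.0 / lam_plus + 1.0 / lam_minus)))

# NaCl at 298.15 K: lambda0(Na+) = 50.1, lambda0(Cl-) = 76.3 S*cm^2/equiv
D = nernst_haskell(298.15, 1, 1, 50.1, 76.3)   # ~1.6e-5 cm^2/s
```

The temperature dependence reported over 293.15 to 313.15 K enters both through the explicit T and through the temperature variation of the limiting conductivities.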
On the analytic and numeric optimisation of airplane trajectories under real atmospheric conditions
NASA Astrophysics Data System (ADS)
Gonzalo, J.; Domínguez, D.; López, D.
2014-12-01
From the beginning of the aviation era, economic constraints have forced operators to continuously improve the planning of their flights. The revenue is proportional to the cost per flight and the airspace occupancy. Many methods, the first dating from the middle of the last century, have explored analytical, numerical and artificial intelligence resources to achieve optimal flight planning. In parallel, advances in meteorology and communications allow an almost real-time knowledge of the atmospheric conditions and a reliable, error-bounded forecast for the near future. Thus, apart from weather risks to be avoided, airplanes can dynamically adapt their trajectories to minimise their costs. International regulators are aware of these capabilities, so it is reasonable to envisage changes that will soon allow this dynamic planning negotiation to become operational. Moreover, current unmanned airplanes, very popular and often small, suffer the impact of winds and other weather conditions in the form of dramatic changes in their performance. The present paper reviews analytic and numeric solutions for typical trajectory planning problems. Analytic methods are those trying to solve the problem using the Pontryagin principle, where influence parameters are added to state variables to form a split-condition differential equation problem. The system can be solved numerically (indirect optimisation) or using parameterised functions (direct optimisation). On the other hand, numerical methods are based on Bellman's dynamic programming (or Dijkstra algorithms), which exploit the fact that two optimal trajectories can be concatenated to form a new optimal one if the joint point is demonstrated to belong to the final optimal solution. There are no a priori conditions for the best method: traditionally, analytic methods have been employed more for continuous problems and numeric methods for discrete ones.
In the current problem, airplane behaviour is defined by continuous equations, while wind fields are given in a discrete grid at certain time intervals. The research demonstrates advantages and disadvantages of each method as well as performance figures of the solutions found for typical flight conditions under static and dynamic atmospheres. This provides significant parameters to be used in the selection of solvers for optimal trajectories.
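The numeric (dynamic-programming/Dijkstra) approach described above can be sketched on a toy wind grid: edge costs are leg flight times, with ground speed taken as airspeed plus the wind component along track. The network, distances, speeds and winds below are illustrative assumptions, not data from the paper.

```python
import heapq

def dijkstra(edges, start, goal):
    """Minimum-time route: edges[u] is a list of (v, leg_time) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in edges.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the optimal path by walking predecessors back to the start
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

def leg_time(dist_km, airspeed, wind_along):
    # flight time for one leg; ground speed = airspeed + along-track wind
    return dist_km / (airspeed + wind_along)

# toy corridor with two parallel routes; the longer route B enjoys a tailwind
edges = {
    "S":  [("A1", leg_time(100, 800, 0)), ("B1", leg_time(120, 800, 200))],
    "A1": [("G", leg_time(100, 800, 0))],
    "B1": [("G", leg_time(120, 800, 200))],
}
path, t = dijkstra(edges, "S", "G")
```

With these numbers the tailwind route is cheaper in time despite being longer, which is exactly the kind of trade-off a wind-aware planner exploits.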
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and the maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations up to 1.5 m/s². To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step, semi-active suspension is in focus: the input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
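The GA step described above can be illustrated with a minimal real-coded genetic algorithm. The SIMPACK co-simulation is replaced here by a hypothetical analytic surrogate for the safety objective, and the population size, mutation scale and the two "suspension parameters" are illustrative assumptions, not values from the paper.

```python
import random

def ga_minimise(cost, bounds, pop_size=30, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        new_pop = [best[:]]                       # elitism: keep the best
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            a = rng.random()
            child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            # Gaussian mutation, clipped back into the bounds
            child = [min(max(x + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=cost)
    return best, cost(best)

# hypothetical surrogate for a safety objective over two suspension parameters
surrogate = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 5.0) ** 2
best, best_cost = ga_minimise(surrogate, [(0.0, 10.0), (0.0, 10.0)])
```

In the multistep routine of the paper, each such GA level would re-run with a different subset of design parameters fixed from the previous level.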
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase <200 ms and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold under full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor of the network is minimised by optimising the transmission power allocation among the radars, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
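The structure of this power-allocation problem can be sketched with a deliberately simplified model. Assuming, purely for illustration, that each radar's MI contribution is log(1 + g·p) for a channel gain g and power p, and that interceptability grows with total transmitted power, the KKT conditions yield a water-filling allocation; the paper's actual Schleher intercept factor and GA-NP solver are more involved than this.

```python
import math

def min_power_allocation(gains, mi_threshold, iters=100):
    """Water-filling sketch: minimise total transmit power subject to
    sum_i log(1 + g_i * p_i) >= mi_threshold.  The KKT conditions give
    p_i = max(0, 1/lam - 1/g_i); bisect on lam to meet the MI constraint."""
    def mi(lam):
        return sum(math.log(1.0 + g * max(0.0, 1.0 / lam - 1.0 / g))
                   for g in gains)
    # mi is decreasing in lam: mi(lo) is huge, mi(max(gains)) == 0
    lo, hi = 1e-9, max(gains)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mi(mid) >= mi_threshold:
            lo = mid          # keep lo on the feasible side
        else:
            hi = mid
    lam = lo
    return [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]

# hypothetical channel gains for a three-radar network
powers = min_power_allocation([2.0, 1.0, 0.5], mi_threshold=2.0)
```

Radars with better channels receive more power, and a sufficiently poor channel is switched off entirely, which mirrors the intuition that LPI-optimal networks concentrate power where it buys the most information.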
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm which hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by combining the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to govern the choice between the ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
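For readers unfamiliar with the underlying method, a minimal sketch of the basic Harmony Search improvisation loop (without the GHS or ACO extensions described above) is given below; the parameter values, pitch-adjustment bandwidth and the sphere test function are illustrative assumptions.

```python
import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Basic Harmony Search: each new harmony takes each variable either from
    the harmony memory (prob. HMCR, with pitch adjustment prob. PAR) or at
    random from its range; it replaces the worst memory member if better."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = rng.choice(hm)[d]              # memory consideration
                if rng.random() < par:             # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
                x = min(max(x, lo), hi)
            else:
                x = rng.uniform(lo, hi)            # random selection
            new.append(x)
        worst = max(hm, key=cost)
        if cost(new) < cost(worst):
            hm[hm.index(worst)] = new              # update the memory
    best = min(hm, key=cost)
    return best, cost(best)

sphere = lambda x: sum(v * v for v in x)
best, val = harmony_search(sphere, [(-5.0, 5.0)] * 2)
```

The GHS variant replaces the pitch-adjustment step with a move towards the best harmony in memory, and GHSACO further mixes in pheromone-weighted selection; both are drop-in changes to the improvisation loop above.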
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols yielded image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
On the design and optimisation of new fractal antenna using PSO
NASA Astrophysics Data System (ADS)
Rani, Shweta; Singh, A. P.
2013-10-01
An optimisation technique for a newly shaped fractal structure using particle swarm optimisation with curve fitting is presented in this article. The aim of the particle swarm optimisation is to find the geometry of the antenna for the required user-defined frequency. To assess the effectiveness of the presented method, a set of representative numerical simulations has been carried out and the results are compared with measurements from experimental prototypes built according to the design specifications produced by the optimisation procedure. The proposed fractal antenna resonates at the 5.8 GHz industrial, scientific and medical band, which is suitable for wireless telemedicine applications. The antenna characteristics have been studied using extensive numerical simulations and are experimentally verified. The antenna exhibits well-defined radiation patterns over the band.
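A compact sketch of the particle swarm optimiser behind such a design loop is shown below. The real objective would be evaluated by electromagnetic simulation of the fractal geometry; here a hypothetical linear model linking one geometry parameter to resonant frequency stands in, and all coefficients are assumptions for illustration.

```python
import random

def pso_minimise(cost, bounds, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Textbook PSO with inertia weight w and cognitive/social terms c1, c2."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)                # global best
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

# hypothetical surrogate: squared error between a target 5.8 GHz resonance and
# a toy frequency model of one geometry parameter g (both assumed, not measured)
resonance_error = lambda g: (5.8 - (3.0 + 2.0 * g[0])) ** 2
geom, err = pso_minimise(resonance_error, [(0.0, 2.0)])
```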
Ice-sheet modelling accelerated by graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek
2014-11-01
Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.
In situ click chemistry: a powerful means for lead discovery.
Sharpless, K Barry; Manetsch, Roman
2006-11-01
Combinatorial chemistry and parallel synthesis are important and regularly applied tools for lead identification and optimisation, although they are often accompanied by challenges related to the efficiency of library synthesis and the purity of the compound library. In the last decade, novel means of lead discovery approaches have been investigated where the biological target is actively involved in the synthesis of its own inhibitory compound. These fragment-based approaches, also termed target-guided synthesis (TGS), show great promise in lead discovery applications by combining the synthesis and screening of libraries of low molecular weight compounds in a single step. Of all the TGS methods, the kinetically controlled variant is the least well known, but it has the potential to emerge as a reliable lead discovery method. The kinetically controlled TGS approach, termed in situ click chemistry, is discussed in this article.
Strutton, Benjamin; Jaffé, Stephen R P; Pandhal, Jagroop; Wright, Phillip C
2018-01-01
Although Escherichia coli has been engineered to perform N-glycosylation of recombinant proteins, an optimal glycosylating strain has not been created. By inserting a codon-optimised Campylobacter oligosaccharyltransferase (OST) onto the E. coli chromosome, we created a glycoprotein platform strain in which the target glycoprotein, sugar synthesis and glycosyltransferase enzymes can be inserted using expression vectors to produce the desired homogeneous glycoform. To assess the functionality and glycoprotein-producing capacity of the chromosomally based OST, a combined Western blot and parallel reaction monitoring mass spectrometry approach was applied, with absolute quantification of glycoprotein. We demonstrated that the chromosomal oligosaccharyltransferase remained functional and facilitated N-glycosylation. Although the engineered strain produced less total recombinant protein, the glycosylation efficiency increased by 85%, and total glycoprotein production was enhanced by 17%. Copyright © 2017 Elsevier Inc. All rights reserved.
Using Network Dynamical Influence to Drive Consensus
NASA Astrophysics Data System (ADS)
Punzo, Giuliano; Young, George F.; MacDonald, Malcolm; Leonard, Naomi E.
2016-05-01
Consensus and decision-making are often analysed in the context of networks, with many studies focusing attention on ranking the nodes of a network depending on their relative importance to information routing. Dynamical influence ranks the nodes with respect to their ability to influence the evolution of the associated network dynamical system. In this study it is shown that dynamical influence not only ranks the nodes, but also provides a naturally optimised distribution of effort to steer a network from one state to another. An example is provided where the “steering” refers to the physical change in velocity of self-propelled agents interacting through a network. Distinct from other works on this subject, this study looks at directed and hence more general graphs. The findings are presented with a theoretical angle, without targeting particular applications or networked systems; however, the framework and results offer parallels with biological flocks and swarms and opportunities for design of technological networks.
Biological removal of NOx from flue gas.
Kumaraswamy, R; Muyzer, G; Kuenen, J G; Loosdrecht, M C M
2004-01-01
BioDeNOx is a novel integrated physico-chemical and biological process for the removal of nitrogen oxides (NOx) from flue gas. Because of the high temperature of flue gas, the process is performed at 50-55 °C. Flue gas containing CO2, O2, SO2 and NOx is purged through a liquid containing Fe(II)EDTA2-. The Fe(II)EDTA2- complex effectively binds the NOx, and the bound NOx is converted into N2 in a complex reaction sequence. This paper discusses an overview of the potential microbial reactions in the BioDeNOx process. Although the process looks simple, the large number of potential parallel reactions and serial microbial conversions makes it much more complex. A detailed investigation is needed in order to properly understand and optimise the process.
Bryant, Maria; Burton, Wendy; Cundill, Bonnie; Farrin, Amanda J; Nixon, Jane; Stevens, June; Roberts, Kim; Foy, Robbie; Rutter, Harry; Hartley, Suzanne; Tubeuf, Sandy; Collinson, Michelle; Brown, Julia
2017-01-24
Family-based interventions to prevent childhood obesity depend upon parents' taking action to improve diet and other lifestyle behaviours in their families. Programmes that attract and retain high numbers of parents provide an enhanced opportunity to improve public health and are also likely to be more cost-effective than those that do not. We have developed a theory-informed optimisation intervention to promote parent engagement within an existing childhood obesity prevention group programme, HENRY (Health Exercise Nutrition for the Really Young). Here, we describe a proposal to evaluate the effectiveness of this optimisation intervention in regard to the engagement of parents and cost-effectiveness. The Optimising Family Engagement in HENRY (OFTEN) trial is a cluster randomised controlled trial being conducted across 24 local authorities (approximately 144 children's centres) which currently deliver HENRY programmes. The primary outcome will be parental enrolment and attendance at the HENRY programme, assessed using routinely collected process data. Cost-effectiveness will be presented in terms of primary outcomes using acceptability curves and through eliciting the willingness to pay for the optimisation from HENRY commissioners. Secondary outcomes include the longitudinal impact of the optimisation, parent-reported infant intake of fruits and vegetables (as a proxy to compliance) and other parent-reported family habits and lifestyle. This innovative trial will provide evidence on the implementation of a theory-informed optimisation intervention to promote parent engagement in HENRY, a community-based childhood obesity prevention programme. The findings will be generalisable to other interventions delivered to parents in other community-based environments. This research meets the expressed needs of commissioners, children's centres and parents to optimise the potential impact that HENRY has on obesity prevention. 
A subsequent cluster randomised controlled pilot trial is planned to determine the practicality of undertaking a definitive trial to robustly evaluate the effectiveness and cost-effectiveness of the optimised intervention on childhood obesity prevention. ClinicalTrials.gov identifier: NCT02675699 . Registered on 4 February 2016.
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given that the local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess the image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful to detect radiation protection problems and to perform optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
Incompressible SPH (ISPH) with fast Poisson solver on a GPU
NASA Astrophysics Data System (ADS)
Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.
2018-05-01
This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several millions of particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D) which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, the GPU demonstrates speed-ups of 10-18 times over single-threaded and 1.1-4.5 times over 16-threaded CPU run times.
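The PPE solution step above relies on a preconditioned iterative solver. Below is a minimal serial sketch of conjugate gradients with the Jacobi (diagonal) preconditioner noted for its robustness, applied to a small symmetric positive-definite stand-in system rather than a real PPE matrix; the production solver runs on the GPU via ViennaCL.

```python
def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a Jacobi preconditioner M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(matvec(x))]     # initial residual
    z = [r[i] / A[i][i] for i in range(n)]              # apply M^-1
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# small SPD test system standing in for a pressure Poisson matrix
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = jacobi_pcg(A, b)
```

The Jacobi preconditioner only needs the matrix diagonal, which is why it maps so well to massively parallel hardware: each row is scaled independently, with no triangular solves.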
NASA Astrophysics Data System (ADS)
Chan, Man Ching Esther; Clarke, David; Cao, Yiming
2018-03-01
Interactive problem solving and learning are priorities in contemporary education, but these complex processes have proved difficult to research. This project addresses the question "How do we optimise social interaction for the promotion of learning in a mathematics classroom?" Employing the logic of multi-theoretic research design, this project uses the newly built Science of Learning Research Classroom (ARC-SR120300015) at The University of Melbourne and equivalent facilities in China to investigate classroom learning and social interactions, focusing on collaborative small group problem solving as a way to make the social aspects of learning visible. In Australia and China, intact classes of local year 7 students with their usual teacher will be brought into the research classroom facilities with built-in video cameras and audio recording equipment to participate in purposefully designed activities in mathematics. The students will undertake a sequence of tasks in the social units of individual, pair, small group (typically four students) and whole class. The conditions for student collaborative problem solving and learning will be manipulated so that student and teacher contributions to that learning process can be distinguished. Parallel and comparative analyses will identify culture-specific interactive patterns and provide the basis for hypotheses about the learning characteristics underlying collaborative problem solving performance documented in the research classrooms in each country. The ultimate goals of the project are to generate, develop and test more sophisticated hypotheses for the optimisation of social interaction in the mathematics classroom in the interest of improving learning and, particularly, student collaborative problem solving.
Asselineau, Charles-Alexis; Zapata, Jose; Pye, John
2015-06-01
A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
Optimisation of SOA-REAMs for hybrid DWDM-TDMA PON applications.
Naughton, Alan; Antony, Cleitus; Ossieur, Peter; Porto, Stefano; Talli, Giuseppe; Townsend, Paul D
2011-12-12
We demonstrate how loss-optimised, gain-saturated SOA-REAM based reflective modulators can reduce the burst to burst power variations due to differential access loss in the upstream path in carrier distributed passive optical networks by 18 dB compared to fixed linear gain modulators. We also show that the loss optimised device has a high tolerance to input power variations and can operate in deep saturation with minimal patterning penalties. Finally, we demonstrate that an optimised device can operate across the C-Band and also over a transmission distance of 80 km. © 2011 Optical Society of America
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
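The multi-objectivisation idea above can be illustrated on a toy 0-1 quadratic problem: the single objective is split additively into its linear and quadratic parts, each treated as a separate objective, and the nondominated set is extracted. This simple additive split is a stand-in for the paper's elementary landscape decomposition, and the coefficients below are illustrative assumptions.

```python
from itertools import combinations, product

def dominates(q, p):
    """q dominates p (minimisation): no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy 0-1 quadratic objective f(x) = c.x + sum_{i<j} x_i x_j, split into its
# linear part f1 (rewards setting bits) and quadratic part f2 (penalises pairs)
c = [-3, -2, -1, -1]

def objectives(x):
    f1 = sum(ci * xi for ci, xi in zip(c, x))
    f2 = sum(x[i] * x[j] for i, j in combinations(range(len(x)), 2))
    return (f1, f2)

# exhaustive enumeration of the 4-bit search space, deduplicated
space = set(objectives(x) for x in product([0, 1], repeat=4))
front = pareto_front(space)
```

Because the two parts pull in opposite directions, the front contains several trade-off points rather than a single optimum, which is exactly the extra structure NSGA-II or SPEA2 can exploit during search.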
PWHATSHAP: efficient haplotyping for future generation sequencing.
Bracciali, Andrea; Aldinucci, Marco; Patterson, Murray; Marschall, Tobias; Pisanti, Nadia; Merelli, Ivan; Torquati, Massimo
2016-09-22
Haplotype phasing is an important problem in the analysis of genomics information. Given a set of DNA fragments of an individual, it consists of determining which one of the possible alleles (alternative forms of a gene) each fragment comes from. Haplotype information is relevant to gene regulation, epigenetics, genome-wide association studies, evolutionary and population studies, and the study of mutations. Haplotyping is currently addressed as an optimisation problem aiming at solutions that minimise, for instance, error correction costs, where costs are a measure of the confidence in the accuracy of the information acquired from DNA sequencing. Solutions typically have exponential computational complexity. WHATSHAP is a recent optimal approach which moves computational complexity from DNA fragment length to fragment overlap, i.e., coverage, and is hence of particular interest given sequencing technology's current trend towards producing longer fragments. Given the potential relevance of efficient haplotyping in several analysis pipelines, we have designed and engineered PWHATSHAP, a parallel, high-performance version of WHATSHAP. PWHATSHAP is embedded in a toolkit developed in Python and supports genomics datasets in standard file formats. Building on WHATSHAP, PWHATSHAP exhibits the same complexity, exploring a number of possible solutions which is exponential in the coverage of the dataset. The parallel implementation on multi-core architectures allows for a relevant reduction of the execution time for haplotyping, while the provided results enjoy the same high accuracy as that provided by WHATSHAP, which increases with coverage. Due to its structure and management of the large datasets, the parallelisation of WHATSHAP posed demanding technical challenges, which have been addressed exploiting a high-level parallel programming framework.
The result, PWHATSHAP, is a freely available toolkit that improves the efficiency of the analysis of genomics information.
ERIC Educational Resources Information Center
Mooij, Ton
2004-01-01
Specific combinations of educational and ICT conditions including computer use may optimise learning processes, particularly for learners at risk. This position paper asks which curricular, instructional, and ICT characteristics can be expected to optimise learning processes and outcomes, and how to best achieve this optimization. A theoretical…
Børretzen, P; Salbu, B
2000-10-30
To assess the impact of radionuclides entering the marine environment from dumped nuclear waste, information on the physico-chemical forms of radionuclides and their mobility in seawater-sediment systems is essential. Due to interactions with sediment components, sediments may act as a sink, reducing the mobility of radionuclides in seawater. Due to remobilisation, however, contaminated sediments may also act as a potential source of radionuclides to the water phase. In the present work, time-dependent interactions of low molecular mass (LMM, i.e. species < 10 kDa) radionuclides with sediments from the Stepovogo Fjord, Novaya Zemlya and their influence on the distribution coefficients (Kd values) have been studied in tracer experiments using 109Cd2+ and 60Co2+ as gamma tracers. Sorption of the LMM tracers occurred rapidly and the estimated equilibrium Kd(eq)-values for 109Cd and 60Co were 500 and 20000 ml/g, respectively. Remobilisation of 109Cd and 60Co from contaminated sediment fractions as a function of contact time was studied using sequential extraction procedures. Due to redistribution, the reversibly bound fraction of the gamma tracers decreased with time, while the irreversibly (or slowly reversibly) associated fraction of the gamma tracers increased. Two different three-compartment models, one consecutive and one parallel, were applied to describe the time-dependent interaction of the LMM tracers with operationally defined reversible and irreversible (or slowly reversible) sediment fractions. The interactions between these fractions were described using first order differential equations. By fitting the models to the experimental data, apparent rate constants were obtained using numerical optimisation software. 
The model optimisations showed that the interactions of LMM 60Co were well described by the consecutive model, while the parallel model was more suitable for describing the interactions of LMM 109Cd with the sediments, when the summed squares of residuals were compared. The rate of sorption into the irreversibly (or slowly reversibly) associated fraction was greater than the rate of desorption from the reversibly bound fraction (i.e. k3 > k2) for both radionuclides. Thus, the Novaya Zemlya sediments are expected to act as a sink for the radionuclides under oxic conditions, and transport to the water phase should mainly be attributed to resuspended particles.
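The consecutive three-compartment model described above can be sketched as a set of first-order differential equations integrated with explicit Euler. The rate constants below are hypothetical, chosen only to respect the reported ordering k3 > k2; the actual values were obtained by numerical optimisation against the tracer data.

```python
def consecutive_model(k1, k2, k3, t_end, dt=0.01):
    """Consecutive three-compartment model:
       water  --k1-->  reversible  --k3-->  irreversible
              <--k2--
    Explicit Euler integration; returns the final fractions (w, r, irr)."""
    w, r, irr = 1.0, 0.0, 0.0          # all tracer starts in the water phase
    for _ in range(int(t_end / dt)):
        dw = -k1 * w + k2 * r
        dr = k1 * w - k2 * r - k3 * r
        dirr = k3 * r
        w += dw * dt
        r += dr * dt
        irr += dirr * dt
    return w, r, irr

# hypothetical first-order rate constants (per day), with k3 > k2 as reported
w, r, irr = consecutive_model(k1=0.5, k2=0.05, k3=0.2, t_end=50.0)
```

Because the three rate terms sum to zero at every step, total tracer mass is conserved, and with k3 > k2 the irreversible fraction steadily accumulates, reproducing the sink behaviour inferred for the sediments.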
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned
to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
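At its core, the coupling between a workflow engine and an optimiser reduces to treating one complete model run as a black-box cost-function evaluation. The sketch below illustrates that idea with a plain random search over a mock cost surface; it is not the Cylc/NLopt interface, and the parameter names and cost function are purely illustrative:

```python
import random

random.seed(42)

def run_model(params):
    """Stand-in for one hindcast run: returns a mock RMSE of significant
    wave height against observations. A real suite would launch the model
    via the workflow engine and compute the cost from its output."""
    p0, p1 = params
    # synthetic cost surface with a minimum at (1.2, 0.05)
    return (p0 - 1.2) ** 2 + 10.0 * (p1 - 0.05) ** 2

def random_search(cost, bounds, n_iter=500):
    """Minimise a black-box cost function by uniform random sampling."""
    best_p = [random.uniform(lo, hi) for lo, hi in bounds]
    best_c = cost(best_p)
    for _ in range(n_iter):
        cand = [random.uniform(lo, hi) for lo, hi in bounds]
        c = cost(cand)
        if c < best_c:
            best_p, best_c = cand, c
    return best_p, best_c

best_params, best_cost = random_search(run_model, [(0.5, 2.0), (0.0, 0.2)])
```

In the real suite, each candidate parameter set triggers a full Cylc-orchestrated model run, and a derivative-free NLopt algorithm replaces the naive random search shown here.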
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
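Latin hypercube sampling, used above to generate the uncertain parameters, can be sketched in a few lines. This is the basic stratified version; the paper uses an optimal variant that additionally maximises point spacing:

```python
import random

random.seed(7)

def latin_hypercube(n_samples, n_dims):
    """Basic Latin hypercube sample on the unit cube: each dimension is
    split into n_samples equal strata and each stratum is hit exactly once."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        random.shuffle(strata)          # random pairing of strata across dimensions
        for i, s in enumerate(strata):
            samples[i][d] = (s + random.random()) / n_samples
    return samples

pts = latin_hypercube(10, 3)
```

Each of the 10 points occupies a distinct tenth of every axis, so the marginals are evenly covered with far fewer runs than a full grid would need.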
Distributed optimisation problem with communication delay and external disturbance
NASA Astrophysics Data System (ADS)
Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu
2017-12-01
This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed for the MASs in the simultaneous presence of disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.
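The basic mechanism underlying such algorithms is consensus mixing combined with local gradient steps. The delay-free, disturbance-free sketch below shows only that core idea (the paper's contribution, handling delay and disturbance via an internal-model term, is not reproduced); the network, costs and step-size schedule are illustrative:

```python
# Four agents on a ring, each with a private cost f_i(x) = (x - a_i)^2.
# The network minimises sum_i f_i, whose optimum is the mean of the a_i.

a = [1.0, 2.0, 3.0, 6.0]                 # private data; optimum = mean = 3.0
# doubly stochastic mixing weights for the 4-cycle
W = [[0.50, 0.25, 0.00, 0.25],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]

x = [0.0, 0.0, 0.0, 0.0]                 # initial agent states
for t in range(3000):
    alpha = 1.0 / (t + 10)               # diminishing step size
    # consensus step: average with neighbours
    mixed = [sum(W[i][j] * x[j] for j in range(4)) for i in range(4)]
    # local gradient step on each agent's private cost
    grad = [2.0 * (x[i] - a[i]) for i in range(4)]
    x = [mixed[i] - alpha * grad[i] for i in range(4)]
```

All agent states converge to the minimiser of the sum of the private costs, even though no agent ever sees the others' data directly.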
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times arise widely in many areas and are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomain and avoids a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with methods reported in the literature. The results show the effectiveness of the proposed method.
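The Legendre-Gauss collocation points used by such pseudospectral methods are the roots of the Legendre polynomial P_n, and can be computed by Newton iteration on the three-term recurrence. This is a standard construction, shown here as a self-contained sketch rather than the paper's implementation:

```python
import math

def leg_and_deriv(n, x):
    """Legendre polynomial P_n(x) and its derivative, via the recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1} and the identity
    (x^2 - 1) P_n' = n (x P_n - P_{n-1})."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    dp = n * (x * p - p_prev) / (x * x - 1.0)
    return p, dp

def legendre_gauss(n):
    """LG collocation nodes (roots of P_n) and quadrature weights on [-1, 1]."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # asymptotic initial guess
        for _ in range(50):                              # Newton iteration
            p, dp = leg_and_deriv(n, x)
            dx = -p / dp
            x += dx
            if abs(dx) < 1e-14:
                break
        _, dp = leg_and_deriv(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

nodes, weights = legendre_gauss(5)
```

With n nodes the associated quadrature is exact for polynomials of degree up to 2n-1, which is what lets the method replace the nonlinear integral by a linear combination of state values at the LG points.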
Medicines optimisation: priorities and challenges.
Kaufman, Gerri
2016-03-23
Medicines optimisation is promoted in a guideline published in 2015 by the National Institute for Health and Care Excellence. Four guiding principles underpin medicines optimisation: aim to understand the patient's experience; ensure evidence-based choice of medicines; ensure medicines use is as safe as possible; and make medicines optimisation part of routine practice. Understanding the patient experience is important to improve adherence to medication regimens. This involves communication, shared decision making and respect for patient preferences. Evidence-based choice of medicines is important for clinical and cost effectiveness. Systems and processes for the reporting of medicines-related safety incidents have to be improved if medicines use is to be as safe as possible. Ensuring safe practice in medicines use when patients are transferred between organisations, and managing the complexities of polypharmacy are imperative. A medicines use review can help to ensure that medicines optimisation forms part of routine practice.
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
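The co-association step at the heart of Consensus Clustering can be sketched compactly: several base clusterings vote on whether two items belong together, and pairs that co-occur in a majority of clusterings are merged. MOCC's agreement-separation criterion and multi-optimisation framework are not reproduced in this toy version:

```python
def consensus_clusters(labelings, threshold=0.5):
    """Merge items whose co-association (fraction of base clusterings that
    place them in the same cluster) exceeds the threshold, using union-find."""
    n = len(labelings[0])
    m = len(labelings)
    co = [[sum(1 for lab in labelings if lab[i] == lab[j]) / m
           for j in range(n)] for i in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if co[i][j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# three base clusterings of six items; items 0-2 and 3-5 mostly co-occur
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [0, 0, 0, 1, 1, 1]]
labels = consensus_clusters(base)
```

Even though the second base clustering misplaces item 2, the majority vote recovers the two underlying groups.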
NASA Astrophysics Data System (ADS)
Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue
2016-11-01
This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimised solution of the approach problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.
LFRic: Building a new Unified Model
NASA Astrophysics Data System (ADS)
Melvin, Thomas; Mullerworth, Steve; Ford, Rupert; Maynard, Chris; Hobson, Mike
2017-04-01
The LFRic project, named for Lewis Fry Richardson, aims to develop a replacement for the Met Office Unified Model in order to meet the challenges which will be presented by the next generation of exascale supercomputers. This project, a collaboration between the Met Office, STFC Daresbury and the University of Manchester, builds on the earlier GungHo project to redesign the dynamical core, in partnership with NERC. The new atmospheric model aims to retain the performance of the current ENDGame dynamical core and associated subgrid physics, while also enabling far greater scalability and flexibility to accommodate future supercomputer architectures. Design of the model revolves around the principle of 'separation of concerns', whereby the natural science aspects of the code can be developed without worrying about the underlying architecture, while machine-dependent optimisations can be carried out at a high level. These principles are put into practice through the development of an autogenerated Parallel Systems software layer (known as the PSy layer) using a domain-specific compiler called PSyclone. The prototype model includes a re-write of the dynamical core using a mixed finite element method, in which different function spaces are used to represent the various fields. It is able to run in parallel with MPI and OpenMP and has been tested on over 200,000 cores. In this talk an overview of both the natural science and computational science implementations of the model will be presented.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimisation loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimisation methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, including a state-of-the-art flow analysis code in an optimisation loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrisation, automated differentiation and parallel computing. Shape parametrisation allows faster optimisation by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimisation software to run on increasingly larger geometries.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project
2005-05-01
thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised ... absorber, and good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a ... through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of
A Method for Decentralised Optimisation in Networks
NASA Astrophysics Data System (ADS)
Saramäki, Jari
2005-06-01
We outline a method for distributed Monte Carlo optimisation of computational problems in networks of agents, such as peer-to-peer networks of computers. The optimisation and messaging procedures are inspired by gossip protocols and epidemic data dissemination, and are decentralised, i.e. no central overseer is required. In the outlined method, each agent follows simple local rules and seeks better solutions to the optimisation problem by Monte Carlo trials, as well as by querying other agents in its local neighbourhood. With a proper network topology, good solutions spread rapidly through the network for further improvement. Furthermore, the system retains its functionality even in realistic settings where agents are randomly switched on and off.
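The local rules described above can be sketched in a few lines: each agent alternates a Monte Carlo trial with a gossip query of a random peer. This toy version uses a fully connected network and a one-dimensional cost; the paper's topology and problem are not reproduced:

```python
import random

random.seed(1)

def f(x):
    """Cost to minimise; the (illustrative) global minimum is at x = 3."""
    return (x - 3.0) ** 2

n_agents = 20
solutions = [random.uniform(-10.0, 10.0) for _ in range(n_agents)]

for _ in range(200):                       # gossip rounds
    for i in range(n_agents):
        # local Monte Carlo trial: keep a random perturbation if it improves
        trial = solutions[i] + random.gauss(0.0, 0.5)
        if f(trial) < f(solutions[i]):
            solutions[i] = trial
        # gossip step: query a random peer and adopt its solution if better
        j = random.randrange(n_agents)
        if f(solutions[j]) < f(solutions[i]):
            solutions[i] = solutions[j]

best = min(solutions, key=f)
```

Good solutions spread epidemically through the population, so every agent's local search effectively starts from the current network-wide best.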
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, which shows the efficiency of FA in stacking sequence optimisation of laminated composite structures.
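A minimal continuous firefly algorithm is sketched below on a 2-D sphere function. The paper applies FA to discrete stacking-sequence design, which requires a permutation encoding not shown here; the parameter values are illustrative:

```python
import math
import random

random.seed(3)

def cost(p):
    """Illustrative objective (sphere function); lower is 'brighter'."""
    return p[0] ** 2 + p[1] ** 2

n, dim, iters = 15, 2, 150
beta0, gamma, alpha = 1.0, 0.04, 0.2     # attraction, absorption, random walk
pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
best = list(min(pop, key=cost))

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if cost(pop[j]) < cost(pop[i]):       # j is brighter: move i towards j
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)
                pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                          for a, b in zip(pop[i], pop[j])]
    alpha *= 0.99                                 # shrink the random walk over time
    cand = min(pop, key=cost)
    if cost(cand) < cost(best):
        best = list(cand)

best_val = cost(best)
```

Attraction falls off exponentially with distance, so fireflies self-organise into local search clusters, while the decaying random walk provides exploration early and refinement late.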
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication, in which communication and control updates occur only at discrete instants when a predefined condition is satisfied. Compared with time-driven distributed optimisation algorithms, the proposed algorithm therefore has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L size solutions support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
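NSGA-II's central ranking step, non-dominated sorting, can be sketched independently of the circuit application. The version below is the simple quadratic-time formulation rather than Deb's faster bookkeeping scheme, and the objective values are illustrative (e.g. minimise power and area), not real amplifier data:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(points):
    """Partition candidate indices into successive Pareto fronts."""
    fronts = []
    remaining = list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# illustrative two-objective values for five candidate sizings
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (5.0, 5.0)]
fronts = non_dominated_sort(objs)
```

Candidates in the first front are the current Pareto-optimal trade-offs; NSGA-II preferentially carries earlier fronts into the next generation.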
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on the genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence.
Optimisation of active suspension control inputs for improved vehicle handling performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor
2016-11-01
Active suspension is commonly considered under the framework of vertical vehicle dynamics control aimed at improvements in ride comfort. This paper uses a collocation-type control variable optimisation tool to investigate to what extent the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to solely FAS actuator configurations and three types of double lane-change manoeuvres. The obtained optimisation results are used to gain insights into the different control mechanisms that FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres the FAS performance is compared with the performance of different active steering and active differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations to investigate whether they can use their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) to further improve handling performance.
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
Calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. Therefore, this paper develops a coupled structure and scattering array factor model of the APAA based on the phase errors of the radiating elements generated by structural distortion and installation error of the array. To obtain the optimal radiating and scattering performance, an integrated optimisation model is built to optimise the installation height of all the radiating elements in the normal direction of the array, in which the particle swarm optimisation method is adopted and the gain loss and scattering array factor are selected as the fitness function. The simulation indicates that the proposed coupling model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, which demonstrates an important application value in the engineering design and structural evaluation of APAAs.
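The particle swarm optimisation step can be sketched generically. The sphere function below is only a stand-in for the paper's fitness (gain loss plus scattering array factor); the element heights, the APAA model and the parameter values are not reproduced:

```python
import random

random.seed(11)

def fitness(p):
    """Illustrative fitness to minimise (stand-in for the real objective)."""
    return p[0] ** 2 + p[1] ** 2

n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [list(p) for p in pos]            # each particle's personal best
gbest = list(min(pbest, key=fitness))     # swarm-wide best

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = list(pos[i])
            if fitness(pbest[i]) < fitness(gbest):
                gbest = list(pbest[i])

best_val = fitness(gbest)
```

Each particle is pulled towards its own best position and the swarm's best, with inertia carrying it through the search space; in the paper the position vector would hold the candidate installation heights of the radiating elements.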
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as the mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers ephedrine and pseudoephedrine in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
Shape Optimisation of Holes in Loaded Plates by Minimisation of Multiple Stress Peaks
2015-04-01
Witold Waldman and Manfred ... minimising the peak tangential stresses on multiple segments around the boundary of a hole in a uniaxially-loaded or biaxially-loaded plate. It is based ...
NASA Astrophysics Data System (ADS)
Harré, Michael S.
2013-02-01
Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.
Optimisation techniques in vaginal cuff brachytherapy.
Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A
2009-11-01
The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.
Holroyd, Kenneth A; Cottrell, Constance K; O'Donnell, Francis J; Cordingley, Gary E; Drew, Jana B; Carlson, Bruce W; Himawan, Lina
2010-09-29
To determine if the addition of preventive drug treatment (β blocker), brief behavioural migraine management, or their combination improves the outcome of optimised acute treatment in the management of frequent migraine. Randomised placebo controlled trial over 16 months from July 2001 to November 2005. Two outpatient sites in Ohio, USA. 232 adults (mean age 38 years; 79% female) with diagnosis of migraine with or without aura according to International Headache Society classification of headache disorders criteria, who recorded at least three migraines with disability per 30 days (mean 5.5 migraines/30 days), during an optimised run-in of acute treatment. Addition of one of four preventive treatments to optimised acute treatment: β blocker (n=53), matched placebo (n=55), behavioural migraine management plus placebo (n=55), or behavioural migraine management plus β blocker (n=69). The primary outcome was change in migraines/30 days; secondary outcomes included change in migraine days/30 days and change in migraine specific quality of life scores. Mixed model analysis showed statistically significant (P≤0.05) differences in outcomes among the four added treatments for both the primary outcome (migraines/30 days) and the two secondary outcomes (change in migraine days/30 days and change in migraine specific quality of life scores). The addition of combined β blocker and behavioural migraine management (-3.3 migraines/30 days, 95% confidence interval -3.2 to -3.5), but not the addition of β blocker alone (-2.1 migraines/30 days, -1.9 to -2.2) or behavioural migraine management alone (-2.2 migraines/30 days, -2.0 to -2.4), improved outcomes compared with optimised acute treatment alone (-2.1 migraines/30 days, -1.9 to -2.2).
For a clinically significant reduction (≥50%) in migraines/30 days, the number needed to treat for optimised acute treatment plus combined β blocker and behavioural migraine management was 3.1 compared with optimised acute treatment alone, 2.6 compared with optimised acute treatment plus β blocker, and 3.1 compared with optimised acute treatment plus behavioural migraine management. Results were consistent for the two secondary outcomes, and at both month 10 (the primary endpoint) and month 16. The addition of combined β blocker plus behavioural migraine management, but not the addition of β blocker alone or behavioural migraine management alone, improved outcomes of optimised acute treatment. Combined β blocker treatment and behavioural migraine management may improve outcomes in the treatment of frequent migraine. Clinical trials NCT00910689.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, at only a quarter of the computational resources used by the lowest-specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. When the GA terminated with a sub-optimal solution, that solution was similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA might outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. 
These results suggest that the best use of resources for the network design problem would be to improve the prior estimates of the flux uncertainties rather than to invest those resources in running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked on their utility and practicality.
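The IO-versus-GA trade-off above can be illustrated with a minimal subset-selection sketch; the station names, the coverage-based fitness proxy, and all numbers below are hypothetical stand-ins (in the study the fitness is the posterior uncertainty reduction from a Bayesian inversion):

```python
import itertools

# Toy proxy for uncertainty reduction: each candidate station "covers" a set of
# flux regions (hypothetical values), and network fitness is distinct coverage.
COVERAGE = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
    "E": {2, 5},
}

def reduction(network):
    """Fitness proxy: number of distinct regions covered by the network."""
    return len(set().union(*(COVERAGE[s] for s in network)))

def incremental_optimisation(k):
    """IO routine: grow the network one station at a time, greedily."""
    chosen = []
    while len(chosen) < k:
        best = max((s for s in COVERAGE if s not in chosen),
                   key=lambda s: reduction(chosen + [s]))
        chosen.append(best)
    return chosen

def exhaustive_best(k):
    """Global optimum by brute force (the role the GA approximates)."""
    return max(itertools.combinations(COVERAGE, k), key=reduction)

greedy = incremental_optimisation(3)
best = exhaustive_best(3)
print(reduction(greedy), reduction(best))
```

On this toy objective the greedy IO network matches the global optimum found by exhaustive search, mirroring the paper's finding that IO came within a small margin of the GA at a fraction of the cost.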
Devos, David; Moreau, Caroline; Maltête, David; Lefaucheur, Romain; Kreisler, Alexandre; Eusebio, Alexandre; Defer, Gilles; Ouk, Thavarak; Azulay, Jean-Philippe; Krystkowiak, Pierre; Witjas, Tatiana; Delliaux, Marie; Destée, Alain; Duhamel, Alain; Bordet, Régis; Defebvre, Luc; Dujardin, Kathy
2014-06-01
Even with optimal dopaminergic treatments, many patients with Parkinson's disease (PD) are frequently incapacitated by apathy prior to the development of dementia. We sought to establish whether rivastigmine's ability to inhibit acetyl- and butyrylcholinesterases could relieve the symptoms of apathy in dementia-free, non-depressed patients with advanced PD. We performed a multicentre, parallel, double-blind, placebo-controlled, randomised clinical trial (Protocol ID: 2008-002578-36; clinicaltrials.gov reference: NCT00767091) in patients with PD with moderate to severe apathy (despite optimised dopaminergic treatment) and without dementia. Patients from five French university hospitals were randomly assigned 1:1 to rivastigmine (transdermal patch of 9.5 mg/day) or placebo for 6 months. The primary efficacy criterion was the change over time in the Lille Apathy Rating Scale (LARS) score. 101 consecutive patients were screened, 31 were eligible and 16 and 14 participants were randomised into the rivastigmine and placebo groups, respectively. Compared with placebo, rivastigmine improved the LARS score (from -11.5 (-15/-7) at baseline to -20 (-25/-12) after treatment; F(1, 25)=5.2; p=0.031; adjusted effect size: -0.9). Rivastigmine also improved the caregiver burden and instrumental activities of daily living but failed to improve quality of life. No severe adverse events occurred in the rivastigmine group. Rivastigmine may represent a new therapeutic option for moderate to severe apathy in advanced PD patients with optimised dopaminergic treatment and without depression or dementia. These findings require confirmation in a larger clinical trial. Our results also confirmed that the presence of apathy can herald a pre-dementia state in PD.
Yun, Seong Dae
2017-01-01
The relatively high imaging speed of echo planar imaging (EPI) has led to its widespread use in dynamic MRI studies such as functional MRI. An approach to improve the performance of EPI, EPI with Keyhole (EPIK), has been presented previously and its use in fMRI was verified at 1.5T as well as 3T. The method has been proven to achieve a higher temporal resolution and smaller image distortions when compared to single-shot EPI. Furthermore, the performance of EPIK in the detection of functional signals was shown to be comparable to that of EPI. For these reasons, we were motivated to employ EPIK here for high-resolution imaging. The method was optimised to offer the highest possible in-plane resolution and slice coverage under the given imaging constraints: fixed TR/TE, FOV and acceleration factors for parallel imaging and partial Fourier techniques. The performance of EPIK was evaluated in direct comparison to the optimised protocol obtained from EPI. The two imaging methods were applied to visual fMRI experiments involving sixteen subjects. The results showed that enhanced spatial resolution with whole-brain coverage was achieved by EPIK (1.00 mm × 1.00 mm; 32 slices) when compared to EPI (1.25 mm × 1.25 mm; 28 slices). As a consequence, enhanced characterisation of functional areas was demonstrated with EPIK, particularly for relatively small brain regions such as the lateral geniculate nucleus (LGN) and superior colliculus (SC); overall, a significantly increased t-value and activation area were observed from EPIK data. Lastly, the use of EPIK for fMRI was validated with the simulation of different types of data reconstruction methods. PMID:28945780
Relativistic g-modes in rapidly rotating neutron stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaertig, Erich; Kokkotas, Kostas D.; Department of Physics, Aristotle University of Thessaloniki, Thessaloniki 54124
2009-09-15
We study the g-modes of fast rotating stratified neutron stars in the general relativistic Cowling approximation, where we neglect metric perturbations and where the background models take into account the buoyant force due to composition gradients. This is the first paper studying this problem in a general relativistic framework. In a recent paper [A. Passamonti, B. Haskell, N. Andersson, D. I. Jones, and I. Hawke, Mon. Not. R. Astron. Soc. 394, 730 (2009)], a similar study was performed within the Newtonian framework, where the authors presented results about the onset of CFS-unstable g-modes and the close connection between inertial and gravity modes for sufficiently high rotation rates and small composition gradients. This correlation arises from the interplay between the buoyant force which is the restoring force for g-modes and the Coriolis force which is responsible for the existence of inertial modes. In our relativistic treatment of the problem, we find an excellent qualitative agreement with respect to the Newtonian results.
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
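The Grey relational analysis step can be sketched as follows: normalise each response, compute the Grey relational coefficient of each run against the ideal, and average into a single grade. The run names, response values and the distinguishing coefficient ζ = 0.5 below are illustrative assumptions, not data from the study:

```python
# Hypothetical WEDM responses for four runs: MRR (larger-is-better),
# surface roughness Ra and kerf width (both smaller-is-better).
runs = {
    "run1": (2.1, 3.2, 0.30),
    "run2": (2.8, 2.9, 0.28),
    "run3": (1.9, 2.5, 0.26),
    "run4": (3.0, 3.5, 0.33),
}

def normalise(values, larger_better):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if larger_better else (hi - v) / (hi - lo)
            for v in values]

def grey_relational_grades(runs, larger_better=(True, False, False), zeta=0.5):
    cols = list(zip(*runs.values()))
    norm = [normalise(c, lb) for c, lb in zip(cols, larger_better)]
    # Deviation of each normalised response from the ideal value of 1.
    deltas = [[1 - x for x in col] for col in norm]
    dmin = min(min(col) for col in deltas)
    dmax = max(max(col) for col in deltas)
    # Grey relational coefficient, then average across responses per run.
    coeffs = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in col]
              for col in deltas]
    names = list(runs)
    return {n: sum(c[i] for c in coeffs) / len(coeffs)
            for i, n in enumerate(names)}

grades = grey_relational_grades(runs)
best = max(grades, key=grades.get)
print(best, round(grades[best], 3))
```

Here run3 wins despite the lowest MRR, because the grade trades all three responses off against each other, which is exactly the role the grade plays in the reported optimisation.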
Systemic solutions for multi-benefit water and environmental management.
Everard, Mark; McInnes, Robert
2013-09-01
The environmental and financial costs of inputs to water and environmental management technologies, and the unintended consequences arising from narrow consideration of their outputs, highlight the need for low-input solutions that optimise outcomes across multiple ecosystem services. Case studies examining the inputs and outputs associated with several ecosystem-based water and environmental management technologies reveal a range from those that differ little from conventional electro-mechanical engineering techniques to methods, such as integrated constructed wetlands (ICWs), designed explicitly as low-input systems optimising ecosystem service outcomes. All techniques present opportunities for further optimisation of outputs, and hence for greater cumulative public value. We define 'systemic solutions' as "…low-input technologies using natural processes to optimise benefits across the spectrum of ecosystem services and their beneficiaries". They contribute to sustainable development by averting unintended negative impacts and optimising benefits to all ecosystem service beneficiaries, increasing net economic value. Legacy legislation addressing issues in a fragmented way, associated 'ring-fenced' budgets and established management assumptions represent obstacles to implementing 'systemic solutions'. However, flexible implementation of legacy regulations recognising their primary purpose, rather than slavish adherence to detailed sub-clauses, may achieve greater overall public benefit through optimisation of outcomes across ecosystem services. Systemic solutions are not a panacea if applied merely as 'downstream' fixes, but are part of, and a means to accelerate, broader culture change towards more sustainable practice. 
This necessarily entails connecting a wider network of interests in the formulation and design of mutually-beneficial systemic solutions, including for example spatial planners, engineers, regulators, managers, farming and other businesses, and researchers working on ways to quantify and optimise delivery of ecosystem services. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving a desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework which shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu Search algorithm in designing the baffle to maximise the mixing of the two fluids.
Person-centred medicines optimisation policy in England: an agenda for research on polypharmacy.
Heaton, Janet; Britten, Nicky; Krska, Janet; Reeve, Joanne
2017-01-01
Aim To examine how patient perspectives and person-centred care values have been represented in documents on medicines optimisation policy in England. There has been growing support in England for a policy of medicines optimisation as a response to the rise of problematic polypharmacy. Conceptually, medicines optimisation differs from the medicines management model of prescribing in being based around the patient rather than processes and systems. This critical examination of current official and independent policy documents questions how central the patient is in them and whether relevant evidence has been utilised in their development. A documentary analysis of reports on medicines optimisation published by the Royal Pharmaceutical Society (RPS), The King's Fund and the National Institute for Health and Care Excellence (NICE) since 2013. The analysis draws on a non-systematic review of research on patient experiences of using medicines. Findings The reports varied in their inclusion of patient perspectives and person-centred care values, and in the extent to which they drew on evidence from research on patients' experiences of polypharmacy and medicines use. In the RPS report, medicines optimisation is represented as being a 'step change' from medicines management, in contrast to the other documents, which suggest that it is facilitated by the systems and processes that comprise the latter model. Only The King's Fund report considered evidence from qualitative studies of people's use of medicines. However, these studies are not without their limitations. 
We suggest five ways in which researchers could improve this evidence base and so inform the development of future policy: by facilitating reviews of existing research; conducting studies of patient experiences of polypharmacy and multimorbidity; evaluating medicines optimisation interventions; making better use of relevant theories, concepts and tools; and improving patient and public involvement in research and in guideline development.
Schutyser, M A I; Straatsma, J; Keijzer, P M; Verschueren, M; De Jong, P
2008-11-30
In the framework of a cooperative EU research project (MILQ-QC-TOOL), a web-based modelling tool (WebSim-MILQ) was developed for optimisation of thermal treatments in the dairy industry. The web-based tool enables optimisation of thermal treatments with respect to product safety, quality and costs. It can be applied to existing products and processes but also to reduce time to market for new products. Important aspects of the tool are its user-friendliness and its specifications customised to the needs of small dairy companies. To challenge the web-based tool it was applied for optimisation of thermal treatments in 16 dairy companies producing yoghurt, fresh cream, chocolate milk and cheese. Optimisation with WebSim-MILQ resulted in concrete improvements with respect to risk of microbial contamination, cheese yield, fouling and production costs. In this paper we illustrate the use of WebSim-MILQ for optimisation of a cheese milk pasteurisation process, where we could increase the cheese yield (one extra cheese for every 100 cheeses produced from the same amount of milk) and reduce the risk of contamination of pasteurised cheese milk with thermoresistant streptococci from critical to negligible. In another case we demonstrate the advantage of changing from an indirect to a direct heating method for a UHT process, resulting in 80% less fouling while improving product quality and maintaining product safety.
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction forms the very basic step in research on natural products for drug discovery. A poorly optimised and planned extraction methodology can jeopardise the entire mission. To provide a vivid picture of different chemometric tools and planning for process optimisation and method development in extraction of botanical material, with emphasis on microwave-assisted extraction (MAE) of botanical material. A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover what the significant extraction factors were. Optimising a response by fine-tuning those factors, experimental design or statistical design of experiment (DoE), which is a core area of study in chemometrics, was then used for statistical analysis and interpretations. In this review a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, are presented. A detailed study on various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
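A minimal PSO sketch of the parameter-extraction idea follows. The quadratic I-V "model", the synthetic measured data and all swarm settings are assumptions for illustration only; they are not the surface potential model used in the study:

```python
import random

random.seed(0)

# Hypothetical "measured" I-V data generated from a simple square-law model
# I = k*(V - Vth)^2; the extraction task is to recover (k, Vth) by minimising
# the squared error between measured and modelled currents.
V = [1.0, 1.5, 2.0, 2.5, 3.0]
TRUE_K, TRUE_VTH = 0.5, 0.7
I_meas = [TRUE_K * (v - TRUE_VTH) ** 2 for v in V]

def error(p):
    k, vth = p
    return sum((k * (v - vth) ** 2 - i) ** 2 for v, i in zip(V, I_meas))

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]          # personal best positions
    gbest = min(pbest, key=error)        # swarm (global) best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # Inertia plus attraction towards personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if error(pos[i]) < error(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=error)
    return gbest

k, vth = pso()
print(k, vth, error([k, vth]))
```

The fitness is exactly the measured-versus-modelled error the abstract describes; a variant ABC would differ only in how candidate positions are proposed, not in this objective.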
NASA Astrophysics Data System (ADS)
Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang
2018-04-01
This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
Bergquist, J; Vona, M J; Stiller, C O; O'Connor, W T; Falkenberg, T; Ekman, R
1996-03-01
The use of capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) for the analysis of microdialysate samples from the periaqueductal grey matter (PAG) of freely moving rats is described. By employing 3-(4-carboxybenzoyl)-2-quinoline-carboxaldehyde (CBQCA) as a derivatization agent, we simultaneously monitored the concentrations of eight amino acids (arginine, glutamine, valine, gamma-amino-n-butyric acid (GABA), alanine, glycine, glutamate, and aspartate), with nanomolar and subnanomolar detection limits. Two of the amino acids (GABA and glutamate) were analysed in parallel by conventional high-performance liquid chromatography (HPLC) in order to directly compare the two analytical methods. Other CE methods for analysis of microdialysate have been described previously, and this improved method offers greater sensitivity, ease of use, and the possibility to monitor several amino acids simultaneously. By using this technique together with an optimised form of the microdialysis technique, the tiny sample consumption and the improved detection limits permit the detection of fast and transient transmitter changes.
A forgotten epidemic that changed medicine: measles in the US Army, 1917-18.
Morens, David M; Taubenberger, Jeffery K
2015-07-01
A US army-wide measles outbreak in 1917-18 resulted in more than 95,000 cases and more than 3000 deaths. An outbreak investigation implicated measles and streptococcal co-infections in most deaths, and also characterised a parallel epidemic of primary streptococcal pneumonia in soldiers without measles. For the first time, the natural history and pathogenesis of these diseases could be well characterised by a broad interdisciplinary research effort with hundreds of military and civilian physicians and scientists representing disciplines such as internal medicine, pathology, microbiology, radiology, surgery, preventive medicine, and rehabilitation medicine. A clear conceptualisation of bronchopneumonia resulting from viral-bacterial interactions between pathogens was developed, and prevention and treatment approaches were developed and optimised in real time. These approaches were used in the 1918 influenza pandemic, which began as the measles epidemic waned. The outbreak findings remain relevant to the understanding and medical management of severe pneumonia. Copyright © 2015 Elsevier Ltd. All rights reserved.
Environmental Factors Associated with Success Rates of Australian Stock Herding Dogs
Arnott, Elizabeth R.; Early, Jonathan B.; Wade, Claire M.; McGreevy, Paul D.
2014-01-01
This study investigated the current management practices associated with stock herding dogs on Australian farms. A parallel goal was to determine whether these practices and the characteristics of the dog handlers were associated with success rates. Success rate refers to the proportion of dogs acquired by the farmer that were retained as working dogs. Data on a total of 4,027 dogs were obtained through The Farm Dog Survey which gathered information from 812 herding dog owners around Australia. Using logistic regression, significant associations were identified between success rate and seven variables: dog breed, housing method, trial participation, age of the dog at acquisition, electric collar use, hypothetical maximum treatment expenditure and the conscientiousness score of the owner's personality. These findings serve as a guide to direct further research into ways of optimising herding dog performance and welfare. They emphasise the importance of not only examining the genetic predispositions of the working dog but also the impact the handler can have on a dog's success in the workplace. PMID:25136828
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed, and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state-of-the-art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry-scale problems.
Is ICRP guidance on the use of reference levels consistent?
Hedemann-Jensen, Per; McEwan, Andrew C
2011-12-01
In ICRP 103, which has replaced ICRP 60, it is stated that no fundamental changes have been introduced compared with ICRP 60. This is true except that reference levels in emergency and existing exposure situations seem to be applied inconsistently, both in ICRP 103 and in the related publications ICRP 109 and ICRP 111. ICRP 103 emphasises that focus should be on the residual doses after the implementation of protection strategies in emergency and existing exposure situations. If possible, the result of an optimised protection strategy should bring the residual dose below the reference level. Thus the reference level represents the maximum acceptable residual dose after an optimised protection strategy has been implemented. It is not an 'off-the-shelf item' that can be set independently of the prevailing situation. It should be determined as part of the process of optimising the protection strategy. If not, protection would be sub-optimised. However, in ICRP 103 some inconsistent concepts have been introduced, e.g. in paragraph 279 which states: 'All exposures above or below the reference level should be subject to optimisation of protection, and particular attention should be given to exposures above the reference level'. If, in fact, all exposures above and below reference levels are subject to the process of optimisation, reference levels appear superfluous. It could be considered that if optimisation of protection below a fixed reference level is necessary, then the reference level has been set too high at the outset. Up until the last phase of the preparation of ICRP 103 the concept of a dose constraint was recommended to constrain the optimisation of protection in all types of exposure situations. In the final phase, the term 'dose constraint' was changed to 'reference level' for emergency and existing exposure situations. However, it seems as if in ICRP 103 it was not fully recognised that dose constraints and reference levels are conceptually different. 
The use of reference levels in radiological protection is reviewed. It is concluded that the recommendations in ICRP 103 and related ICRP publications seem to be inconsistent regarding the use of reference levels in existing and emergency exposure situations.
NASA Astrophysics Data System (ADS)
Korabel, Vasily; She, Jun; Huess, Vibeke; Woge Nielsen, Jacob; Murawsky, Jens; Nerger, Lars
2017-04-01
The potential of an efficient data assimilation (DA) scheme to improve model forecast skill has been successfully demonstrated by many operational centres around the world. The Baltic-North Sea region is one of the most heavily monitored seas. Ferryboxes, buoys, ADCP moorings, shallow water Argo floats, and research vessels are providing more and more near-real-time observations. Coastal altimetry is now providing an increasing amount of high-resolution sea level observations, which will be significantly expanded by the launch of the SWOT satellite in the coming years. This will turn operational DA into a valuable tool for improving forecast quality in the region. This motivated us to focus on advancing DA for the Baltic Monitoring and Forecasting Centre (BAL MFC) in order to create a common framework for operational data assimilation in the Baltic Sea. We have implemented the HBM-PDAF system, based on the Parallel Data Assimilation Framework (PDAF), a highly versatile and optimised parallel suite with a choice of sequential schemes originally developed at AWI, and the hydrodynamic HIROMB-BOOS Model (HBM). In the initial phase, only satellite Sea Surface Temperature (SST) Level 3 data have been assimilated. Several related aspects are discussed, including improvements of the forecast quality for both surface and subsurface fields, the estimation of ensemble-based forecast error covariance, as well as possibilities of assimilating new types of observations, such as in-situ salinity and temperature profiles, coastal altimetry, and ice concentration.
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Electricity generation based on hybrid energy systems (HESs) has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly grounded in an optimisation stage. This article discusses an optimal unit sizing model with the objective of minimising the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewables (HOMER) for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.
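A toy unit-sizing sketch in the spirit of the article: enumerate candidate component sizes and keep the cheapest configuration, by total net present cost (NPC), that meets demand. All cost figures, yields and constraints below are invented for illustration:

```python
# Hypothetical daily demand and component economics (illustrative only).
DEMAND_KWH = 50

def npc(pv_kw, diesel_kw, batt_kwh):
    """Total net present cost proxy: capital plus a lifetime fuel term."""
    capital = 1000 * pv_kw + 500 * diesel_kw + 300 * batt_kwh
    fuel = 2000 * diesel_kw
    return capital + fuel

def feasible(pv_kw, diesel_kw, batt_kwh):
    """Meet daily demand (4 sun-hours assumed) and store PV for night use."""
    supply = 4 * pv_kw + 24 * diesel_kw   # kWh/day
    return supply >= DEMAND_KWH and batt_kwh >= 2 * pv_kw

# Exhaustive enumeration over a small discrete design space.
configs = [(pv, dg, bt)
           for pv in range(0, 16) for dg in range(0, 4) for bt in range(0, 31)]
best = min((c for c in configs if feasible(*c)), key=lambda c: npc(*c))
print(best, npc(*best))
```

Tools such as HOMER search a far richer model the same way in principle: simulate each candidate configuration, discard infeasible ones, and rank the rest by total net present cost.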
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
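A minimal sketch of the two-gate Hodgkin-Huxley-type kinetics that form the building block of such a generic ionic model; the half-activation voltages, slopes, time constants and conductance below are illustrative assumptions, not fitted values from the study:

```python
import math

def gate_inf(v, v_half, slope):
    """Voltage-dependent steady state of a gate (Boltzmann sigmoid)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

def step_gate(x, v, v_half, slope, tau, dt):
    """Forward-Euler relaxation of a gate towards its steady state."""
    return x + dt * (gate_inf(v, v_half, slope) - x) / tau

def ionic_current(v, m, h, g_max=1.0, e_rev=-85.0):
    """I = g_max * m * h * (V - E_rev): activation gate m, inactivation gate h."""
    return g_max * m * h * (v - e_rev)

# One time step at a depolarised membrane potential (mV), dt in ms:
v = -20.0
m = step_gate(0.0, v, v_half=-35.0, slope=5.0, tau=1.0, dt=0.1)   # activates
h = step_gate(1.0, v, v_half=-60.0, slope=-5.0, tau=10.0, dt=0.1) # inactivates
I = ionic_current(v, m, h)
print(m, h, I)
```

In the generic model a user-defined number of such currents is summed, and an optimiser adjusts the gate parameters of all currents simultaneously until simulated action potentials match the recorded waveforms.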
Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.
Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir
2014-01-01
Hospital waiting times are considerably long, with no signs of reducing any time soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes within healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
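The queueing-node idea can be sketched as follows. The service names echo the examples above, but the queue lengths and mean service times are hypothetical; when the remaining services in a patient's workflow are order-independent, re-orchestration simply visits them in order of current expected wait:

```python
from dataclasses import dataclass

@dataclass
class Service:
    """A healthcare service modelled as a queueing node."""
    name: str
    queue_len: int
    mean_service_min: float

    @property
    def expected_wait(self):
        # Simple M/M/1-style proxy: people ahead of you times mean service time.
        return self.queue_len * self.mean_service_min

def reorchestrate(pending):
    """Order-independent services are visited shortest-expected-wait first."""
    return sorted(pending, key=lambda s: s.expected_wait)

services = [
    Service("X-ray", queue_len=5, mean_service_min=10),   # expected wait 50 min
    Service("CT", queue_len=2, mean_service_min=20),      # expected wait 40 min
    Service("MRI", queue_len=1, mean_service_min=45),     # expected wait 45 min
]
order = [s.name for s in reorchestrate(services)]
print(order)
```

A dynamic version would re-run `reorchestrate` after every completed step, since queue lengths change while the patient moves through the workflow.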
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand caused by a continuously growing population. To ensure sustainable management of those regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both water quality and water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. Thereby, the behaviour of farms is described by crop-water production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily to the south Batinah region of Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to contradicting objectives, such as profit-oriented agriculture vs. aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, environment, and socio-economic development.
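The two-level decomposition described above, an analytical inner loop that allocates a fixed abstraction among crops and a simulation-based outer loop that chooses the abstraction itself, can be sketched as follows. The square-root production function, the quadratic sustainability penalty and all numbers are illustrative assumptions, not the paper's calibrated ANN model:

```python
import math

# Inner problem (analytical): for a fixed abstraction W, allocate water among
# crops with concave production profit_i(w_i) = a_i * sqrt(w_i). The Lagrange
# conditions give the closed form w_i = W * a_i^2 / sum_j a_j^2.
def inner_allocate(a, W):
    s = sum(ai * ai for ai in a)
    alloc = [W * ai * ai / s for ai in a]
    profit = sum(ai * math.sqrt(wi) for ai, wi in zip(a, alloc))
    return alloc, profit

# Outer problem (simulation-based): choose W, trading farm profit against an
# aquifer-sustainability penalty (a quadratic surrogate standing in for the
# paper's artificial-neural-network aquifer response).
def outer_optimise(a, W_max, penalty=0.02, steps=1000):
    best = (0.0, float("-inf"))
    for k in range(steps + 1):
        W = W_max * k / steps
        _, profit = inner_allocate(a, W)
        obj = profit - penalty * W * W
        if obj > best[1]:
            best = (W, obj)
    return best

a = [3.0, 1.0, 2.0]            # illustrative crop productivity coefficients
W_star, _ = outer_optimise(a, W_max=100.0)
```

The outer loop only ever evaluates the inner problem as a black box, which is what makes the decomposition faster and more reliable than optimising all allocation variables at once.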
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric-reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To that end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric-reinforced composites, with the results correlated against the ultimate strains, which demonstrates the efficiency of the composite structure.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping, are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modelling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time.
These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
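The five-step receding-horizon procedure above can be sketched as a toy loop. The reservoir constants, the inflow generator and the constant-release grid search are simplifying stand-ins for the ENSO-conditioned scenarios, the MOGA and the base-policy tail costs:

```python
import random

random.seed(1)

S_MAX, S_MIN = 100.0, 10.0   # storage bounds (hypothetical units)
HORIZON = 9                  # nine-month lead time, matching the ENSO forecasts

def forecast_inflows():
    # (i) stand-in for the ENSO-conditioned synthetic inflow scenarios
    return [20.0 + 5.0 * random.random() for _ in range(HORIZON)]

def optimise_releases(storage, inflows):
    # (ii)-(iii) tiny grid search over a constant release; the paper instead
    # runs a MOGA over the nine individual monthly releases
    best_r, best_val = 0.0, float("-inf")
    for r in [5.0 + 2.5 * k for k in range(13)]:
        s, benefit, feasible = storage, 0.0, True
        for q in inflows:
            s = min(S_MAX, s + q - r)      # spill above S_MAX is lost
            if s < S_MIN:
                feasible = False
                break
            benefit += r ** 0.5            # concave hydropower benefit
        if feasible and benefit > best_val:
            best_r, best_val = r, benefit
    return best_r

storage, releases, storages = 60.0, [], []
for month in range(24):
    scenario = forecast_inflows()
    r = optimise_releases(storage, scenario)         # (ii)-(iii)
    storage = min(S_MAX, storage + scenario[0] - r)  # (iv) implement month 1
    releases.append(r)                               # (v) roll forward
    storages.append(storage)
```

Only the first optimised release is ever implemented; the horizon then slides forward one month and the whole optimisation is repeated with updated state and forecasts.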
On the dynamic rounding-off in analogue and RF optimal circuit sizing
NASA Astrophysics Data System (ADS)
Kotti, Mouna; Fakhfakh, Mourad; Fino, Maria Helena
2014-04-01
Frequently used approaches to solving discrete multivariable optimisation problems consist of computing solutions using a continuous optimisation technique; then, using heuristics, the variables are rounded off to their nearest available discrete values to obtain a discrete solution. Indeed, in many engineering problems, and particularly in analogue circuit design, component values such as the geometric dimensions of the transistors, the number of fingers in an integrated capacitor or the number of turns in an integrated inductor cannot be chosen arbitrarily, since they have to obey technology sizing constraints. However, rounding off the variable values a posteriori can lead to infeasible solutions (solutions that are located too close to the feasible-region frontier) or to degradation of the obtained results (expulsion from the neighbourhood of a 'sharp' optimum), depending on how the added perturbation affects the solution. Discrete optimisation techniques, such as the dynamic rounding-off (DRO) technique, are therefore needed to overcome this situation. In this paper, we deal with an improvement of the DRO technique: we propose a particle swarm optimisation (PSO)-based DRO technique and show, via some analogue and RF examples, the necessity of implementing such a routine in continuous optimisation algorithms.
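A minimal illustration of why a posteriori rounding fails, assuming a toy quadratic objective and a linear constraint (not the circuit-sizing problems of the paper, and a plain floor/ceil neighbourhood search rather than the PSO-based DRO):

```python
import itertools
import math

# Toy discrete sizing problem: minimise f subject to x + y <= 4.9, with x, y
# restricted to an integer grid (a stand-in for technology-imposed sizes).
def f(x, y):
    return (x - 2.6) ** 2 + (y - 2.3) ** 2

def feasible(x, y):
    return x + y <= 4.9

x_cont, y_cont = 2.6, 2.3                # continuous optimum (feasible: 4.9 <= 4.9)
naive = (round(x_cont), round(y_cont))   # (3, 2): infeasible, since 3 + 2 > 4.9

# A posteriori neighbourhood search: examine all floor/ceil combinations and
# keep the best feasible one (a much-simplified stand-in for dynamic rounding).
candidates = itertools.product(
    {math.floor(x_cont), math.ceil(x_cont)},
    {math.floor(y_cont), math.ceil(y_cont)},
)
best = min((c for c in candidates if feasible(*c)), key=lambda c: f(*c))
```

Naive nearest-value rounding lands outside the feasible region, while even this crude discrete search recovers a feasible point; the DRO idea is to make that discrete search part of the optimisation loop itself.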
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference-vectors (reference directions) regarding the objectives to be optimised on the basis of uniform design strategy firstly. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach, meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process for guiding the search procedure towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. In particular, lots of test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate the effectiveness and feasibility of the algorithm.
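The core decomposition idea, one scalar sub-problem per evenly spread preference vector, can be illustrated on a toy bi-objective problem. The weighted Tchebycheff scalarisation and the grid solver below are simplifying stand-ins for the boundary-intersection approach and the cooperative evolutionary search of MOEA/D:

```python
# Bi-objective toy problem: f1(x) = x^2, f2(x) = (x - 2)^2 on [0, 2].
# Each evenly spread weight vector defines one scalar sub-problem; solving
# all of them traces out an approximation of the Pareto front.
def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def solve_subproblem(w1, w2, grid=2001):
    # Weighted Tchebycheff scalarisation (the ideal point is (0, 0) here);
    # a grid search replaces the evolutionary solver for clarity.
    xs = [2.0 * k / (grid - 1) for k in range(grid)]
    return min(xs, key=lambda x: max(w1 * f1(x), w2 * f2(x)))

n = 11
weights = [(k / (n - 1), 1.0 - k / (n - 1)) for k in range(n)]
# A tiny epsilon keeps degenerate zero weights out of the scalarisation.
front = [solve_subproblem(w1 + 1e-6, w2 + 1e-6) for w1, w2 in weights]
```

Sweeping the weight from one objective to the other moves the sub-problem optimum monotonically across the Pareto set, which is what lets preference information steer the search towards a chosen region of the front.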
Optimisation of the Management of Higher Activity Waste in the UK - 13537
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Ciara; Buckley, Matthew
2013-07-01
The Upstream Optioneering project was created in the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all the phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage, and transport to the final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK tax payer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared in other countries. It is therefore likely that the opportunities identified may also apply elsewhere, with the potential for sharing information to enable value to be shared. (authors)
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw-rate-error reduction indices. The FAS control performance is compared to the performances of the standard ESC system, an optimal active brake system and a combined FAS and ESC configuration. Second, the optimisation approach is applied to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficients are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and of limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctly non-uniform friction conditions.
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.
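The signed powers-of-two encoding that makes the filter multiplier-less can be sketched with a greedy residual quantiser. The paper evolves such representations with a GA under a combined response/hardware cost; the greedy routine below is only an illustrative stand-in for the encoding itself:

```python
import math

def spt_quantise(c, n_terms):
    """Greedy signed powers-of-two approximation of a coefficient c.

    Returns a list of (sign, exponent) terms whose sum approximates c, so the
    product c*x reduces to shifts and adds in hardware. (A simplified stand-in
    for the GA-encoded SPT coefficients in the paper.)
    """
    terms, r = [], c
    for _ in range(n_terms):
        if r == 0:
            break
        e = round(math.log2(abs(r)))      # nearest power of two to the residual
        s = 1 if r > 0 else -1
        terms.append((s, e))
        r -= s * 2.0 ** e
    return terms

def spt_value(terms):
    return sum(s * 2.0 ** e for s, e in terms)

coeff = 0.40625                           # = 2^-1 - 2^-3 + 2^-5 exactly
terms = spt_quantise(coeff, 3)
```

The number of terms per coefficient is exactly the adder count that the hardware-cost part of the fitness function penalises.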
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin
2007-05-09
A rapid and efficient oil-in-water microemulsion liquid chromatographic (MELC) method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. Validated assay results and the overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.
Optimisation of the supercritical extraction of toxic elements in fish oil.
Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M
2014-01-01
This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO2 flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models with the operation conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8 °C, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
General strategic bidding has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Evolving aerodynamic airfoils for wind turbines through a genetic algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI
2017-01-01
Nowadays, genetic algorithms stand out for airfoil optimisation, thanks to the virtues of their mutation and crossover techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are the maximisation of both aerodynamic efficiency and lift coefficient, while minimising the drag coefficient. The algorithm shows great improvements in computational cost, as well as high performance, obtaining airfoils optimised for Mexico City's specific wind conditions from generic wind-turbine airfoils designed for higher Reynolds numbers in a few iterations.
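The arithmetic crossover rule, children as convex combinations of their parents, can be sketched in a minimal GA. The quadratic fitness, bounds, population size and mutation scale below are illustrative assumptions standing in for the aerodynamic lift/drag objectives:

```python
import random

random.seed(42)

# Whole-arithmetic crossover: children are convex combinations of the parents,
# which automatically keeps offspring inside a convex search domain.
def arithmetic_crossover(p1, p2):
    lam = random.random()
    c1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2

# Toy fitness to be minimised (a stand-in for the aerodynamic coefficients;
# optimum at x = [1, 1, 1, 1]).
def fitness(x):
    return sum((xi - 1.0) ** 2 for xi in x)

def ga(dim=4, pop_size=20, generations=200, sigma=0.1):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(children) < pop_size // 2:
            c1, c2 = arithmetic_crossover(*random.sample(parents, 2))
            children.append([xi + random.gauss(0, sigma) for xi in c1])
            if len(children) < pop_size // 2:
                children.append(c2)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

Because elitism keeps the best parents untouched, the best-found fitness never worsens, while crossover contracts the population towards good regions and mutation supplies local search.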
Examples of the Use of Optimisation Techniques in the Structural Analysis of Reactors
2003-03-01
Geometric optimisation (fixed architecture). Unlike the automotive and aircraft sectors, most reactor components ... use nonlinear material behaviour laws together with large-displacement assumptions. The optimisation study consists of minimising ... a simple disc, and three parameters influencing rupture were selected: the disc web thickness E1, the height L3 and the ...
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the fewest experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction: Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have been identified as predictors of success in acrobatics. The purpose of the present study was to evaluate the relative contribution of these parameters to performance throughout expertise- or optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for the intrinsic dichotomy between the accessibility of its attempt and the complexity of its mastery. Methods: Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion: Variation in release state contributed to performance only in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded to novice optimised, advanced recorded and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion: Findings suggest that novices should be coached on both contact and aerial technique. Conversely, improved aerial technique mainly helped advanced gymnasts increase their performance. For both, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalised to any aerial skill learning investigation. PMID:28422954
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
Cultural-based particle swarm for dynamic optimisation problems
NASA Astrophysics Data System (ADS)
Daneshyari, Moayed; Yen, Gary G.
2012-07-01
Many practical optimisation problems involve uncertainties; a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes over time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps in selecting the leading particles at three levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most of the other selected state-of-the-art dynamic PSO heuristics.
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the movement of each particle is set under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with state-of-the-art algorithms.
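For orientation, here is a minimal global-best PSO on an unconstrained sphere function. This is not the authors' SHPSO: the shrinking-hypersphere movement rule and the parameter-free constraint handling are replaced by a plain decaying-inertia velocity update and box clamping, so the sketch only shows the baseline that SHPSO modifies:

```python
import random

random.seed(7)

def sphere(x):
    # Unconstrained test function with minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso(f, dim=3, n=20, iters=300, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # inertia decays, shrinking the steps
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

best = pso(sphere)
```

SHPSO's contribution is to replace the free velocity update with movement constrained to hyperspheres whose radii shrink over iterations, tightening exploration into exploitation.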
Achieving optimal SERS through enhanced experimental design
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.
2016-01-01
One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for an optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin-film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.
Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N
2016-07-01
In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
NASA Astrophysics Data System (ADS)
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-03-01
To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell’s equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.
NASA Astrophysics Data System (ADS)
Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh
2016-09-01
In this paper, the problem of microgrid optimisation with storage is addressed in a comprehensive way rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems have been studied and employed in microgrids to store energy and feed it back into the system when required. A value function dependent on line losses, the URFC system's operational cost and the stored energy at the end of the day is defined here. The function is highly complex, nonlinear and multidimensional in nature. Therefore, heuristic optimisation techniques in combination with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and are found to improve consistently as the solution strategy evolves.
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
NASA Astrophysics Data System (ADS)
Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.
2018-02-01
The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the design. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared with the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
NASA Astrophysics Data System (ADS)
Grady, A.; Makarigakis, A.; Gersonius, B.
2015-09-01
This paper investigates how to optimise decentralisation for effective disaster risk reduction (DRR) in developing states. There is currently limited literature on empirical analysis of decentralisation for DRR. This paper evaluates decentralised governance for DRR in the case study of Indonesia and provides recommendations for its optimisation. Wider implications are drawn to optimise decentralisation for DRR in developing states more generally. A framework to evaluate the institutional and policy setting was developed which necessitated the use of a gap analysis, desk study and field investigation. Key challenges to decentralised DRR include capacity gaps at lower levels, low compliance with legislation, disconnected policies, issues in communication and coordination and inadequate resourcing. DRR authorities should lead coordination and advocacy on DRR. Sustainable multistakeholder platforms and civil society organisations should fill the capacity gap at lower levels. Dedicated and regulated resources for DRR should be compulsory.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
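The core idea above, predicting the equilibrium volume from very few single-point energies, can be illustrated with a deliberately simplified stand-in: a quadratic fit through three energy-volume samples. The paper fits a physical equation of state (e.g. Birch-Murnaghan); the function and sample values below are an illustrative sketch, not the authors' implementation.

```python
def parabola_minimum(samples):
    """Predict an equilibrium volume from three (volume, energy) samples by
    fitting E(V) ~ a*V^2 + b*V + c and returning the stationary point -b/(2a).
    A production workflow would substitute a physical EOS such as
    Birch-Murnaghan for the parabola."""
    (v1, e1), (v2, e2), (v3, e3) = samples
    # Coefficients of the interpolating parabola in Lagrange form.
    denom = (v1 - v2) * (v1 - v3) * (v2 - v3)
    a = (v3 * (e2 - e1) + v2 * (e1 - e3) + v1 * (e3 - e2)) / denom
    b = (v3 ** 2 * (e1 - e2) + v2 ** 2 * (e3 - e1) + v1 ** 2 * (e2 - e3)) / denom
    return -b / (2.0 * a)

# Hypothetical total energies (eV) at three trial cell volumes (A^3):
v0 = parabola_minimum([(38.0, -11.90), (40.0, -12.00), (42.0, -11.95)])
```

Successive refinement, as in the paper, would re-sample around the predicted `v0` and re-fit until the prediction stabilises.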
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
Coil optimisation for transcranial magnetic stimulation in realistic head geometry.
Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J
Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise in coil temperature. Our objective was to develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a further contribution from our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool
NASA Astrophysics Data System (ADS)
Helle, K. B.; Müller, T. O.; Astrup, P.; Dyve, J. E.
2014-05-01
Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often chosen using regular grids or according to administrative constraints. Nowadays, however, the choice can be based on more realistic risk assessment, as it is possible to simulate potential radioactive plumes. To support sensor planning, we developed the DETECT Optimisation Tool (DOT) within the scope of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given monitoring network for early detection of radioactive plumes or for the creation of dose maps. The DOT is implemented as a stand-alone easy-to-use JAVA-based application with a graphical user interface and an R backend. Users can run evaluations and optimisations, and display, store and download the results. The DOT runs on a server and can be accessed via common web browsers; it can also be installed locally.
Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J
2017-01-01
Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke footwear which is personalised. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research, performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). An ANOVA was then used to understand the effect of the two design features, and descriptive statistics were used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced (p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and also in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected.
In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient by patient basis in a low risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure reducing footwear.
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and prematurity. Improving the performance of GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs quite well and that CAC10-GA outperforms AC10-GA.
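For reference, the standard real-coded operators that the baseline AC10-GA combines, arithmetic crossover and uniform mutation, can be sketched as below. The compound operator CAC is the paper's own contribution and is not reproduced here.

```python
import random

def arithmetic_crossover(p1, p2, rng):
    """Standard real-coded arithmetic crossover: the two children are convex
    combinations of the parents, with one mixing weight drawn per mating."""
    lam = rng.random()
    c1 = [lam * a + (1.0 - lam) * b for a, b in zip(p1, p2)]
    c2 = [(1.0 - lam) * a + lam * b for a, b in zip(p1, p2)]
    return c1, c2

def uniform_mutation(x, bounds, rate, rng):
    """Uniform mutation: each gene is replaced by a fresh uniform draw from
    its bounds with probability `rate`, otherwise kept unchanged."""
    return [rng.uniform(lo, hi) if rng.random() < rate else g
            for g, (lo, hi) in zip(x, bounds)]
```

Note that arithmetic crossover keeps every child gene inside the interval spanned by the corresponding parent genes, so it cannot by itself explore outside the population's bounding box; mutation supplies that exploration.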
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. The model parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimates or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissue. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently outperforms all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissue, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
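The multi-pole Debye model being fitted has a standard closed form; a minimal evaluator is sketched below. The fitting genetic algorithm itself is not reproduced, and the parameter values in the test are illustrative, not taken from the paper's tissue tables.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, poles, sigma=0.0):
    """Complex relative permittivity of a multi-pole Debye model:
        eps(w) = eps_inf + sum_k d_eps_k / (1 + j*w*tau_k) + sigma / (j*w*EPS0)
    where `poles` is a list of (delta_eps, tau_seconds) pairs and sigma is a
    static conductivity term in S/m."""
    w = 2.0 * math.pi * freq_hz
    eps = complex(eps_inf)
    for d_eps, tau in poles:
        eps += d_eps / (1.0 + 1j * w * tau)
    if sigma:
        eps += sigma / (1j * w * EPS0)
    return eps
```

An optimiser fits `eps_inf`, the `(delta_eps, tau)` pairs and `sigma` so that this function matches measured permittivity across the band of interest.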
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
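The trade-off exploration described above rests on Pareto dominance; a minimal sketch of the comparison and of extracting the non-dominated front (all names illustrative, objectives assumed to be minimised):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Multi-objective evolutionary algorithms such as NSGA-II build their selection pressure on exactly this relation, which is what lets a single run return a whole set of behavioural trade-offs rather than one scalar-fitness optimum.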
Saikia, Sangeeta; Mahnot, Nikhil Kumar; Mahanta, Charu Lata
2015-03-15
Optimisation of the extraction of polyphenols from star fruit (Averrhoa carambola) pomace using response surface methodology was carried out. Two variables, viz. temperature (°C) and ethanol concentration (%), with five levels (-1.414, -1, 0, +1 and +1.414) were used to build the optimisation model using a central composite rotatable design, where -1.414 and +1.414 are the axial points, -1 and +1 the factorial points, and 0 the centre point of the design. A temperature of 40°C and an ethanol concentration of 65% were the optimised conditions for the response variables of total phenolic content, ferric reducing antioxidant capacity and 2,2-diphenyl-1-picrylhydrazyl scavenging activity. The reverse-phase high-pressure liquid chromatography chromatogram of the polyphenol extract showed eight phenolic acids and ascorbic acid. The extract was then encapsulated with maltodextrin (⩽ DE 20) by spray and freeze drying methods at three different concentrations. The highest encapsulating efficiency was obtained in freeze-dried encapsulates (78-97%). The optimised model can be used for polyphenol extraction from star fruit pomace, and the microencapsulates can be incorporated into different food systems to enhance their antioxidant properties. Copyright © 2014 Elsevier Ltd. All rights reserved.
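The coded levels quoted above (±1, ±1.414, 0) are exactly those of a rotatable central composite design for two factors, since the axial distance is alpha = (2^k)^(1/4) = √2 ≈ 1.414 for k = 2. A small generator of the coded design points (illustrative, not the authors' software):

```python
import itertools

def ccd_points(k=2, n_center=1):
    """Coded points of a rotatable central composite design for k factors:
    2^k factorial corners at +/-1, 2k axial points at +/-alpha with
    alpha = (2^k)**0.25 (the rotatability condition), plus centre replicates."""
    alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center
```

Each coded point is then decoded to real settings, e.g. mapping -1/+1 to the chosen low/high temperature and ethanol concentration, before running the extractions.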
Analysis of power gating in different hierarchical levels of 2MB cache, considering variation
NASA Astrophysics Data System (ADS)
Jafari, Mohsen; Imani, Mohsen; Fathipour, Morteza
2015-09-01
This article reintroduces the power gating technique at different hierarchical levels of static random-access memory (SRAM) design, including cell, row, bank and entire cache memory, in a 16 nm fin field-effect transistor (FinFET) technology. Different SRAM cell structures, such as 6T, 8T, 9T and 10T, are used in the design of a 2MB cache memory. The power reduction of the entire cache memory employing cell-level optimisation is 99.7%, at the expense of area and stability overheads. The power saving of the cell-level optimisation is 3× (1.2×) higher than power gating at the cache (bank) level due to its superior selectivity. The access delay times are allowed to increase by 4% at the same energy-delay product to achieve the best power reduction for each supply voltage and optimisation level. The results show that row-level power gating is the best option for optimising the power of the entire cache with the lowest drawbacks. Comparisons of the cells show that cells whose bodies have higher power consumption are the best candidates for the power gating technique in row-level optimisation. The technique has the lowest percentage of saving at the minimum energy point (MEP) of the design. Power gating also improves the power variation of all structures by at least 70%.
Functional Programming with C++ Template Metaprograms
NASA Astrophysics Data System (ADS)
Porkoláb, Zoltán
Template metaprogramming is an emerging direction of generative programming: with clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Application areas of template metaprograms include expression templates, static interface checking, code optimization with adaptation, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of expressing metaprograms elegantly. The complicated syntax leads to code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax or existing libraries. In this paper we give a short introduction to C++ templates and the basics of template metaprogramming. We highlight the role of template metaprograms and some important and widely used idioms, and give an overview of possible application areas as well as debugging and profiling techniques. We suggest a pure functional-style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed into standard-compliant C++ source.
Pandey, Sonia; Swamy, S M Vijayendra; Gupta, Arti; Koli, Akshay; Patel, Swagat; Maulvi, Furqan; Vyas, Bhavin
2018-04-29
The aim was to optimise Eudragit/Surelease®-coated pH-sensitive pellets for controlled, targeted drug delivery to colon tissue and to avoid the frequent high dosing and associated side effects that restrict capecitabine's use in colorectal-cancer therapy. The pellets were prepared by the extrusion-spheronisation technique. Box-Behnken and 3² full factorial designs were applied to optimise the process parameters (extruder sieve size, spheroniser speed and spheroniser time) and the coating levels (%w/v of Eudragit S100/Eudragit L100 and Surelease®), respectively, to achieve smooth pellets of optimised size with sustained drug delivery and no premature drug release in the upper gastrointestinal tract (GIT). The design proposed an optimised batch with extruder sieve size X1 = 1 mm, spheroniser speed X2 = 900 revolutions per minute (rpm) and spheroniser time X3 = 15 min, achieving a pellet size of 0.96 mm, an aspect ratio of 0.98 and a roundness of 97.42%. Coating strengths of 16% w/v Surelease® and 13% w/v Eudragit showed pH-dependent sustained release up to 22.35 h (t99%). An organ distribution study showed the absence of drug in the upper GIT tissue and high levels of capecitabine in the caecum and colon tissue. Thus, the outer Eudragit coat prevented release of the drug in the stomach, while the inner Surelease® coat sustained drug release in the colon tissue. The study demonstrates the potential of optimised Eudragit/Surelease®-coated capecitabine pellets as an effective colon-targeted delivery system that avoids frequent high dosing and the associated systemic side effects.
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study focuses on optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are the inputs used to analyse the warpage value, which is the output of the study. The significant parameters used are melt temperature, mould temperature, packing pressure and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters was carried out in Design Expert software with the aim of minimising the warpage value. Response surface methodology (RSM) was applied together with analysis of variance (ANOVA) to investigate the interactions between the parameters that are significant to the warpage value. An optimised warpage value can thus be obtained from the model designed using RSM, owing to its minimal error, and the study shows the warpage value improved by using RSM.
Optimisation of warpage on thin shell part by using particle swarm optimisation (PSO)
NASA Astrophysics Data System (ADS)
Norshahira, R.; Shayfull, Z.; Nasir, S. M.; Saad, S. M. Sazli; Fathullah, M.
2017-09-01
As products move towards thinner designs, production of plastic parts faces many difficulties, because the possibility of defects increases as the wall thickness decreases. Demand for techniques to reduce these defects, which arise from several factors in the injection moulding process, is therefore growing. In this study, Moldflow software was used to simulate the injection moulding process, while RSM was used to produce the mathematical model serving as the input fitness function for Matlab. The particle swarm optimisation (PSO) technique was used to optimise the processing conditions so as to reduce the shrinkage and warpage of the plastic part. The results show warpage reductions of 17.60% in the x direction, 18.15% in the y direction and 10.25% in the z direction, demonstrating the reliability of this optimisation method in minimising product warpage.
Hill, Holger
2015-01-01
In a case study, Schaffert and Mattes reported the application of acoustic feedback (sonification) to optimise the time course of boat acceleration. The authors attributed an increased boat speed in the feedback condition to an optimised boat acceleration (mainly during the recovery phase). However, in rowing it is biomechanically impossible to increase boat speed significantly by reducing the fluctuations in boat acceleration during the rowing cycle. To assess such a potentially small optimising effect experimentally, the confounding variables must be controlled very accurately: in particular, the propulsive forces must be kept constant between experimental conditions, or the differences in propulsive forces between conditions must be much smaller than the effects on boat speed resulting from an optimised movement pattern. This was not controlled adequately by the authors. Instead, the presented boat acceleration data show that the increased boat speed under acoustic feedback was due to increased propulsive forces.
Optimisation of composite bone plates for ulnar transverse fractures.
Chakladar, N D; Harper, L T; Parsons, A J
2016-04-01
Metallic bone plates are commonly used for arm bone fractures where conservative treatment (casts) cannot provide adequate support and compression at the fracture site. These plates, made of stainless steel or titanium alloys, tend to shield stress transfer at the fracture site and delay the bone healing rate. This study investigates the feasibility of adopting advanced composite materials to overcome stress shielding effects by optimising the geometry and mechanical properties of the plate to match more closely to the bone. An ulnar transverse fracture is characterised and finite element techniques are employed to investigate the feasibility of a composite-plated fractured bone construct over a stainless steel equivalent. Numerical models of intact and fractured bones are analysed and the mechanical behaviour is found to agree with experimental data. The mechanical properties are tailored to produce an optimised composite plate, offering a 25% reduction in length and a 70% reduction in mass. The optimised design may help to reduce stress shielding and increase bone healing rates. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine
2018-07-01
This study investigates the thermal energy potential and economic feasibility of integrating phase change materials (PCMs) into an air-conditioned family household across different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and the cost-effectiveness of predefined candidate solutions. The optimisation methodology couples EnergyPlus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Using the optimum design strategies obtained, a thermal energy and economic analysis was carried out to investigate the feasibility of integrating PCMs in Moroccan constructions. The results show that a PCM-integrated household envelope reduces the cooling/heating thermal energy demand compared with a reference household without PCM. For the cost-effectiveness optimisation, however, economic feasibility remains insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose delivered to paediatric patients. To evaluate the influence of technical parameters on radiation dose and image quality, a statistical model was developed using the design of experiments (DOE) method, which has been used successfully in various fields (industry, biology and finance), applied here to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The article aims to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach is introduced with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree, to detect anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operating over this training set produces classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data mining (KDD) data set, which contains traffic patterns recorded during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using multi-objective thermodynamic optimisation. Four different sets, each comprising two objective parameters, were considered individually, and the multi-objective optimisation was performed using the Non-dominated Sorting Genetic Algorithm. The effects of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency were analysed. The results show that, for all configurations, the overall exergy efficiency of the solarised systems reaches its maximum between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range 24.2-25.9 MPa, depending on the configuration and reheat condition.
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to the epidemic areas in an early rescue cycle affect the demand later. In this article, an integrated, dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model is then presented, together with a short sensitivity analysis of the key parameters in the time-varying demand forecast model. The results show that both the model and the solution algorithm are useful in practice and that both objectives, inventory level and emergency rescue cost, can be controlled effectively. The model can thus provide guidelines for decision makers coping with emergency rescue problems under uncertain demand, and offers an excellent reference for issues pertaining to bioterrorism.
Galán, María Gimena; Llopart, Emilce Elina; Drago, Silvina Rosa
2018-05-01
The aims were to optimise the pearling process of red and white sorghum by assessing the effects of pearling time and grain moisture on endosperm yield and flour ash content, and to assess the nutrient and anti-nutrient losses produced by pearling different cultivars under optimised conditions. Both variables significantly affected both responses. Losses of ash (58%), proteins (9.5%), lipids (54.5%), Na (37%), Mg (48.5%) and phenolic compounds (43%) were similar between red and white hybrids. However, losses of P (30% vs. 51%), phytic acid (47% vs. 66%), Fe (22% vs. 55%), Zn (32% vs. 62%), Ca (60% vs. 66%), K (46% vs. 61%) and Cu (51% vs. 71%) were lower for red than for white sorghum, owing to different degrees of extraction and distribution of components in the grain. The optimised pearling conditions were extrapolated to other hybrids, indicating that these criteria could be applied at an industrial level to obtain refined flours with proper quality and good endosperm yields.
Natural Erosion of Sandstone as Shape Optimisation.
Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan
2017-12-11
Natural arches, pillars and other exotic sandstone formations have long attracted attention for their unusual shapes and amazing mechanical balance, which leave a strong impression of intelligent design rather than the result of a stochastic process. It has recently been demonstrated that these shapes could be the result of a negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches used in shape and topology optimisation. It appears that the process of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of erosion using a topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and of the feasible landscape formations that could be found on Earth and beyond.
Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks
NASA Astrophysics Data System (ADS)
Yang, Chao; Fu, Yuli; Yang, Junjie
2016-07-01
Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as an emerging communication paradigm in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on a two-stage power pricing model, the power price is associated with the traffic data effectively received at the meter data management system (MDMS). To minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms and reduce the power cost efficiently.
Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan
2015-05-15
The design of sewer system control is a complex task, given the large size of sewer networks, the transient dynamics of water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. Optimal performance was defined by a two-stage optimisation (stochastic and deterministic) that accounts both for the overflow during the current rain event and for the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to advances in the operation and control of sewer systems.
Suwannarangsee, Surisa; Bunterngsook, Benjarat; Arnthong, Jantima; Paemanee, Atchara; Thamchaipenet, Arinthip; Eurwilaichitr, Lily; Laosiripojana, Navadol; Champreda, Verawat
2012-09-01
A synergistic enzyme system for the hydrolysis of alkali-pretreated rice straw was optimised based on the synergy of crude fungal enzyme extracts with a commercial cellulase (Celluclast™). Among 13 enzyme extracts, the preparation from Aspergillus aculeatus BCC 199 exhibited the highest level of synergy with Celluclast™, based on the complementary cellulolytic and hemicellulolytic activities of the BCC 199 extract. A mixture design was used to optimise a ternary enzyme complex combining the synergistic enzyme mixture with Bacillus subtilis expansin. Using the full cubic model, the optimal formulation was predicted to be Celluclast™ : BCC 199 : expansin = 41.4 : 37.0 : 21.6, which produced 769 mg reducing sugar/g biomass using 2.82 FPU/g of enzymes. This work demonstrates a systematic approach to the design and optimisation of a synergistic mixture of fungal enzymes and expansin for lignocellulose degradation.
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is to exploit the features offered by the latest microprocessor architectures. Modern CPUs provide several vector units, the capacity of which grows steadily with the introduction of new processor generations, and the main vendors offer an increasing number of cores per die, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of these innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorisation and parallelisation of the calculations; one of their features is the use of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found especially profitable for the application of vectorisation and multi-threading techniques. Specific utility components have been developed to help vectorisation and parallelisation, and they can easily become part of a larger common library. To conclude, careful measurements are described which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselov, D A; Ayusheva, K R; Shashkin, I S
2015-10-31
We have studied the effect of laser cavity parameters on the light-current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400-1600 nm. It is shown that optimisation of the cavity parameters (chip length and front facet reflectivity) allows heat removal from the laser to be improved without changing other laser characteristics. An increase of 0.5 W in the maximum output optical power of the laser has been demonstrated through cavity design optimisation.
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and motions both reaching and not reaching the transitional motion. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods compared with other optimisation algorithms for an actual deep cut design.
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks are considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures; the resulting speedup was substantially lower than the theoretical peak performance of the GPU, and the cause is explained.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are computationally expensive for simulating train crashes. The high computational cost limits their direct application to investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. Multi-body modelling, by contrast, is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of multi-body dynamics simulations of train set crashes without increasing the computational burden. This is achieved with a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves on the accuracy of traditional multi-body models in train crash simulation while running at the same level of efficiency.
Testnodes: a Lightweight Node-Testing Infrastructure
NASA Astrophysics Data System (ADS)
Fay, R.; Bland, J.
2014-06-01
A key aspect of ensuring optimum cluster reliability and productivity lies in keeping worker nodes in a healthy state. Testnodes is a lightweight node-testing solution developed at Liverpool. While Nagios has been used locally for general monitoring of hosts and services, Testnodes is optimised to answer one question: is there any reason this node should not be accepting jobs? This tight focus enables Testnodes to inspect nodes frequently with minimal impact and to provide a comprehensive, easily extended check with each inspection. On the server side, Testnodes, implemented in Python, interoperates with the Torque batch server to control each node's production status. Testnodes remotely executes client-side test scripts in parallel and processes the return codes and output, adjusting the node's online/offline status accordingly to preserve the integrity of the overall batch system. Testnodes reports via log, email and Nagios, allowing a quick overview of node status to be reviewed and specific node issues to be identified and resolved quickly. This presentation covers the design and implementation of Testnodes, together with the results of its use in production at Liverpool, and future development plans.
Application-specific coarse-grained reconfigurable array: architecture and design methodology
NASA Astrophysics Data System (ADS)
Zhou, Li; Liu, Dongpei; Zhang, Jianfeng; Liu, Hengzhu
2015-06-01
Coarse-grained reconfigurable arrays (CGRAs) have shown potential for application in embedded systems in recent years. Numerous reconfigurable processing elements (PEs) in CGRAs provide flexibility while maintaining high performance by exploiting different levels of parallelism. However, a gap remains between CGRAs and application-specific integrated circuits (ASICs). Some application domains, such as software-defined radio (SDR), require flexibility as performance demands increase, so more effective CGRA architectures need to be developed. Customising a CGRA to its application can improve performance and efficiency. This study proposes an application-specific CGRA architecture template composed of generic PEs (GPEs) and special PEs (SPEs). The hardware of the SPE can be customised to accelerate specific computational patterns. An automatic design methodology that includes pattern identification and application-specific function unit generation is also presented, along with a mapping algorithm based on ant colony optimisation. Experimental results on the SDR target domain show that, compared with other ordinary and application-specific reconfigurable architectures, the CGRA generated by the proposed method performs more efficiently for the given applications.
Koo, B K; O'Connell, P E
2006-04-01
The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.
Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.
Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K
2012-06-01
Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase was optimised over various parameters, such as enzyme concentration, incubation time and temperature. Hemicellulase gave the highest stevioside yield (369.23±0.11 μg) in 1 h, compared with cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable (35-fold) increase in yield compared with the control experiment. The extraction conditions were further optimised using response surface methodology (RSM), with a central composite design (CCD) used for the experimental design and analysis of the results to obtain optimal extraction conditions. Based on the RSM analysis, a temperature of 51-54°C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, each at 2%, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and yielded a three-fold enhancement of stevioside yield. The isolated stevioside was characterised by ¹H-NMR spectroscopy, by comparison with a stevioside standard.
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern over wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have explored algorithms to optimise remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out for wastewater assets. Major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique and experience in applying that technique. This paper addresses these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimal solution is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. Work on assembling the required elements and developing appropriate interface protocols between the two tools, which decode potential remedial solutions into the pipe network model and calculate the corresponding scenario costs, is currently underway.
Evaluation of a High Throughput Starch Analysis Optimised for Wood
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
2014-01-01
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralisation step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative; however, they have not been optimised or tested with woody samples, which have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards of known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), and suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics. PMID:26295151
NASA Astrophysics Data System (ADS)
Abdussalam, Ragba Mohamed
Thin-walled cylinders are used extensively in the food packaging and cosmetics industries. The cost of material is a major contributor to the overall cost and so improvements in design and manufacturing processes are always being sought. Shape optimisation provides one method for such improvements. Aluminium aerosol cans are a particular form of thin-walled cylinder with a complex shape consisting of truncated cone top, parallel cylindrical section and inverted dome base. They are manufactured in one piece by a reverse-extrusion process, which produces a vessel with a variable thickness from 0.31 mm in the cylinder up to 1.31 mm in the base for a 53 mm diameter can. During manufacture, packaging and charging, they are subjected to pressure, axial and radial loads and design calculations are generally outside the British and American pressure vessel codes. 'Design-by-test' appears to be the favoured approach. However, a more rigorous approach is needed in order to optimise the designs. Finite element analysis (FEA) is a powerful tool for predicting stress, strain and displacement behaviour of components and structures. FEA is also used extensively to model manufacturing processes. In this study, elastic and elastic-plastic FEA has been used to develop a thorough understanding of the mechanisms of yielding, 'dome reversal' (an inherent safety feature, where the base suffers elastic-plastic buckling at a pressure below the burst pressure) and collapse due to internal pressure loading and how these are affected by geometry. It has also been used to study the buckling behaviour under compressive axial loading. Furthermore, numerical simulations of the extrusion process (in order to investigate the effects of tool geometry, friction coefficient and boundary conditions) have been undertaken. 
Experimental verification of the buckling and collapse behaviours has also been carried out and there is reasonable agreement between the experimental data and the numerical predictions.
Salabert, Anne-Sophie; Vaysse, Laurence; Beaurain, Marie; Alonso, Mathieu; Arribarat, Germain; Lotterie, Jean-Albert; Loubinoux, Isabelle; Tafani, Mathieu; Payoux, Pierre
2017-01-01
Cell transplantation is an innovative therapeutic approach after brain injury to compensate for tissue damage. To achieve real-time longitudinal monitoring of intracerebrally grafted cells, we explored the feasibility of a molecular imaging approach using the HSV1-TK thymidine kinase gene and [18F]FHBG as a reporter probe to image enzyme expression. A stable neuronal cell line expressing HSV1-TK was developed with an optimised mammalian expression vector to ensure long-term transgene expression. After [18F]FHBG incubation under defined parameters, calibration ranges from 1 × 10⁴ to 3 × 10⁶ Neuro2A-TK cells were analysed by gamma counter or by PET camera. In parallel, grafting with different quantities of [18F]FHBG-prelabelled Neuro2A-TK cells was carried out in a rat brain injury model induced by stereotaxic injection of malonate toxin. Image acquisition of the rats was then performed with a PET/CT camera to study the [18F]FHBG signal of transplanted cells in vivo. Under the optimised incubation conditions, the [18F]FHBG cell uptake rate was around 2.52%. In vitro calibration range analysis shows a clear linear correlation between the number of cells and the signal intensity. The PET signal emitted into the rat brain correlated well with the number of cells injected, and the number of surviving grafted cells was recorded via the in vitro calibration range. PET/CT acquisitions also allowed validation of the stereotaxic injection procedure. Technique sensitivity was evaluated at under 5 × 10⁴ grafted cells in vivo. No [18F]FHBG or [18F]metabolite release was observed, showing stable cell uptake even 2 h post-graft. The development of this kind of approach will allow grafting to be controlled and ensure longitudinal follow-up of cell viability and biodistribution after intracerebral injection.
Rushton, A; Calcutt, A; Heneghan, N; Heap, A; White, L; Calvert, M; Goodwin, P
2016-11-09
There is a lack of high-quality evidence for physiotherapy post lumbar discectomy. Substantial heterogeneity in treatment effects may be explained by variation in quality, administration and components of interventions. An optimised physiotherapy intervention may reduce heterogeneity and improve patient benefit. The objective was to describe, analyse and evaluate an optimised 1:1 physiotherapy outpatient intervention for patients following primary lumbar discectomy, to provide preliminary insights. A descriptive analysis of the intervention embedded within an external pilot and feasibility trial. Two UK spinal centres. Participants aged ≥18; post primary, single level, lumbar discectomy were recruited. The intervention encompassed education, advice, mobility and core stability exercises, progressive exercise, and encouragement of early return to work/activity. Patients received ≤8 sessions for ≤8 weeks, starting 4 weeks post surgery (baseline). Blinded outcome assessment at baseline and 12 weeks (post intervention) included the Roland Morris Disability Questionnaire. STarT Back data were collected at baseline. Statistical analyses summarised participant characteristics and preplanned descriptive analyses. Thematic analysis grouped related data. Twenty-two of 29 allocated participants received the intervention. STarT Back categorised n=16 (55%) participants 'not at low risk'. Physiotherapists identified reasons for caution for 8 (36%) participants, commonly risk of overdoing activity (n=4, 18%). There was no relationship between STarT Back and physiotherapists' evaluation of caution. Physiotherapists identified 154 problems (mean (SD) 5.36 (2.63)). Those 'not at low risk', and/or requiring caution presented with more problems, and required more sessions (mean (SD) 3.14 (1.16)). Patients present differently and therefore require tailored interventions. These differences may be identified using clinical reasoning and outcome data. ISRCTN33808269; post results. 
Published by the BMJ Publishing Group Limited.
On Optimal Development and Becoming an Optimiser
ERIC Educational Resources Information Center
de Ruyter, Doret J.
2012-01-01
The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is persons who have developed (and are still developing) their…
Energy landscapes for a machine learning application to series data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Andrew J.; Stevenson, Jacob D.; Das, Ritankar
2016-03-28
Methods developed to explore and characterise potential energy landscapes are applied to the corresponding landscapes obtained from optimisation of a cost function in machine learning. We consider neural network predictions for the outcome of local geometry optimisation in a triatomic cluster, where four distinct local minima exist. The accuracy of the predictions is compared for fits using data from single and multiple points in the series of atomic configurations resulting from local geometry optimisation and for alternative neural networks. The machine learning solution landscapes are visualised using disconnectivity graphs, and signatures in the effective heat capacity are analysed in terms of distributions of local minima and their properties.
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
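As a toy illustration of the group-wise idea (not the authors' algorithm or benchmarks), the sketch below optimises small groups of spin variables exhaustively while the rest of the configuration stays frozen, then merges and re-solves the groups at the next level of the hierarchy. The frustrated Ising-style cost function is invented:

```python
import itertools

def cost(x):
    # Invented Ising-like toy cost with mixed-sign couplings (a, b, J).
    J = [(0, 1, 1), (1, 2, -1), (2, 3, 1), (3, 0, -1), (0, 2, 1), (1, 3, 1)]
    return sum(j * x[a] * x[b] for a, b, j in J)

def optimise_group(x, group):
    # Exhaustively search the +/-1 assignments of one group of variables
    # while all variables outside the group stay frozen.
    best = min(itertools.product([-1, 1], repeat=len(group)),
               key=lambda vals: cost({**x, **dict(zip(group, vals))}))
    return {**x, **dict(zip(group, best))}

x = {i: 1 for i in range(4)}
# Level 1: small groups; level 2: the merged group covering all variables.
for group in [(0, 1), (2, 3), (0, 1, 2, 3)]:
    x = optimise_group(x, group)
```

For large problems the top-level merge would of course be replaced by a heuristic rather than exhaustive search; the point is only the hierarchy of group sizes.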
Discovery and optimisation studies of antimalarial phenotypic hits
Mital, Alka; Murugesan, Dinakaran; Kaiser, Marcel; Yeates, Clive; Gilbert, Ian H.
2015-01-01
There is an urgent need for the development of new antimalarial compounds. As a result of a phenotypic screen, several compounds with potent activity against the parasite Plasmodium falciparum were identified. Characterization of these compounds is discussed, along with approaches to optimise the physicochemical properties. The in vitro antimalarial activity of these compounds against P. falciparum K1 had EC50 values in the range of 0.09–29 μM, and generally good selectivity (typically >100-fold) compared to a mammalian cell line (L6). One example showed no significant activity against a rodent model of malaria, and more work is needed to optimise these compounds. PMID:26408453
Design and optimisation of wheel-rail profiles for adhesion improvement
NASA Astrophysics Data System (ADS)
Liu, B.; Mei, T. X.; Bruni, S.
2016-03-01
This paper describes a study of the optimisation of the wheel profile in the wheel-rail system to increase the overall level of adhesion available at the contact interface, in particular to investigate how the wheel and rail profile combination may be designed to ensure the improved delivery of tractive/braking forces even in poor contact conditions. The research focuses on the geometric combination of wheel and rail profiles, both to establish how the contact interface may be optimised to increase the adhesion level and to investigate how changes in the contact mechanics at the wheel-rail interface may also lead to changes in vehicle dynamic behaviour.
Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders
Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa
2014-01-01
The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions did not converge close to the Pareto front and also did not distribute evenly over the Pareto front. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm. Hence, multiple nondominated solutions that are best at a respective objective function are used to guide particles in finding optimal solutions. The improved VEPSO is measured by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results from the conducted experiments show that the proposed VEPSO significantly improved the existing VEPSO algorithms. PMID:24883386
Multiple Criteria Evaluation of Quality and Optimisation of e-Learning System Components
ERIC Educational Resources Information Center
Kurilovas, Eugenijus; Dagiene, Valentina
2010-01-01
The main research object of the paper is investigation and proposal of the comprehensive Learning Object Repositories (LORs) quality evaluation tool suitable for their multiple criteria decision analysis, evaluation and optimisation. Both LORs "internal quality" and "quality in use" evaluation (decision making) criteria are analysed in the paper.…
Optimising Microbial Growth with a Bench-Top Bioreactor
ERIC Educational Resources Information Center
Baker, A. M. R.; Borin, S. L.; Chooi, K. P.; Huang, S. S.; Newgas, A. J. S.; Sodagar, D.; Ziegler, C. A.; Chan, G. H. T.; Walsh, K. A. P.
2006-01-01
The effects of impeller size, agitation and aeration on the rate of yeast growth were investigated using bench-top bioreactors. This exercise, carried out over a six-month period, served as an effective demonstration of the importance of different operating parameters on cell growth and provided a means of determining the optimisation conditions…
ERIC Educational Resources Information Center
Pettinger, Clare; Parsons, Julie M.; Cunningham, Miranda; Withers, Lyndsey; D'Aprano, Gia; Letherby, Gayle; Sutton, Carole; Whiteford, Andrew; Ayres, Richard
2017-01-01
Objective: High levels of social and economic deprivation are apparent in many UK cities, where there is evidence of certain "marginalised" communities suffering disproportionately from poor nutrition, threatening health. Finding ways to engage with these communities is essential to identify strategies to optimise wellbeing and life…
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best-performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added. In particular, this showed a 40% reduction in total remediation costs when using the optimised PAB-D.
Escalated convergent artificial bee colony
NASA Astrophysics Data System (ADS)
Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu
2016-03-01
The artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based meta-heuristic for optimisation. ABC has proved competitive with popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration, at the cost of exploitation. In order to obtain faster convergence of ABC while maintaining its exploitation capability, basic ABC is modified in two ways in this paper. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and levy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give abandoned solutions a greater chance to improve. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. Results are very promising and prove it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
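For reference, the basic ABC loop that such papers build on can be sketched as follows. Only the standard employed, onlooker and scout phases are shown; the paper's local-search and stochastic diffusion extensions are not reproduced, and the sphere test function and parameter values are illustrative:

```python
import random

def f(x):
    # Toy sphere function; the global minimum is 0 at the origin.
    return sum(v * v for v in x)

def abc(dim=2, n_food=10, limit=20, cycles=200, seed=1):
    rng = random.Random(seed)
    foods = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj): perturb one coordinate towards
        # or away from a randomly chosen partner food source k.
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        return v

    def greedy(i):
        v = neighbour(i)
        if f(v) < f(foods[i]):
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                      # employed bee phase
            greedy(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        total = sum(fits)
        for _ in range(n_food):                      # onlooker bee phase
            r, i, acc = rng.uniform(0, total), 0, fits[0]
            while acc < r and i < n_food - 1:        # roulette-wheel choice
                i += 1
                acc += fits[i]
            greedy(i)
        for i in range(n_food):                      # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                trials[i] = 0

    return min(foods, key=f)

best = abc()
```

The modifications discussed in the abstract hook into this skeleton: local search refines `foods[i]` after the greedy step, and the scout phase is replaced by a smarter re-initialisation.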
Abu, Mary Ladidi; Nooh, Hisham Mohd; Oslan, Siti Nurbaya; Salleh, Abu Bakar
2017-11-10
Pichia guilliermondii was found capable of expressing recombinant thermostable lipase without methanol under the control of the methanol-dependent alcohol oxidase 1 promoter (AOXp 1). In this study, statistical approaches were employed for the screening and optimisation of physical conditions for T1 lipase production in P. guilliermondii. Screening of six physical conditions by Plackett-Burman Design identified pH, inoculum size and incubation time as exerting significant effects on lipase production. These three conditions were further optimised using the Box-Behnken Design of Response Surface Methodology, which predicted an optimum comprising pH 6, a 24 h incubation time and a 2% inoculum size. T1 lipase activity of 2.0 U/mL was produced with a biomass of OD600 23.0. RSM optimisation yielded a 3-fold increase in T1 lipase over the medium before optimisation. This result has therefore proven that T1 lipase can be produced at a higher yield in P. guilliermondii.
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; their individual effects mean that geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated in this paper and the performance of PV-TE with different lengths of TE elements and different footprint areas was analysed. The outcome showed that, regardless of the TE element length and footprint areas, the maximum power output occurs when the ratio of n-type to p-type footprint areas An/Ap = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
NASA Astrophysics Data System (ADS)
Kies, Alexander
2018-02-01
To meet European decarbonisation targets by 2050, the electrification of the transport sector is mandatory. Most electric vehicles rely on lithium-ion batteries, because they have a higher energy/power density and longer life span compared to other practical batteries such as zinc-carbon batteries. Electric vehicles can thus provide energy storage to support the system integration of generation from highly variable renewable sources, such as wind and photovoltaics (PV). However, charging/discharging causes batteries to degrade progressively, with reduced capacity. In this study, we investigate the impact of the joint optimisation of arbitrage revenue and battery degradation of electric vehicle batteries in a simplified setting, where historical prices allow for market participation by battery electric vehicle owners. It is shown that the joint optimisation of both leads to greater gains than the sum of the two optimisation strategies applied separately, and that including battery degradation in the model avoids states of charge close to the maximum at times. It can be concluded that degradation is an important aspect to consider in power system models which incorporate any kind of lithium-ion battery storage.
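A deliberately simplified sketch of the joint objective: a greedy hourly arbitrage rule whose revenue is reduced by a linear degradation cost per unit of throughput. All prices, limits and the `deg_cost` coefficient are invented, and the paper's actual optimisation model is not reproduced:

```python
def arbitrage(prices, capacity=10.0, power=2.0, deg_cost=0.5):
    # Greedy rule: discharge when the price exceeds the mean by more than the
    # per-unit degradation cost, charge when it is below by the same margin.
    mean = sum(prices) / len(prices)
    soc, revenue = capacity / 2, 0.0
    for p in prices:
        if p > mean + deg_cost and soc >= power:
            soc -= power
            revenue += power * p - power * deg_cost   # sale minus degradation
        elif p < mean - deg_cost and soc <= capacity - power:
            soc += power
            revenue -= power * p + power * deg_cost   # purchase plus degradation
    return revenue

rev = arbitrage([3, 1, 5, 2, 8, 1, 9, 4])
```

Note how the degradation margin narrows the set of hours worth trading in, which is the qualitative effect the study reports: the jointly optimised policy trades less aggressively and avoids extreme states of charge.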
Modelling of auctioning mechanism for solar photovoltaic capacity
NASA Astrophysics Data System (ADS)
Poullikkas, Andreas
2016-10-01
In this work, a modified optimisation model for the integration of renewable energy sources for power generation (RES-E) technologies in power-generation systems on a unit commitment basis is developed. The purpose of the modified optimisation procedure is to account for RES-E capacity auctions at different solar photovoltaic (PV) capacity electricity prices. The optimisation model developed uses a genetic algorithm (GA) technique for the calculation of the required RES-E levy (or green tax) in the electricity bills. The procedure also enables the estimation of the adequate (or eligible) feed-in tariff to be offered to future RES-E systems which do not participate in the capacity auctioning procedure. In order to demonstrate the applicability of the optimisation procedure developed, the case of PV capacity auctioning for commercial systems is examined. The results indicated that the required green tax, which is charged to electricity customers through their electricity bills in order to promote the use of RES-E technologies, is reduced with the reduction in the final auctioning price. This has a significant effect in reducing electricity bills.
3D printed fluidics with embedded analytic functionality for automated reaction optimisation
Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D
2017-01-01
Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, fewer snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous estimates, whereas GA-based optimisation is attractive due to its global optimisation capability.
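To illustrate why a GA is preferred over a Newton-type local search for multimodal fitness functions, here is a minimal real-coded GA (tournament selection, midpoint crossover, Gaussian mutation) on an invented multimodal toy function; this is not the paper's cumulant-matrix fitness, and all settings are assumptions:

```python
import math
import random

def f(x):
    # 1-D Rastrigin-style cost: global minimum f(0) = 0, with many local
    # minima that can trap a purely local, Newton-type search.
    return x * x + 10 - 10 * math.cos(2 * math.pi * x)

def ga(pop_size=40, gens=150, seed=2):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=f)]                    # elitism: keep current best
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=f)     # tournament selection
            b = min(rng.sample(pop, 3), key=f)
            child = 0.5 * (a + b)                  # midpoint (blend) crossover
            child += rng.gauss(0, 0.5)             # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

best = ga()
```

A Newton iteration started at a random point of this function would converge to whichever local basin it lands in; the population plus mutation lets the GA keep sampling across basins, which is the behaviour the abstract reports.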
NASA Astrophysics Data System (ADS)
Hsu, Chih-Ming
2014-12-01
Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling them. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study of investing in stocks in the semiconductor sub-section of the Taiwan stock market over 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.
NASA Astrophysics Data System (ADS)
Dittmar, N.; Haberstroh, Ch.; Hesse, U.; Krzyzowski, M.
2016-04-01
The transfer of liquid helium (LHe) into mobile dewars or transport vessels is a common and unavoidable process at LHe decant stations. During this transfer considerable amounts of LHe evaporate due to heat leak and pressure drop. The helium gas thus generated needs to be collected and reliquefied, which requires a huge amount of electrical energy. Therefore, the design of transfer lines used at LHe decant stations has been optimised to establish LHe transfer with minor evaporation losses, which increases the overall efficiency and capacity of LHe decant stations. This paper presents the experimental results achieved during the thermohydraulic optimisation of a flexible LHe transfer line. An extensive measurement campaign with a set of dedicated transfer lines equipped with pressure and temperature sensors yielded unique experimental data on this specific transfer process. The experimental results cover the heat leak, the pressure drop, the transfer rate, the outlet quality, and the cool-down and warm-up behaviour of the examined transfer lines. Based on the obtained results, the design of the considered flexible transfer line has been optimised, featuring reduced heat leak and pressure drop.
Synthesis of concentric circular antenna arrays using dragonfly algorithm
NASA Astrophysics Data System (ADS)
Babayigit, B.
2018-05-01
Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two cases (with and without a centre element) of two three-ring CCAA designs (4-, 6-, 8-element and 8-, 10-, 12-element). The radiation pattern of each design case is obtained by finding optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
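The array-factor evaluation underlying such syntheses can be sketched for uniform excitation as follows. The ring radii below follow a common half-wavelength arc-spacing convention and are illustrative assumptions, not the paper's exact geometry; the paper optimises the excitation weights (here all equal to one) with DA:

```python
import numpy as np

def ccaa_af(theta, rings, wavelength=1.0, phi=0.0):
    # Uniform-excitation array factor of a concentric circular array:
    # AF = sum over rings and elements of exp(j*k*r*sin(theta)*cos(phi - phi_mn)).
    k = 2 * np.pi / wavelength
    af = np.zeros_like(theta, dtype=complex)
    for n_elem in rings:
        radius = n_elem * wavelength / (4 * np.pi)   # ~half-wavelength arc spacing
        for p in 2 * np.pi * np.arange(n_elem) / n_elem:
            af += np.exp(1j * k * radius * np.sin(theta) * np.cos(phi - p))
    return np.abs(af) / sum(rings)                   # normalised magnitude

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
af = ccaa_af(theta, rings=[4, 6, 8])
# Peak of the normalised pattern is at broadside (theta = 0); a synthesis
# routine would scan the rest of the pattern for the peak sidelobe level.
```

An optimiser such as DA would replace the implicit unit weights with a per-element excitation vector and minimise the peak sidelobe of the resulting pattern.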
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that obtained using the traditional APF method. In addition, the improved method solves the dead point problem effectively.
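The baseline APF method that the paper improves on can be sketched as descent along the sum of an attractive and a repulsive force field; the goal, obstacle and gain values below are invented for illustration:

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.8, d0=1.5, step=0.05):
    # Attractive force pulls towards the goal, proportional to distance.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force pushes away from any obstacle within influence range d0.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    # Move a fixed step along the net force direction.
    return pos[0] + step * fx / norm, pos[1] + step * fy / norm

pos, goal = (0.0, 0.0), (5.0, 5.0)
path = [pos]
for _ in range(400):
    pos = apf_step(pos, goal, obstacles=[(2.0, 2.2)])
    path.append(pos)
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.1:
        break
```

The "dead point" problem the abstract mentions arises when the attractive and repulsive forces cancel; the paper's additional control force, chosen by optimal control, is what breaks such equilibria.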
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces considerable hindrance in its application to discrete and graph-based problems, because its mathematical modelling and dynamic structure are biased towards continuous domains. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospect of applying DBFO to other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and analysis of the covered path. This makes the algorithm better at generating combinations for graph-based problems and for NP-hard problems.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
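The kernel-matching idea behind the marker recognition can be sketched as a sliding cross-correlation search. The image, marker and kernel below are toy data, not the ELITE-S2 pipeline or its genetic-algorithm-tuned kernels:

```python
import numpy as np

def find_marker(image, kernel):
    """Locate a marker by sliding cross-correlation of an expected shape
    kernel over the image; returns the top-left offset of the best match.
    Brute-force sketch of the matching idea only."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            score = np.sum(image[r:r+kh, c:c+kw] * kernel)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

image = np.zeros((12, 12))
image[4:7, 6:9] = 1.0              # a 3x3 "marker" at row 4, column 6
kernel = np.ones((3, 3))
print(find_marker(image, kernel))  # -> (4, 6)
```

Optimising the kernel then amounts to searching over its shape parameters so that the correlation peak is sharp on true markers yet low on noise, which is what the genetic algorithm in the paper does.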
NASA Astrophysics Data System (ADS)
van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan
2017-06-01
Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
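The prioritised ε-constraint idea can be illustrated on a one-dimensional toy problem: optimise the highest-priority objective first, then re-optimise the next objective among points that stay within a slack of the previous optimum. The grid, objectives and slack are illustrative assumptions, not the Erasmus-iCycle solver or the LRPM:

```python
def lexicographic_eps_constraint(objectives, grid, slack=0.02):
    """Sequentially minimise a prioritised list of objectives: after each
    stage, keep only points whose value is within `slack` of that stage's
    optimum (the epsilon-constraint), then pick the best point for the
    last objective.  Grid-search toy, illustration only."""
    feasible = list(grid)
    for f in objectives:
        best = min(f(x) for x in feasible)
        feasible = [x for x in feasible if f(x) <= best + slack]
    return min(feasible, key=objectives[-1])

grid = [i / 100 for i in range(101)]
f1 = lambda x: (x - 0.3) ** 2       # higher priority
f2 = lambda x: (x - 0.8) ** 2       # lower priority
x = lexicographic_eps_constraint([f1, f2], grid)   # -> 0.44 on this grid
```

The slack is what allows the "large favourable gains for lower prioritised objectives at the cost of only slight degradations" trade-off described in the abstract: here x moves from 0.3 to 0.44 to benefit f2 while f1 degrades by at most the slack.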
Optimisation of solar synoptic observations
NASA Astrophysics Data System (ADS)
Klvaña, Miroslav; Sobotka, Michal; Švanda, Michal
2012-09-01
The development of instrumental and computer technologies is connected with steadily increasing needs for archiving large data volumes. The current trend to meet this requirement includes data compression and growth of storage capacities. This approach, however, has technical and practical limits. A further reduction of the archived data volume can be achieved by optimising the archiving, which consists of selecting data without losing useful information. We describe a method of optimised archiving of solar images, based on the selection of images that contain new information. The new information content is evaluated by analysing the changes detected in the images. We present characteristics of different kinds of image changes and divide them into fictitious changes, which have a disturbing effect, and real changes, which provide new information. In block diagrams describing the selection and archiving, we demonstrate the influence of clouds, the recording of images during an active event on the Sun (including a period before the event onset), and the archiving of the long-term history of solar activity. The described optimisation technique is not suitable for helioseismology, because it does not conserve the uniform time step in the archived sequence and removes the information about solar oscillations. In the case of long-term synoptic observations, optimised archiving can save a large amount of storage capacity. The actual saving will depend on the setting of the change-detection sensitivity and on the capability to exclude fictitious changes.
Treatment planning optimisation in proton therapy
McGowan, S E; Burnet, N G; Lomax, A J
2013-01-01
The goal of radiotherapy is to achieve uniform target coverage while sparing normal tissue. In proton therapy, the same sources of geometric uncertainty are present as in conventional radiotherapy. However, an important and fundamental difference in proton therapy is that protons have a finite range, highly dependent on the electron density of the material they are traversing, resulting in a steep dose gradient at the distal edge of the Bragg peak. Therefore, an accurate knowledge of the sources and magnitudes of the uncertainties affecting the proton range is essential for producing plans which are robust to these uncertainties. This review describes the current knowledge of the geometric uncertainties and discusses their impact on proton dose plans. Patient-specific validation is essential, and in cases of complex intensity-modulated proton therapy plans the use of a planning target volume (PTV) may fail to ensure coverage of the target. In cases where a PTV cannot be used, other methods of quantifying plan quality have been investigated. A promising option is to incorporate uncertainties directly into the optimisation algorithm. A further development is the inclusion of robustness into a multicriteria optimisation framework, allowing a multi-objective Pareto optimisation function to balance robustness and conformity. The question remains as to whether adaptive therapy can become an integral part of proton therapy, allowing re-optimisation during the course of a patient's treatment. The challenge of ensuring that plans are robust to range uncertainties in proton therapy remains, although these methods can provide practical solutions. PMID:23255545
State-of-the-Art in UAV Remote Sensing Survey - First Insights into Applications of UAV Sensing Systems
NASA Astrophysics Data System (ADS)
Aasen, H.
2017-08-01
UAVs are increasingly adopted as remote sensing platforms. Together with specialized sensors, they become powerful sensing systems for environmental monitoring and surveying. Spectral data have great potential for gathering information about biophysical and biochemical properties. Still, capturing meaningful spectral data in a reproducible way is not trivial. In recent years, small and lightweight spectral sensors that can be carried on small, flexible platforms have become available. With their adoption in the community, the responsibility for ensuring data quality is increasingly shifted from specialized companies and agencies to individual researchers or research teams. Due to the complexity of spectral data acquisition, this poses a challenge for the community, and standardized protocols, metadata and best practice procedures are needed to make data intercomparable. In November 2016, the ESSEM COST action Innovative optical Tools for proximal sensing of ecophysiological processes (OPTIMISE; http://optimise.dcs.aber.ac.uk/) held a workshop on best practices for UAV spectral sampling. The objective of this meeting was to trace the way from particle to pixel, to identify influences on data quality and reliability, and to figure out how well we are currently doing with spectral sampling from UAVs and how we can improve. Additionally, a survey was designed for distribution within the community to get an overview of current practices and raise awareness of the topic. This talk will introduce the approach of the OPTIMISE community towards best practices in UAV spectral sampling and present first results of the survey (http://optimise.dcs.aber.ac.uk/uav-survey/). This contribution briefly introduces the survey and gives some insights into the first results given by the interviewees.
Design and Optimisation of a Composite Skin for a Morphing Wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. It is in this context that the MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born. The project's objective is to design an active morphing wing that improves laminarity and thereby reduces the aircraft's fuel consumption and emissions. The research carried out led to the design and optimisation of an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that it conforms, through active control of the morphing surface, to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. Following the optimisation, the optimised skin was simplified to ease manufacturing and to comply with the design rules of Bombardier Aerospace. This optimisation process produced a composite skin whose deviations from the optimised aerodynamic profiles were greatly reduced. Aerodynamic analyses based on these shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was carried out to verify the structural integrity of the composite skin, following the methods generally used by Bombardier Aerospace. First, a comparative finite-element analysis validated that the stiffness of the morphing wing was equivalent to that of the original wing section.
The finite-element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.
Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P
2017-01-01
To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the final steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. Based on a list of requirements, an integral method and supporting tools were developed in co-creation with all stakeholders.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-14
... developed an enhanced technology trading platform (the ``Optimise platform''). To assure a smooth transition... Optimise trading platform and will continue to do so up to the launch of the new technology and during the... tested and is available for the launch. The Exchange believes that it will be less disruptive to members...
ERIC Educational Resources Information Center
Brijlall, Deonarain; Ndlovu, Zanele
2013-01-01
This qualitative case study in a rural school in Umgungundlovu District in KwaZulu-Natal, South Africa, explored Grade 12 learners' mental constructions of mathematical knowledge during engagement with optimisation problems. Ten Grade 12 learners who take pure Mathematics participated, and data were collected through structured activity sheets and…
Optimising the Blended Learning Environment: The Arab Open University Experience
ERIC Educational Resources Information Center
Hamdi, Tahrir; Abu Qudais, Mohammed
2018-01-01
This paper will offer some insights into possible ways to optimise the blended learning environment based on experience with this modality of teaching at Arab Open University/Jordan branch and also by reflecting upon the results of several meta-analytical studies, which have shown blended learning environments to be more effective than their face…
Critical review of membrane bioreactor models--part 2: hydrodynamic and integrated models.
Naessens, W; Maere, T; Ratkovich, N; Vedantam, S; Nopens, I
2012-10-01
Membrane bioreactor technology has existed for a couple of decades but has not yet conquered the market, owing to some serious drawbacks of which the operational cost due to fouling is the major contributor. Knowledge build-up and optimisation for such complex systems can benefit heavily from mathematical modelling. In this paper, the vast literature on hydrodynamic and integrated MBR modelling is critically reviewed. Hydrodynamic models are used at different scales and focus mainly on fouling, with only little attention to system design/optimisation. Integrated models also focus on fouling, although the ones including costs lean towards optimisation. Trends are discussed, knowledge gaps are identified and interesting routes for further research are suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Optimisation of oxygen ion transport in materials for ceramic membrane devices.
Kilner, J A
2007-01-01
Oxygen transport in ceramic oxide materials has received much attention over the past few decades. Much of this interest has stemmed from the desire to construct high-temperature electrochemical devices for energy conversion, an example being the solid oxide fuel cell. In order to achieve high performance for these devices, insights are needed into how to achieve optimum performance from the functional components, such as the electrolytes and electrodes. This includes the optimisation of oxygen transport through the crystal lattice of electrode and electrolyte materials and across the homogeneous (grain boundary) and heterogeneous interfaces that exist in real devices. Strategies are discussed for the optimisation of these quantities, and current problems in the characterisation of interfacial transport are explored.
Hind, Daniel; Parkin, James; Whitworth, Victoria; Rex, Saleema; Young, Tracey; Hampson, Lisa; Sheehan, Jennie; Maguire, Chin; Cantrill, Hannah; Scott, Elaine; Epps, Heather; Main, Marion; Geary, Michelle; McMurchie, Heather; Pallant, Lindsey; Woods, Daniel; Freeman, Jennifer; Lee, Ellen; Eagle, Michelle; Willis, Tracey; Muntoni, Francesco; Baxter, Peter
2017-05-01
Duchenne muscular dystrophy (DMD) is a rare disease that causes the progressive loss of motor abilities such as walking. Standard treatment includes physiotherapy. No trial has evaluated whether or not adding aquatic therapy (AT) to land-based therapy (LBT) exercises helps to keep muscles strong and children independent. To assess the feasibility of recruiting boys with DMD to a randomised trial evaluating AT (primary objective) and to collect data from them; to assess how, and how well, the intervention and trial procedures work. Parallel-group, single-blind, randomised pilot trial with nested qualitative research. Six paediatric neuromuscular units. Children with DMD aged 7-16 years, established on corticosteroids, with a North Star Ambulatory Assessment (NSAA) score of 8-34 and able to complete a 10-m walk without aids/assistance. Exclusions: > 20% variation between baseline screens 4 weeks apart and contraindications. Participants were allocated on a 1 : 1 ratio to (1) optimised, manualised LBT (prescribed by specialist neuromuscular physiotherapists) or (2) the same plus manualised AT (30 minutes, twice weekly for 6 months: active assisted and/or passive stretching regime; simulated or real functional activities; submaximal exercise). Semistructured interviews with participants, parents ( n = 8) and professionals ( n = 8) were analysed using Framework analysis. An independent rater reviewed patient records to determine the extent to which treatment was optimised. A cost-impact analysis was performed. Quantitative and qualitative data were mixed using a triangulation exercise. Feasibility of recruiting 40 participants in 6 months, participant and therapist views on the acceptability of the intervention and research protocols, clinical outcomes including NSAA, independent assessment of treatment optimisation and intervention costs. Over 6 months, 348 children were screened - most lived too far from centres or were enrolled in other trials. 
Twelve (30% of target) were randomised to AT ( n = 8) or control ( n = 4). People in the AT ( n = 8) and control ( n = 2: attrition because of parental report) arms contributed outcome data. The mean change in NSAA score at 6 months was -5.5 [standard deviation (SD) 7.8] for LBT and -2.8 (SD 4.1) in the AT arm. One boy suffered pain and fatigue after AT, which resolved the same day. Physiotherapists and parents valued AT and believed that it should be delivered in community settings. The independent rater considered AT optimised for three out of eight children, with other children given programmes that were too extensive and insufficiently focused. The estimated NHS costs of 6-month service were between £1970 and £2734 per patient. The focus on delivery in hospitals limits generalisability. Neither a full-scale frequentist randomised controlled trial (RCT) recruiting in the UK alone nor a twice-weekly open-ended AT course delivered at tertiary centres is feasible. Further intervention development research is needed to identify how community-based pools can be accessed, and how families can link with each other and community physiotherapists to access tailored AT programmes guided by highly specialised physiotherapists. Bayesian RCTs may be feasible; otherwise, time series designs are recommended. Current Controlled Trials ISRCTN41002956. This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment ; Vol. 21, No. 27. See the NIHR Journals Library website for further project information.
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng
2017-04-01
A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2c. Such transit times are clearly impractical, and thus, any kind of colonisation using low-acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
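The fuel comparison in the abstract can be illustrated with the standard relativistic rocket (relativistic Tsiolkovsky) equation, delta_v = c * tanh((v_e / c) * ln(m0 / m1)). The exhaust velocities below are illustrative stand-ins for "antimatter-like" and "fusion-like" propulsion, not figures from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def mass_ratio(delta_v, exhaust_v):
    """Initial-to-final mass ratio m0/m1 needed to reach delta_v, from
    inverting the relativistic rocket equation.  Illustrative only."""
    return math.exp((C / exhaust_v) * math.atanh(delta_v / C))

# Reaching 0.2c with photon-like exhaust (v_e ~ c) needs a modest ratio,
# while a slower exhaust (v_e ~ 0.05c) needs a far larger one.
print(mass_ratio(0.2 * C, C))         # -> ~1.22
print(mass_ratio(0.2 * C, 0.05 * C))  # -> ~57.7
```

The exponential dependence on c / v_e is why the abstract concludes that antimatter-class exhaust velocities are likely the minimum requirement for fuel-realistic galactic journeys.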
Gorjanc, Gregor; Hickey, John M
2018-05-02
AlphaMate is a flexible program that optimises selection, maintenance of genetic diversity, and mate allocation in breeding programs. It can be used in animal and cross- and self-pollinating plant populations. These populations can be subject to selective breeding or conservation management. The problem is formulated as a multi-objective optimisation of a valid mating plan that is solved with an evolutionary algorithm. A valid mating plan is defined by a combination of mating constraints (the number of matings, the maximal number of parents, the minimal/equal/maximal number of contributions per parent, or allowance for selfing) that are gender specific or generic. The optimisation can maximize genetic gain, minimize group coancestry, minimize inbreeding of individual matings, or maximize genetic gain for a given increase in group coancestry or inbreeding. Users provide a list of candidate individuals with associated gender and selection criteria information (if applicable) and coancestry matrix. Selection criteria and coancestry matrix can be based on pedigree or genome-wide markers. Additional individual or mating specific information can be included to enrich optimisation objectives. An example of rapid recurrent genomic selection in wheat demonstrates how AlphaMate can double the efficiency of converting genetic diversity into genetic gain compared to truncation selection. Another example demonstrates the use of genome editing to expand the gain-diversity frontier. Executable versions of AlphaMate for Windows, Mac, and Linux platforms are available at http://www.AlphaGenes.roslin.ed.ac.uk/AlphaMate. gregor.gorjanc@roslin.ed.ack.uk.
Analysis of the car body stability performance after coupler jack-knifing during braking
NASA Astrophysics Data System (ADS)
Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng
2018-06-01
This paper aims to improve car body stability performance by optimising locomotive parameters when coupler jack-knifing occurs during braking. In order to prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests are developed to analyse the influence of the secondary suspension and the secondary lateral stopper on the car body stability performance during braking. According to simulation and test results, increasing secondary lateral stiffness contributes to limiting the car body yaw angle during braking. However, it seriously affects the dynamic performance of the locomotive. For the secondary lateral stopper, its lateral stiffness and free clearance have a significant influence on improving the car body stability capacity, and have less effect on the dynamic performance of the locomotive. An optimised measure was proposed and adopted on the test locomotive. For the optimised locomotive, the lateral stiffness of the secondary lateral stopper is increased to 7875 kN/m, while its free clearance is decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, according to the practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable parameters of the secondary lateral stopper can improve the car body stability capacity and the running safety of the heavy haul locomotive.
Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina
2016-12-01
Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local food-based diet that considers the ratio of these fatty acids to improve the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body mass index-for-age Z scores and selected through multistage random sampling. A 24-h recall was performed for 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plan fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the plans was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average recommendation and tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements of overweight and obese children.
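A diet optimisation of this kind can be sketched as a small cost-minimisation over portion counts. The foods, prices and nutrient values below are entirely hypothetical, and brute-force enumeration stands in for the linear-programming solver used in the study:

```python
from itertools import product

# Hypothetical foods: (cost per portion, kcal, omega-6 g, omega-3 g).
foods = {
    "rice":   (0.20, 130, 0.2, 0.0),
    "egg":    (0.30,  78, 1.1, 0.1),
    "fish":   (0.80, 100, 0.2, 1.0),
    "peanut": (0.25, 170, 4.0, 0.1),
}

def cheapest_plan(kcal_min=500, kcal_max=700, ratio_max=10, max_portions=4):
    """Enumerate portion combinations and return the cheapest plan meeting
    an energy window and an omega-6/omega-3 ratio cap.  A brute-force
    stand-in for the study's linear-programming formulation."""
    best = None
    names = list(foods)
    for counts in product(range(max_portions + 1), repeat=len(names)):
        cost = kcal = o6 = o3 = 0.0
        for name, n in zip(names, counts):
            c, k, w6, w3 = foods[name]
            cost += n * c; kcal += n * k; o6 += n * w6; o3 += n * w3
        if kcal_min <= kcal <= kcal_max and o3 > 0 and o6 / o3 <= ratio_max:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, counts)))
    return best

plan = cheapest_plan()
```

The ratio cap behaves exactly like the study's omega-6/omega-3 constraint: cheap, omega-6-heavy foods must be balanced by an omega-3 source before a plan becomes feasible.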
Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E
2018-03-01
Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in the livestock and aquaculture industries around the world. Conjugate polysaccharide- and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved in all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli cells has been shown to be useful in the development of vaccines, and protein purification is a factor affecting their immunogenicity. The response surface methodology (RSM) with a Box-Behnken design can optimise the performance of recombinant protein expression. However, the biological effect in mice immunised with an immunogenic protein optimised by RSM and purified by low-affinity chromatography is unknown. In this study, we used RSM to optimise the expression of rSIP, and we evaluated the SIP-specific humoral response and the ability to decrease GBS colonisation of the vaginal tract in female mice. Ni-NTA chromatography showed that RSM increases the yield of expressed rSIP, enabling a better purification process. This improvement in rSIP purification suggests a better induction of the IgG anti-SIP immune response and a positive effect in decreasing GBS intravaginal colonisation. RSM applied to optimise the expression of recombinant proteins with immunogenic capacity is an interesting alternative for the evaluation of vaccines in the preclinical phase, which could improve their immune response.
ERIC Educational Resources Information Center
Oelke, Nelly; Wilhelm, Amanda; Jackson, Karen
2016-01-01
The role of nurses in primary care is poorly understood and many are not working to their full scope of practice. Building on previous research, this knowledge translation (KT) project's aim was to facilitate nurses' capacity to optimise their practice in these settings. A Summit engaging Alberta stakeholders in a deliberative discussion was the…
Quadratic Optimisation with One Quadratic Equality Constraint
2010-06-01
This report presents a theoretical framework for minimising a quadratic objective function subject to a quadratic equality constraint. The first part of the report gives a detailed algorithm which computes the global minimiser without calling special nonlinear optimisation solvers. The second part of the report shows how the developed theory can be applied to solve the time of arrival geolocation problem.
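The simplest instance of the report's problem, a quadratic objective with the unit-sphere equality constraint and no linear term, has a closed-form global minimiser via the Lagrange condition A x = lambda x. This sketch shows only that special case, not the report's full algorithm:

```python
import numpy as np

def min_quadratic_on_sphere(A):
    """Minimise x^T A x subject to x^T x = 1.  The Lagrange condition
    A x = lambda x makes the global minimiser the eigenvector of the
    smallest eigenvalue (special case with no linear term; the report's
    algorithm handles the general problem)."""
    w, V = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order
    return w[0], V[:, 0]

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
val, x = min_quadratic_on_sphere(A)   # val = (5 - sqrt(5)) / 2
```

Adding a linear term b^T x couples the stationarity condition into a secular equation in the Lagrange multiplier, which is essentially what the report's algorithm solves without a general nonlinear solver.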
Optimising fuel treatments over time and space
Woodam Chung; Greg Jones; Kurt Krueger; Jody Bramel; Marco Contreras
2013-01-01
Fuel treatments have been widely used as a tool to reduce catastrophic wildland fire risks in many forests around the world. However, it is a challenging task for forest managers to prioritise where, when and how to implement fuel treatments across a large forest landscape. In this study, an optimisation model was developed for long-term fuel management decisions at a...
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
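The classic Huffman construction that such an encoder implements in hardware can be sketched in a few lines (software sketch of the textbook algorithm, not the article's hardware architectures; AAC in practice uses fixed pre-defined codebooks rather than building trees on the fly):

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix code from symbol frequencies by repeatedly merging
    the two lightest subtrees; each merge prepends one bit to every
    codeword in the merged subtrees."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker for equal weights
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = Counter("abracadabra")           # a:5 b:2 r:2 c:1 d:1
code = huffman_code(freqs)
```

The resulting code is prefix-free and satisfies the Kraft equality, and the most frequent symbol receives the shortest codeword, which is what makes the output bitstream short.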
Fox-7 for Insensitive Boosters
2010-08-01
This report covers the development of FOX-7 booster formulations. Also included are particle processing techniques using ultrasound, designed to optimise FOX-7 crystal size and morphology to improve booster formulations, with ultrasound delivered at different frequencies so that cavitation, and therefore nucleation, can occur at each frequency, together with results from these trials.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. Provided sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
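The adaptive bisection ε-constraint idea can be illustrated on a toy scalar bi-objective problem (assumed here purely for illustration; the paper applies it to PSD-discretised optimal control): minimise f1 subject to f2 ≤ ε, then repeatedly bisect the ε interval whose neighbouring Pareto points are farthest apart in objective space:

```python
import numpy as np

# Toy bi-objective problem (assumed for illustration, not from the paper)
f1 = lambda x: x ** 2                 # e.g. a "fuel" objective
f2 = lambda x: (x - 2.0) ** 2         # e.g. a "time" objective

def eps_constraint_solve(eps):
    """Minimise f1 subject to f2(x) <= eps; analytic for this toy case,
    standing in for the PSD-based optimal control solve."""
    x = max(0.0, 2.0 - np.sqrt(eps))
    return x, f1(x), f2(x)

def adaptive_bisection_front(eps_lo, eps_hi, n_pts=17):
    """Grow a Pareto front by bisecting the eps interval whose endpoints
    are farthest apart in the first objective (the 'adaptive' element)."""
    pts = {e: eps_constraint_solve(e) for e in (eps_lo, eps_hi)}
    while len(pts) < n_pts:
        es = sorted(pts)
        gaps = [(abs(pts[a][1] - pts[b][1]), a, b) for a, b in zip(es, es[1:])]
        _, a, b = max(gaps)           # largest gap in f1 between neighbours
        mid = 0.5 * (a + b)
        pts[mid] = eps_constraint_solve(mid)
    return [(e,) + pts[e] for e in sorted(pts)]
```

Bisecting the largest objective-space gap, rather than sweeping ε uniformly, spends the expensive inner solves where the front is least resolved.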
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to consumers; the chosen essence was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydratant and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which consumers were requested to evaluate the samples coming from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor one. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Healy, Aoife; Gerth, Peter; Chockalingam, Nachiappan
2017-08-01
Current selection of cushioning materials for therapeutic footwear and orthoses is based on empirical and anecdotal evidence. The aim of this investigation is to assess the biomechanical properties of carefully selected cushioning materials and to establish the basis for patient-specific material optimisation. For this purpose, bespoke cushioning materials with qualitatively similar mechanical behaviour but different stiffness were produced. Healthy volunteers were asked to stand and walk on materials with varying stiffness, and their capacity for pressure reduction was assessed. Mechanical testing using a surrogate heel model was employed to investigate the effect of loading on optimum stiffness. Results indicated that optimising the stiffness of cushioning materials improved pressure reduction during standing and walking by at least 16% and 19% respectively. Moreover, the optimum stiffness was strongly correlated with body mass (BM) and body mass index (BMI), with stiffer materials needed in the case of people with higher BM or BMI. Mechanical testing confirmed that optimum stiffness increases with the magnitude of compressive loading. For the first time, this study provides quantitative data to support the importance of stiffness optimisation in cushioning materials and sets the basis for methods to inform optimum material selection in the clinic.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
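Why an amplitude multiplier changes a fractal-dimension estimate at all can be seen with a Katz-style estimator, which, unlike scale-invariant estimators such as Higuchi's, is sensitive to amplitude scaling. The scan below is a simplified stand-in for the paper's optimisation method, with assumed test signals: it picks the multiplier that maximises the difference between two normalised signals' dimensions:

```python
import numpy as np

def katz_fd(x):
    """Katz-style fractal dimension of a sampled waveform (unit time step).
    Including the time axis in the path length makes the estimate
    amplitude-sensitive, which is what makes a multiplier search meaningful."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1
    L = np.sqrt(1.0 + np.diff(x) ** 2).sum()          # curve length
    i = np.arange(len(x))
    d = np.sqrt(i ** 2 + (x - x[0]) ** 2).max()       # planar extent
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def best_multiplier(sig_a, sig_b, multipliers):
    """Scan amplitude multipliers applied to normalised signals and pick
    the one that maximises the difference between the two dimensions."""
    def normalise(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / s.std()               # dimensionless signal
    a_n, b_n = normalise(sig_a), normalise(sig_b)
    return max(multipliers,
               key=lambda m: abs(katz_fd(m * a_n) - katz_fd(m * b_n)))
```

A straight line keeps dimension 1 under any multiplier, while a noisy signal's estimate grows with amplitude, so the scan separates the two more strongly at larger multipliers.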
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With continuously increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
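The dynamic-matrix machinery underlying such a controller can be sketched for a toy integer-order first-order plant (assumed here; the paper's fractional-order plant, PID-type performance index and Nash coordination are omitted): build the dynamic matrix from step-response coefficients, then apply only the first move of the unconstrained least-squares solution at each step:

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t] (assumed stand-in for the
# Oustaloup-approximated fractional-order model of the paper)
a, b = 0.9, 0.1
P, M, lam = 20, 5, 0.1        # prediction horizon, control horizon, move penalty

# Step-response coefficients s_i of the plant
s = np.array([b * (1.0 - a ** i) / (1.0 - a) for i in range(1, P + 1)])

# Dynamic matrix: column j holds the response to the j-th future input move
S = np.zeros((P, M))
for j in range(M):
    S[j:, j] = s[:P - j]

# Unconstrained DMC gain: du = K @ (reference - free response)
K = np.linalg.solve(S.T @ S + lam * np.eye(M), S.T)

def simulate(setpoint=1.0, steps=60):
    y, u_prev = 0.0, 0.0
    ys = []
    for _ in range(steps):
        # free response: plant evolving with the input held at u_prev
        yf = np.array([a ** i * y + s[i - 1] * u_prev for i in range(1, P + 1)])
        du = K @ (setpoint - yf)
        u = u_prev + du[0]        # receding horizon: apply the first move only
        y = a * y + b * u
        u_prev = u
        ys.append(y)
    return np.array(ys)
```

Because the controller works in input moves Δu, it carries implicit integral action and reaches the setpoint without offset on this exact-model example.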
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, which retained the range, or 'chemical space', of the key descriptors to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel results in the best models for the majority of data sets, and they exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
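The simplest of the compared methods, grid search over the log marginal likelihood, can be sketched as follows (an RBF kernel and toy data are assumed; the paper's Hyper-prior Smoothbox kernel is not reproduced):

```python
import numpy as np

def rbf(X1, X2, ls, sig):
    """Squared-exponential kernel on 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sig ** 2 * np.exp(-0.5 * d2 / ls ** 2)

def log_marginal_likelihood(X, y, ls, sig, noise):
    """Exact GP log marginal likelihood via a Cholesky factorisation."""
    K = rbf(X, X, ls, sig) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2.0 * np.pi))

def grid_search(X, y, lss, sigs, noise=0.1):
    """Pick (length-scale, signal std) maximising the log marginal likelihood."""
    return max(((ls, s) for ls in lss for s in sigs),
               key=lambda p: log_marginal_likelihood(X, y, p[0], p[1], noise))
```

The marginal likelihood automatically trades data fit against model complexity, which is why even this crude search avoids the extreme length-scales.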
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability on areas of the solution space that the operators of previous algorithms do not explore, helping it find better solutions than the other algorithms. Moreover, the authors have proposed two parameters for balancing local and global search, as well as between various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions, which are used in previous works, show that our framework can find better optimal or close-to-optimal solutions with faster convergence speed for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
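A minimal sketch of the mean-search idea, grafted here onto a plain PSO rather than the paper's HP_CRO machinery (all parameter values assumed): each iteration also evaluates the mean of the best personal bests and lets it replace the worst particle if it improves:

```python
import numpy as np

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float((x ** 2).sum())

def hybrid_opt(f, dim=10, n=30, iters=200, seed=0):
    """Plain PSO plus a 'mean-search' step: try the mean of the top
    personal bests and keep it if it beats the worst particle.
    (A simplified stand-in for the paper's MPC operator.)"""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, -5.0, 5.0)
        fx = np.array([f(xi) for xi in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        # mean-search operator on the top third of personal bests
        top = pval.argsort()[: n // 3]
        cand = pbest[top].mean(axis=0)
        fc = f(cand)
        worst = pval.argmax()
        if fc < pval[worst]:
            pbest[worst], pval[worst] = cand, fc
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

On a unimodal function the mean of good solutions tends to lie near the optimum, which is the intuition the operator exploits.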
NASA Astrophysics Data System (ADS)
Lehtola, Susi; Parkhill, John; Head-Gordon, Martin
2018-03-01
We describe the implementation of orbital optimisation for the models in the perfect pairing hierarchy. Orbital optimisation, which is generally necessary to obtain reliable results, is pursued at perfect pairing (PP) and perfect quadruples (PQ) levels of theory for applications on linear polyacenes, which are believed to exhibit strong correlation in the π space. While local minima and σ-π symmetry breaking solutions were found for PP orbitals, no such problems were encountered for PQ orbitals. The PQ orbitals are used for single-point calculations at PP, PQ and perfect hextuples (PH) levels of theory, both only in the π subspace, as well as in the full σπ valence space. It is numerically demonstrated that the inclusion of single excitations is necessary also when optimised orbitals are used. PH is found to yield good agreement with previously published density matrix renormalisation group data in the π space, capturing over 95% of the correlation energy. Full-valence calculations made possible by our novel, efficient code reveal that strong correlations are weaker when larger basis sets or active spaces are employed than in previous calculations. The largest full-valence PH calculations presented correspond to a (192e,192o) problem.
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, along with consideration of modulation effects when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique to improve the performance of WSNs. It provides improvement in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes are randomly and uniformly deployed in the sensing field. It is also observed that placing the base station at the centre of the scenario enables very few modulation schemes to work in an energy-efficient manner, but when the base station is placed at the corner of the sensing field, a large number of modulation schemes can work in an energy-efficient manner.
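A baseline form of cluster size optimisation, before any modulation effects are added, can be sketched with the standard first-order (LEACH-style) radio model; all parameter values below are typical textbook assumptions, not the paper's:

```python
import numpy as np

# LEACH-style radio parameters (typical textbook values, assumed here)
N, M, d_bs = 100, 100.0, 120.0         # nodes, field side (m), base-station distance
E_elec, E_da = 50e-9, 5e-9             # electronics / aggregation energy (J/bit)
eps_fs, eps_mp = 10e-12, 0.0013e-12    # free-space / multipath amplifier terms
l = 4000                               # bits per packet

def energy_per_round(k):
    """Total energy of one round with k clusters (baseband model):
    cluster heads aggregate and use the multipath term to reach the base
    station; members use the free-space term over the intra-cluster distance."""
    e_ch = l * (E_elec * (N / k - 1) + E_da * N / k + eps_mp * d_bs ** 4)
    e_member = l * (E_elec + eps_fs * M ** 2 / (2.0 * np.pi * k))
    return k * e_ch + (N - k) * e_member

ks = np.arange(1, 31)
k_best = int(ks[np.argmin([energy_per_round(k) for k in ks])])

# closed-form optimum of the same trade-off, for comparison
k_opt = np.sqrt(N / (2.0 * np.pi)) * np.sqrt(eps_fs / eps_mp) * M / d_bs ** 2
```

The numeric argmin and the closed form agree up to the discrete rounding; the paper's contribution is then to redo this trade-off per modulation scheme instead of assuming baseband transmission.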
Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping
NASA Astrophysics Data System (ADS)
Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan
2016-12-01
This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which is different from filters with a direct input/output coupling structure. Higher modes in the SIW cavities are used to generate the finite transmission zeros for improved stopband performance. The design of SIW filters requires full-wave electromagnetic simulation and extensive optimisation. If a full-wave solver is used for optimisation, the design process is very time consuming. The space mapping (SM) approach has been called upon to alleviate this problem. In this case, the coarse model is optimised using an equivalent circuit model-based representation of the structure for fast computations. On the other hand, the verification of the design is completed with an accurate fine-model full-wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz is fabricated on a single-layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband. The stopband extends from 2 to 11 GHz and from 13.5 to 17.3 GHz, demonstrating wide stopband performance.
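The space-mapping loop can be sketched with two cheap surrogate responses standing in for the full-wave and circuit models (the resonance functions and the 0.3 GHz misalignment below are assumed): extract the coarse parameter that reproduces the fine response, then step the design by the extracted misalignment (aggressive SM with an identity mapping Jacobian):

```python
import numpy as np

w = np.linspace(3.0, 7.0, 81)            # frequency sweep (GHz, say)
target = 5.0                             # desired resonance frequency

def fine(x):
    """Stand-in for the expensive full-wave model: resonance at x."""
    return 1.0 / (1.0 + ((w - x) / 0.1) ** 2)

def coarse(p):
    """Stand-in for the circuit model: systematically misaligned by 0.3."""
    return 1.0 / (1.0 + ((w - (p + 0.3)) / 0.1) ** 2)

def extract(resp, grid):
    """Parameter extraction: coarse parameter whose response best
    matches a given (fine) response."""
    errs = [np.linalg.norm(coarse(p) - resp) for p in grid]
    return grid[int(np.argmin(errs))]

grid = np.linspace(3.0, 7.0, 401)
ideal = 1.0 / (1.0 + ((w - target) / 0.1) ** 2)
p_star = extract(ideal, grid)            # coarse-model optimum

x = 4.0                                  # initial fine design
for _ in range(5):
    p = extract(fine(x), grid)           # align coarse model to fine at x
    x = x - (p - p_star)                 # aggressive SM step (identity Jacobian)
```

Each loop iteration costs one "fine" evaluation plus many cheap coarse ones, which is exactly the economy the abstract describes for SIW filter design.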
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
Honeybee economics: optimisation of foraging in a variable world.
Stabentheiner, Anton; Kovac, Helmut
2016-06-20
In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which means extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment in terms of microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise net intake rate by investing both their own heat production and solar heat to increase body temperature to a level which guarantees a high suction velocity. They switch to an 'economising' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, when an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in reaction to environmental variation.
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás.
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid decisions based solely on experience and intuition, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically advisable way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. Although the optimisation methods developed in the literature [2] take into account harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications and other key variables, the possibility of multiple collection points and multi-level collection was not taken into consideration. The possible areas of use for energy grass are very wide (energy use, biogas and bio-alcohol production, the paper and textile industries, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations exist among processing and production facilities; (2) capacity constraints are taken into account; (3) the cost function of transportation is non-linear; (4) drivers' working conditions are ignored.
The objective function of the optimisation is the maximisation of the profit, i.e. the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is more than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc".
with support from the European Union and co-funding from the European Social Fund.
References
[1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html
[2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html
[3] Homepage of energygrass: www.energiafu.hu
Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R.; Ellis, Heidi JC; Hinman, M. Lee; Vyas, Jay; Gryk, Michael R.
2012-01-01
Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org). PMID:25328913
An improved design method based on polyphase components for digital FIR filters
NASA Astrophysics Data System (ADS)
Kumar, A.; Kuldeep, B.; Singh, G. K.; Lee, Heung No
2017-11-01
This paper presents an efficient design of digital finite impulse response (FIR) filters based on polyphase components and swarm optimisation techniques (SOTs). For this purpose, the design problem is formulated as the mean square error between the actual response and the ideal response in the frequency domain, using the polyphase components of a prototype filter. To achieve a more precise frequency response at some specified frequency, fractional derivative constraints (FDCs) have been applied, and optimal FDCs are computed using SOTs such as the cuckoo search and modified cuckoo search algorithms. A comparative study with well-proven swarm optimisation techniques, namely particle swarm optimisation and the artificial bee colony algorithm, is made. The performance of the proposed method is evaluated using several important attributes of a filter, and the comparative study evidences its effectiveness for FIR filter design.
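Without the fractional derivative constraints, the frequency-domain MSE named above is quadratic in the filter coefficients, so the baseline design reduces to linear least squares. The sketch below (filter specification assumed, not the paper's) shows that baseline plus a two-component polyphase split; the swarm search is only needed once FDCs make the problem nonlinear:

```python
import numpy as np

N = 31                                    # filter length (assumed for illustration)
wc = 0.4 * np.pi                          # cutoff frequency (assumed)
w = np.linspace(0.0, np.pi, 512)          # dense frequency grid

# ideal lowpass response with linear phase (group delay (N-1)/2)
Hd = np.where(w <= wc, np.exp(-1j * w * (N - 1) / 2.0), 0.0 + 0.0j)

# the actual response H(w) = E(w) @ h is linear in the coefficients h
E = np.exp(-1j * np.outer(w, np.arange(N)))

# least-squares minimiser of the frequency-domain mean square error
A = np.vstack([E.real, E.imag])
b = np.concatenate([Hd.real, Hd.imag])
h, *_ = np.linalg.lstsq(A, b, rcond=None)

# polyphase split of the designed prototype (two components, M = 2)
h_poly = [h[0::2], h[1::2]]
```

Each polyphase component operates at half the input rate in a multirate implementation, which is the structural advantage the paper's formulation exploits.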
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study conducts a simulation of the optimisation of injection moulding process parameters using the Autodesk Moldflow Insight (AMI) software. Four process parameters, namely melt temperature, mould temperature, packing pressure and cooling time, are applied in order to analyse the warpage value of the part. The part selected for the study is made of polypropylene (PP). The combination of process parameters is analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). RSM as well as a Genetic Algorithm (GA) are applied in the Design Expert software in order to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple access interference, find it difficult to balance system detection performance against the complexity of the algorithm. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO), which integrates the particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm gains MUD performance comparable with the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
2016-10-31
statistical physics. Sec. IV includes several examples of the application of the stochastic method, including matching of a shape to a fixed design, and... an important part of any future application of this method. Second, re-initialization of the level set can lead to small but significant movements of... of engineering design problems [6, 17]. However, many of the relevant applications involve non-convex optimisation problems with multiple locally
Sentient Structures: Optimising Sensor Layouts for Direct Measurement of Discrete Variables
2008-11-01
Sentient Structures: Optimising Sensor Layouts for Direct Measurement of Discrete Variables. Report to US Air Force (Contract Number FA48690714045; author: Donald Price)... optimal sensor placements is an important requirement for the development of sentient structures. An optimal sensor layout is attained when a limited
An Optimisation Procedure for the Conceptual Analysis of Different Aerodynamic Configurations
2000-06-01
G. Lombardi, G. Mengali, Department of Aerospace Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy; F. Beux, Scuola Normale Superiore... obtain engines, gears and various systems; their weights and centre of gravity positions... configurations with improved performances with respect to a... design parameters have been arranged for cruise: payload, velocity, range, cruise height, engine... The optimisation process includes the following steps:
Yu Wei; Erin J. Belval; Matthew P. Thompson; Dave E. Calkin; Crystal S. Stonesifer
2016-01-01
Sharing fire engines and crews between fire suppression dispatch zones may help improve the utilisation of fire suppression resources. Using the Resource Ordering and Status System, the Predictive Services' Fire Potential Outlooks and the Rocky Mountain Region Preparedness Levels from 2010 to 2013, we tested a simulation and optimisation procedure to transfer crews and...
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. 
A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
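The preference-articulation strategies surveyed above all build on the notion of Pareto dominance between conflicting objectives such as fuel and flight time. A minimal dominance filter is sketched below with made-up (fuel, time) objective pairs; the numbers are illustrative only and do not come from any MOTO study.

```python
def pareto_front(points):
    """Keep the non-dominated points when both objectives are minimised."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Hypothetical (fuel, time) outcomes for six candidate trajectories.
candidates = [(3, 9), (4, 7), (5, 8), (6, 4), (7, 5), (9, 3)]
front = pareto_front(candidates)  # (5, 8) and (7, 5) are dominated
```

A decision-maker's preferences (weights, goals, or lexicographic ordering) are then applied to select one trajectory from the front.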
Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A
2016-11-01
A disease activity-guided dose optimisation strategy of adalimumab or etanercept (TNFi (tumour necrosis factor inhibitors)) has shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy is still unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured and quality adjusted life years (QALY) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included, and 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost saving of -€12 280 (95 percentile -€10 502; -€14 104) per patient per 18 months. There is an 84% chance that the dose optimisation strategy results in a QALY loss with a mean QALY loss of -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553-€14 037). Sensitivity analyses using 30% and 50% lower prices for TNFi remained cost-effective. Disease activity-guided dose optimisation of TNFi results in considerable cost savings while no relevant loss of quality of life was observed. When the minimal QALY loss is compensated with the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited. 
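The incremental net monetary benefit reported above combines the QALY difference and the cost difference through a willingness-to-pay threshold: iNMB = WTP x dQALY - dcost. The sketch below uses the abstract's point estimates; the WTP value of EUR 80 000 is an assumed illustrative threshold, not a figure from the study.

```python
def inmb(delta_qaly, delta_cost, wtp):
    """Incremental net monetary benefit: iNMB = WTP * dQALY - dcost."""
    return wtp * delta_qaly - delta_cost

# Point estimates from the abstract; the WTP threshold is assumed.
value = inmb(delta_qaly=-0.02, delta_cost=-12280.0, wtp=80000.0)
# A positive iNMB means the cost savings outweigh the monetised QALY loss.
```

With these inputs the small QALY loss (monetised at EUR 1 600) is far outweighed by the EUR 12 280 saving, matching the direction of the study's conclusion.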
Hydrology of an abandoned coal-mining area near McCurtain, Haskell County, Oklahoma
Slack, L.J.
1983-01-01
Water quality was investigated from October 1980 to May 1983 in an area of abandoned coal mines in Haskell County, Oklahoma. Bedrock in the area is shale, siltstone, sandstone, and the McAlester (Stigler) and Hartshorne coals of the McAlester Formation and Hartshorne Sandstone of Pennsylvanian age. The two coal beds, upper and lower Hartshorne, associated with the Hartshorne Sandstone converge or are separated by a few feet or less of bony coal or shale in the McCurtain area. Many small faults cut the Hartshorne coal in all the McCurtain-area mines. The main avenues of water entry to and movement through the bedrock are the exposed bedding-plane openings between layers of sandstone, partings between laminae of shale, fractures and joints developed during folding and faulting of the brittle rocks, and openings caused by surface mining--the overburden being shattered and broken to form spoil. Water-table conditions exist in bedrock and spoil in the area. Mine pond water is in direct hydraulic connection with water in the spoil piles and the underlying Hartshorne Sandstone. Sulfate is the best indicator of the presence of coal-mine drainage in both surface and ground water in the Oklahoma coal field. Median sulfate concentrations for four sites on Mule Creek ranged from 26 to 260 milligrams per liter. Median sulfate concentrations increased with increased drainage from unreclaimed mined areas. The median sulfate concentration in Mule Creek where it drains the reclaimed area is less than one-third of that at the next site downstream where the stream begins to drain abandoned (unreclaimed) mine lands. Water from Mule Creek predominantly is a sodium sulfate type.
Maximum and median values for specific conductance and concentrations of calcium, magnesium, sodium, sulfate, chloride, dissolved solids, and alkalinity increase as Mule Creek flows downstream and drains increasing areas of abandoned (unreclaimed) mining lands. Constituent concentrations in Mule Creek, except those for dissolved solids, iron, manganese, and sulfate, generally do not exceed drinking-water limits. Reclamation likely would result in decreased concentrations of dissolved solids, calcium, magnesium, sodium, sulfate, and alkalinity in Mule Creek in the vicinity of the reclaimed area. Ground water in the area is moderately hard to very hard alkaline water with a median pH of 7.2 to 7.6. It predominantly is a sodium sulfate type and, except for dissolved solids, iron, manganese, and sulfate, constituent concentrations generally do not exceed drinking-water limits. Ground-water quality would likely be unchanged by reclamation. The quality of water in the two mine ponds is quite similar to that of the shallow ground water in the area. Constituents in water from both ponds generally do not exceed drinking-water limits and the water quality is unlikely to be changed by reclamation in the area.
Protecting complex infrastructures against multiple strategic attackers
NASA Astrophysics Data System (ADS)
Hausken, Kjell
2011-01-01
Infrastructures are analysed subject to defence by a strategic defender and attack by multiple strategic attackers. A framework is developed where each agent determines how much to invest in defending versus attacking each of multiple targets. A target can have economic, human and symbolic values, which generally vary across agents. Investment expenditure functions for each agent can be linear in the investment effort, concave, convex, logistic, can increase incrementally, or can be subject to budget constraints. Contest success functions (e.g., ratio and difference forms) determine the probability of a successful attack on each target, dependent on the relative investments of the defender and attackers on each target, and on characteristics of the contest. Targets can be in parallel, in series, interlinked, interdependent or independent. The defender minimises the expected damage plus the defence expenditures. Each attacker maximises the expected damage minus the attack expenditures. The number of free choice variables equals the number of agents times the number of targets, or lower if there are budget constraints. Each agent is interested in how his investments vary across the targets, and the impact on his utilities. Alternative optimisation programmes are discussed, together with repeated games, dynamic games and incomplete information. An example is provided for illustration.
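The ratio form of the contest success function mentioned above is often called the Tullock form: the attack succeeds with probability proportional to the attack investment raised to a contest-intensity exponent. A minimal sketch with illustrative investment levels (the specific numbers and the single-target expected-damage calculation are for illustration only):

```python
def contest_success(attack, defence, m=1.0):
    """Ratio-form (Tullock) contest success function: probability that the
    attack on a target succeeds, given the two investments; m is the
    contest intensity (decisiveness) parameter."""
    if attack == 0 and defence == 0:
        return 0.5  # conventional tie-break when neither side invests
    return attack ** m / (attack ** m + defence ** m)

# Expected damage to the defender on one target of value v.
v = 10.0
expected_damage = v * contest_success(attack=2.0, defence=3.0)  # 10 * 0.4
```

In the paper's framework this term enters each agent's objective per target, summed with the investment expenditure functions.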
High Productivity DRIE solutions for 3D-SiP and MEMS Volume Manufacturing
NASA Astrophysics Data System (ADS)
Puech, M.; Thevenoud, JM; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, JM
2006-04-01
Emerging 3D-SiP technologies and high volume MEMS applications require high productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimised to reach the highest process and hardware production performances. A study based on sub-micron high aspect ratio structures encountered in the most stringent 3D-SiP has been carried out. The optimisation of the Bosch process parameters has resulted in ultra high silicon etch rates, with unrivalled uniformity and repeatability, leading to excellent process results. In parallel, the most recent hardware and proprietary design optimisations, including vacuum pumping lines, process chamber, wafer chucks, pressure control system and gas delivery, are discussed. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction of cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high volume MEMS applications. The typical etched patterns include tapered trenches for CMOS imagers, through silicon via holes for die stacking, well controlled profile angle for 3D high precision inertial sensors, and large exposed area features for inkjet printer heads and silicon microphones.
Ascher, Benjamin; Fanchon, Chantal; Kanoun-Copy, Leila; Bouloc, Anne; Benech, Florence
2012-10-01
A monocentre double-blind two parallel group clinical study was conducted to assess whether a new skincare regimen containing retinol, adenosine and hyaluronic acid, applied after the injection of botulinum toxin A to the glabellar area, provided a beneficial effect. Standardised photographs acquired using LifeViz cameras and zoomed pictures of the glabella and of the crow's feet areas were analysed with automatic well-defined procedures. Perceived efficacy and tolerance were also analysed by comparison between the two groups. A beneficial effect versus placebo-treated group was proven in the group having topically applied the new skincare regimen for 2 months following botulinum toxin A injection with no touch up after 1 month. 3D image analysis showed more rapid results on D10 and enhanced efficacy on M2. Moreover, a beneficial effect independent of injection was measured in the crow's feet area, and analysis of the self-evaluation questionnaire showed enhanced efficacy perceived by the volunteers. A specially developed skincare regimen applied immediately after botulinum toxin A injection completes the beneficial effect of the injection on the glabellar area and offers clinical benefits in fine lines, wrinkles and smoothness on the whole face.
ACTS: from ATLAS software towards a common track reconstruction software
NASA Astrophysics Data System (ADS)
Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration
2017-10-01
Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.
Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong
2013-08-01
Considering the increase of the total cultivated land area dedicated to genetically modified organisms (GMO), the consumers' perception toward GMO and the need to comply with various local GMO legislations, efficient and accurate analytical methods are needed for their detection and identification. Considered as the gold standard for GMO analysis, the real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on simultaneous 24 multiplex RTi-PCR running on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets on seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was also included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost efficient, this multiplex approach fits the purpose of GMO testing laboratories.
Optimization of vehicle-trailer connection systems
NASA Astrophysics Data System (ADS)
Sorge, F.
2016-09-01
The three main requirements of a vehicle-trailer connection system are: en route stability, over- or under-steering restraint, and minimum off-tracking along a curved path. Linking the two units by four-bar trapeziums, wider stability margins may be attained in comparison with the conventional pintle-hitch for both instability types, divergent or oscillating. The stability maps are traced applying the Hurwitz method or the direct analysis of the characteristic equation at the instability threshold. Several types of four-bar linkages may be quickly tested, with the drawbars converging towards the trailer or the towing unit. The latter configuration appears preferable in terms of self-stability and may yield high critical speeds by optimising the geometrical and physical properties. Nevertheless, the system stability may be improved in general by additional vibration dampers in parallel with the connection linkage. Moreover, the four-bar connection may produce significant corrections of the under-steering or over-steering behaviour of the vehicle-train after a steering command from the driver. The off-tracking along the curved paths may also be optimized or kept inside prefixed margins of acceptability. Activating electronic stability systems if necessary, fair results are obtainable for both the steering conduct and the off-tracking.
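As a concrete illustration of the Hurwitz method used for the stability maps, a characteristic equation reduced to third order yields three sign conditions on the coefficients. This is a simplified sketch: the actual characteristic equation of a vehicle-train model is of higher order, and the coefficients below are illustrative, not derived from the paper's model.

```python
def hurwitz_stable_cubic(a2, a1, a0):
    """Routh-Hurwitz conditions for s^3 + a2*s^2 + a1*s + a0:
    all roots lie strictly in the left half-plane iff these hold."""
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

# (s + 1)^3 = s^3 + 3s^2 + 3s + 1 has all roots at s = -1: stable.
stable = hurwitz_stable_cubic(3, 3, 1)
# s^3 + s^2 + s + 2 violates a2*a1 > a0: unstable (oscillatory divergence).
unstable = hurwitz_stable_cubic(1, 1, 2)
```

Sweeping such conditions over speed and linkage geometry is what produces a stability map: the instability threshold is where one condition first fails.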
Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain
2014-01-01
To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of 75th percentiles used as reference diagnostic levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.
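The diagnostic reference level used in the audit is simply the 75th percentile of the observed dose-length product distribution. A minimal sketch with made-up DLP values, using the linear-interpolation percentile convention (the survey's exact percentile convention is not stated in the abstract):

```python
def diagnostic_reference_level(dlp_values):
    """DRL as the 75th percentile of the DLP distribution
    (linear interpolation between order statistics)."""
    xs = sorted(dlp_values)
    rank = 0.75 * (len(xs) - 1)   # 'inclusive' percentile convention
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

# Hypothetical DLP values (mGy*cm) for one examination type.
drl = diagnostic_reference_level([300, 100, 500, 200, 400])
```

Departments whose median dose exceeds the DRL are then the natural targets of the audit's optimisation step.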
Medical imaging dose optimisation from ground up: expert opinion of an international summit.
Samei, Ehsan; Järvinen, Hannu; Kortesniemi, Mika; Simantirakis, George; Goh, Charles; Wallace, Anthony; Vano, Eliseo; Bejan, Adrian; Rehani, Madan; Vassileva, Jenia
2018-05-17
As in any medical intervention, there is either a known or an anticipated benefit to the patient from undergoing a medical imaging procedure. This benefit is generally significant, as demonstrated by the manner in which medical imaging has transformed clinical medicine. At the same time, when it comes to imaging that deploys ionising radiation, there is a potential associated risk from radiation. Radiation risk has been recognised as a key liability in the practice of medical imaging, creating a motivation for radiation dose optimisation. The level of radiation dose and risk in imaging varies but is generally low. Thus, from the epidemiological perspective, this makes the estimation of the precise level of associated risk highly uncertain. However, in spite of the low magnitude and high uncertainty of this risk, its possibility cannot easily be refuted. Therefore, given the moral obligation of healthcare providers, 'first, do no harm,' there is an ethical obligation to mitigate this risk. Precisely how to achieve this goal scientifically and practically within a coherent system has been an open question. To address this need, in 2016, the International Atomic Energy Agency (IAEA) organised a summit to clarify the role of Diagnostic Reference Levels to optimise imaging dose, summarised into an initial report (Järvinen et al 2017 Journal of Medical Imaging 4 031214). Through a consensus building exercise, the summit further concluded that the imaging optimisation goal goes beyond dose alone, and should include image quality as a means to include both the benefit and the safety of the exam. The present, second report details the deliberation of the summit on imaging optimisation.
NASA Astrophysics Data System (ADS)
Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.
This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blob centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of the buildings in the study areas were extracted and verified, and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
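The dynamic programming step described above is, in essence, a Viterbi-style multistage minimisation: one candidate position per contour stage, scored by an image term plus a smoothness term linking adjacent stages. The sketch below uses a toy cost table and a squared-difference smoothness penalty; the actual image-derived costs and the paper's "time-delayed" variant are not reproduced here.

```python
def optimise_contour(unary, smooth_weight=1.0):
    """Viterbi-style DP: pick one candidate index per stage minimising
    unary (image) cost plus a squared-difference smoothness penalty."""
    n_cand = len(unary[0])
    cost = [list(unary[0])]   # running minimal costs per candidate
    back = []                 # backpointers per stage
    for t in range(1, len(unary)):
        row, brow = [], []
        for j in range(n_cand):
            best_i = min(range(n_cand),
                         key=lambda i: cost[-1][i] + smooth_weight * (i - j) ** 2)
            row.append(cost[-1][best_i] + smooth_weight * (best_i - j) ** 2
                       + unary[t][j])
            brow.append(best_i)
        cost.append(row)
        back.append(brow)
    # Backtrack from the cheapest final candidate.
    j = min(range(n_cand), key=lambda q: cost[-1][q])
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return list(reversed(path))

# Toy 3-stage cost table: the middle candidate is cheapest at every stage.
path = optimise_contour([[1, 0, 5], [1, 0, 5], [5, 0, 1]])
```

In the building-extraction setting each stage would be a point along the approximate contour and each candidate a perpendicular displacement of that point.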
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background: Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application.
Methods: Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004-2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008-2009.
Results: There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance.
Conclusions: Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
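The AUROC the model is optimised for can be computed directly from scores via the Mann-Whitney formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted half. A minimal sketch with toy labels and scores (nothing here comes from the registry data):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank/Mann-Whitney formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Fraction of positive/negative pairs the positive case wins (ties = 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

score = auroc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.8])  # 3 of 4 pairs correct
```

Training an SVM to maximise this cohort-level quantity, rather than per-example loss, is the distinction the study draws against traditional SVM classification.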
Omar, J; Boix, A; Kerckhove, G; von Holst, C
2016-12-01
Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion and time was fixed in mid-values (5 min). In a next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min-1; DF, 0.4 ml min-1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated.
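The central composite designs used in both optimisation steps combine 2^k factorial corners, 2k axial points and centre replicates in coded units. A sketch of how such a design is generated; the rotatable axial distance alpha = (2^k)^(1/4) is one common choice, and the paper's exact design settings are not given in the abstract.

```python
from itertools import product

def central_composite_design(k, alpha=None, n_center=1):
    """CCD in coded units: 2^k factorial corners, 2k axial points,
    plus centre replicates."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable choice
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    centre = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centre

# Two factors (e.g. amplitude and time): 4 corners + 4 axial + 1 centre = 9 runs.
design = central_composite_design(2)
```

Coded levels are mapped back to physical units (e.g. -1/+1 spanning the amplitude range) before running the experiments, and a quadratic response surface is fitted to the results.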
Mekuto, Lukhanyo; Ntwampe, Seteno Karabo Obed; Jackson, Vanessa Angela
2015-07-01
A mesophilic alkali-tolerant bacterial consortium belonging to the Bacillus genus was evaluated for its ability to biodegrade high free cyanide (CN(-)) concentration (up to 500 mg CN(-)/L), subsequent to the oxidation of the formed ammonium and nitrates in a continuous bioreactor system solely supplemented with whey waste. Furthermore, an optimisation study for successful cyanide biodegradation by this consortium was evaluated in batch bioreactors (BBs) using response surface methodology (RSM). The input variables, that is, pH, temperature and whey-waste concentration, were optimised using a numerical optimisation technique where the optimum conditions were found to be as follows: pH 9.88, temperature 33.60 °C and whey-waste concentration of 14.27 g/L, under which 206.53 mg CN(-)/L in 96 h can be biodegraded by the microbial species from an initial cyanide concentration of 500 mg CN(-)/L. Furthermore, using the optimised data, cyanide biodegradation in a continuous mode was evaluated in a dual-stage packed-bed bioreactor (PBB) connected in series to a pneumatic bioreactor system (PBS) used for simultaneous nitrification, including aerobic denitrification. The whey-supported Bacillus sp. culture was not inhibited by the free cyanide concentration of up to 500 mg CN(-)/L, with an overall degradation efficiency of ≥ 99 % with subsequent nitrification and aerobic denitrification of the formed ammonium and nitrates over a period of 80 days. This is the first study to report free cyanide biodegradation at concentrations of up to 500 mg CN(-)/L in a continuous system using whey waste as a microbial feedstock. The results showed that the process has the potential for the bioremediation of cyanide-containing wastewaters.
Leucht, Stefan; Winter-van Rossum, Inge; Heres, Stephan; Arango, Celso; Fleischhacker, W Wolfgang; Glenthøj, Birte; Leboyer, Marion; Leweke, F Markus; Lewis, Shôn; McGuire, Phillip; Meyer-Lindenberg, Andreas; Rujescu, Dan; Kapur, Shitij; Kahn, René S; Sommer, Iris E
2015-05-01
Most of the 13 542 trials contained in the Cochrane Schizophrenia Group's register just tested the general efficacy of pharmacological or psychosocial interventions. Studies on the subsequent treatment steps, which are essential to guide clinicians, are largely missing. This knowledge gap leaves important questions unanswered. For example, when a first antipsychotic failed, is switching to another drug effective? And when should we use clozapine? The aim of this article is to review the efficacy of switching antipsychotics in case of nonresponse. We also present the European Commission sponsored "Optimization of Treatment and Management of Schizophrenia in Europe" (OPTiMiSE) trial which aims to provide a treatment algorithm for patients with a first episode of schizophrenia. We searched Pubmed (October 29, 2014) for randomized controlled trials (RCTs) that examined switching the drug in nonresponders to another antipsychotic. We described important methodological choices of the OPTiMiSE trial. We found 10 RCTs on switching antipsychotic drugs. No trial was conclusive and none was concerned with first-episode schizophrenia. In OPTiMiSE, 500 first episode patients are treated with amisulpride for 4 weeks, followed by a 6-week double-blind RCT comparing continuation of amisulpride with switching to olanzapine and ultimately a 12-week clozapine treatment in nonremitters. A subsequent 1-year RCT validates psychosocial interventions to enhance adherence. Current literature fails to provide basic guidance for the pharmacological treatment of schizophrenia. The OPTiMiSE trial is expected to provide a basis for clinical guidelines to treat patients with a first episode of schizophrenia.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
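The Spatial Simulated Annealing step described above can be sketched in a few lines. This is a minimal illustrative sketch only, not the authors' implementation: the space-time averaged KED variance is replaced by a simple proxy objective (mean distance from each prediction point to its nearest gauge), and all names, grid sizes and annealing parameters are assumptions.

```python
import math
import random

random.seed(0)

def mean_nearest_gauge_distance(gauges, grid):
    # Proxy objective standing in for the space-time averaged KED
    # variance: mean distance from each prediction point to its
    # nearest rain-gauge. Smaller is better.
    return sum(min(math.dist(p, g) for g in gauges) for p in grid) / len(grid)

def spatial_simulated_annealing(grid, n_gauges=5, steps=2000, t0=0.1, cool=0.995):
    gauges = random.sample(grid, n_gauges)
    obj = mean_nearest_gauge_distance(gauges, grid)
    best, best_obj, temp = list(gauges), obj, t0
    for _ in range(steps):
        cand = list(gauges)
        i = random.randrange(n_gauges)
        x, y = cand[i]
        # Perturb one gauge location at a time.
        cand[i] = (x + random.uniform(-0.1, 0.1), y + random.uniform(-0.1, 0.1))
        cand_obj = mean_nearest_gauge_distance(cand, grid)
        # Metropolis acceptance: always take improvements, occasionally
        # accept worse designs to escape local minima.
        if cand_obj < obj or random.random() < math.exp((obj - cand_obj) / temp):
            gauges, obj = cand, cand_obj
            if obj < best_obj:
                best, best_obj = list(gauges), obj
        temp *= cool
    return best, best_obj

# 10 x 10 prediction grid on the unit square.
grid = [(i / 9, j / 9) for i in range(10) for j in range(10)]
network, objective = spatial_simulated_annealing(grid)
```

In the study the objective is recomputed from the recalibrated non-stationary KED model at each time step; the annealing loop itself is unchanged.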
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable the detection of changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
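The core idea of optimising one projection vector for damage detectability can be sketched as follows. This is a hedged toy version, not the paper's method: a simple (1+1) evolution strategy replaces the advanced strategy used in the study, a Fisher-like separation index stands in for its damage-detectability objective, and the synthetic four-dimensional features are invented for illustration.

```python
import math
import random

random.seed(1)

def project(X, w):
    return [sum(x * wi for x, wi in zip(row, w)) for row in X]

def unit(w):
    n = math.sqrt(sum(wi * wi for wi in w)) or 1.0
    return [wi / n for wi in w]

def separation(healthy, damaged):
    # Fisher-like index: squared mean difference over pooled variance.
    # A simplified stand-in for the damage-detectability objective.
    mh = sum(healthy) / len(healthy)
    md = sum(damaged) / len(damaged)
    vh = sum((x - mh) ** 2 for x in healthy) / len(healthy)
    vd = sum((x - md) ** 2 for x in damaged) / len(damaged)
    return (mh - md) ** 2 / (vh + vd + 1e-12)

def optimise_projection(Xh, Xd, dim, iters=500, sigma=0.3):
    # (1+1) evolution strategy: mutate the projection vector and keep
    # the mutant whenever the separation index improves.
    w = unit([random.gauss(0, 1) for _ in range(dim)])
    best = separation(project(Xh, w), project(Xd, w))
    for _ in range(iters):
        cand = unit([wi + random.gauss(0, sigma) for wi in w])
        score = separation(project(Xh, cand), project(Xd, cand))
        if score > best:
            w, best = cand, score
    return w, best

# Synthetic 4-dimensional DSFs: "damage" shifts the third coordinate.
Xh = [[random.gauss(0, 1) for _ in range(4)] for _ in range(50)]
Xd = [[random.gauss(0, 1) + (3.0 if k == 2 else 0.0) for k in range(4)]
      for _ in range(50)]
w_opt, index = optimise_projection(Xh, Xd, 4)
```

Sequential projection pursuit would repeat this for further vectors on the residual feature space; here only the first vector is shown.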
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
NASA Astrophysics Data System (ADS)
Fu, Shihua; Li, Haitao; Zhao, Guodong
2018-05-01
This paper investigates the evolutionary dynamics and strategy optimisation of a class of networked evolutionary games whose strategy-updating rules incorporate a 'bankruptcy' mechanism, where a player goes bankrupt after a run of consecutively low payoffs from the game. First, using the semi-tensor product of matrices, the evolutionary dynamics of such games are expressed as a higher-order logical dynamic system and then converted into algebraic form, based on which the evolutionary dynamics of the given games can be analysed. Second, the strategy optimisation problem is investigated, and free-type control sequences are designed to maximise the total payoff of the whole game. Finally, an illustrative example shows that the new results are effective.
NASA Astrophysics Data System (ADS)
Grimminck, Dennis L. A. G.; Vasa, Suresh K.; Meerts, W. Leo; Kentgens, P. M.
2011-06-01
A global optimisation scheme for phase-modulated proton homonuclear decoupling sequences in solid-state NMR is presented. Phase modulations, parameterised by DUMBO Fourier coefficients, were optimised using a Covariance Matrix Adaptation Evolution Strategy algorithm. Our method, denoted EASY-GOING homonuclear decoupling, starts with featureless spectra and optimises proton-proton decoupling during either proton or carbon signal detection. On the one hand, our solutions closely resemble (e)DUMBO for moderate sample-spinning frequencies and medium radio-frequency (rf) field strengths. On the other hand, the EASY-GOING approach yielded a superior solution, achieving significantly better resolved proton spectra at a very high rf field strength of 680 kHz.
Bright-White Beetle Scales Optimise Multiple Scattering of Light
NASA Astrophysics Data System (ADS)
Burresi, Matteo; Cortese, Lorenzo; Pattelli, Lorenzo; Kolle, Mathias; Vukusic, Peter; Wiersma, Diederik S.; Steiner, Ullrich; Vignolini, Silvia
2014-08-01
Whiteness arises from diffuse and broadband reflection of light, typically achieved through optical scattering in randomly structured media. In contrast to structural colour due to coherent scattering, white appearance generally requires a relatively thick system comprising randomly positioned high-refractive-index scattering centres. Here, we show that the exceptionally bright white appearance of Cyphochilus and Lepidiota stigma beetles arises from a remarkably optimised anisotropy of intra-scale chitin networks, which act as dense scattering media. Using time-resolved measurements, we show that light propagating in the scales of the beetles undergoes pronounced multiple scattering, associated with the lowest transport mean free path reported to date for low-refractive-index systems. Our light-transport investigation unveils a high level of optimisation that achieves high-brightness white in a thin, low-mass-per-unit-area, anisotropic disordered nanostructure.
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects almost all the characteristics of the original higher order system. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
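The idea of a power law-based local search can be sketched briefly. This is an illustrative fragment only, not the published PLSMO operator: step sizes are drawn from a power-law distribution (mostly small refinements, occasional large jumps) and applied greedily to a best-so-far solution on the classic sphere benchmark; the exponent, minimum step and iteration budget are assumptions.

```python
import random

random.seed(2)

def sphere(x):
    # Classic benchmark: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def power_law_step(alpha=2.5, s_min=1e-3):
    # Inverse-CDF sample from p(s) ~ s**(-alpha), s >= s_min:
    # mostly small exploitation steps, occasional large jumps.
    u = random.random()
    return s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def power_law_local_search(x, f, iters=300):
    fx = f(x)
    for _ in range(iters):
        s = power_law_step()
        cand = [xi + random.choice((-1.0, 1.0)) * s for xi in x]
        fc = f(cand)
        if fc < fx:  # greedy: keep only improvements
            x, fx = cand, fc
    return x, fx

start = [2.0, -1.5, 0.5]
solution, value = power_law_local_search(start, sphere)
```

In PLSMO such a local search would refine the best solutions found by the global SMO phase rather than a fixed starting point.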
Xiaoli Sun; Wengang Li; Jian Li; Yuangang Zu; Chung-Yun Hse; Jiulong Xie; Xiuhua Zhao
2016-01-01
A microwave-assisted extraction (MAE) method with an ethanol-hexane mixture agent was used to extract peony (Paeonia suffruticosa Andr.) seed oil (PSO). The aim of the study was to optimise the extraction for both yield and energy consumption in mixture agent MAE. The highest oil yield (34.49%) and lowest unit energy consumption (14 125.4 J g(-1))...
Optimizing Operational Physical Fitness (Optimisation de L’Aptitude Physique Operationnelle)
2009-01-01
NORTH ATLANTIC TREATY ORGANISATION, RESEARCH AND TECHNOLOGY ORGANISATION, AC/323(HFM-080)TP/200, www.rto.nato.int. RTO Technical Report TR-HFM-080: Optimizing Operational Physical Fitness (Optimisation de l'aptitude physique opérationnelle). Final Report of Task Group 019.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important for real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive character of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. Then, the proposed RMPC is designed to optimise both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
De Gussem, K; Wambecq, T; Roels, J; Fenu, A; De Gueldre, G; Van De Steene, B
2011-01-01
An ASM2da model of the full-scale waste water plant of Bree (Belgium) has been made. It showed very good correlation with reference operational data. This basic model has been extended to include an accurate calculation of the environmental footprint and operational costs (energy consumption, dosing of chemicals and sludge treatment). Two optimisation strategies were compared: lowest cost meeting the effluent consent versus lowest environmental footprint. Six optimisation scenarios have been studied, namely (i) implementation of an online control system based on ammonium and nitrate sensors, (ii) implementation of a control on MLSS concentration, (iii) evaluation of internal recirculation flow, (iv) oxygen set point, (v) installation of mixing in the aeration tank, and (vi) evaluation of the nitrate setpoint for post-denitrification. Both the cost-based and the environmental-impact or Life Cycle Assessment (LCA) based optimisation approaches are able to significantly lower the cost and environmental footprint. However, the LCA approach has some advantages over cost minimisation of an existing full-scale plant. It tends to choose control settings that are more logical: it results in a safer operation of the plant with fewer risks regarding the consents, and in a better effluent at a slightly increased cost.
Rother, E; Cornel, P
2004-01-01
The biofiltration process in wastewater treatment combines filtration and biological processes in one reactor. In Europe it has meanwhile become an accepted technology in advanced wastewater treatment whenever space is scarce and a virtually suspended-solids-free effluent is demanded. Although more than 500 plants are in operation world-wide, there is still a lack of published operational experience to help planners and operators identify potentials for optimisation, e.g. energy consumption or the vulnerability to peak loads. Examples from pilot trials show how nitrification and denitrification can be optimised. Nitrification can be quickly increased by adjusting the DO content of the water. Furthermore, carrier materials like zeolites can store surplus ammonia during peak loads and release it afterwards. Pre-denitrification in biofilters is normally limited by the amount of easily degradable organic substrate, resulting in relatively high requirements for external carbon. The combination of pre-DN, N and post-DN filters is much more advisable for most municipal wastewaters, because the recycle rate can be reduced and external carbon can be saved. As an example, it is shown for a full-scale preanoxic-DN/N/postanoxic-DN plant of 130,000 p.e. how 15% of the energy could be saved by optimising internal recycling and some control strategies.
Optimised in vitro applicable loads for the simulation of lateral bending in the lumbar spine.
Dreischarf, Marcel; Rohlmann, Antonius; Bergmann, Georg; Zander, Thomas
2012-07-01
In in vitro studies of the lumbar spine simplified loading modes (compressive follower force, pure moment) are usually employed to simulate the standard load cases flexion-extension, axial rotation and lateral bending of the upper body. However, the magnitudes of these loads vary widely in the literature. Thus the results of current studies may lead to unrealistic values and are hardly comparable. It is still unknown which load magnitudes lead to a realistic simulation of maximum lateral bending. A validated finite element model of the lumbar spine was used in an optimisation study to determine which magnitudes of the compressive follower force and bending moment deliver results that fit best with averaged in vivo data. The best agreement with averaged in vivo measured data was found for a compressive follower force of 700 N and a lateral bending moment of 7.8 Nm. These results show that loading modes that differ strongly from the optimised one may not realistically simulate maximum lateral bending. The simplified but in vitro applicable loading cannot perfectly mimic the in vivo situation. However, the optimised magnitudes are those which agree best with averaged in vivo measured data. Its consequent application would lead to a better comparability of different investigations. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
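The calibration pattern in this study can be illustrated with a toy grid search. Everything below except the search pattern is an assumption: the finite element model is replaced by a made-up linear surrogate, the candidate load grids and target rotations are invented, and no claim is made about the study's actual response surfaces.

```python
def discrepancy(force, moment, model, targets):
    # Sum of squared differences between model-predicted and averaged
    # in vivo rotations (here: arbitrary illustrative response values).
    return sum((p - t) ** 2 for p, t in zip(model(force, moment), targets))

def calibrate_loads(model, targets, forces, moments):
    # Exhaustive grid search over candidate load magnitudes.
    return min(((discrepancy(f, m, model, targets), f, m)
                for f in forces for m in moments))[1:]

# Hypothetical linear surrogate for the model's segmental rotations
# under a compressive follower force (N) and bending moment (Nm).
def surrogate(force, moment):
    return [0.9 * moment - 0.002 * force, 0.6 * moment - 0.001 * force]

forces = range(100, 1201, 100)                           # candidate forces, N
moments = [round(i / 10, 1) for i in range(50, 101, 2)]  # 5.0 .. 10.0 Nm
targets = surrogate(700, 7.8)    # pretend these are the in vivo averages
best_force, best_moment = calibrate_loads(surrogate, targets, forces, moments)
```

The study reports 700 N and 7.8 Nm as the best-fitting magnitudes; the target values above are rigged so the sketch recovers that pair, purely to show the mechanics of the search.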
Electroconvulsive therapy stimulus titration: Not all it seems.
Rosenman, Stephen J
2018-05-01
To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.
Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil
2013-01-01
In order to increase bioinsecticide production by a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride exhibited significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by a central composite design. The obtained results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L(-1) starch, 30 g L(-1) soya bean and 9 g L(-1) sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) when compared to the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great practical importance.
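The screening step described above estimates each component's main effect from a two-level design. The sketch below is illustrative only: it uses a full two-level factorial as a stand-in for the Plackett-Burman screen, and the responses and coefficients are made up, not the study's data.

```python
def main_effect(design, responses, factor):
    # Difference between the mean response at the high (+1) and low (-1)
    # levels of one factor in a two-level screening design.
    hi = [r for row, r in zip(design, responses) if row[factor] == 1]
    lo = [r for row, r in zip(design, responses) if row[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# 8-run two-level design for 3 of the medium components
# (columns: starch, soya bean, NaCl) -- levels coded -1 / +1.
design = [(s, b, n) for s in (-1, 1) for b in (-1, 1) for n in (-1, 1)]

# Made-up delta-endotoxin responses: starch and soya bean matter most,
# NaCl less (coefficients are illustrative only).
responses = [100 + 20 * s + 15 * b + 5 * n for s, b, n in design]

effects = [main_effect(design, responses, k) for k in range(3)]
```

Factors with large estimated effects would then be carried forward into the central composite design for response-surface optimisation.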
Single tube genotyping of sickle cell anaemia using PCR-based SNP analysis
Waterfall, Christy M.; Cobb, Benjamin D.
2001-01-01
Allele-specific amplification (ASA) is a generally applicable technique for the detection of known single nucleotide polymorphisms (SNPs), deletions, insertions and other sequence variations. Conventionally, two reactions are required to determine the zygosity of DNA in a two-allele system, along with significant upstream optimisation to define the specific test conditions. Here, we combine single tube bi-directional ASA with a ‘matrix-based’ optimisation strategy, speeding up the whole process in a reduced reaction set. We use sickle cell anaemia as our model SNP system, a genetic disease that is currently screened using ASA methods. Discriminatory conditions were rapidly optimised enabling the unambiguous identification of DNA from homozygous sickle cell patients (HbS/S), heterozygous carriers (HbA/S) or normal DNA in a single tube. Simple downstream mathematical analyses based on product yield across the optimisation set allow an insight into the important aspects of priming competition and component interactions in this competitive PCR. This strategy can be applied to any polymorphism, defining specific conditions using a multifactorial approach. The inherent simplicity and low cost of this PCR-based method validates bi-directional ASA as an effective tool in future clinical screening and pharmacogenomic research where more expensive fluorescence-based approaches may not be desirable. PMID:11726702
Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.
Kramar, A; Turk, S; Vrecer, F
2003-04-30
The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.
Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian
2009-06-26
This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system-hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h), the best flow rate was 40 ml/min.
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial execution were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field.
The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
Gnirss, R; Lesjean, B; Adam, C; Buisson, H
2003-01-01
Future stringent phosphorus regulations (down to 50 microg/L in some cases) together with the availability of more cost effective and/or innovative membrane processes, are the bases for this project. In contrast to conventional activated sludge plants, process parameters are not optimised and especially enhanced biological phosphorus (Bio-P) removal in membrane bioreactors (MBRs) are not proven yet. Current practice of P-removal in MBRs is the addition of coagulants in a co-precipitation mode. Enhanced biological phosphorus removal, when adapted to MBR technology, might be a cost-effective process. For very stringent effluent criteria additional P-adsorption on activated clay after membrane filtration can be also an interesting solution. The objective of this research project is to identify and test various phosphorus removal processes or process combinations, including MBR technologies. This should enable us to establish efficient and cost effective P-removal strategies for upgrading small sewage treatment units (up to 10,000 PE), as needed in some decentralised areas of Berlin. In particular, enhanced Bio-P removal technology was developed and optimised in MBR. Combinations of co-precipitation and post-adsorption will be tested when low P-values down to 50 microg/L are required in the effluent. One MBR bench-scale plant of 200 to 250 L and two MBR pilot plants of 1 to 3 m3 each were operated in parallel to a conventional wastewater treatment plant (Ruhleben WWTP, Berlin, Germany). The MBR bench-scale and pilot plants were operated under sludge ages of respectively 15 and 25 days. In both cases, Bio-P was possible, and phosphorus effluent concentration of about 0.1 mg/L could be achieved. A similar effluent quality was observed with the conventional WWTP. Investigations with lab columns indicated that P-adsorption could lead to concentrations down to 50 microg/L and no particle accumulation occurred in the filter media. 
The three tested materials exhibited great differences in break-through curves. Granulated ferric hydroxide (GEH) showed higher capacity than activated alumina and FerroSorpPlus.
Praud, Anne; Champion, Jean-Luc; Corde, Yannick; Drapeau, Antoine; Meyer, Laurence; Garin-Bastuji, Bruno
2012-07-09
Brucella ovis causes an infectious disease responsible for infertility and subsequent economic losses in sheep production. The standard serological test to detect B. ovis infection in rams is the complement fixation test (CFT), which has imperfect sensitivity and specificity in addition to technical drawbacks. Other available tests include indirect enzyme-linked immunosorbent assays (I-ELISA), but no I-ELISA kit has been fully evaluated. The study aimed to compare an I-ELISA kit and the standard CFT. Our study was carried out on serum samples from 4599 rams from the South of France, where the disease is enzootic. A Bayesian approach was used to estimate test characteristics (diagnostic sensitivity, Se, and diagnostic specificity, Sp). The tests were then studied together in order to optimise testing strategies to detect B. ovis. After optimising the cut-off values in order to avoid doubtful results without deteriorating the concordance between the results of the two tests, the I-ELISA appeared to be slightly more sensitive than the CFT (Se I-ELISA=0.917 [0.822; 0.992], 95% Credibility Interval (CrI), compared to Se CFT=0.860 [0.740; 0.967], 95% CrI). However, the CFT was slightly more specific than the I-ELISA (Sp CFT=0.988 [0.947; 1.0], 95% CrI, compared to Sp I-ELISA=0.952 [0.901; 1.0], 95% CrI). The tests were then associated with two different interpretation schemes. The series association increased the specificity of screening and could be used for pre-movement testing in rams from uninfected flocks. The parallel association increased the sensitivity of the testing sequence, thus appearing more suitable for eradicating the disease in infected flocks. The high sensitivity and acceptable specificity of this I-ELISA kit support its potential interest to avoid the limitations of the CFT. The two tests could also be used together or combined with other diagnostic methods, such as semen culture, to improve the testing strategy.
The choice of test sequence and interpretation criteria depends on the epidemiological context, screening objectives and the financial and practical constraints.
NASA Astrophysics Data System (ADS)
Pouli, Paraskevi; Oujja, Mohamed; Castillejo, Marta
2012-02-01
In the last twenty years lasers have acquired an important role in the study and the preservation of Cultural Heritage (CH) objects and monuments, as they have effectively illuminated a number of complex diagnostic and restoration problems. Their unique properties have enabled their use in a wide range of conservation applications, since they ensure interventions with precise control, material selectivity and immediate feedback. Surface cleaning, based on laser ablation, is a delicate, critical and irreversible process, which, given the multitude of materials that may be present on a CH object and the often fragile or precarious condition of the original surfaces, is fraught with many potential complications. Therefore it is crucial to choose the best possible laser cleaning methodology for each individual case, which involves optimising the laser parameters according to material properties, as well as thorough knowledge of the ablation mechanisms involved. In this context the systematic investigation and elucidation of potential damage or side effects occurring upon cleaning is essential, as it delineates the possibilities and limitations of laser ablation and allows the fine-tuning of the operating parameters for a successful cleaning intervention. This paper is an overview of studies investigating the mechanisms responsible for laser-induced discoloration effects. Emphasis is given to the yellowing coloration observed on stonework upon infrared (IR) ablation of pollution encrustations, while the various theories introduced to approach the different physical and/or chemical processes and mechanisms responsible for such side effects are discussed. In this respect, the different laser cleaning methodologies that have been introduced to rectify or prevent discoloration on stonework, based on laser systems with different pulse durations and wavelength characteristics, are presented.
In parallel, the darkening phenomena which occur upon laser irradiation of painted surfaces are also considered. Studies on series of model paints performed in order to understand the sensitivity of pigments to laser irradiation are critically reviewed. In this respect the importance of the optimal wavelength and pulse-duration selection for a safe and controlled laser cleaning intervention is also addressed.
Photonic simulation of entanglement growth and engineering after a spin chain quench.
Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio
2017-11-17
The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.
A support vector machine approach for classification of welding defects from ultrasonic signals
NASA Astrophysics Data System (ADS)
Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming
2014-07-01
Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
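The feature-construction step described above, energies of wavelet packet coefficients per frequency channel, can be sketched in a few lines. The sketch below uses a Haar wavelet for simplicity; the paper's actual wavelet family, decomposition depth and signal lengths are not given here, so those choices are illustrative assumptions.

```python
# Sketch: energy features from a Haar wavelet packet decomposition of an
# ultrasonic echo signal (wavelet family and depth are assumptions here).

def haar_step(x):
    """One level of the orthonormal Haar transform: approximation and detail."""
    s2 = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s2 for i in range(0, len(x), 2)]
    return approx, detail

def wavelet_packet_energies(x, depth):
    """Decompose every node (not just approximations, as in a packet
    transform) to `depth` levels; return each terminal channel's energy."""
    nodes = [x]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return [sum(c * c for c in node) for node in nodes]

signal = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]  # toy echo signal
features = wavelet_packet_energies(signal, depth=2)  # 4 frequency channels
```

Because the transform is orthonormal, the feature vector simply redistributes the signal's total energy across frequency channels, which is what makes it a compact descriptor for an SVM.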
Roadmap to the multidisciplinary design analysis and optimisation of wind energy systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Moreno, S. Sanchez; Zaaijer, M. B.; Bottasso, C. L.
2016-10-03
Here, a research agenda is described to further encourage the application of Multidisciplinary Design Analysis and Optimisation (MDAO) methodologies to wind energy systems. As a group of researchers closely collaborating within the International Energy Agency (IEA) Wind Task 37 for Wind Energy Systems Engineering: Integrated Research, Design and Development, we have identified challenges that will be encountered by users building an MDAO framework. This roadmap comprises 17 research questions and activities recognised to belong to three research directions: model fidelity, system scope and workflow architecture. It is foreseen that sensible answers to all these questions will make it easier to apply MDAO in the wind energy domain. Beyond the agenda, this work also promotes the use of systems engineering to design, analyse and optimise wind turbines and wind farms, to complement existing compartmentalised research and design paradigms.
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Plastic injection moulding is widely used to manufacture a variety of parts. The injection moulding process parameters play an important role in determining product quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, including artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, a part of an electronic night lamp is chosen as the model. Firstly, experimental design was used to determine the effect of the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
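The roulette-wheel selection step described above can be sketched as follows. The profile names and penalty values below are hypothetical stand-ins; in the paper the fitness is an inverted penalty index aggregating several vehicle-dynamics performance parameters from Gensys simulations.

```python
# Sketch of roulette-wheel (fitness-proportionate) selection for picking
# rail profiles for the next GA generation; lower-penalty profiles get a
# proportionally larger slice of the wheel.
import random

def roulette_select(population, fitnesses, rng):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(42)
profiles = ["BV50", "UIC60", "candidate_A"]   # hypothetical candidates
penalties = [4.0, 2.0, 1.0]                   # lower is better
fitness = [1.0 / p for p in penalties]        # inverted penalty index
chosen = [roulette_select(profiles, fitness, rng) for _ in range(1000)]
```

Over many draws the lowest-penalty profile is selected most often, while worse profiles still occasionally survive, which preserves diversity in the population.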
Pre-operative optimisation of lung function
Azhar, Naheed
2015-01-01
The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta 2 agonists, inhaled corticosteroids and systemic corticosteroids, are the main drugs used for this and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function. PMID:26556913
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of virtualised application service environments. In addition, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.
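A minimal sketch of the per-tier sizing idea: model one tier as an M/M/m queue and pick the smallest VM count whose mean response time meets a service-level target. The arrival rate, service rate and SLA value below are illustrative assumptions; the paper's hybrid queueing model and profit-maximising heuristic are considerably more elaborate.

```python
# Sketch: smallest number of VMs for one tier so that the M/M/m mean
# response time meets an SLA (all numeric inputs are illustrative).
from math import factorial

def erlang_c(m, a):
    """Probability an arriving request queues in M/M/m, offered load a = lam/mu."""
    summation = sum(a ** k / factorial(k) for k in range(m))
    top = a ** m / factorial(m) * (m / (m - a))
    return top / (summation + top)

def mean_response_time(lam, mu, m):
    a = lam / mu
    if a >= m:
        return float("inf")  # unstable: not enough servers
    wait = erlang_c(m, a) / (m * mu - lam)  # mean time in queue
    return wait + 1.0 / mu                  # plus mean service time

def vms_needed(lam, mu, sla):
    """Smallest VM count whose mean response time meets the SLA."""
    m = 1
    while mean_response_time(lam, mu, m) > sla:
        m += 1
    return m

# one tier: 50 req/s arrivals, each VM serves 12 req/s, SLA of 150 ms mean
vms = vms_needed(lam=50.0, mu=12.0, sla=0.150)
```

In a multi-tier system this sizing would be repeated per tier under a shared budget, which is where the constrained-optimisation formulation comes in.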
Tesfaye, Tamrat; Sithole, Bruce; Ramjugernath, Deresh; Ndlela, Luyanda
2018-02-01
Commercially processed, untreated chicken feathers are biologically hazardous due to the presence of blood-borne pathogens. Prior to valorisation, it is crucial that they are decontaminated to remove the microbial contamination. The present study focuses on evaluating the best technologies to decontaminate and pre-treat chicken feathers in order to make them suitable for valorisation. Waste chicken feathers were washed with three surfactants (sodium dodecyl sulphate, dimethyl dioctadecyl ammonium chloride and polyoxyethylene (40) stearate) using statistically designed experiments. Process conditions were optimised using response surface methodology with a Box-Behnken experimental design. The data were compared with decontamination using an autoclave. Under optimised conditions, the microbial counts of the decontaminated and pre-treated chicken feathers were significantly reduced, making them safe for handling and use in valorisation applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Biomass supply chain optimisation for Organosolv-based biorefineries.
Giarola, Sara; Patel, Mayank; Shah, Nilay
2014-05-01
This work aims at providing a Mixed Integer Linear Programming modelling framework to help define planning strategies for the development of sustainable biorefineries. The up-scaling of an Organosolv biorefinery was addressed via optimisation of the whole system economics. Three real world case studies were addressed to show the high-level flexibility and wide applicability of the tool to model different biomass typologies (i.e. forest fellings, cereal residues and energy crops) and supply strategies. Model outcomes have revealed how supply chain optimisation techniques could help shed light on the development of sustainable biorefineries. Feedstock quality, quantity, temporal and geographical availability are crucial to determine biorefinery location and the cost-efficient way to supply the feedstock to the plant. Storage costs are relevant for biorefineries based on cereal stubble, while wood supply chains present dominant pretreatment operations costs. Copyright © 2014 Elsevier Ltd. All rights reserved.
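The siting logic behind the MILP can be illustrated with a deliberately tiny stand-in: choose one biorefinery site by enumerating candidate locations and comparing feedstock transport costs. The site names, tonnages and unit costs below are invented for illustration; the real model also covers storage, pretreatment, multi-period planning and many more decision variables.

```python
# Toy stand-in for the supply chain model: pick the biorefinery site that
# minimises total feedstock transport cost, by enumeration (all data invented).

sources = {"forest": 120.0, "stubble": 80.0, "crop": 50.0}  # tonnes available
# transport cost (currency units per tonne) from each source to each site
cost = {
    "site_A": {"forest": 3.0, "stubble": 6.0, "crop": 5.0},
    "site_B": {"forest": 7.0, "stubble": 2.0, "crop": 4.0},
}

def best_site(sources, cost):
    """Return the cheapest site and its total transport cost."""
    def total(site):
        return sum(qty * cost[site][src] for src, qty in sources.items())
    best = min(cost, key=total)
    return best, total(best)

site, c = best_site(sources, cost)
```

At realistic scale this enumeration is replaced by a solver: location, capacity and flow choices become binary and continuous variables in the Mixed Integer Linear Programme.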
rPM6 parameters for phosphorus and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we have introduced a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing the two elements. Two sets of the parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated against 14 radical species, which contain either a phosphorus or a sulphur atom, compared with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly, and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.
On the optimisation of the use of 3He in radiation portal monitors
NASA Astrophysics Data System (ADS)
Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet
2013-02-01
Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has created a capacity problem for manufacturers trying to provide enough detectors to satisfy market demand. In this paper we analyse the design of typical commercial RPMs and try to optimise the detector parameters either to maximise the efficiency with the same amount of 3He or to minimise the amount of gas needed to reach the same detection performance, by reducing the volume or the gas pressure in an optimised design.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
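The core of real-code ant colony optimisation (ACOR) is a ranked solution archive sampled with Gaussian kernels. The sketch below demonstrates that mechanism on a 2-D sphere function; the parameter values follow common ACOR defaults and are assumptions rather than the paper's settings, and the simplex-downhill (SDH) hybrid step and fuzzy penalty handling are omitted.

```python
# Minimal ACOR sketch: a ranked archive of solutions is sampled with
# per-dimension Gaussian kernels centred on archive members.
import math
import random

def acor(f, dim, bounds, rng, archive_size=10, ants=5, iters=150, q=0.2, xi=0.85):
    lo, hi = bounds
    archive = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(archive_size)]
    archive.sort(key=f)
    # Gaussian rank weights: better-ranked members are sampled more often
    w = [math.exp(-(r ** 2) / (2 * (q * archive_size) ** 2)) for r in range(archive_size)]
    for _ in range(iters):
        for _ in range(ants):
            j = rng.choices(range(archive_size), weights=w)[0]
            new = []
            for d in range(dim):
                # kernel width: mean distance from the chosen member to the rest
                sigma = xi * sum(abs(a[d] - archive[j][d]) for a in archive) / (archive_size - 1)
                new.append(min(hi, max(lo, rng.gauss(archive[j][d], sigma + 1e-12))))
            archive.append(new)
        archive.sort(key=f)
        del archive[archive_size:]  # keep only the best `archive_size` solutions
    return archive[0]

sphere = lambda x: sum(v * v for v in x)
best = acor(sphere, dim=2, bounds=(-5.0, 5.0), rng=random.Random(1))
```

The hybridisation studied in the paper would insert a local simplex-downhill refinement of promising archive members inside this loop.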
NASA Astrophysics Data System (ADS)
Dal Bianco, N.; Lot, R.; Matthys, K.
2018-01-01
This work regards the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the full length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
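The optimisation phase above rests on non-dominated sorting: the Pareto front contains the ensembles that no other ensemble beats on both objectives at once. A minimal sketch, with invented (error, diversity) scores standing in for real classifier ensembles:

```python
# Sketch of the non-dominated sort underlying NSGA-II: extract the Pareto
# front of candidate ensembles scored by (classification error, diversity),
# where error is minimised and diversity is maximised. Scores are invented.

def dominates(a, b):
    """a dominates b: no worse on both objectives, strictly better on one.
    Objective tuple: (error to minimise, diversity to maximise)."""
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

ensembles = [(0.10, 0.30), (0.12, 0.45), (0.15, 0.40), (0.08, 0.20), (0.20, 0.50)]
front = pareto_front(ensembles)  # (0.15, 0.40) is dominated by (0.12, 0.45)
```

NSGA-II repeats this sorting, plus a crowding-distance tie-break, across generations; the dynamic selection phase then chooses among the front's ensembles per query instance.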
Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R
2010-08-31
Monophenols are widespread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far, little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO(3) as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time and temperature, and salt addition. Afterwards, we calibrated and validated the method successfully and applied it to the analysis of monophenols in beer samples. 2010 Elsevier B.V. All rights reserved.
Dwell time-based stabilisation of switched delay systems using free-weighting matrices
NASA Astrophysics Data System (ADS)
Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay
2018-01-01
In this paper, we present a quasi-convex optimisation method to minimise an upper bound of the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate non-switching stability of each candidate subsystem. Then, a sufficient condition for the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities (LMIs), the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise the dwell time minimiser controllers. The algorithm solves the problem with successive linearisation of nonlinear conditions.
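Quasi-convexity here means the feasibility of the stability conditions is monotone in the dwell time: if they certify stability for some dwell time, they hold for any larger one. The minimal certified bound can therefore be found by bisection over a feasibility oracle. The oracle below is a hypothetical stand-in for the paper's free-weighting-matrix LMI test, which in practice requires a semidefinite programming solver.

```python
# Bisection over a monotone feasibility oracle for the minimal dwell time.
# `feasible` is a stand-in for an LMI solver call (hypothetical threshold).

def min_dwell_time(feasible, lo, hi, tol=1e-6):
    """Smallest tau in [lo, hi] with feasible(tau) True, assuming
    feasibility is monotone nondecreasing in tau."""
    if not feasible(hi):
        raise ValueError("no feasible dwell time in the given interval")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid   # mid works: shrink from above
        else:
            lo = mid   # mid fails: shrink from below
    return hi

# stand-in oracle: pretend the LMIs are feasible exactly for tau >= 0.73
tau_star = min_dwell_time(lambda tau: tau >= 0.73, lo=0.0, hi=5.0)
```

Each oracle call in the real method is one LMI feasibility test, so the bound is refined to tolerance `tol` in logarithmically many solver calls.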
NASA Astrophysics Data System (ADS)
Massioni, Paolo; Massari, Mauro
2018-05-01
This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting to the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the techniques are explained and some application related to spacecraft flying in close relative motion are shown.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
De Greef, J; Villani, K; Goethals, J; Van Belle, H; Van Caneghem, J; Vandecasteele, C
2013-11-01
Due to ongoing developments in the EU waste policy, Waste-to-Energy (WtE) plants are to be optimized beyond current acceptance levels. In this paper, a non-exhaustive overview of advanced technical improvements is presented and illustrated with facts and figures from state-of-the-art combustion plants for municipal solid waste (MSW). Some of the data included originate from regular WtE plant operation - before and after optimisation - as well as from defined plant-scale research. Aspects of energy efficiency and (re-)use of chemicals, resources and materials are discussed and support, in light of best available techniques (BAT), the idea that WtE plant performance still can be improved significantly, without direct need for expensive techniques, tools or re-design. In first instance, diagnostic skills and a thorough understanding of processes and operations allow for reclaiming the silent optimisation potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed for 99Mo/99mTc and 68Ge/68Ga radionuclide generators as case studies. Optimisation methods of the daughter nuclide build-up versus stand-by time and/or specific activity using mean progress functions were developed for increasing the performance of radionuclide generators. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasting stand-by time of the generator, while the daughter nuclide yield is maintained to a reasonably high extent. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of "early elution schedule" was also developed for increasing the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, in combination with a recently developed integrated elution-purification-concentration system for radionuclide generators, are the most suitable way to operate the generator effectively on the basis of economic use and improvement of purposely suitable quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the improved quality of labelling/scan, and the lowered cost of nuclear medicine procedures.
In addition, a new method of quality control protocol set-up for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma ray spectrometric detection limit, the required limit of impure radionuclide activity and its measurement certainty, with respect to optimising decay/measurement time and the product sample activity used for quality control (QC). The optimisation ensures a certainty of measurement of the specific impure radionuclide and avoids wasting the useful amount of valuable purified/concentrated daughter nuclide product. This process is important for the spectrometric measurement of very low activity of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
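The build-up optimisation has a classical closed form worth stating: for a parent/daughter pair with decay constants lam_p and lam_d, the daughter activity peaks at t_max = ln(lam_d/lam_p) / (lam_d - lam_p), the textbook optimal elution time. A minimal sketch for the 99Mo/99mTc pair (half-lives of about 65.94 h and 6.01 h), ignoring the 99mTc branching fraction and any initial daughter activity for simplicity:

```python
# Worked example: daughter build-up (Bateman equation) and the optimal
# elution time for a 99Mo/99mTc generator. Branching fraction ignored.
import math

def decay_const(half_life_h):
    return math.log(2.0) / half_life_h

def daughter_activity(t, lam_p, lam_d, parent_a0=1.0):
    """Daughter activity at time t (hours), as a fraction of the initial
    parent activity, assuming zero daughter activity at t = 0."""
    return parent_a0 * lam_d / (lam_d - lam_p) * (math.exp(-lam_p * t) - math.exp(-lam_d * t))

lam_p = decay_const(65.94)  # 99Mo
lam_d = decay_const(6.01)   # 99mTc
t_max = math.log(lam_d / lam_p) / (lam_d - lam_p)  # optimal elution time, in hours
```

Eluting much later than t_max wastes stand-by time and lets the specific activity degrade, while eluting much earlier sacrifices yield, which is exactly the trade-off the mean-progress-function optimisation in the abstract formalises.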
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3 , corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ˜0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. 
We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
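The primitive approximation (PA) discussed above can be illustrated with a few lines of numerics. The sketch below is illustrative only, not the authors' code: it assumes ħ = m = ω = 1 and a 1-D harmonic oscillator, factorises the thermal density matrix into P high-temperature slices with the PA, and recovers the exact partition function as P grows.

```python
import numpy as np

def primitive_partition_function(beta, P, xmax=6.0, n=400):
    """Z via the primitive approximation (PA) to the action:
    rho_tau(x, x') ~ free-particle kernel * exp(-tau*(V(x)+V(x'))/2),
    with Z = Tr(rho_tau^P) evaluated on a position grid."""
    tau = beta / P
    x = np.linspace(-xmax, xmax, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2                                  # harmonic potential
    kinetic = np.sqrt(1.0 / (2.0 * np.pi * tau)) * \
        np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * tau))
    rho = kinetic * np.exp(-tau * (V[:, None] + V[None, :]) / 2.0) * dx
    return np.trace(np.linalg.matrix_power(rho, P))

beta = 2.0
exact = 1.0 / (2.0 * np.sinh(beta / 2.0))           # exact oscillator Z
approx = primitive_partition_function(beta, 64)
print(approx, exact)
```

Swapping the symmetric potential split for a Takahashi-Imada or Suzuki-Chin corrected slice would reduce the number of beads P needed for a given accuracy, which is the efficiency question the study quantifies.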
NASA Astrophysics Data System (ADS)
du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.
2016-12-01
The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector, the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data for the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, by adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields, and can also optimise user-defined functionals that can be expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species.
In each scenario studied, a range of array formations is found expressing varying preferences for either functional. Further analyses then allow for the identification of trade-offs between the two key societal objectives of energy production and conservation. This in turn produces information valuable to stakeholders and policymakers when making decisions on array design.
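The shape of such a trade-off can be sketched with a toy scalarisation. In the snippet below, the two quadratic objectives are hypothetical stand-ins for the power and habitat functionals (not OpenTidalFarm's models); sweeping the weight in a weighted-sum objective and running gradient ascent traces out a one-parameter family of compromise "layouts".

```python
import numpy as np

# Two competing toy objectives of a single "layout" parameter x:
# power peaks at x = 1, habitat suitability peaks at x = -1.
def power(x):        return -(x - 1.0)**2
def habitat(x):      return -(x + 1.0)**2
def grad_power(x):   return -2.0 * (x - 1.0)
def grad_habitat(x): return -2.0 * (x + 1.0)

def optimise(w, steps=500, lr=0.05):
    """Gradient ascent on the weighted sum w*power + (1-w)*habitat."""
    x = 0.0
    for _ in range(steps):
        x += lr * (w * grad_power(x) + (1.0 - w) * grad_habitat(x))
    return x

for w in np.linspace(0.0, 1.0, 11):
    x = optimise(w)
    print(f"w={w:.1f}  power={power(x):+.3f}  habitat={habitat(x):+.3f}")
```

Each weight yields one point on the Pareto front; plotting power against habitat across weights exposes the trade-off curve that stakeholders would inspect.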
The development of response surface pathway design to reduce animal numbers in toxicity studies
2014-01-01
Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. 
Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
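For context, the LD50 that these designs target is the midpoint of a quantal dose-mortality curve, and once a design has allocated animals to dose levels, the estimate itself is a routine maximum-likelihood fit. The sketch below uses hypothetical dose levels and mortality counts and a plain logistic model (not the RSP machinery) to show such a fit by grid search.

```python
import numpy as np

def fit_ld50(doses, deaths, n):
    """Grid-search maximum likelihood for a two-parameter logistic
    dose-mortality curve p(d) = 1 / (1 + exp(-b*(log d - log LD50)))."""
    logd = np.log(doses)
    best, best_ll = None, -np.inf
    for mu in np.linspace(logd.min(), logd.max(), 201):   # log LD50 grid
        for b in np.linspace(0.5, 20.0, 100):             # slope grid
            p = np.clip(1.0 / (1.0 + np.exp(-b * (logd - mu))),
                        1e-9, 1.0 - 1e-9)
            ll = np.sum(deaths * np.log(p) + (n - deaths) * np.log(1.0 - p))
            if ll > best_ll:
                best_ll, best = ll, (np.exp(mu), b)
    return best

# Hypothetical dose levels (ug/kg BW) and deaths out of n = 6 per level:
doses  = np.array([300.0, 400.0, 500.0, 600.0])
deaths = np.array([0, 2, 4, 6])
n = 6
ld50, slope = fit_ld50(doses, deaths, n)
print(f"estimated LD50 ~ {ld50:.0f} ug/kg BW")
```

A design such as RSP decides *where* the dose levels sit and how many animals each receives; the better that allocation brackets the LD50, the tighter the confidence interval of this fit.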
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly-available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgCyr -1 to 166 ± 10 PgCyr -1 , bringing the model more in line with an up-scaled flux tower estimate of 133 PgCyr -1 . Strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical to boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO 2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to a distribution system. The integration of DGs in a distribution system has resulted in a network known as an active distribution network, due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), which is a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
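Of the two metaheuristics compared above, PSO is the easier to sketch. The snippet below is a minimal, generic global-best PSO with commonly used coefficient values, run on a toy quadratic surrogate objective; it is illustrative only and does not include the MATPOWER load flow or the paper's actual control variables.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimisation (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49                     # common PSO settings
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy surrogate for "total losses + voltage deviation": a shifted sphere.
best_x, best_f = pso(lambda p: float(np.sum((p - 1.0)**2)))
print(best_x, best_f)
```

In the paper's setting, each candidate position would encode power factor, tap position and curtailment settings, and evaluating `f` would mean running a load flow; the BSA differs mainly in how new candidates are generated from the historical population.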
Sweetapple, Christine; Fu, Guangtao; Butler, David
2014-05-15
This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
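The penalty-based optimisations described above can be mimicked at toy scale. Below, a tiny real-coded genetic algorithm maximises a made-up quadratic "distance" surface (a stand-in for the torque-driven simulation model, peaked at the hop-dominated ratio reported), first freely and then with a penalty pinning the hop fraction, as in the constrained runs.

```python
import numpy as np

def distance(h, j):
    """Toy jump-distance surface over (hop, jump) fractions; illustrative
    only, peaked near the study's optimal hop-dominated ratio."""
    return 14.05 - 30.0 * (h - 0.357)**2 - 30.0 * (j - 0.336)**2

def ga_optimise(penalty_hop=None, pop=60, gens=150, sigma=0.005, seed=1):
    """Real-coded GA: keep the best half, refill with mutated copies.
    An optional quadratic penalty pins the hop fraction, mirroring the
    penalised phase-ratio optimisations."""
    rng = np.random.default_rng(seed)

    def fitness(x):
        fit = distance(x[:, 0], x[:, 1])
        if penalty_hop is not None:
            fit = fit - 1e4 * (x[:, 0] - penalty_hop)**2
        return fit

    x = rng.uniform(0.25, 0.45, (pop, 2))        # (hop, jump) fractions
    for _ in range(gens):
        parents = x[np.argsort(fitness(x))[-pop // 2:]]
        children = parents + rng.normal(0.0, sigma, parents.shape)
        x = np.vstack([parents, children])
    return x[np.argmax(fitness(x))]

free = ga_optimise()
pinned = ga_optimise(penalty_hop=0.33)
print(free, distance(*free), pinned, distance(*pinned))
```

As in the study, pinning a phase fraction away from the free optimum trades a small amount of distance for the prescribed ratio.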
O'Brien, Rosaleen; Fitzpatrick, Bridie; Higgins, Maria; Guthrie, Bruce; Watt, Graham; Wyke, Sally
2016-01-01
Objectives To develop and optimise a primary care-based complex intervention (CARE Plus) to enhance the quality of life of patients with multimorbidity in deprived areas. Methods Six co-design discussion groups involving 32 participants were held separately with multimorbid patients from deprived areas, voluntary organisations, and general practitioners and practice nurses working in deprived areas. This was followed by piloting in two practices and further optimisation based on interviews with 11 general practitioners, 2 practice nurses and 6 participating multimorbid patients. Results Participants endorsed the need for longer consultations, relational continuity and a holistic approach. All felt that training and support of the health care staff were important. Most participants welcomed the idea of additional self-management support, though some practitioners were dubious about whether patients would use it. The pilot study led to changes including a revised care plan, the inclusion of mindfulness-based stress reduction techniques in the support of practitioners and patients, and the streamlining of the written self-management support material for patients. Discussion We have co-designed and optimised an augmented primary care intervention involving a whole-system approach to enhance quality of life in multimorbid patients living in deprived areas. CARE Plus will next be tested in a phase 2 cluster randomised controlled trial. PMID:27068113
Badham, George E; Dos Santos, Scott J; Lloyd, Lucinda Ba; Holdstock, Judy M; Whiteley, Mark S
2018-06-01
Background In previous in vitro and ex vivo studies, we have shown increased thermal spread can be achieved with radiofrequency-induced thermotherapy when using a low power and slower, discontinuous pullback. We aimed to determine the clinical success rate of radiofrequency-induced thermotherapy using this optimised protocol for the treatment of superficial venous reflux in truncal veins. Methods Sixty-three patients were treated with radiofrequency-induced thermotherapy using the optimised protocol and were followed up after one year (mean 16.3 months). Thirty-five patients returned for audit, giving a response rate of 56%. Duplex ultrasonography was employed to check for truncal reflux and compared to initial scans. Results In the 35 patients studied, there were 48 legs, with 64 truncal veins treated by radiofrequency-induced thermotherapy (34 great saphenous, 15 small saphenous and 15 anterior accessory saphenous veins). One year post-treatment, complete closure of all previously refluxing truncal veins was demonstrated on ultrasound, giving a success rate of 100%. Conclusions Using a previously reported optimised, low power/slow pullback radiofrequency-induced thermotherapy protocol, we have shown it is possible to achieve a 100% ablation at one year. This compares favourably with results reported at one year post-procedure using the high power/fast pullback protocols that are currently recommended for this device.
Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda
2013-08-02
We have previously described a protein arraying process based on cell free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, Epoxy, 3D-Epoxy and Polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared to the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system was able to increase by threefold the expression of a representative model protein while conserving recognition by a specific antibody. The amount of expressed protein in DAPA was comparable to those of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalized medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.
Formulation and optimisation of raft-forming chewable tablets containing H2 antagonist
Prajapati, Shailesh T; Mehta, Anant P; Modhia, Ishan P; Patel, Chhagan N
2012-01-01
Purpose: The purpose of this research work was to formulate raft-forming chewable tablets of an H2 antagonist (famotidine) using a raft-forming agent along with antacid- and gas-generating agents. Materials and Methods: Tablets were prepared by wet granulation and evaluated for raft strength, acid neutralisation capacity, weight variation, % drug content, thickness, hardness, friability and in vitro drug release. Various raft-forming agents were used in preliminary screening. A 2³ full-factorial design was used in the present study for optimisation. The amounts of sodium alginate, calcium carbonate and sodium bicarbonate were selected as independent variables. Raft strength, acid neutralisation capacity and drug release at 30 min were selected as responses. Results: Tablets containing sodium alginate had the maximum raft strength compared with other raft-forming agents. The acid neutralisation capacity and in vitro drug release of all factorial batches were found to be satisfactory. The F5 batch was optimised based on maximum raft strength and good acid neutralisation capacity. A drug-excipient compatibility study showed no interaction between the drug and excipients. A stability study of the optimised formulation showed that the tablets were stable at accelerated environmental conditions. Conclusion: It was concluded that raft-forming chewable tablets prepared using optimum amounts of sodium alginate, calcium carbonate and sodium bicarbonate could be an efficient dosage form in the treatment of gastro-oesophageal reflux disease. PMID:23580933
Zipfel, Stephan; Wild, Beate; Groß, Gaby; Friederich, Hans-Christoph; Teufel, Martin; Schellberg, Dieter; Giel, Katrin E; de Zwaan, Martina; Dinkel, Andreas; Herpertz, Stephan; Burgmer, Markus; Löwe, Bernd; Tagay, Sefik; von Wietersheim, Jörn; Zeeck, Almut; Schade-Brittinger, Carmen; Schauenburg, Henning; Herzog, Wolfgang
2014-01-11
Psychotherapy is the treatment of choice for patients with anorexia nervosa, although evidence of efficacy is weak. The Anorexia Nervosa Treatment of OutPatients (ANTOP) study aimed to assess the efficacy and safety of two manual-based outpatient treatments for anorexia nervosa--focal psychodynamic therapy and enhanced cognitive behaviour therapy--versus optimised treatment as usual. The ANTOP study is a multicentre, randomised controlled efficacy trial in adults with anorexia nervosa. We recruited patients from ten university hospitals in Germany. Participants were randomly allocated to 10 months of treatment with either focal psychodynamic therapy, enhanced cognitive behaviour therapy, or optimised treatment as usual (including outpatient psychotherapy and structured care from a family doctor). The primary outcome was weight gain, measured as increased body-mass index (BMI) at the end of treatment. A key secondary outcome was rate of recovery (based on a combination of weight gain and eating disorder-specific psychopathology). Analysis was by intention to treat. This trial is registered at http://isrctn.org, number ISRCTN72809357. Of 727 adults screened for inclusion, 242 underwent randomisation: 80 to focal psychodynamic therapy, 80 to enhanced cognitive behaviour therapy, and 82 to optimised treatment as usual. At the end of treatment, 54 patients (22%) were lost to follow-up, and at 12-month follow-up a total of 73 (30%) had dropped out. 
At the end of treatment, BMI had increased in all study groups (focal psychodynamic therapy 0·73 kg/m(2), enhanced cognitive behaviour therapy 0·93 kg/m(2), optimised treatment as usual 0·69 kg/m(2)); no differences were noted between groups (mean difference between focal psychodynamic therapy and enhanced cognitive behaviour therapy -0·45, 95% CI -0·96 to 0·07; focal psychodynamic therapy vs optimised treatment as usual -0·14, -0·68 to 0·39; enhanced cognitive behaviour therapy vs optimised treatment as usual -0·30, -0·22 to 0·83). At 12-month follow-up, the mean gain in BMI had risen further (1·64 kg/m(2), 1·30 kg/m(2), and 1·22 kg/m(2), respectively), but no differences between groups were recorded (0·10, -0·56 to 0·76; 0·25, -0·45 to 0·95; 0·15, -0·54 to 0·83, respectively). No serious adverse events attributable to weight loss or trial participation were recorded. Optimised treatment as usual, combining psychotherapy and structured care from a family doctor, should be regarded as solid baseline treatment for adult outpatients with anorexia nervosa. Focal psychodynamic therapy proved advantageous in terms of recovery at 12-month follow-up, and enhanced cognitive behaviour therapy was more effective with respect to speed of weight gain and improvements in eating disorder psychopathology. Long-term outcome data will be helpful to further adapt and improve these novel manual-based treatment approaches. German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), German Eating Disorders Diagnostic and Treatment Network (EDNET). Copyright © 2014 Elsevier Ltd. All rights reserved.
Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-01-01
Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
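The four validity measures reported above are simple ratios from a 2×2 table comparing the ICD-coded definition against the chart-review reference standard. The sketch below back-calculates cell counts consistent with the sepsis figures reported; these exact counts are inferred for illustration, not taken from the paper.

```python
def validation_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of a coded
    case definition versus a chart-review reference standard."""
    return {
        "Sn":  tp / (tp + fn),   # true cases the code captured
        "Sp":  tn / (tn + fp),   # non-cases correctly left uncoded
        "PPV": tp / (tp + fp),   # coded cases that are true cases
        "NPV": tn / (tn + fn),   # uncoded patients truly without sepsis
    }

# Illustrative counts for 1001 ICU patients, 604 with chart-review sepsis:
m = validation_metrics(tp=280, fp=5, fn=324, tn=392)
print({k: round(v, 3) for k, v in m.items()})
```

Note how a low Sn with a high Sp yields the pattern described: most coded cases are real (high PPV), but many true cases are missed, so incidence is under-ascertained.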
Maillot, Matthieu; Vieux, Florent; Delaere, Fabien; Lluch, Anne; Darmon, Nicole
2017-01-01
To explore the dietary changes needed to achieve nutritional adequacy across income levels at constant energy and diet cost. Individual diet modelling was used to design iso-caloric, nutritionally adequate optimised diets for each observed diet in a sample of adult normo-reporters aged ≥20 years (n = 1,719) from the Individual and National Dietary Survey (INCA2), 2006-2007. Diet cost was estimated from mean national food prices (2006-2007). A first set of free-cost models explored the impact of optimisation on the variation of diet cost. A second set of iso-cost models explored the dietary changes induced by the optimisation with cost set equal to the observed one. Analyses of dietary changes were conducted by income quintiles, adjusting for energy intake, sociodemographic and socioeconomic variables, and smoking status. The cost of observed diets increased with increasing income quintiles. In free-cost models, the optimisation increased diet cost on average (+0.22 ± 1.03 euros/d) and within each income quintile, with no significant difference between quintiles, but with systematic increases for observed costs lower than 3.85 euros/d. In iso-cost models, it was possible to design nutritionally adequate diets whatever the initial observed cost. On average, the optimisation at iso-cost increased fruits and vegetables (+171 g/day), starchy foods (+121 g/d), water and beverages (+91 g/d), and dairy products (+20 g/d), and decreased the other food groups (e.g. mixed dishes and salted snacks), leading to increased total diet weight (+300 g/d). Those changes were mostly similar across income quintiles, but lower-income individuals needed to introduce significantly more fruit and vegetables than higher-income ones. In France, the dietary changes needed to reach nutritional adequacy without increasing cost are similar regardless of income, but may be more difficult to implement when the budget for food is lower than 3.85 euros/d.
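Individual diet modelling of this kind is, at its core, linear programming: choose food quantities that satisfy nutrient and energy constraints at minimum (or fixed) cost. The miniature example below uses four hypothetical foods with made-up prices and compositions (not the INCA2 data) to find an iso-caloric least-cost diet meeting two nutrient minima.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical foods: [fruit & veg, starch, dairy, snack], per 100 g
cost    = np.array([0.40, 0.15, 0.30, 0.25])   # euros
energy  = np.array([40.0, 150.0, 120.0, 450.0])  # kcal
fibre   = np.array([2.5, 2.0, 0.0, 0.5])       # g
calcium = np.array([25.0, 10.0, 120.0, 20.0])  # mg

# Least-cost diet at exactly 2000 kcal meeting fibre/calcium minima.
res = linprog(
    c=cost,
    A_eq=[energy], b_eq=[2000.0],
    A_ub=[-fibre, -calcium], b_ub=[-25.0, -900.0],  # ">= minimum" form
    bounds=[(0, None)] * 4,
)
print(res.x, res.fun)
```

The study's iso-cost models add one more equality constraint (total cost equal to the observed diet's cost) and minimise departure from the observed diet instead of cost, but the constraint structure is the same.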
Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank
2013-12-01
Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.
Dynamic response of a riser under excitation of internal waves
NASA Astrophysics Data System (ADS)
Lou, Min; Yu, Chenglong; Chen, Peng
2015-12-01
In this paper, the dynamic response of a marine riser under the excitation of internal waves is studied. With the linear approximation, the governing equation of internal waves is given. Based on the rigid-lid boundary condition assumption, the equation is solved by the Thomson-Haskell method, and the velocity field of the internal waves is then obtained from the continuity equation. Combined with the modified Morison formula, the motion equation of the riser is solved in the time domain by the finite element method with the Newmark-β scheme. Computer programs were written to solve the differential equations in the time domain, yielding numerical results for the riser displacement and deformation. It is observed that the internal wave results in a circular shear flow, and that the first two modes have a dominant effect on the dynamic response of the marine riser; at higher modes, the response diminishes rapidly. In different modes of internal waves, the deformation of the riser has different shapes, and the location of maximum displacement shifts. Studies on wave parameters indicate that the wave amplitude plays a considerable role in the response displacement of the riser, while the wave frequency contributes little. Nevertheless, internal waves of high frequency will lead to high-frequency oscillation of the riser, possibly giving rise to fatigue crack extension and partial fatigue failure.
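The Newmark-β update used for the time-domain solution can be written compactly for a single degree of freedom. The sketch below is a generic SDOF integrator, not the paper's finite element riser model; it uses the unconditionally stable average-acceleration variant (γ = 1/2, β = 1/4) and checks it against the exact free-vibration solution.

```python
import numpy as np

def newmark_beta(m, c, k, f, u0, v0, dt, nsteps, gamma=0.5, beta=0.25):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t)."""
    u = np.empty(nsteps + 1); v = np.empty(nsteps + 1); a = np.empty(nsteps + 1)
    u[0], v[0] = u0, v0
    a[0] = (f(0.0) - c * v0 - k * u0) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(nsteps):
        t = (i + 1) * dt
        rhs = (f(t)
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        u[i + 1] = rhs / keff                      # effective static solve
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u, v

# Undamped free vibration: exact solution u(t) = cos(omega*t), omega = 2.
u, _ = newmark_beta(m=1.0, c=0.0, k=4.0, f=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.005, nsteps=2000)
print(u[-1])
```

In the riser problem the same recurrence is applied to the assembled finite element mass, damping and stiffness matrices, with the Morison force supplying f(t) at each node.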
A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake
NASA Astrophysics Data System (ADS)
Ruppert, Stanley D.; Yomogida, Kiyoshi
1992-09-01
For the first time, evidence supporting a smooth, crack-like rupture process is obtained from a major earthquake, the Michoacan earthquake of 1985. Digital strong motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously seen only in theoretical models. The recording of low frequency (0 to 1 Hz) near-field waves, together with the apparently smooth rupture, favors a crack-like model over a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and a systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the south-eastern portion of the fault.
Theoretical background of retrieving Green's function by cross-correlation: one-dimensional case
NASA Astrophysics Data System (ADS)
Nakahara, Hisashi
2006-06-01
Recently, the assertion that the Green's function between two receivers can be reproduced by cross-correlating the records at those receivers has been verified experimentally and theoretically. In this paper, we prove the assertion theoretically for 1-D media with a free surface by using the Thomson-Haskell matrix method. Strictly speaking, one side of the cross-correlation between records at two receivers is the convolution of the Green's function with the autocorrelation function of the source wavelet. This study extends the geometry considered by Claerbout to two receivers vertically apart, and is a special case of the proof by Wapenaar et al., which dealt with 3-D arbitrarily inhomogeneous media. However, the simple geometry of the 1-D problem enables us to carry out the proof without any approximations and to understand the physical background more easily; that is the main advantage of this study. Though a 1-D geometry seems far from reality, it may be sufficient if an appropriate combination of receivers and earthquakes is selected. In fact, such a geometry is often realised in seismological observations by a vertical array of seismographs in the shallow subsurface. We therefore note the possibility of applying the proof in this paper to the estimation of site amplification factors using records from a vertical seismographic array.
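The claim at the heart of this line of work, that cross-correlating records at two receivers recovers the inter-receiver traveltime structure, can be checked numerically. A 1-D toy sketch with a hypothetical pulse speed and receiver spacing (illustrative values throughout):

```python
import numpy as np

# a pulse travelling at speed c past two receivers a distance dx apart
c, dx, dt = 1500.0, 300.0, 0.001               # m/s, m, s (illustrative)
t = np.arange(0.0, 4.0, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)   # Gaussian pulse
rec_a = wavelet(1.0)                           # arrival at receiver A
rec_b = wavelet(1.0 + dx / c)                  # same pulse, later, at receiver B

# one side of the cross-correlation peaks at the inter-receiver traveltime dx/c
xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = (np.arange(xcorr.size) - (t.size - 1)) * dt
peak_lag = lags[np.argmax(xcorr)]
print(round(peak_lag, 3))   # 0.2 s = dx/c
```

The positive-lag peak at dx/c is the 1-D analogue of the traveltime information carried by the Green's function; the paper's contribution is proving the exact relationship, including the source-wavelet autocorrelation factor.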
Measuring the distance between multiple sequence alignments.
Blackburne, Benjamin P; Whelan, Simon
2012-02-15
Multiple sequence alignment (MSA) is a core method in bioinformatics. The accuracy of such alignments may influence the success of downstream analyses such as phylogenetic inference, protein structure prediction, and functional prediction. The importance of MSA has led to the proliferation of MSA methods, with different objective functions and heuristics to search for the optimal MSA. Different methods of inferring MSAs produce different results in all but the most trivial cases. By measuring the differences between inferred alignments, we may be able to develop an understanding of how these differences (i) relate to the objective functions and heuristics used in MSA methods, and (ii) affect downstream analyses. We introduce four metrics to compare MSAs, which incorporate the position in a sequence where a gap occurs or the location on a phylogenetic tree where an insertion or deletion (indel) event occurs. We use both real and synthetic data to explore the information given by these metrics and demonstrate how the different metrics in combination can yield more information about MSA methods and the differences between them. MetAl is a free software implementation of these metrics in Haskell. Source and binaries for Windows, Linux and Mac OS X are available from http://kumiho.smith.man.ac.uk/whelan/software/metal/.
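The metrics are defined over the homology statements an alignment makes. As a simplified illustration (not MetAl's exact definitions), a symmetric-difference distance between two pairwise alignments can be computed from the sets of residue pairs each alignment places in the same column:

```python
def aligned_pairs(row_x, row_y):
    """Set of (i, j) residue-index pairs that an alignment of two sequences
    (given as equal-length gapped rows) states are homologous."""
    pairs, i, j = set(), -1, -1
    for cx, cy in zip(row_x, row_y):
        if cx != '-': i += 1
        if cy != '-': j += 1
        if cx != '-' and cy != '-':
            pairs.add((i, j))
    return pairs

def alignment_distance(aln1, aln2):
    """Normalised symmetric difference of the homology-pair sets (0 = identical)."""
    p1, p2 = aligned_pairs(*aln1), aligned_pairs(*aln2)
    if not p1 and not p2:
        return 0.0
    return len(p1 ^ p2) / len(p1 | p2)

# two alternative alignments of ACGT against AGT
a = ("ACGT", "A-GT")
b = ("ACGT", "AG-T")
print(alignment_distance(a, a))   # 0.0
print(alignment_distance(a, b))   # 0.5
```

MetAl's metrics additionally weight where a gap falls in the sequence or where an indel maps onto a tree; the pair-set comparison above captures only the basic idea that two alignments disagree to the extent that they assert different homologies.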
NASA Astrophysics Data System (ADS)
Ragan-Kelley, M.; Perez, F.; Granger, B.; Kluyver, T.; Ivanov, P.; Frederic, J.; Bussonnier, M.
2014-12-01
IPython has provided terminal-based tools for interactive computing in Python since 2001. The notebook document format and multi-process architecture introduced in 2011 have expanded the applicable scope of IPython into teaching, presenting, and sharing computational work, in addition to interactive exploration. The new architecture also allows users to work in any language, with implementations in Python, R, Julia, Haskell, and several other languages. The language-agnostic parts of IPython have been renamed Jupyter, to better capture the notion that a cross-language design can encapsulate commonalities present in computational research regardless of the programming language being used. This architecture offers components such as the web-based Notebook interface, which supports rich documents that combine code and computational results with text narratives, mathematics, images, video and any media that a modern browser can display. This interface can be used not only in research, but also for publication and education, as notebooks can be converted to a variety of output formats, including HTML and PDF. Recent developments in the Jupyter project include a multi-user environment for hosting notebooks for a class or research group, live notebook collaboration via Google Docs, and better support for languages other than Python.
Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models
NASA Astrophysics Data System (ADS)
Shen, C.; Xia, J.; Mi, B.
2016-12-01
A successful inversion relies on exact forward modeling methods. Accurately calculating multi-mode dispersion curves for a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum in the high-frequency range. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer itself. This phenomenon conflicts with the characteristics of surface waves and results in an erroneous inverted model. By comparing theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution for accurately computing surface-wave phase velocities in models containing a low-velocity layer. Based on the proposed forward-modeling technique, we can achieve correct inversions for these types of models. Synthetic examples prove the effectiveness of our method.
Wiche, Gregg J.; Lent, Robert M.; Rannie, W. F.
1996-01-01
On the basis of three sediment-based chronologies, Fritz et al. (1994) concluded that during the 'Little Ice Age' (about AD 1500 to 1850), the Devils Lake Basin generally had less effective moisture (precipitation minus evaporation) and warmer temperatures than at present. In this comment, we argue that historic data indicate that runoff and effective moisture were greater than at present. The largest nineteenth-century floods (AD 1826, 1852 and 1861) were significantly greater than the twentieth-century floods, and flooding in the Red River of the North Basin occurred more frequently from AD 1800 to 1870 than since 1870. Between AD 1776 and 1870, the ratio of wet to dry years was about 2 to 1. Mean temperatures in all seasons were cooler for 1850-70 than for 1931-60. Lake levels of Devils Lake during the first half of the nineteenth century were higher than they are today, and, even when Devils Lake was almost dry, the salinity was less than the 'diatom-inferred' salinity values that Fritz et al. (1994) estimated for 1800 through about 1850. We acknowledge the importance of high-resolution palaeoclimatic records, but interpretation of these records must be consistent with historic information.
Site-effect estimations for Taipei Basin based on shallow S-wave velocity structures
NASA Astrophysics Data System (ADS)
Chen, Ying-Chi; Huang, Huey-Chu; Wu, Cheng-Feng
2016-03-01
Shallow S-wave velocities have been widely used for earthquake ground-motion site characterization. The S-wave velocity structures of Taipei Basin, Taiwan were investigated using array records of microtremors at 15 sites (Huang et al., 2015). In this study, seven velocity structures are added to the database describing Taipei Basin. The validity of the S-wave velocity structures is first examined using the 1D Haskell method and well-logging data at the Wuku Sewage Disposal Plant (WK) borehole site. The synthetic results match the observed data well at different depths. Based on the S-wave velocity structures at 22 sites, theoretical transfer functions at five different formations of the sedimentary basin are calculated, and predominant frequencies for these formations are estimated from the results. If the S-wave velocity of the Tertiary basement is assumed to be 1000 m/s, the predominant frequencies of the Quaternary sediments lie between 0.3 Hz (WUK) and 1.4 Hz (LEL) in Taipei Basin, while the depth of the sediments, between 0 m (at the edge of the basin) and 616 m (at site WUK), gradually increases from southeast to northwest. Our results show good agreement with available geological and geophysical information.
Barrett, Dominic A.; Leslie, David M.
2010-01-01
In 1984 and 1985, the Oklahoma Department of Wildlife Conservation reintroduced North American river otters (Lontra canadensis) from coastal Louisiana into eastern Oklahoma. Those reintroductions and immigration from Arkansas and possibly northeastern Texas allowed river otters to become reestablished in eastern Oklahoma. Our goals were to determine the contemporary distribution of river otters in central and eastern Oklahoma with voucher specimens, sign surveys, and mail surveys and to compare proportion of positive detections among watersheds. We report new distributional records with voucher specimens from seven counties (Adair, Bryan, Coal, Johnston, McIntosh, Okfuskee, Tulsa) in Oklahoma. We also provide locality information for specimens collected from four counties (Haskell, McCurtain, Muskogee, Wagoner) where river otters were described in published literature but no voucher specimens existed. During winter and spring 2006 and 2007, we visited 340 bridge sites in 28 watersheds in eastern and central Oklahoma and identified river otter signs in 16 counties where river otters were not previously documented in published literature or by voucher specimens. Proportion of positive sites within each watershed ranged 0–100%. Mail surveys suggested that river otters occurred in eight additional counties where they were not previously documented by published literature, voucher specimens, or sign-survey efforts.
McCoull, William; Addie, Matthew S; Birch, Alan M; Birtles, Susan; Buckett, Linda K; Butlin, Roger J; Bowker, Suzanne S; Boyd, Scott; Chapman, Stephen; Davies, Robert D M; Donald, Craig S; Green, Clive P; Jenner, Chloe; Kemmitt, Paul D; Leach, Andrew G; Moody, Graeme C; Gutierrez, Pablo Morentin; Newcombe, Nicholas J; Nowak, Thorsten; Packer, Martin J; Plowright, Alleyn T; Revill, John; Schofield, Paul; Sheldon, Chris; Stokes, Steve; Turnbull, Andrew V; Wang, Steven J Y; Whalley, David P; Wood, J Matthew
2012-06-15
A novel series of DGAT-1 inhibitors was discovered from an oxadiazole amide high throughput screening (HTS) hit. Optimisation of potency and ligand lipophilicity efficiency (LLE) resulted in a carboxylic acid containing clinical candidate 53 (AZD3988), which demonstrated excellent DGAT-1 potency (0.6 nM), good pharmacokinetics and pre-clinical in vivo efficacy that could be rationalised through a PK/PD relationship. Copyright © 2012 Elsevier Ltd. All rights reserved.
On some properties of bone functional adaptation phenomenon useful in mechanical design.
Nowak, Michał
2010-01-01
The paper discusses some unique properties of the trabecular bone functional adaptation phenomenon that are useful in mechanical design. On the basis of observations of the biological process and the principle of constant strain energy density on the surface of the structure, a generic structural optimisation system has been developed. Such an approach makes it possible to satisfy the mechanical theorem for the stiffest design, comprising the optimisation of size, shape and topology, using concepts known from biomechanical studies. A biomimetic solution of multiple-load problems is also presented.
A big data approach for climate change indicators processing in the CLIP-C project
NASA Astrophysics Data System (ADS)
D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni
2016-04-01
Defining and implementing processing chains with multiple (e.g. tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases, such as climate change indicators. This is usually done via scripts (e.g. bash) on the client side and requires climate scientists to implement and replicate workflow-like control-logic aspects in their scripts (which may be error-prone), along with the expected application-level part. Moreover, the large amount of data and the strong I/O demand pose additional performance challenges. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimization, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists define and implement the workflows related to their experiments by exploiting a more declarative, efficient and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience and will then provide some insights into the implementation of some real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project, an EU FP7 project aiming at providing access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private-sector decision makers. All the proposed use cases have been implemented using the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates, and optimises the execution of multiple scientific data analytics and visualization tasks.
Real-time workflow monitoring execution is also supported through a graphical user interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, maps generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on a HPC cluster of 8-nodes (16-cores/node) of the Athena Cluster available at the CMCC Supercomputing Centre. Benchmark results will be also presented during the talk.
Tanlock loop noise reduction using an optimised phase detector
NASA Astrophysics Data System (ADS)
Al-kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh
2013-06-01
This article proposes a time-delay digital tanlock loop (TDTL), which uses a new phase detector (PD) design that is optimised for noise reduction making it amenable for applications that require wide lock range without sacrificing the level of noise immunity. The proposed system uses an improved phase detector design which uses two phase detectors; one PD is used to optimise the noise immunity whilst the other is used to control the acquisition time of the TDTL system. Using the modified phase detector it is possible to reduce the second- and higher-order harmonics by at least 50% compared with the conventional TDTL system. The proposed system was simulated and tested using MATLAB/Simulink using frequency step inputs and inputs corrupted with varying levels of harmonic distortion. A hardware prototype of the system was implemented using a field programmable gate array (FPGA). The practical and simulation results indicate considerable improvement in the noise performance of the proposed system over the conventional TDTL architecture.
Consideration of plant behaviour in optimal servo-compensator design
NASA Astrophysics Data System (ADS)
Moase, W. H.; Manzie, C.
2016-07-01
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.
3D interlock design 100% PVDF piezoelectric to improve energy harvesting
NASA Astrophysics Data System (ADS)
Talbourdet, Anaëlle; Rault, François; Lemort, Guillaume; Cochrane, Cédric; Devaux, Eric; Campagne, Christine
2018-07-01
Piezoelectric textile structures based on 100% poly(vinylidene fluoride) (PVDF) were developed and characterised. Multifilaments of 246 tex were produced by melt spinning. The mechanical stretching during the process provides PVDF fibres with a piezoelectric β-phase content of up to 97%, as measured by FTIR experiments. Several studies have been carried out on piezoelectric PVDF-based flexible structures (films or textiles); the aim of this study is to investigate the differences between 2D and 3D woven fabrics made from 100% piezoelectric PVDF multifilament yarns optimised with respect to the piezoelectric crystalline phase. The textile structures were poled after the weaving process, and a maximum output voltage of 2.3 V was observed on the 3D weave under compression in DMA tests. Energy harvesting is optimised in a 3D interlock thanks to the stressing of the multifilaments through the thickness. The addition of a resistor made it possible to measure an energy of 10.5 μJ·m⁻² over 10 compression-stress cycles of 5 s each.
NASA Astrophysics Data System (ADS)
Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.
2018-04-01
Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions, and it has great potential for applications in the residential sector. This study aims to develop a methodology for the thermal-economic optimisation of a small-scale micro-gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro-gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, less than half of the system's expected lifetime.
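The paper's objective function and constraints are specific to the turbine model, but the genetic-algorithm machinery itself (selection, crossover, mutation) can be sketched on a toy profit function with a known optimum. Python is used here for brevity; all names and parameters are illustrative:

```python
import random

def genetic_algorithm(profit, bounds, pop_size=40, generations=150,
                      mutation_rate=0.2, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with the best individual tracked across generations."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=profit)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if profit(a) >= profit(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]  # blend crossover
            if rng.random() < mutation_rate:                       # Gaussian mutation
                k = rng.randrange(dim)
                lo, hi = bounds[k]
                child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = children
        cand = max(pop, key=profit)
        if profit(cand) > profit(best):
            best = cand
    return best

# toy concave "annual profit" with its optimum at (3, 5), purely illustrative
best = genetic_algorithm(lambda x: -(x[0] - 3) ** 2 - (x[1] - 5) ** 2,
                         bounds=[(0.0, 10.0), (0.0, 10.0)])
print([round(b, 1) for b in best])
```

In the paper the chromosome would encode the turbine design variables and the fitness would be the constrained annual worth; the sketch only shows the search loop.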
Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line
NASA Astrophysics Data System (ADS)
Timings, Julian P.; Cole, David J.
2012-06-01
A driver model is presented that is capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique forms part of the solution to the motor-racing objective of minimising lap time. The new approach to formulating the minimum-lap-time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm which can be formulated as a computationally efficient convex quadratic programming problem.
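The full formulation couples a linearised vehicle model to the track, but the flavour of the resulting convex QP can be shown on a toy problem: choosing lateral offsets from a track reference that trade a quadratic curvature penalty against tracking, whose unconstrained minimiser is a single linear solve (all values illustrative):

```python
import numpy as np

# toy "racing line" smoothing: lateral offsets u along the track that trade a
# curvature penalty (second differences) against tracking a reference line
n = 50
s = np.linspace(-1.0, 1.0, n)
ref = 1.0 - np.abs(s)                  # hypothetical reference with a sharp apex
D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference (curvature) operator
lam = 1e-3                             # tracking weight relative to curvature
# convex quadratic cost  ||D2 u||^2 + lam * ||u - ref||^2 ;
# its unconstrained minimiser solves (D2'D2 + lam I) u = lam * ref
u = np.linalg.solve(D2.T @ D2 + lam * np.eye(n), lam * ref)
print(np.abs(D2 @ u).max() < np.abs(D2 @ ref).max())   # True: the corner is rounded
```

The paper's QP additionally carries the linearised vehicle dynamics and handling-limit constraints, which is what makes a convex formulation (rather than a generic nonlinear program) valuable for speed and robustness.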
Optimised mounting conditions for poly (ether sulfone) in radiation detection.
Nakamura, Hidehito; Shirakawa, Yoshiyuki; Sato, Nobuhiro; Yamada, Tatsuya; Kitamura, Hisashi; Takahashi, Sentaro
2014-09-01
Poly (ether sulfone) (PES) is a candidate for use as a scintillation material in radiation detection. Its characteristics, such as its emission spectrum and its effective refractive index (based on the emission spectrum), directly affect the propagation of light generated to external photodetectors. It is also important to examine the presence of background radiation sources in manufactured PES. Here, we optimise the optical coupling and surface treatment of the PES, and characterise its background. Optical grease was used to enhance the optical coupling between the PES and the photodetector; absorption by the grease of short-wavelength light emitted from PES was negligible. Diffuse reflection induced by surface roughening increased the light yield for PES, despite the high effective refractive index. Background radiation derived from the PES sample and its impurities was negligible above the ambient, natural level. Overall, these results serve to optimise the mounting conditions for PES in radiation detection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm, and the alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration where the master node and slave nodes are connected, so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
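The paper's MS-DSVM solves SVM sub-problems at the slave nodes. The master-slave consensus structure of ADMM itself can be illustrated on a simpler distributed least-squares problem, where each slave holds a data shard and the master averages and broadcasts (a sketch under these simplifying assumptions, not the MS-DSVM algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
# three "slave" nodes, each holding one shard of a least-squares problem
shards = [(rng.standard_normal((30, 3)), rng.standard_normal(30)) for _ in range(3)]

rho = 1.0                              # ADMM penalty parameter
z = np.zeros(3)                        # master's consensus variable
xs = [np.zeros(3) for _ in shards]     # slaves' local variables
us = [np.zeros(3) for _ in shards]     # scaled dual variables
for _ in range(500):
    # slave step: each node solves its local regularised least-squares problem
    for i, (A, b) in enumerate(shards):
        xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(3),
                                A.T @ b + rho * (z - us[i]))
    # master step: average the slaves' results and broadcast the consensus
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
    # dual step: each node accumulates its disagreement with the master
    us = [u + x - z for x, u in zip(xs, us)]

# the consensus variable matches the centralised solution on the stacked data
A_all = np.vstack([A for A, _ in shards])
b_all = np.hstack([b for _, b in shards])
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print(np.allclose(z, x_star, atol=1e-3))   # True
```

Replacing the quadratic slave objective with the hinge-loss SVM objective (and adding over-relaxation of the primal update) gives the structure the paper analyses.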
Semantic distance as a critical factor in icon design for in-car infotainment systems.
Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P
2017-11-01
In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fabrication and optimisation of a fused filament 3D-printed microfluidic platform
NASA Astrophysics Data System (ADS)
Tothill, A. M.; Partridge, M.; James, S. W.; Tatam, R. P.
2017-03-01
A 3D-printed microfluidic device was designed and manufactured using a low cost (2000) consumer grade fused deposition modelling (FDM) 3D printer. FDM printers are not typically used for, nor generally capable of, producing the fine detailed structures required for microfluidic fabrication. In this work, however, the optical transparency of the device was improved through manufacturing optimisation to the point that optical colorimetric assays can be performed in a 50 µl device. A colorimetric enzymatic cascade assay was optimised using glucose oxidase and horseradish peroxidase for the oxidative coupling of aminoantipyrine and chromotropic acid, producing a blue quinoneimine dye with a broad absorbance peaking at 590 nm for the quantification of glucose in solution. For comparison, the assay was run in standard 96-well plates with a commercial plate reader. The results show accurate and reproducible quantification of 0-10 mM glucose solutions using the 3D-printed microfluidic optical device, with performance comparable to that of the plate-reader assay.
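Quantification in such colorimetric assays typically rests on a Beer-Lambert-style linear calibration of absorbance against glucose standards. A minimal sketch with hypothetical calibration data (the numbers below are illustrative, not the paper's measurements):

```python
import numpy as np

# hypothetical calibration: absorbance at 590 nm versus glucose standards;
# Beer-Lambert predicts a linear response over the assay's working range
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])         # mM standards
absorb = np.array([0.02, 0.21, 0.40, 0.61, 0.79, 1.00])  # measured A590 (illustrative)

slope, intercept = np.polyfit(conc, absorb, 1)           # least-squares line

def quantify(a590):
    """Invert the calibration line to estimate glucose concentration (mM)."""
    return (a590 - intercept) / slope

print(round(quantify(0.50), 2))   # 4.95 mM for a mid-range absorbance
```

A calibration of this form is how a 50 µl microfluidic reading and a 96-well plate-reader reading can be compared on a common concentration scale.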
Dalvadi, Hitesh; Patel, Nikita; Parmar, Komal
2017-05-01
The aim of the present investigation is to improve the dissolution rate of the poorly soluble drug Zotepine via a self-microemulsifying drug delivery system (SMEDDS). A ternary phase diagram with oil (oleic acid), surfactant (Tween 80) and co-surfactant (PEG 400) at the apices was used to identify the efficient self-microemulsifying region. A Box-Behnken design was implemented to study the influence of the independent variables, and principal component analysis was used to identify the critical variables. The liquid SMEDDS were characterised by macroscopic evaluation, % transmission, emulsification time and in vitro drug release studies. The optimised formulation OL1 was converted into solid SMEDDS (S-SMEDDS) by using Aerosil® 200 as an adsorbent in a 3:1 ratio. The S-SMEDDS were characterised by SEM, DSC, globule size (152.1 nm), zeta potential (-28.1 mV), % transmission (98.75%) and in vitro release (86.57% at 30 min). The optimised solid SMEDDS formulation showed faster drug release than a conventional Zotepine tablet.
Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M
2016-01-01
Pretreatment of high free fatty acid rubber seed oil (RSO) via an esterification reaction has been investigated using a pilot-scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries are studied. Cavities are induced by an assisting double-diaphragm pump in the range of 1-3.5 bar inlet pressure. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar reduced the RSO acid value from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. Reaction parameters were optimised using response surface methodology and found to be a methanol-to-oil ratio of 6:1, a catalyst concentration of 8 wt%, a reaction time of 30 min and a reaction temperature of 55°C. The reaction time of HC was threefold shorter, and its esterification efficiency fourfold higher, than mechanical stirring, which makes the HC process more environmentally friendly. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ławryńczuk, Maciej
2017-03-01
This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
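The successive-linearisation idea can be reduced to a scalar toy plant: at each sampling instant the nonlinear model is linearised about the current state, the finite-horizon tracking cost becomes a quadratic in the input sequence, and only the first move is applied. A sketch with illustrative values (not the boiler-turbine model, and without the constraints and Kalman filter of the paper):

```python
import numpy as np

dt, lam, N = 0.05, 1e-4, 5              # sample time, input weight, horizon
f = lambda x, u: x + dt * (-x**3 + u)   # toy scalar nonlinear plant

def mpc_step(x, r):
    """Linearise the plant about the current state, then minimise the
    finite-horizon quadratic tracking cost (an unconstrained QP)."""
    a = 1.0 - 3.0 * dt * x**2           # d f / d x at the operating point
    b = dt                              # d f / d u (exact: the model is affine in u)
    c = f(x, 0.0) - a * x               # affine term of the linearised recursion
    free = np.empty(N)                  # response with all future inputs zero
    G = np.zeros((N, N))                # step-response (dynamic) matrix
    xf = x
    for i in range(N):
        xf = a * xf + c
        free[i] = xf
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # minimise sum (x_i - r)^2 + lam * sum u_i^2 over the input sequence
    U = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r - free))
    return U[0]                         # receding horizon: apply only the first move

x, r = 2.0, 0.5                         # initial state and set-point
for _ in range(200):
    x = f(x, mpc_step(x, r))            # re-linearise at every sampling instant
print(round(x, 2))   # settles near the set-point 0.5
```

Because a fresh linearisation is taken at every sample, the controller tracks the nonlinear plant closely without repeatedly solving a nonlinear program, which is the computational argument the paper makes for the boiler-turbine unit.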
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
NASA Astrophysics Data System (ADS)
Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.
2015-10-01
In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing, setting different prices with arbitrage considerations. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed, and a fast feature selection method was employed to select the more efficient features. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid optimisation methods based on the particle swarm optimisation (PSO) algorithm and its binary version, on PSO and the discrete firefly algorithm, and on a hybrid of error back-propagation and a genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
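GSA treats candidate solutions as masses whose fitness determines their gravitational pull, with the gravitational constant decaying over the run. A minimal continuous (non-binary) sketch on the sphere function, with illustrative parameters rather than the paper's settings:

```python
import numpy as np

def gsa(func, bounds, n_agents=25, iters=200, g0=100.0, alpha=20.0, seed=3):
    """Minimal gravitational search algorithm: agents are masses whose fitness
    sets their gravitational pull; the constant G decays over the iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_agents, dim))
    v = np.zeros((n_agents, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([func(p) for p in x])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        # masses from normalised fitness (minimisation: best agent is heaviest)
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)     # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                r = np.linalg.norm(x[j] - x[i])
                acc[i] += rng.random() * G * M[j] * (x[j] - x[i]) / (r + 1e-12)
        v = rng.random((n_agents, dim)) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

best_x, best_f = gsa(lambda p: float(np.sum(p**2)), bounds=[(-5.0, 5.0)] * 2)
print(best_f)   # close to 0 for the sphere function
```

In the paper this continuous search optimises the connection weights, while the binary variant (BGSA) selects the architecture; the sketch shows only the continuous mechanics.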