Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
SES 7.4: Data Analytics in Manufacturing and Services
Time:
Wednesday, 28/Jun/2017:
4:30pm - 5:50pm

Session Chair: Mika Lohtander
Location: Aula P (first floor)

Presentations

86. Advanced use of data as an enabler for adaptive production control using mathematical optimization – an application of Industry 4.0 principles

Johan Vallhagen1,2, Torgny Almgren1,2, Karin Thörnblad1

1GKN Aerospace Engine Systems, Sweden; 2Chalmers University of Technology, 412 96 Gothenburg SWEDEN

It has long been a well-known fact that variation leads to production inefficiency. The reduction of variation has therefore been one of the starting points for successful production control strategies such as Lean and variants of the Toyota Production System. Some businesses do not, however, lend themselves easily to this type of standards and logic. An example is the production of jet engine and aircraft components, which are often produced in functional workshops. For various reasons, dedicated product flows are difficult to justify, resulting in solutions where a large mix of low-volume products has to share a limited set of resources. This may create complex flows where the planning and control conditions are subject to constant change.

This type of production logic therefore demands a more adaptive production control to reach high efficiency, something that has previously been hard to achieve due to the lack of the required data and computational methods. The use of modern industrial IT solutions enables real-time access to large amounts of data, which creates new possibilities when these data are combined with modern data management and recent findings in optimization methodology.

GKN already uses advanced optimization algorithms to schedule individual production cells, for example in a heat treatment facility that is a shared resource for many products. The results in the shop areas where this has been implemented show considerably improved throughput and shorter lead times. Experience so far also indicates that this type of production control could be used much more frequently and in other workshop areas if the required planning data were made available more easily. This can make a big difference, especially in production cells that support many value streams, but also in processes with short cycle times. To make the scheduling even better and more reliable, more exact data are needed, not only assumptions and standard times. This can be accomplished by logging production data to obtain actual cycle times, availability, quality yield, etc. for each product and operation.

The paper describes how the required information infrastructure has been designed and how it can be combined with this novel type of optimized adaptive planning to achieve a significant improvement in production efficiency. The solution is based on a system architecture and information infrastructure with a middleware that provides a specific production cell with all the relevant planning data. These data include information about where the products are in the production cell as well as in the rest of the flow; the status of these products and their influence on the content and characteristics of the upcoming processes; and the condition of the production equipment. The work reported in this paper presents results from two research projects: an EU project under H2020 and a project with Swedish national funding.
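The abstract proposes logging production data so that scheduling can use actual cycle times, availability and quality yield per product and operation rather than standard times. A minimal sketch of such per-operation aggregation (field names and the event log are illustrative, not taken from the paper):

```python
from collections import defaultdict
from statistics import mean

# Illustrative event log: one record per completed operation
# (fields and values are hypothetical, not from the paper).
log = [
    {"op": "heat_treat", "cycle_min": 42.0, "ok": True},
    {"op": "heat_treat", "cycle_min": 45.5, "ok": True},
    {"op": "heat_treat", "cycle_min": 44.0, "ok": False},
    {"op": "milling",    "cycle_min": 18.2, "ok": True},
]

def operation_metrics(events):
    """Aggregate actual mean cycle time and quality yield per operation."""
    groups = defaultdict(list)
    for e in events:
        groups[e["op"]].append(e)
    return {
        op: {
            "mean_cycle_min": round(mean(e["cycle_min"] for e in recs), 2),
            "yield": sum(e["ok"] for e in recs) / len(recs),
        }
        for op, recs in groups.items()
    }

metrics = operation_metrics(log)
```

In a real deployment these aggregates would be computed by the middleware from machine logs and fed to the scheduling optimizer; the sketch only shows the shape of that aggregation step.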


378. How to support storage process in dismantling facility with IT solutions? – case study

Izabela Kudelska, Monika Kosacka, Karolina Werner-Lewandowska

Poznan University of Technology, Poland

Warehousing is becoming one of the most important processes carried out in a dismantling facility. Legal requirements, a company's limitations in terms of resources (space, people, money) and the variety of stored parts (different sizes, lack of standards) cause many problems in the warehouse process. The efficiency of warehousing becomes increasingly important due to the need to find new opportunities to grow the competitive advantage of a dismantling station. The aim of the work is to develop the concept of an IT tool supporting the decision-making process related to parts allocation in the warehouse, using the example of a selected dismantling station in Poland. Classification of parts and their proper storage will bring benefits in the context of implementing the concept of sustainable development in practice. The article has a demonstrative-concept character with elements of a case study.


358. BigBench workload executed by using Apache Flink

Sonia Bergamaschi, Luca Gagliardelli, Giovanni Simonini, Song Zhu

Università di Modena e Reggio Emilia, Italy

Many of the challenges that have to be faced in Industry 4.0 involve the management and analysis of huge amounts of data (e.g. sensor data management and machine-fault prediction in industrial manufacturing, web-log analysis in e-commerce). To handle so-called Big Data management and analysis, a plethora of frameworks has been proposed in the last decade. Many of them focus on the parallel processing paradigm, such as MapReduce, Apache Hive, and Apache Flink. However, in this jungle of frameworks, the performance evaluation of these technologies is not a trivial task, and it strictly depends on the application requirements. The scope of this paper is to compare two of the most employed and promising frameworks for managing big data: Apache Flink and Apache Hive, which are general-purpose distributed platforms under the umbrella of the Apache Software Foundation. To evaluate these two frameworks we use the BigBench benchmark, developed for Apache Hive. We re-implemented the most significant queries of the Apache Hive BigBench to make them work on Apache Flink, in order to be able to compare the results of the same queries executed on both frameworks. Our results show that Apache Flink, if configured well, is able to outperform Apache Hive.
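The BigBench queries discussed here are largely group-and-aggregate workloads over clickstream and sales tables. As a rough illustration of the shape of one such query (this is plain Python standing in for the distributed Flink/Hive execution; the schema and data are invented, not the authors' actual benchmark code):

```python
from collections import Counter

# Illustrative click log; the schema is a stand-in for
# BigBench's clickstream tables, not the real benchmark data.
clicks = [
    {"user": "u1", "item": "i1"},
    {"user": "u1", "item": "i2"},
    {"user": "u2", "item": "i1"},
    {"user": "u3", "item": "i1"},
    {"user": "u2", "item": "i3"},
]

def top_items(events, k):
    """Count clicks per item and return the k most clicked,
    i.e. a GROUP BY item / ORDER BY count DESC / LIMIT k query."""
    counts = Counter(e["item"] for e in events)
    return counts.most_common(k)

result = top_items(clicks, 2)
```

On Flink or Hive the same logical query is expressed with groupBy/aggregate operators (or HiveQL) and executed in parallel over partitioned data; the sketch only conveys the query shape being benchmarked.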


32. Smart Data Hub: Retrofit solution to acquire process-inherent knowledge

Dennis Cüneyt Bakir, Tobias Feickert, Robin Bakir

Innovator_Institut, Germany

The full paper shows how a profound understanding of complex (and up to now not assessable) data ensures more resource-efficient production processes. The focal point is the description of the development of the now existing retrofit solution, named Smart Data Hub (SDA). This industrial integration device serves as an easy-to-use enabler for smart production, even in overaged production systems. Furthermore, it is quite handy and able to communicate with almost every existing sensor through a unique addressing option.

The validation is based on an industry-driven problem regarding blow mould production of plastic goods. Existing and comparable solutions imply a cost relationship of about 1:750 and deliver a sample rate of 1:0.05. The SDA is thus more cost-efficient, delivers higher accuracy and a higher sample frequency, and does not require an extensive IT architecture, which makes it well suited as an Industry 4.0 enabler for SMEs.

The SDA, which serves as an add-on device bound to or inserted into the mould tool itself, was first used to assess data on the specific internal conditions of closed overpressure moulding processes. Thus, for each product, the ideal temperature and pressure conditions were determined in the form of a specific recipe. Overlong cycle times of the production process were trimmed by more than 17.5%, without a loss in product quality.
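One simple way to turn logged in-mould sensor data into a per-product recipe, as the abstract describes, is to derive a target window per signal from cycles that produced good parts. The sketch below is a hypothetical illustration, not the authors' method; the field names, values, and the mean-plus-band rule are all assumptions:

```python
from statistics import mean, stdev

# Illustrative sensor readings from cycles that produced good parts
# (values and field names are hypothetical, not from the paper).
good_cycles = [
    {"temp_c": 182.0, "pressure_bar": 8.1},
    {"temp_c": 185.5, "pressure_bar": 8.4},
    {"temp_c": 183.2, "pressure_bar": 8.2},
    {"temp_c": 184.1, "pressure_bar": 8.3},
]

def derive_recipe(samples, band=2.0):
    """Derive a target window (mean +/- band * stdev) per signal,
    a simple stand-in for a per-product process recipe."""
    recipe = {}
    for key in samples[0]:
        values = [s[key] for s in samples]
        m, s = mean(values), stdev(values)
        recipe[key] = (round(m - band * s, 2), round(m + band * s, 2))
    return recipe

recipe = derive_recipe(good_cycles)
```

A production cell could then flag cycles whose readings drift outside the derived window, which is one way such recipes support trimming cycle times without losing quality.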



 
Conference: FAIM 2017
Conference Software - ConfTool Pro 2.6.110
© 2001 - 2017 by H. Weinreich, Hamburg, Germany