Big data – take advantage of the benefits
Do you collect huge amounts of data from sensors, machine logs, customer databases or remote platforms? And do you wonder how you could generate revenue from this “big data” and cater for ever-changing service requirements? With our intelligent analyses, you can use your data to obtain important information that helps you identify and rectify faults early on while optimizing your processes. In four steps, we turn your big data into valuable smart data, which we then use to develop smart services – working with you and on your behalf.
The progress achieved through digitization benefits all areas of industry, e.g. mechanical and plant engineering, production and manufacturing, medical technology or the automotive industry.
FOUR STEPS TO A SMART SERVICE
1. USE CASE
What needs to be optimized?
Working interactively with you and your experts, we come up with a use case for individual improvements. During an initial discussion, we determine which errors or faults need to be identified from a data perspective and where there is room for improvement. Our data scientists and your experts then hold further discussions during which they develop and enhance their examination and assessment of the data. The result of this important first step is the business value definition of the use case.
AT A GLANCE
- Business value definition
- Data identification
2. PROOF OF CONCEPT
Does it work?
Our data scientists prepare your data specifically for your use case. Our key area of expertise lies in developing an individual algorithm, drawing for example on machine learning or cognitive intelligence. This allows patterns, irregularities and outliers to be identified and previously unknown causal links to be revealed. The basis for optimization is now in place.
AT A GLANCE
- Algorithm development
Your raw data ...
Structured sensor data from an industrial machine
Data from sensor measurements are usually collected in the form of time series. Because sensor data are often subject to interference and contain errors, they need to be preprocessed before they can be analyzed further. This includes, for example, finding and eliminating measurement and transmission errors. To preprocess your data optimally, we develop tailor-made solutions and use standardized approaches.
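As a simple illustration of this kind of preprocessing, the sketch below replaces implausible spikes in a time series with a rolling median. It is an assumption for illustration only, not our production code; the function name, window size and threshold are all made up:

```python
from statistics import median

def clean_series(values, window=5, threshold=3.0):
    """Replace readings that deviate strongly from a rolling median.

    Each value is compared with the median of its neighbourhood;
    implausible spikes (e.g. caused by transmission errors) are
    replaced by that median. A toy stand-in for real preprocessing.
    """
    cleaned = list(values)
    half = window // 2
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        neighbourhood = values[lo:hi]
        m = median(neighbourhood)
        # median absolute deviation as a robust spread estimate
        spread = median(abs(v - m) for v in neighbourhood) or 1.0
        if abs(values[i] - m) > threshold * spread:
            cleaned[i] = m
    return cleaned
```

Robust statistics such as the median are preferred here because a single transmission error would badly distort a plain moving average.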
Semi-structured log data from a web server
Both machines and IT systems describe status changes by outputting discrete events in the form of log data. These are generally semi-structured and are difficult to analyze owing to the huge quantity of information and the frequency at which they are output.
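To make this concrete, here is a minimal sketch of turning semi-structured log lines into analyzable records. The Apache-style access-log format and the field names are assumptions for illustration:

```python
import re

# Hypothetical Apache-style access-log pattern; field names are assumptions.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line):
    """Turn one semi-structured log line into a structured dict (or None)."""
    match = LOG_PATTERN.match(line)
    if not match:
        return None
    record = match.groupdict()
    record["status"] = int(record["status"])
    return record
```

Once every line is a structured record, the huge event volumes mentioned above become amenable to filtering, aggregation and statistical analysis.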
Unstructured text data
80% of company-related data are unstructured. This includes texts containing valuable hidden knowledge that is overlooked without intelligent text-analysis procedures. With the help of statistical and linguistic methods, you can obtain structured information from these data that is useful in further analyses.
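As a toy illustration of extracting structured information from free text, the sketch below counts the most frequent content words. The tiny stopword list and the function name are illustrative assumptions, not part of any real text-analysis product:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real linguistic analysis uses far more.
STOPWORDS = {"the", "a", "is", "of", "and", "to", "in"}

def key_terms(text, top_n=3):
    """Extract the most frequent content words from unstructured text.

    A toy stand-in for statistical text analysis: tokenize, drop
    stopwords, count frequencies and return the top terms. The simple
    [a-z]+ tokenizer ignores numbers and punctuation.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]
```

Even this crude frequency count already turns free text into structured data, e.g. ranked fault topics from service reports.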
...become our analyses
Automatic measurement correction in the event of sensor faults
If sensors suffer temporary faults when taking a measurement (see blue curve in the defective section), this can often render the entire measurement useless. Because taking measurements can be a very time-consuming and costly process, these series of measurements should not be discarded if possible.
With the help of solutions developed by us, defective measurement intervals can be corrected (see blue and green curve in the defective section) and measurements can be adjusted for drift (see red curve).
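A minimal sketch of the two corrections described above, assuming a simple linear model for both the defective interval and the drift. The real solutions are more sophisticated; function names and signatures here are illustrative:

```python
def repair_interval(values, start, end):
    """Linearly interpolate over a defective interval [start, end).

    The interval is bridged between the last good value before it and
    the first good value after it.
    """
    before, after = values[start - 1], values[end]
    repaired = list(values)
    span = end - start + 1
    for k in range(start, end):
        t = (k - start + 1) / span
        repaired[k] = before + t * (after - before)
    return repaired

def remove_drift(values):
    """Subtract a linear trend fitted through the first and last sample."""
    n = len(values)
    slope = (values[-1] - values[0]) / (n - 1)
    return [v - slope * i for i, v in enumerate(values)]
```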
Real-time identification of outliers in sensor data
Identifying and eliminating outliers during operation is extremely important, especially for real-time analysis systems. This helps to prevent errors caused by outliers being reproduced in subsequent processing steps and thus falsifying analysis results.
Unlike the classic cluster analysis method, our solution allows outliers in sensor data to be both identified and eliminated during operation.
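The following sketch illustrates the principle of identifying outliers during operation; it uses a running mean and variance (Welford's algorithm) as a stand-in for the actual method, and the class name and threshold are assumptions:

```python
class StreamingOutlierDetector:
    """Flag outliers in a sensor stream using a running mean and variance.

    Unlike offline cluster analysis, the detector updates its statistics
    incrementally (Welford's algorithm), so it can run during operation.
    """

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        """Return True if x is an outlier; only inliers update the model."""
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) > self.threshold * std:
                return True  # discard: do not let the outlier skew the model
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False
```

Because flagged values never update the statistics, a single extreme reading cannot distort the model and corrupt subsequent processing steps.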
Adaptive anomaly recognition in real time
Anomalies are irregularities in the measurement data from machines and therefore play a key role in identifying faults.
With the help of the anomaly recognition system developed by us, historical data are used to automatically identify various machine operating phases and to calculate phase-specific tolerance thresholds. The sensitivity of the anomaly recognition system can be set by the user on an individual basis. As a result, even slight deviations from normal machine behavior can be identified in real time while the machine is operating.
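The idea of phase-specific tolerance thresholds can be sketched as follows; the phase labels, function names and the `sensitivity` parameter are illustrative assumptions:

```python
from statistics import mean, stdev

def phase_thresholds(history, sensitivity=3.0):
    """Learn per-phase tolerance bands from historical (phase, value) pairs.

    `sensitivity` mirrors the user-adjustable setting: smaller values
    make the anomaly recognition stricter.
    """
    by_phase = {}
    for phase, value in history:
        by_phase.setdefault(phase, []).append(value)
    bands = {}
    for phase, values in by_phase.items():
        m, s = mean(values), stdev(values)
        bands[phase] = (m - sensitivity * s, m + sensitivity * s)
    return bands

def is_anomaly(bands, phase, value):
    """Check a live reading against the band of the current phase."""
    low, high = bands[phase]
    return not (low <= value <= high)
```

Separating the bands by operating phase matters: a reading that is perfectly normal under load may be a clear anomaly while the machine idles.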
In addition to real-time anomaly recognition in sensor data, a long-term prediction of the development of relevant key variables over time plays a central role as well. Key variables could be, for example, a machine’s performance or oil quality. If the development of these key variables over time can be predicted with sufficient accuracy, spare parts can be ordered on time and service technicians can be scheduled for maintenance. This helps to prevent costly downtimes.
Prediction approaches based on a linear extrapolation of measured variables can lead to highly inaccurate predictions. The approaches we have developed are based on machine learning procedures; they offer improved predictions along with additional information regarding the uncertainty of each prediction. See illustration.
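As an illustration of prediction with uncertainty information, the sketch below uses a bootstrap ensemble of simple trend fits. This is a toy stand-in for the machine learning procedures mentioned above; all names and parameters are assumptions:

```python
import random

def fit_line(points):
    """Ordinary least-squares fit y = a + b*t over (t, y) pairs."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((t - mt) * (y - my) for t, y in points)
         / sum((t - mt) ** 2 for t, _ in points))
    return my - b * mt, b

def predict_with_uncertainty(points, t_future, n_boot=200, seed=0):
    """Bootstrap ensemble of trend fits: returns (mean prediction, std).

    Resampling the history many times yields a distribution of
    forecasts; its spread quantifies the prediction uncertainty.
    """
    rng = random.Random(seed)
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(points) for _ in points]
        # degenerate resamples (a single distinct t) cannot be fitted
        if len({t for t, _ in sample}) < 2:
            continue
        a, b = fit_line(sample)
        preds.append(a + b * t_future)
    m = sum(preds) / len(preds)
    var = sum((p - m) ** 2 for p in preds) / len(preds)
    return m, var ** 0.5
```

A wide spread tells the service planner that a forecast (e.g. remaining oil quality) is not yet reliable enough to schedule maintenance on.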
Both complex machines and IT systems describe their status by outputting discrete events in the form of log data. These generally contain important information regarding the status of the systems and their development over time. Thanks to self-learning procedures, our solutions are able to structure log data and identify the most likely causes of sudden events. As a result, we reveal the relevant links.
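A minimal illustration of linking events to likely causes is a co-occurrence count over a sliding window, a crude stand-in for the self-learning procedures described above; the event names below are made up:

```python
from collections import Counter

def likely_causes(events, target, window=3):
    """Count which events most often occur shortly before a target event.

    A simple co-occurrence heuristic: for every occurrence of `target`,
    the events in the preceding `window` positions are counted as
    candidate causes, ranked by frequency.
    """
    counts = Counter()
    for i, event in enumerate(events):
        if event == target:
            counts.update(events[max(0, i - window):i])
    return counts.most_common()
```

Events that consistently precede a fault rise to the top of the ranking, pointing the analyst toward a plausible causal link.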
3. IMPLEMENTATION
Now the practical part!
The prototype is now transferred to the Katana platform: our development experts migrate the prototype algorithm so that it can be used effectively and reliably with large quantities of data. Subsequent integration into your target environment allows continuous further development and optimization.
AT A GLANCE
- Algorithm migration
- Integration into the target environment
4. SMART SERVICES
Hit the market with a new portfolio!
With this approach, your data-driven portfolio gains two smart services. “Predictive maintenance” increases productivity through proactive recommendations, while peer-group comparisons enable a performance-enhancing consultancy approach that optimizes overall system effectiveness. Smart services ensure that you stay ahead of the competition.
AT A GLANCE
- Smart services
- Business value
- Market position
Analyze industrial data – quickly and easily
KatanaFlow is the graphical development environment for data science in the mechanical and plant engineering domain. The powerful, user-friendly web application supports your engineers in analyzing industrial data and delivers insights into information hidden within. Pre-defined modules from extensive machine learning and data mining libraries are provided, meaning you can quickly create data analysis workflows using drag and drop. KatanaFlow supports you specifically in data preprocessing, cleansing, and correlation as well as in creating time-series analyses, recognition of patterns and anomalies, and many other analysis tasks. The resulting knowledge paves the way for digital service creation and transforms your engineers into data scientists.
Our data analysis: clear, simple and time-saving
- Data science projects by drag & drop
- Analytical processes
- Development of your own modules with Python and Spark
- Jupyter notebooks for interactive data analysis
Data science in the cloud?
KatanaFlow lets you move your data science to the cloud. You take care of your data, algorithms and processes and we look after the software and hardware needed for analyzing large quantities of data and for training machine learning algorithms. Your team devises analysis processes on a clear web interface – anytime, anywhere. Your engineers are no longer limited by the computing power of their workstation, as calculations are performed in our cloud. There is no need for installation, configuration and operation of your own servers, so you can focus on what really matters: your data and analyses.
Benefits of working in the cloud:
- No workload on your own hardware
- Collaboration and central storage of all analyses
- Reuse of your … for your results
Heidelberg is a pioneer in the field of innovative service offerings relying on a big data analysis platform. Leading-edge, scalable technology and analysis options mean that service processes are improved and new service offerings are developed simultaneously. The priority here is identifying irregularities early on by evaluating a machine’s status data. The aim is to eliminate imminent faults through planned servicing measures before they disrupt the production process, i.e. to ensure maximum machine availability thanks to intelligently planned, proactive servicing. Predictive monitoring coupled with data-driven process optimization increases the efficiency of customers’ value creation chains.
Stay up to date – contact us and find out more about the exciting trend of digital transformation.
A cross section of our Katana team