Manufacturing in all sectors is increasingly becoming data-driven. Part of this shift is the use of databases for parts tracking on a production line, rather than the more traditional PLC shift register approach.
With a database managing the movement of data, individual machines on the line can operate independently, processing multiple lots simultaneously in a controlled manner.
Using this process, when a new part is loaded into a machine, the machine reads the part information from the database. This information includes previous process values alongside the part’s status. The machine can then determine whether to process the part or, if it was previously deemed a failure, skip it. If the machine does process the part, it writes back new part data and an updated status once its operations are complete.
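The read–check–write cycle described above can be sketched as follows. This is a minimal illustration only: it uses Python's sqlite3 in place of the production database, and the `parts` table, column names, and status values are our assumptions, not the customer's actual schema.

```python
import sqlite3

# Hypothetical schema: one row per part, tracking status and the last process value.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE parts (
        part_id    TEXT PRIMARY KEY,
        status     TEXT NOT NULL,   -- e.g. 'PASS', 'FAIL', 'NEW' (assumed values)
        last_value REAL
    )
""")
conn.execute("INSERT INTO parts VALUES ('P-001', 'PASS', 3.2)")
conn.execute("INSERT INTO parts VALUES ('P-002', 'FAIL', 1.1)")

def on_part_loaded(part_id, new_value):
    """Read the part record; skip previously failed parts; else process and write back."""
    status, _prev_value = conn.execute(
        "SELECT status, last_value FROM parts WHERE part_id = ?", (part_id,)
    ).fetchone()
    if status == 'FAIL':
        return False  # previously deemed a failure: do not process
    # ... the machine performs its operation on the part here ...
    conn.execute(
        "UPDATE parts SET status = 'PASS', last_value = ? WHERE part_id = ?",
        (new_value, part_id),
    )
    conn.commit()
    return True

print(on_part_loaded('P-001', 4.7))  # True: part processed, new value written back
print(on_part_loaded('P-002', 2.0))  # False: part previously failed, not processed
```

Because each machine makes this decision from the shared database rather than from an upstream PLC signal, machines can pick up parts from multiple lots independently.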
This approach delivers substantial productivity gains as machines can produce lots independently of other machines on the line.
However, problems can arise when you look to scale a parts tracking database.
One of our customers faced this precise issue. The customer, a manufacturer in the life sciences sector, had developed their own parts tracking database to move data between a large number of machines. This database operated effectively, but new plans for the line involved the acquisition of 50+ new machines.
How could the customer verify their database was fit for this new production capacity? Would the database be able to perform at the scaled-up level?
The customer engaged us at SL Controls to develop a solution that would give them the answers they needed to confidently move forward with enhancing their production capacity. The solution needed to fully scale and test interactions with the database at the enhanced level, ahead of the arrival of the new machines.
What We Did
The SL Controls team quickly dismissed one potential solution: setting up 50+ PLCs to run the required tests. This approach was too costly, would take up too much space, and would take too long.
So, we started to investigate existing database loader tools that enable the fast bulk uploading of data. There are a number of solutions available on the market, many with impressive features. However, they all lacked flexibility, only allowing you to insert either fixed data or randomised data at predefined levels.
This rigidity made the existing database loaders unsuitable given the complexity of our customer’s parts tracking database. Foreign key constraints also made bulk inserts problematic.
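To illustrate the foreign-key problem: a bulk loader that inserts rows in an arbitrary order will violate referential constraints unless parent rows are loaded before the child rows that reference them. The sketch below uses sqlite3 and a two-table schema of our own invention to show the failure mode; the customer's actual tables were considerably more complex.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite3 does not enforce FKs by default
conn.execute("CREATE TABLE lots (lot_id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE parts (
        part_id TEXT PRIMARY KEY,
        lot_id  TEXT NOT NULL REFERENCES lots(lot_id)
    )
""")

# A naive bulk insert fails: the referenced lot row does not exist yet.
try:
    conn.execute("INSERT INTO parts VALUES ('P-001', 'LOT-A')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Parent rows must be loaded before child rows for the same data to succeed.
conn.execute("INSERT INTO lots VALUES ('LOT-A')")
conn.execute("INSERT INTO parts VALUES ('P-001', 'LOT-A')")
conn.commit()
```

Generic loaders that insert fixed or randomised rows table-by-table have no notion of this dependency ordering, which is one reason a custom tool was needed.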
Instead, we needed a solution that would allow us more flexibility when constructing interactions.
We decided the best approach was to develop a customised database loader tool that could facilitate all the requirements of the project and emulate all critical machines on the line.
Our new and fully customised SL DB Loader tool replicated seven different machine variations. It could also trigger four different database operations.
Each machine emulation had four key features:
- A dedicated session to an Oracle database server
- A configurable period for database interactions, both inserts and queries
- A configurable number of rows that could be inserted into the predetermined tables
- The ability to log database interaction times for analysis
The SL DB Loader tool allowed our customer to emulate the production environment with the additional machines before any of those machines arrived at the facility.
This enabled their database analysts to identify performance gaps. They were then able to use this knowledge to create solutions and countermeasures to ensure the database would remain available to all machines at the enhanced level of production.
As a result, the customer had the verification data required to proceed with the acquisition and introduction of the new machines.
The solution went a stage further, too: the customer was also able to use the SL DB Loader tool to understand how future machine performance improvements would impact the database.