Introduction
Connected workforce solutions are fantastic. If you work in manufacturing with frontline workers and are looking to both empower your team and increase performance, it's worth considering whether such a system could support that journey. Implemented correctly, these solutions can uncover previously unseen performance issues, drive engagement, remove reams of manual paper tracking, and increase collaboration between departments.
The only problem: most teams don't nail the implementation. For numerous reasons, from a lack of senior buy-in to limited internal capacity, many of these deployments leave a lot of opportunity on the table. Teams are left with an all-singing, all-dancing tool that isn't delivering the value they expected, senior leadership are left trying to justify the cost, and everyone ends up deflated rather than empowered.
So we decided to write this blog series: a collection of posts to help teams nail their rollouts and deliver on the engagement and returns that such systems offer, all based on our extensive experience in developing, implementing and embedding automated data capture solutions.
Over the next few months, we'll be sharing fortnightly posts highlighting the most common mistakes we see and exactly what to focus on to avoid them. Whether you're a team about to start your first rollout or a site that's fully bought in and already reaping the rewards, we hope this series will help your factory maximise the value of its implementation.
Let’s get started. Today’s post is an overview of the four main buckets of mistakes we repeatedly see when implementing a connected workforce solution. In subsequent posts, we’ll take the time to drill into each in more detail, covering the specifics that you want to nail to ensure a world-class deployment.
Mistake 1: Poor reference data setup
The single biggest area where teams consistently fall short is setting up their reference data (such as product information, downtime codes, line configurations, and user groups). This takes care and discipline to get right, but pays for itself many times over in the long run. The adage "rubbish in, rubbish out" couldn't be more relevant: if you don't have the right goalposts set up and clear ways to communicate what's blocking you, how can you get an objective view of your true performance? This only becomes more critical once you start introducing advanced analytics or AI capabilities; to get the most out of these models, they need clear, unambiguous context and accurate information. Configuring your data correctly sets the foundation for analysing the system's outputs and gives your team a common language for discussing performance.
In today’s overview, let’s consider the top three things to nail when setting up products and downtime reasons, the two areas most often configured sub-optimally:
Products:
- Ensure all products are clearly grouped, and that the groupings correspond to operational complexity (i.e. all products in a group should be of the same nature and capable of running at the same rate). This is particularly important if you have a high product mix, in which case configuring your groups correctly unlocks one of the most valuable features of these systems: bulk rate reviews. Being able to review performance at the group (rather than SKU) level allows you to assess and adjust planned rates en masse based on real production data, keeping your rates accurate and ensuring good service to customers.
- Align targets with other sources of “master data” within your business, such as those kept by the planning and finance teams. There’s nothing more frustrating to a shift manager than having a system report their performance as all “green”, only to have their planning manager tell them they’re behind plan. This indicates a fundamental misalignment between the two functions, and can quite quickly lead to disillusionment with the system.
- Set true bottleneck speeds (also referred to as theoretical maximums). This might be the most contentious point, and it's understandably an emotive one. It can be a complex question to answer, but it boils down to: "what is the fastest my line can actually run?". If the target you set is lower than that (or, worse, if your team frequently runs lines faster than the target and sees a performance score of over 100%), then your "efficiency" isn't accurate and you're hiding your true improvement opportunities. We've sketched a simple example of this just after this list.
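To make this concrete, here's a minimal sketch in Python of what grouped product reference data might look like, together with a check for the over-100% trap just described. The SKUs, group names and rates are illustrative assumptions rather than a real configuration, and your system will store this data in its own way; it's the principle that matters.

```python
from dataclasses import dataclass

@dataclass
class Product:
    sku: str
    group: str              # all products in a group share a nature and rate
    planned_rate: float     # units/hour used as the performance target
    bottleneck_rate: float  # the fastest the line can physically run this product

# Illustrative reference data only; these SKUs, groups and rates are made up
products = [
    Product("SKU-1001", "500ml-bottles", planned_rate=9000, bottleneck_rate=9000),
    Product("SKU-1002", "500ml-bottles", planned_rate=8200, bottleneck_rate=9000),
]

def performance(units_produced: float, run_hours: float, product: Product) -> float:
    """Performance against the planned rate, as a fraction (1.0 = 100%)."""
    return units_produced / (product.planned_rate * run_hours)

for p in products:
    # A planned rate below the true bottleneck lets a line "beat" its target,
    # reporting over 100% performance and hiding real improvement opportunity
    if p.planned_rate < p.bottleneck_rate:
        print(f"{p.sku}: planned rate is {p.bottleneck_rate - p.planned_rate:.0f} "
              f"units/hour below the true bottleneck; review this target")

# e.g. SKU-1002 producing 8,600 units in a one-hour run reports ~105% performance
print(f"{performance(8600, 1, products[1]):.0%}")
```

Run at the group level rather than SKU by SKU, a review like this is exactly the bulk rate review described above.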
Downtimes:
- Ensure your reason codes are MECE (mutually exclusive, collectively exhaustive). Your line ops already deal with a lot, and this system is meant to make their lives easier, not harder. Keeping your downtime coding MECE removes ambiguity when choosing a reason and lets operators focus on fixing the issue rather than on data entry.
- Avoid codes like “other” or generalised grouping categories. While these are implemented with good intentions, they quickly become catch-all reasons that are used too freely and dilute the usefulness of the data being reported. Instead, set up a good process for flagging when a reason is missing, and a frequent drumbeat to review and add any new codes (especially when you are in the early days of a deployment).
- Set up planned durations accurately and ensure they are applied to your efficiency metrics (i.e. make sure a line's performance isn't penalised for planned downtimes like changeovers, as these are something the line ops have no control over). This ties back to having the right data at the right level: planned downtime should be reviewed separately from unplanned downtime, because the processes and functions for reducing each are different (we've sketched a simple example below).
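To illustrate that last point, here's a minimal sketch of one common convention for keeping planned downtime out of an availability figure. The reason codes, durations and shift length are made-up examples, and your system will apply its own configurable rules; the point is that planned stops reduce the time the line is expected to run, rather than counting against the team.

```python
# Reason codes flagged as planned; illustrative names only
PLANNED_CODES = {"changeover", "planned-maintenance", "scheduled-break"}

# Made-up downtime log for one 8-hour shift: (reason_code, minutes)
downtimes = [
    ("changeover", 45),          # planned: excluded from the team's losses
    ("material-shortage", 20),   # unplanned: counts against availability
    ("jam-filler", 12),          # unplanned: counts against availability
]

shift_minutes = 480
planned = sum(m for code, m in downtimes if code in PLANNED_CODES)
unplanned = sum(m for code, m in downtimes if code not in PLANNED_CODES)

# Planned stops shrink the window the line is expected to run in;
# only unplanned stops are counted against availability
expected_run_minutes = shift_minutes - planned
availability = (expected_run_minutes - unplanned) / expected_run_minutes
print(f"Availability: {availability:.1%}")  # 92.6%, vs 84.0% if the changeover
                                            # were wrongly counted as a loss
```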
Mistake 2: Underinvesting in automation
Let's continue the focus on extracting reliable information, but now home in on the physical install itself. With many of these systems, there is a sliding scale of automation, ranging from fully manual entry (via some sort of input panel or tablet) to fully automated capture (via sensors or connections to existing machines). It's worth the time and cost involved to aim for the highest level of automation you can achieve while still maintaining accuracy in the outputs reported.
This perhaps isn’t a surprising take: you get most benefit from an automated data capture system when you automate it! However, it’s surprisingly common to see teams either setting up sensors in locations that don’t lead to accurate product counts, or keeping processes that could be automated as manual entry points. To support this decision-making process, we typically try to work through the following questions:
- What is the highest level of automation we can feasibly implement at this location?
- Will this level of automation provide data trustworthy enough for departments like planning and finance to use in their reporting? If not, could this be fixed by changing our sensor locations / PLC connections, or by adding a manual reconciliation at the end of a run?
- For the highest level of automation that provides accurate results, how can we make it as easy as possible for the shop floor team to use?
Working through the above questions will lead to a setup that maximises the usefulness of the system: one that saves your frontline team time on data entry, automatically tracks loss durations, and outputs reliable product counts, removing the need for manual end-of-shift reports.
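On that second question: here's a minimal sketch of what an end-of-run reconciliation could look like, comparing an automated sensor count against a manually confirmed count. The 2% tolerance is an illustrative assumption; in practice, you'd agree a threshold that planning and finance are happy to report against.

```python
# Illustrative tolerance: how far the automated count may drift from the
# manually confirmed count before the run is flagged for investigation
TOLERANCE = 0.02

def reconcile(sensor_count: int, manual_count: int) -> bool:
    """Return True if the automated count is close enough to trust for reporting."""
    if manual_count == 0:
        return sensor_count == 0
    drift = abs(sensor_count - manual_count) / manual_count
    return drift <= TOLERANCE

# A sensor placed before the reject station will over-count good product:
print(reconcile(sensor_count=10450, manual_count=10000))  # False (4.5% drift)
print(reconcile(sensor_count=10020, manual_count=10000))  # True  (0.2% drift)
```

Runs that fail a check like this point you straight back at the questions above: is the sensor in the right place, or would a PLC connection be the better option?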
Mistake 3: Incomplete team engagement
Easy to get right if it’s a focus from day one, much more difficult to correct retrospectively. If the goal is full digital transformation (which hopefully it is!), then it’s essential to engage all levels and departments that will be involved well in advance of going live.
Too often, connected workforce solutions are seen as something that affects only a few departments (typically ops, continuous improvement, perhaps engineering). But this misses one of the main points of these systems: improved communication and engagement across functions. This is the *connected workforce* piece of the puzzle! You get the chance not only to automate your operational performance capture, but also to standardise how you send messages, run meetings, and set and review actions. At that point, you can't afford to engage only one or two departments, because you immediately limit your opportunity for full site buy-in. You don't want to be left trying to run your daily meeting through this fantastic new tool you've invested in while only a handful of people in the room are familiar with it or have access to it.
The point to nail here is clear: aim to engage all departments involved in operations in your initial launch and rollout. This builds shared ownership and avoids the risk of certain departments feeling like it "isn't a system for them".
With that said, there are absolutely valid reasons why different teams might have varied levels of involvement in a rollout, which might be phased over time. In an upcoming post, we'll dive into how to get the balance right so that every team is still part of the journey from day one.
Mistake 4: Indecision around legacy processes
Adopting new processes is famously difficult, especially when certain ways of working have existed for years. But there is no more effective way to stunt belief in something new than to keep doing things "the way they've always been done". Automating data capture offers many benefits, and one that is often touted is the opportunity to finally remove the endless paper sheets and walls of whiteboards that may (or may not) later be transferred into an Excel sheet for historic tracking. To achieve this, at some point you have to decide to stop the old method and trust that the new one will work.
There are many considerations around this transition moment. Teams don't want a period where they lack accurate performance data, so they need genuine confidence that the new system is outputting reliable information. There's also the acknowledgement that even the best systems come with a bedding-in period while users become proficient, the physical install is refined, and any setup quirks are ironed out. This usually leads to the natural conclusion: a period where the two methods are run in parallel. That is the right thing to do, but it's important not to fall into the following traps:
- Expecting the two to align exactly: one of the reasons for the new system is that it gives you information you didn't have before. Take efficiency losses as an example: if you compare downtimes captured manually on paper against a digital record based on whether the line is actually running, you would rarely expect an operator's timings to match the system's exactly. Their job is to get the line back up and running, so timing the stoppage is (rightly) a secondary concern in that moment, and retrospective tracking will never be as accurate. Your codes might also differ slightly between the two (especially if you've taken the implementation as an opportunity to refine and restructure them), driving further misalignment. And there may be losses (like speed losses) that you had no visibility of at all until you put the system in. That's one of the benefits! But it does mean the two won't be the same.
- Running in parallel for too long: as a consequence of the above misalignment, some teams are reluctant to retire the old way and commit to the new one. But the grace period with your team only lasts so long before momentum starts to wane and the new solution starts to feel like it isn't all it was promised to be. Running two methods in parallel works for a limited period, because people accept the need to build confidence in the new data. Once that period expires, it just becomes double counting and even more work than before.
The most effective implementations come from teams who accept all of the above, set clear internal expectations for how closely the two methods must directionally agree before transitioning fully, and are decisive in stopping the old method once those criteria are met. This is an area where indecision costs far more than time: it costs faith in the new way of working, and can drain momentum from the whole project.
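As a sketch of what such a criterion could look like: agree a tolerance for directional agreement, check it over a fixed window of shifts, and retire the paper method the moment the window passes. The 10% tolerance and 10-shift window below are illustrative assumptions to be agreed internally, not recommendations; and remember from the first trap above that exact alignment is not the goal.

```python
# Illustrative exit criterion for parallel running; the tolerance and the
# window size are assumptions to be agreed internally, not recommendations
TOLERANCE = 0.10       # paper and system downtime totals within 10%...
REQUIRED_SHIFTS = 10   # ...for this many consecutive shifts

def ready_to_retire_paper(paper_minutes: list[float],
                          system_minutes: list[float]) -> bool:
    """Each list holds one downtime total (in minutes) per shift, oldest first."""
    recent = list(zip(paper_minutes, system_minutes))[-REQUIRED_SHIFTS:]
    if len(recent) < REQUIRED_SHIFTS:
        return False  # not enough parallel-running history yet
    return all(abs(paper - system) <= TOLERANCE * system
               for paper, system in recent)

# e.g. ten shifts of close-but-not-identical totals passes the check
paper = [60, 55, 70, 40, 52, 66, 48, 58, 61, 45]
system = [64, 58, 75, 43, 55, 70, 50, 60, 65, 48]
print(ready_to_retire_paper(paper, system))  # True
```

Once the check passes, stop the paper method, visibly and for good.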
Today's wrap-up
There you have it: a high-level overview of the four main mistakes we see teams make when implementing connected workforce solutions, and guidance on how to avoid them based on our experience here at Fighting Fish.
We’d love to hear your thoughts on the above: do you agree with the points made, might they be helpful to your team, have we missed any key ones? Please drop a comment on our LinkedIn page or reach out and we’d love to chat! If you have any particular areas you’d like to see a future post on, then let us know and we’ll try to cover it.
We’ll be back with the next post in two weeks, where we dig further into the first area reviewed in today’s overview: configuring your product list.
If you’d like to stay up to date with these posts, you can subscribe to our mailing list (we never send spam!) via the “Sign up to our newsletter” button below, or follow our LinkedIn page: https://www.linkedin.com/company/fighting-fish-ltd.

