By Jamie Maxwell

Many organisations know they have too much manual work. Staff rekey the same information into multiple systems. Reports arrive late because someone still has to export, clean and combine spreadsheets. Project teams chase updates across email, CRM records, finance platforms and bespoke tools. The natural response is to look for more automation.

That instinct is usually right, but the first problem is often not the lack of automation. It is the lack of a reliable handover between systems. If data moves inconsistently, arrives late, uses different identifiers or needs manual correction every time it changes hands, more workflows will not create control. They will simply automate confusion faster.

This is where many integration programmes drift off course. Teams talk about bots, low-code workflows, dashboards or AI assistants before they have agreed the basics: where the source record lives, how records match across systems, what should sync, when it should sync, and how errors are detected and resolved.

In practice, the best first phase is usually less glamorous and far more valuable. Fix the data handover. Once that is stable, automation becomes easier to scale, easier to trust and much less expensive to maintain.

The problem is rarely the workflow on its own

When an organisation says a process is manual, the visible pain is usually at the workflow level. A person copies information from one system to another. A team waits for an export before they can report on performance. A manager cannot see project status without asking several people for updates. Those are real issues, but they are often symptoms rather than the root cause.

The deeper problem is that the systems involved do not share a dependable operational picture. One platform may hold customer data, another may hold financial data, and another may hold delivery activity. If those platforms are not joined up properly, every downstream process becomes fragile. Automating a fragile process does not make it robust. It just makes the failure harder to see until it matters.

That is why point-to-point integration work should not be treated as a narrow technical task. It is an operational design problem. The real question is not only whether system A can send data to system B. It is whether the organisation can depend on that handover as part of normal daily work.

What a dependable data handover actually looks like

A good handover does not require a huge enterprise architecture programme. It does require clarity. In most cases, the important elements are straightforward:

  • a clear source of truth for each important business object
  • stable identifiers that let records match across systems
  • defined rules for when data is created, updated, ignored or rejected
  • visibility into failures, delays and exceptions
  • a simple route for reconciliation when the systems disagree

Without those controls, teams end up compensating manually. They build side spreadsheets. They keep unofficial copies. They stop trusting dashboards. They ask staff to double-check records that should already be reliable. Over time, the cost of that uncertainty grows faster than the cost of the original missing integration.

This is also why the phrase "operational data layer" matters. It does not have to mean a giant warehouse before anything useful can happen. It means there is a deliberate layer where important business data is cleaned up, matched, validated and made dependable enough for reporting, workflows and automation to consume safely.

Why organisations often automate the wrong step first

Teams under pressure often start with the most visible repetitive task. They automate an export. They trigger an email. They push a record from one application into another. That can create a quick win, but it does not always fix the process.

The risk is that the automation is built around today's workaround rather than tomorrow's operating model. Instead of asking what the clean data handover should be, the team automates the fragile bridge they already distrust. That usually leads to one of four outcomes:

  • the automation works, but only if the upstream data is already perfect
  • exceptions pile up and users quietly return to manual checking
  • reporting and workflow logic drift apart because each integration handles rules differently
  • nobody is fully sure which system should be corrected when something goes wrong

Once that happens, every new automation becomes slower to deliver. Each one needs bespoke mapping, more conditional logic and more manual fallback. The estate grows, but confidence does not.

Why this matters even more now

The pressure to automate is only increasing. Organisations want faster reporting, better customer journeys, joined-up service delivery and practical uses of AI. All of those goals depend on the same thing: trustworthy underlying data movement.

A dashboard is only as useful as the data pipeline feeding it. A Power Apps workflow is only as good as the records it depends on. An AI assistant connected to fragmented or stale operational data can sound impressive while still giving the wrong answer.

That is why the first serious conversation should be about handover quality, not just automation volume. If the organisation cannot trace how core information moves between systems, it is not ready to add layers of intelligence on top.

What this looks like in real delivery

We see this pattern clearly in the work behind Gemstone's data, integration and automation projects. Anthesis needed a management information and reporting platform that pulled together data from multiple business systems under a demanding timeline. The useful outcome was not simply another report. It was a proper data warehouse built with Azure Synapse Analytics, Azure Data Factory pipelines and NetSuite data via SuiteAnalytics Connect, so reporting could run from a reliable joined-up source instead of scattered operational fragments.

That same theme appears in a different form in Gemstone's Anthesis Power Platform case study. The challenge there was not only to give users a new interface. It was to create a centralised project management console with real-time visibility and integration into existing business systems, so teams were not trapped in duplicate entry and inconsistent regional processes. In both cases, the value came from making cross-system data dependable enough for people to work from, not from automating isolated steps in the dark.

Those are useful examples because they show two related truths. First, reporting and operational workflow problems usually share the same integration foundations. Second, once those foundations are in place, the organisation can add better visibility, better automation and better decision support without rebuilding the logic every time.

Point-to-point integration still has a place

None of this means every organisation needs a heavyweight platform before improving anything. Point-to-point integrations are often the right answer, especially when the process is well-bounded and the systems involved are clear. Using point-to-point integration is not the mistake. The mistake is treating it as a pure transport problem rather than an operational one.

A sensible point-to-point phase should still answer a few hard questions up front:

  • which system owns the master record after the integration is live
  • how related records are matched when names, references or statuses differ
  • what happens when one system accepts a change and the other one does not
  • how users can see and correct exceptions without engineering support
  • whether the same logic will need to support reporting, workflow and customer-facing journeys later

If those points are resolved, a point-to-point integration can be fast, pragmatic and durable. If they are skipped, the integration may still launch quickly, but it tends to become a maintenance problem disguised as progress.
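
One of those hard questions, which system wins when records disagree, can be answered with something as simple as an agreed ownership table. The sketch below is illustrative; the system names and object types are assumptions, not a recommended mapping.

```python
# Sketch of "which system should be corrected": the agreed system of
# record wins per business object. The mapping is an illustrative
# assumption for a typical CRM / finance / project-tool estate.
SYSTEM_OF_RECORD = {
    "customer": "crm",
    "invoice": "finance",
    "project": "pm_tool",
}

def winning_value(object_type: str, values_by_system: dict) -> str:
    """Resolve a disagreement by deferring to the system of record."""
    owner = SYSTEM_OF_RECORD[object_type]
    if owner not in values_by_system:
        raise LookupError(f"system of record '{owner}' supplied no value")
    return values_by_system[owner]

# The CRM and finance system disagree on a customer's status:
print(winning_value("customer", {"crm": "active", "finance": "on_hold"}))  # active
```

Writing the ownership down once, in one place, is what stops each new integration from inventing its own answer to the same conflict.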

Signs you need a data-layer first phase

Organisations usually already know when the underlying handover is weak. The symptoms are familiar:

  • different teams quote different numbers for the same metric
  • records exist in more than one system with no stable linking key
  • users export data into spreadsheets before they can trust it
  • workflow tools rely on manual checks before they can safely continue
  • support teams spend time reconciling records instead of resolving actual service issues
  • new automation requests keep discovering the same upstream data problem

When those conditions exist, a first-phase operational data layer is usually the better investment. That might mean a reporting store, a lightweight integration layer, stronger transformation rules, or a reconciled model shared across downstream tools. The exact technology can vary. The principle stays the same.
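
The first symptom on that list, different teams quoting different numbers, is also the easiest to detect automatically. A reconciliation check can be as simple as the sketch below; the metric name, system names and figures are illustrative assumptions.

```python
# Sketch of a simple reconciliation check: compare the same metric across
# systems and surface any disagreement, instead of letting each team
# quote its own number. Names and figures are illustrative.
def reconcile(metric: str, by_system: dict[str, float], tolerance: float = 0.0) -> list[str]:
    """Return a list of mismatch descriptions, empty if the systems agree."""
    values = by_system.values()
    if max(values) - min(values) <= tolerance:
        return []
    return [f"{metric}: {system}={value}" for system, value in sorted(by_system.items())]

mismatches = reconcile("open_invoices", {"finance": 412, "crm": 398})
print(mismatches)  # ['open_invoices: crm=398', 'open_invoices: finance=412']
```

Run on a schedule, a check like this turns quiet drift between systems into a visible exception that someone owns.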

A practical sequence that reduces risk

For mid-sized organisations, the most effective delivery sequence is often more modest than people expect.

  1. Identify the business objects that matter most, such as customers, projects, orders, invoices or cases.
  2. Define the system of record for each one and document how records should match across the estate.
  3. Build the handover layer with validation, logging and exception handling rather than simple fire-and-forget transfers.
  4. Use that dependable flow to support reporting and operational visibility first.
  5. Add workflow automation once the underlying handover is trusted.
  6. Only then expand into more advanced use cases such as AI copilots, predictive reporting or broader self-service tooling.
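
Step 3 above, validation, logging and exception handling rather than fire-and-forget, can be sketched in miniature. The required fields and the exception queue below are illustrative assumptions; the shape is what matters: nothing fails silently, and rejected records land somewhere users can see.

```python
# Minimal sketch of step 3: a transfer that validates, logs and records
# exceptions instead of fire-and-forget. REQUIRED fields and the send()
# target are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("handover")

REQUIRED = ("order_id", "status", "amount")
exceptions: list[dict] = []   # a visible queue users can review and correct

def transfer(record: dict, send) -> bool:
    missing = [f for f in REQUIRED if f not in record]
    if missing:
        log.warning("rejected %s: missing %s", record.get("order_id"), missing)
        exceptions.append({"record": record, "reason": f"missing {missing}"})
        return False
    try:
        send(record)
    except Exception as exc:   # a downstream failure is captured, not swallowed
        log.error("failed %s: %s", record["order_id"], exc)
        exceptions.append({"record": record, "reason": str(exc)})
        return False
    log.info("delivered %s", record["order_id"])
    return True

delivered: list[dict] = []
transfer({"order_id": "O-1", "status": "paid", "amount": 120}, delivered.append)
transfer({"order_id": "O-2"}, delivered.append)   # goes to the exception queue
```

Even this much structure gives stakeholders something verifiable: a delivery log and an exception queue, rather than a transfer that either worked or vanished.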

This approach creates a much better technical and operational footing. It also gives stakeholders something they can verify early. Instead of being asked to trust an automation black box, they can see whether the underlying numbers, statuses and records now line up properly across systems.

Why this improves cost as well as quality

There is also a commercial reason to work this way. Integration logic duplicated across multiple workflows is expensive. Every new automation has to repeat mappings, status rules and exception handling. Each change upstream ripples through several places. Testing gets slower because the same business rule exists in multiple tools.

By contrast, a dependable shared handover layer reduces repeated effort. Reporting, low-code workflows, customer portals and internal apps can all consume the same reconciled operational view. The organisation is not paying to rediscover its own data rules every time a new request appears.

That matters particularly for organisations using a mix of SaaS platforms, finance systems, CRM tools and bespoke applications. The more systems involved, the more valuable it becomes to define the handover once and reuse it well.

AI makes the discipline more important, not less

It is tempting to imagine that AI will smooth over inconsistent operations by summarising messy data or answering questions across disconnected systems. In reality, it usually exposes the gaps faster. If records are duplicated, statuses are ambiguous or updates arrive at different times in different tools, an AI layer has to guess which version is correct.

That is not a strong basis for service delivery, reporting or client communication. AI can add real value once operational data is clean enough to trust. Before that point, it often adds another interface on top of an unresolved integration problem.

The practical takeaway is simple: if AI is part of the roadmap, the quality of the data handover becomes even more important. The organisation needs to know what information is current, where it came from and which record should win when systems disagree.

Conclusion

Automation is still the right goal for many organisations. The mistake is assuming the first step is always another workflow. In a lot of estates, the real first phase is to make the movement of operational data dependable enough that people, reports and systems can all work from the same picture.

Once that handover is stable, automation stops being a patch for broken coordination and starts becoming what it should be: a reliable way to remove manual work, speed up delivery and improve visibility across the business.