
Hey Beekey,

Really interesting breakdown of your integration challenges. The fourth option does seem promising, but I think there are three key questions that could help crystallize the decision:

1. What's your gut feel on returning to a SaaS model? And if you did, would it look radically different from Eight One Books today? This could be a game-changer for the decision - if a SaaS pivot isn't likely or would need a complete redesign anyway, you could drop all that multi-firm and subscription complexity. That would make the "copy into one codebase" route (option 2) much more appealing.

2. What's your engineering bandwidth looking like, both for the integration and long-term maintenance? Two codebases (options 3 and 4) mean double the deployment coordination and maintenance overhead. If you're running lean on engineering resources, option 2 might be your friend despite its other trade-offs.

3. How intertwined are your Eight One Books frontend components with the backend? This could make or break option 4 - if they're tightly coupled, the refactoring effort might be substantial. Knowing this could quickly rule option 4 in or out.

Would love to hear your thoughts on these! They might help narrow down which path makes the most sense for your situation.

-- EDIT--

I can see why you're gravitating toward option 4 - it's a defensive choice that keeps both backend systems running while only integrating at the frontend component level.

Easy rollbacks, piece-by-piece integration, and existing systems stay intact.

Here's a thought - what about an even more conservative Option 5?

Instead of integrating codebases, what if you built a thin "view layer" application that just aggregates data from both systems? Think of it as a lightweight dashboard that:

- Makes API calls to both existing systems
- Shows combined analytics in one place
- Keeps both original systems completely untouched
- Requires minimal new code
- Could start as a simple static site

It's the "don't fix what isn't broken" approach taken to the extreme. You'd only be adding a thin presentation layer on top, with zero risk to your existing systems. Even simpler than dealing with git submodules and unused code.
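As a rough sketch of what that view layer could be, assuming both systems exposed read-only JSON analytics endpoints (the URLs and field names below are hypothetical, not from either real system):

```python
import json
from urllib.request import urlopen

# Hypothetical read-only endpoints each existing system would need to expose.
BOOKS_API = "https://books.example.com/api/analytics"
PLATFORM_API = "https://platform.example.com/api/analytics"

def fetch_json(url: str) -> dict:
    """Fetch and decode a JSON analytics payload from one system."""
    with urlopen(url) as resp:
        return json.load(resp)

def combined_dashboard(books: dict, platform: dict) -> dict:
    """Show both systems' analytics side by side. No cross-system
    aggregation is attempted, since the two systems use different
    ids for the same underlying users and clients."""
    return {"eight_one_books": books, "new_platform": platform}
```

The point is that the only new code is presentation: `combined_dashboard(fetch_json(BOOKS_API), fetch_json(PLATFORM_API))` and a template to render it.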

Would be curious to hear your thoughts on this ultra-conservative approach!


Thanks for all these thoughts! I appreciate the help with thinking about this problem. I will say I am leaning towards option 2 now. My reasoning is at the end.

Addressing your points:

1 - If we return to the SaaS model, it will almost certainly not look like what it does today. If it did, it would be because we're so successful that other firms realize they have to use our tools to succeed... but we'd still have a firm competing with them, so there would be hesitation. I also think habits are harder to change than just having a competing firm grow faster. If we returned to a SaaS model, it would most likely be a "firm in a box" setup: more than tools, everything needed to start and run an accounting firm. Neither system looks like that, so it would require a big refactor regardless of the approach.

2 - Engineering bandwidth is... me. Part-time me, since I also help with marketing and managing staff, both of which help me with the product side of things... which is also me. Options 3 and 4 would be 2 codebases, but they'd all be available by checking out one repo. Git submodules are a little clunky, but they should let me keep the maintenance overhead minimal. What helps here is that I use all the same patterns. The downside of that is there are a lot of modules with naming conflicts.

3 - Not intertwined much. I keep all my code pretty modular. The integration between frontend and backend is a set of API files that manage the endpoint called, the parameters for the call, and parsing the response. There are a couple of bytestream endpoints, but those should still be easy to transfer.
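That API-file pattern might look something like the sketch below: one module per resource that owns the endpoint, the call parameters, and the response parsing, so nothing else in the frontend knows the wire format. The endpoint path, field names, and `ClientSummary` type are all made up for illustration.

```python
import json
from dataclasses import dataclass
from urllib.parse import urlencode

@dataclass
class ClientSummary:
    client_id: int
    name: str

# Hypothetical endpoint -- one "API file" per resource like this one.
ENDPOINT = "/api/clients"

def build_request(firm_id: int, active_only: bool = True) -> str:
    """Build the URL (endpoint plus query parameters) for the call."""
    return f"{ENDPOINT}?{urlencode({'firm': firm_id, 'active': int(active_only)})}"

def parse_response(body: str) -> list[ClientSummary]:
    """Parse the JSON response into the typed objects the UI consumes."""
    return [ClientSummary(c["id"], c["name"]) for c in json.loads(body)]
```

Because the coupling is confined to files like this, moving a frontend to a different backend mostly means swapping endpoint constants and parsers rather than touching UI components.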

The problem with option 5 is that there is a lot of data entry in eight one books. The goal of this was to give our staff one application to enter data and look at analytics. The client couldn't stay thin if I wanted to achieve that goal. It also shares some big problems with option 4.

Defensive is a really good way to put option 4. That reflects my feelings about this perfectly. The reason I'm leaning towards option 2 is because being defensive leaves the entire system fragile.

Both systems have the same users, but they have different user ids representing them.

Both systems have the same clients, but they have different client ids.

Both systems have the same (financial) account objects, but different account ids.

To maintain both systems, I would need to actively duplicate all three pieces of data and have one system contain a mapping of the ids. The initial implementation of this would be pretty simple, but the production maintenance is likely a disaster, especially since access control is so critical. Example:

User 234 having access to clients 23 and 47 in eight one books would be User 17 having access to clients 56 and 72 in the new platform.
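The perpetual synchronization that makes option 4 fragile can be sketched as an invariant check over the mapping tables (the ids come from the example above; the tables and check itself are hypothetical):

```python
# Cross-system id mapping that option 4 would have to keep in sync forever:
# the same real-world user or client exists under two different ids.
USER_MAP = {234: 17}             # eight one books user id -> new platform user id
CLIENT_MAP = {23: 56, 47: 72}    # eight one books client id -> new platform client id

# Access control lives in both systems and must always agree.
BOOKS_ACCESS = {234: {23, 47}}
PLATFORM_ACCESS = {17: {56, 72}}

def access_in_sync(books_user: int) -> bool:
    """Check that a user's client access matches across both systems.
    Every invariant like this needs its own tests, monitoring, and
    cross-system log correlation to diagnose when it drifts."""
    platform_user = USER_MAP[books_user]
    mapped = {CLIENT_MAP[c] for c in BOOKS_ACCESS[books_user]}
    return mapped == PLATFORM_ACCESS[platform_user]
```

Any write path in either system that touches users, clients, or access has to keep this check passing, which is exactly the kind of drift that surfaces at inconvenient hours.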

The testing for that would have to be extremely robust, which to be fair is a solvable problem. The logging would be a complete mess, which is a much harder problem to solve. I feel like I'd need to build my own bespoke logging app to manage that.

If I think about what I would do if I were to start all over again, it would be one system. It's more work and testing now, but at least that work is predictable. The production issues from a fragile system would be less predictable and likely occur at the most inconvenient times. I've been pretty successful at not needing to wake up at 2am for any issues and I'd like to keep it that way.


Your point about ID mappings really resonated - I'm currently in the middle of migrating a quotation processing system from PHP/MySQL to Python/Postgres, and managing those ID relationships across systems is exactly the kind of hidden complexity that can bite you.

Though in my case it's a one-off mapping effort (thankfully!), unlike your option 4 where you'd need to maintain it continuously.

Really appreciated your analysis about the "2am issues" - that's often the true cost of maintaining parallel systems with synchronized data. The upfront work of option 2 might be substantial, but at least it's predictable work you can do on your own schedule.

Would love to compare notes on system migrations and architectural decisions sometime - mind if I DM you? Always enjoy these kinds of technical discussions with other engineers who've been in the trenches.


Yeah, option 2 feels bad at first. It looked better when I realized the mapping would be a one-off rather than perpetual.

Feel free to DM! I too enjoy technical discussions.
