There is a pattern happening right now in small businesses everywhere.

The owner discovers AI. The owner signs up for several tools. The owner uses them inconsistently for a few weeks. The owner eventually stops tracking whether any of it is working. Tools renew automatically. The cycle continues.

By the time most people stop to ask "is this actually doing anything?", they have been paying for tools that are not delivering for months. Sometimes years.

Today we are going to stop that cycle.

I am going to walk you through a four-part AI audit. It is designed to take about two hours and requires no technical background. By the time you are done, you will know exactly which tools in your stack are earning their place and which ones are expensive hobbies.

Why Most AI Stacks Underperform

Before we get into the audit, let us understand why this problem is so common.

The first reason is what I call the novelty trap. When a new AI tool comes out, people sign up during the excitement phase. The tool is interesting. The demos are impressive. But interesting and impressive do not equal useful. A tool that is fun to play with is not the same as a tool that is doing real work in your business.

The second reason is the vague value problem. Most AI tools promise to "save you time" or "boost productivity." Those claims are almost impossible to measure, which means they are also almost impossible to disprove. The tool does not obviously fail, it just quietly underperforms, and you never realize it because you never set a standard for what success would look like.

The third reason is the effort mismatch. Many AI tools require meaningful setup and ongoing refinement to deliver real value. But most people plug them in, run them once or twice, and then treat the output as the ceiling. The tool is capable of much more, but nobody pushed it there.

The audit fixes all three of these. It forces you to measure, compare, and decide with actual evidence instead of gut feelings and sunk cost reasoning.

Part One: The Inventory

The first step is simply knowing what you have.

Open a spreadsheet and list every AI or automation tool you are currently paying for. Include the tool name, the monthly cost, the primary use case, and roughly how often you actually use it. Be honest about that last one. There is a difference between "I use this every day" and "I used this twice in the last month."

Most people are surprised by what shows up when they do this exercise. They find tools they signed up for and forgot about. They find tools they use for tasks that could be done by another tool they already own. They find overlaps and redundancies that, added together, represent a meaningful monthly expense.

If you want to go deeper on any specific tool in your stack, Fathom is worth adding here if you are not already using it. It records and summarizes your meetings automatically and connects directly to your CRM. It is one of the few AI tools that delivers obvious, measurable value from day one, and it is free to start.

Once your inventory is complete, total up the monthly cost. Write that number at the top of your sheet. That is your baseline. Everything in the audit is about determining whether that number is justified.
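If you prefer to keep the inventory in code rather than a spreadsheet, here is a minimal sketch of the same exercise. The tools, prices, and usage counts are made-up examples, not recommendations, and the "rarely used" threshold is an assumption you can adjust:

```python
# Minimal inventory sketch: each entry records a tool, its monthly cost,
# and how often it was actually used in the last 30 days.
# All entries here are made-up examples.
inventory = [
    {"tool": "Writing assistant",  "monthly_cost": 20, "uses_last_30_days": 25},
    {"tool": "Meeting summarizer", "monthly_cost": 0,  "uses_last_30_days": 12},
    {"tool": "Image generator",    "monthly_cost": 30, "uses_last_30_days": 2},
    {"tool": "Workflow connector", "monthly_cost": 50, "uses_last_30_days": 40},
]

# The baseline: total monthly spend, the number at the top of your sheet.
baseline = sum(row["monthly_cost"] for row in inventory)
print(f"Monthly baseline: ${baseline}")

# The honesty check: flag tools touched fewer than once a week.
rarely_used = [row["tool"] for row in inventory if row["uses_last_30_days"] < 4]
print("Rarely used:", rarely_used)
```

Even this rough version surfaces the same things the spreadsheet does: the total you are actually paying, and the tools you barely opened last month.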

Part Two: The Output Audit

This is where most people realize the problem.

For each tool in your inventory, ask one simple question: what did this tool produce for my business in the last 30 days?

Not what it could produce. Not what it was designed for. What did it actually produce?

You are looking for three types of output.

Time saved. Did this tool replace a task that used to take you or your team time? How much time? Be specific. "It saves me time" is not an answer. "It reduced the time I spend on weekly client reports from 90 minutes to 20 minutes" is an answer.

Revenue influenced. Did this tool contribute to a deal getting closed, a lead getting followed up on faster, a client getting retained, or a product getting shipped? Even indirect contributions count, but you need to be able to name them.

Cost avoided. Did this tool replace something you would otherwise have paid for? A contractor, a subscription to another tool, an agency fee?

For each tool, try to put a dollar value on the output. Even a rough estimate is useful. If you genuinely cannot name a dollar figure or a concrete time saving, that is information. It means the tool has not been integrated into your workflow in a meaningful way, or it is not delivering what you expected.

A tool that costs $50 a month and saves you two hours a week is a great investment. A tool that costs $50 a month and you are not sure what it does is a subscription waiting to be canceled.
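To put rough numbers on that comparison, here is a back-of-the-envelope ROI sketch. The $40 hourly rate is an assumed example; swap in what your time (or your team's time) is actually worth:

```python
# Rough monthly ROI for a tool: value of time saved minus subscription cost.
# The default hourly rate is an assumed example, not a benchmark.
def monthly_roi(monthly_cost, hours_saved_per_week, hourly_rate=40):
    weeks_per_month = 52 / 12  # about 4.33
    value_of_time = hours_saved_per_week * weeks_per_month * hourly_rate
    return value_of_time - monthly_cost

# A $50/month tool that saves two hours a week is clearly positive:
print(round(monthly_roi(50, 2)))   # roughly +296 per month

# The same tool saving nothing measurable is just a cost:
print(round(monthly_roi(50, 0)))   # -50 per month
```

The exact numbers matter less than the exercise: once you force yourself to fill in `hours_saved_per_week` honestly, the keep-or-cancel answer usually becomes obvious.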

Part Three: The Integration Check

A tool that works in isolation is useful. A tool that works inside your existing workflow is powerful.

For each tool that passed the output audit, meaning it is delivering real, measurable value, now ask: how well does it connect to the rest of your operation?

Here is a practical checklist.

Does the output of this tool automatically flow into another tool, or does someone have to move it manually? Manual transfers are a point of friction. They also introduce errors and delays. If you are still copying and pasting output from your AI tools into your CRM, your spreadsheets, or your email platform, you have an integration gap.

Does this tool have a native integration with your other core tools, or would you need a connector like Make.com? Make can bridge most integration gaps, but the point is to know where the gaps are.

Is anyone on your team actually using this tool, or is it just you? A tool that only one person uses is a single point of failure. A tool that is embedded in your team's daily workflow is infrastructure.

Rate each tool on integration: fully integrated, partially integrated, or standalone. Fully integrated tools are your best assets. Standalone tools are your next project: either integrate them or replace them with something that connects better.

Part Four: The Decision Matrix

Now comes the actual work of the audit. For each tool in your stack, you are going to make one of four decisions.

Keep and optimize. The tool is delivering value and integrating well. Your job is to push it further. What else could it do? Are you using all of the features that are relevant to your business? Have you trained it on your specific context, tone, or data?

Keep and integrate. The tool is delivering value but sitting in isolation. Your job is to connect it to your workflow. Usually this means spending an afternoon in Make.com building the bridge.

Replace. The tool is not delivering meaningful value, but you need the capability it was supposed to provide. Research an alternative and make the switch. Do not let inertia keep you paying for something that does not work.

Cancel. The tool is not delivering value and you do not need the capability badly enough to pay for a better option. Cancel the subscription and reallocate that budget to something that earns it.

Go through every tool in your inventory and assign it a decision. Do not leave anything in the "I will figure this out later" category. That column is where money disappears.
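The four decisions above follow a simple branching logic, which you could sketch like this. It is a simplification for illustration, not a substitute for judgment:

```python
# Decision matrix sketch: maps the audit's three yes/no questions
# to one of the four decisions from this section.
def decide(delivers_value: bool, integrated: bool, capability_needed: bool) -> str:
    if delivers_value:
        # Valuable tools stay; the only question is whether they connect.
        return "keep and optimize" if integrated else "keep and integrate"
    # Underperforming tools go; the only question is whether the
    # capability itself is worth paying a better option for.
    return "replace" if capability_needed else "cancel"

print(decide(True, True, True))     # keep and optimize
print(decide(True, False, True))    # keep and integrate
print(decide(False, True, True))    # replace
print(decide(False, False, False))  # cancel
```

Notice that "I will figure this out later" is not a possible return value. Every tool lands in exactly one of the four buckets.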

What a Good AI Stack Looks Like

There is no universal answer for how many AI tools you should have. The right number is however many tools are each delivering clear, measurable value.

For most small businesses, that ends up being somewhere between four and eight tools. A writing and content tool. An automation and workflow tool. A meeting and communication tool. A CRM or relationship tool. Possibly a specialized tool or two for your specific industry or use case.

What you are trying to avoid is the sprawling stack of 15 tools where seven of them are doing the same thing in slightly different ways, three of them are barely used, and nobody is sure which one to use for any given task.

The best stacks are not the biggest ones. They are the most integrated ones.

Every tool talks to every other tool. Data flows automatically. The whole system runs in the background while you focus on the work that actually requires a human.

That is the standard. Run the audit and measure yourself against it.

Your Next Step

Set aside two hours this week and run this audit. Go through your inventory, assess the output, check the integrations, and make the four decisions.

If what you find is that your stack has gaps, things that should be automated but are not, or capabilities you need but do not have, that is exactly what the AI Workflow Blueprint is designed to address. It gives you a clear framework for building a stack that is integrated, measurable, and built around your actual business model.

Reply BLUEPRINT and I will send you the details.

Quick Audit Reference

Here is the full four-part audit in a single reference so you can run it without scrolling back through this article.

Part 1: Inventory. List every tool, cost, use case, and actual usage frequency.

Part 2: Output Audit. For each tool, name the time saved, revenue influenced, or cost avoided in the last 30 days.

Part 3: Integration Check. Rate each tool as fully integrated, partially integrated, or standalone.

Part 4: Decision Matrix. Assign each tool a decision: keep and optimize, keep and integrate, replace, or cancel.

Two hours. Real answers. No more paying for tools that are not earning their place.

A Final Note on Mindset

There is a mindset shift that happens for most business owners when they run this audit for the first time. They stop thinking about AI tools as individual subscriptions and start thinking about their AI stack as a system.

Systems have logic. Inputs and outputs. They can be evaluated, improved, and optimized. Individual subscriptions are just recurring costs. The audit is the move from the second way of thinking to the first.

Once you make that shift, you stop making impulsive tool decisions based on a good demo or a compelling ad. You start asking "where does this fit in my system?" and "what does it replace or improve?" Those are harder questions to answer in the moment, but they are the ones that lead to a stack that actually delivers.

The businesses that win with AI are not the ones using the most tools. They are the ones who have built the most coherent systems. The audit is how you close the gap between where you are now and that kind of system.

Reply BLUEPRINT to get the AI Workflow Blueprint ($47) and start building today.

Jordan Hale  |  The AI Newsroom  |  ainewsroomdaily.com
